American computer scientist
Software Engineering Radio - The Podcast for Professional Software Developers
Asanka Abeysinghe, CTO at WSO2, joins host Giovanni Asproni to discuss cell-based architecture -- a style that's intended to combine application, deployment, and team architecture to help organizations respond quickly to changes in the business environment, customer requirements, or enterprise strategy. Cell-based architecture is aimed at creating scalable, modular, composable systems with effective governance mechanisms. The conversation starts by introducing the context and some vocabulary before exploring details about the main elements of the architecture and how they fit together. Finally, Asanka offers some advice on how to implement a cell-based architecture in practice. Brought to you by IEEE Computer Society and IEEE Software magazine.
Related Episodes:
SE Radio 396: Barry O'Reilly on Antifragile Architecture
SE Radio 331: Kevin Goldsmith on Architecture and Organizational Design
SE Radio 263: Camille Fournier on Real-World Distributed Systems
SE Radio 236: Rebecca Parsons on Evolutionary Architecture
SE Radio 213: James Lewis on Microservices
SE Radio 210: Stefan Tilkov on Architecture and Micro Services
SE Radio 203: Leslie Lamport on Distributed Systems
Sarah Christoff discusses her experiences and challenges as an open source maintainer with a focus on her work with the Porter and Zarf projects. Sarah shares insights into the frustrations and isolation often felt by maintainers, and emphasizes the importance of community and human connections in navigating these roles. We chatted about Porter and its function in simplifying complex DevOps tool integrations. Additionally, Sarah talks about Zarf, a project recently donated to the OpenSSF aimed at facilitating air-gapped Kubernetes deployments. 00:00 Introduction 01:29 Challenges of Being an Open Source Maintainer 03:12 The Human Element in Software Development 05:45 Advice for Aspiring Maintainers 08:42 The Porter Project 11:10 The Zarf Project 13:09 The Importance of Community in Open Source 15:31 Women in Tech and Role Models 21:45 Animal Rescue and Community Building 26:10 Final Thoughts and Hot Takes on Open Source Guest: Sarah Christoff is a software engineer at Defense Unicorns who loves making complex code more digestible. She is the self-proclaimed founder of the Leslie Lamport fan club. When she's not bugbusting, she is running her animal rescue and competing in triathlons. She believes code should be like cats: intelligent, fluffy, and easy to take care of.
Please excuse the poor quality of my microphone, as the wrong microphone was selected. In research, we are all just building on the shoulders of true giants, and there are few larger giants than Leslie Lamport — the creator of LaTeX. For me, every time I open up a LaTeX document, I think of the work he did on creating LaTeX, which makes my research work so much more productive. If I were still stuck with Microsoft Office for research, I would spend half of my time in that horrible equation editor, or in trying to integrate the references into the required format, or in formatting Header 1 and Header 2 to have a six-point spacing underneath. So, for me, the contest between LaTeX and Microsoft Word is a knock-out in the first round. And one of the great things about Leslie is that his work is strongly academic, and provides foundations for others to build on. He did a great deal of work on the ordering of events and task synchronisation, on state machines, cryptographic signatures, and fault tolerance. LaTeX I really can't say enough about how much LaTeX — created in 1984 — helps my work. I am writing a few books just now, and it allows me to lay out the books in the way that I want to deliver the content. There's no need for further mark-up, as I work on the output that the reader will see. But the true genius of LaTeX is the way that teams can work on a paper, sync it to GitHub, and have version control embedded. Clocks Many in the research community think that the quality measure of a paper is the impact factor of the journal it is submitted to, or the amount of maths it contains. But, in the end, it is the impact of the paper, and how it changes thinking, that matters. For Leslie, his 1978 paper on clocks changed our scientific world and is one of the most cited papers in computer science. Byzantine Generals Problem In 1982, Leslie B Lamport defined the Byzantine Generals Problem.
And in a research world where you can have 100s of references in a paper, Leslie used only four (which would probably not be accepted these days for having so few references). Within this paper, the generals of a Byzantine army have to agree on their battle plan, in the face of adversaries passing incorrect information. In the end, we aim to create a way of passing messages where, if at least two out of three of the generals are honest, we will end up with the correct battle plan. The Lamport Signature Sometime soon, we perhaps need to wean ourselves off our existing public key methods and look to techniques that are more challenging for quantum computers. With the implementation of Shor's algorithm [here] on quantum computers, we will see our RSA and Elliptic Curve methods being replaced by methods which are quantum robust. One such method is the Lamport signature, created by Leslie B. Lamport in 1979.
Marty Cagan is a luminary in the world of product. He's the author of two of the most foundational books for product teams and product leaders (Inspired and Empowered), he's the founder of Silicon Valley Product Group (one of the longest-running product advisory groups), and he's almost certainly worked with more product leaders and teams than any human alive. Now he's releasing his newest book, Transformed, which is sure to become a staple of tech-powered companies worldwide. Marty's previous appearance on our show remains one of the most popular episodes to date. In this conversation, we discuss:• The rise of “product management theater”• Changes in the PM role post-ZIRP and the shift from growth to build functions• The disconnect between good product companies and online product advice• How over-hiring has created challenges in the product industry• The most important skills for PMs to build• How to know if you're on a “feature team”• The potential disruption of product management by AI• Marty's new book, Transformed: Moving to the Product Operating Model• Four new competencies required for successful product organizations—Brought to you by:• Sprig—Build a product people love• Eppo—Run reliable, impactful experiments• Vanta—Automate compliance. Simplify security.—Find the transcript for this episode and all past episodes at: https://www.lennyspodcast.com/episodes/. Today's transcript will be live by 8 a.m. PT.—Where to find Marty Cagan:• X: https://twitter.com/cagan• LinkedIn: https://www.linkedin.com/in/cagan/• Silicon Valley Product Group: https://www.svpg.com/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Marty's background(04:46) His take on the state of product management(12:08) Product management theater(18:33) Feature teams vs. 
empowered product teams(24:48) Skills of a real product manager(29:27) The product management reckoning is here(32:05) Taking control of your product management career(34:59) The challenge of finding reliable product management advice(40:18) The disconnect between good product companies and the product management community(44:23) Top-down vs. bottom-up cultures(47:06) The shift in product management post-ZIRP era(49:44) The changing landscape of product management(52:05) The disruption of PM skills by AI(55:56) The purpose and content of Marty's new book, Transformed(01:02:05) The product operating model(01:08:27) New competencies required for successful product teams(01:11:25) Marty's thoughts on product ops(01:15:13) Advice for founders who don't want product managers(01:18:06) Lightning round—Referenced:• Transformed: Moving to the Product Operating Model: https://www.amazon.com/Transformed-Becoming-Product-Driven-Company-Silicon/dp/1119697336• Inspired: How to Create Tech Products Customers Love: https://www.amazon.com/INSPIRED-Create-Tech-Products-Customers/dp/1119387507• Empowered: Ordinary People, Extraordinary Products: https://www.amazon.com/EMPOWERED-Ordinary-Extraordinary-Products-Silicon/dp/111969129X• The nature of product | Marty Cagan, Silicon Valley Product Group: https://www.lennyspodcast.com/the-nature-of-product-marty-cagan-silicon-valley-product-group/• Product Leadership Theater: https://www.svpg.com/product-leadership-theater/• Product Management Theater: https://www.svpg.com/product-management-theater/• Linear: https://linear.app/• How Linear builds product: https://www.lennysnewsletter.com/p/how-linear-builds-product• Brian Chesky's new playbook: https://www.lennyspodcast.com/brian-cheskys-new-playbook/• Mamas, don't let your babies grow up to be coders, Jensen Huang warns: https://www.theregister.com/2024/02/27/jensen_huang_coders/• Epic Waste: https://www.svpg.com/epic-waste/• What is scrum and how to get started: 
https://www.atlassian.com/agile/scrum• CSPO: https://www.scrumalliance.org/get-certified/product-owner-track/certified-scrum-product-owner• PSPO: https://www.scrum.org/courses/professional-scrum-product-owner-training• Jira: https://www.atlassian.com/software/jira• Continuous Discovery Habits: Discover Products That Create Customer Value and Business Value: https://www.amazon.com/Continuous-Discovery-Habits-Discover-Products/dp/1736633309• Shreyas Doshi on LinkedIn: https://www.linkedin.com/in/shreyasdoshi/• Ben Erez's LinkedIn post: https://www.linkedin.com/feed/update/urn:li:activity:7168978777966891008/• Oracle: https://www.oracle.com/• The essence of product management | Christian Idiodi (SVPG): https://www.lennyspodcast.com/the-essence-of-product-management-christian-idiodi-svpg/• Making Meta | Andrew ‘Boz' Bosworth (CTO): https://www.lennyspodcast.com/making-meta-andrew-boz-bosworth-cto/• Building a long and meaningful career | Nikhyl Singhal (Meta, Google): https://www.lennyspodcast.com/building-a-long-and-meaningful-career-nikhyl-singhal-meta-google/• Partners at SVPG: https://www.svpg.com/team/• Trainline: https://www.thetrainline.com/• Almosafer: https://global.almosafer.com/• Expedia: https://www.expedia.com/• Shopify: https://www.shopify.com/• Salesforce: https://www.salesforce.com/• The ultimate guide to product operations | Melissa Perri and Denise Tilles: https://www.lennyspodcast.com/the-ultimate-guide-to-product-operations-melissa-perri-and-denise-tilles/• Understanding the role of product ops | Christine Itwaru (Pendo): https://www.lennyspodcast.com/understanding-the-role-of-product-ops-christine-itwaru-pendo/• Build: An Unorthodox Guide to Making Things Worth Making: https://www.amazon.com/Build-Unorthodox-Guide-Making-Things/dp/0063046067• What's Our Problem?: A Self-Help Book for Societies: https://www.amazon.com/Whats-Our-Problem-Self-Help-Societies/dp/B0BVGH6T1Q• Rivian: https://rivian.com/• AI-1 airbag vest: 
https://www.klim.com/Ai-1-Airbag-Vest-3046-000• Leslie Lamport's quote: https://quotefancy.com/quote/3702194/Leslie-Lamport-If-you-re-thinking-without-writing-you-only-think-you-re-thinking• Joan Didion's quote: https://www.goodreads.com/quotes/264509-i-don-t-know-what-i-think-until-i-write-it—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Leslie Lamport is a computer scientist & mathematician who won ACM's Turing Award in 2013 for his fundamental contributions to the theory and practice of distributed and concurrent systems. He also created LaTeX and TLA+, a high-level language for “writing down the ideas that go into the program before you do any coding.”
Related page: https://medium.com/asecuritysite-when-bob-met-alice/clocks-latex-byzantine-generals-and-post-quantum-crypto-meet-the-amazing-leslie-b-lamport-b2ade4b590d7 Demo: https://asecuritysite.com/hashsig/lamport Introduction I write this article in Medium, with its limited text editor, but I really would love to write it in LaTeX. Before the monopoly of Microsoft Word, there were document mark-up systems such as Lotus Manuscript, where a basic editor produced publishing-ready content. The GUI came along, and all the back-end stuff was pushed away from the user. For many, this is fine, but for those whose output is focused on the sharing and dissemination of research, mark-up is often the best way to work. In research, LaTeX is King, and is a fully formed method of laying out — and sharing — research outputs. In the past few years, we have published over 100 research papers, and not one of them has been created in Microsoft Word. And for this, I thank Leslie Lamport. In fact, ask our kids about Newton, Faraday or Einstein, and they could probably tell you something about them. But ask them about Whitfield Diffie, Shafi Goldwasser, or Leslie B Lamport, and they would probably look quizzical. Their future world, though, is probably going to be built around some of the amazing minds that built the most amazing structure ever created … the Internet. To Leslie Lamport So, I am privileged to be an academic researcher. For me, teaching, innovation and research go hand-in-hand: the things I research give me ideas for innovation, which I can then integrate into my teaching. The continual probing of questions from students also pushes me to think differently about things, and so the cycle goes on. But we are all just building on the shoulders of true giants, and there are few larger giants than Leslie Lamport — the creator of LaTeX.
For me, every time I open up a LaTeX document, I think of the work he did on creating LaTeX, which makes my research work so much more productive. If I were still stuck with Microsoft Office for research, I would spend half of my time in that horrible equation editor, or in trying to integrate the references into the required format, or in formatting Header 1 and Header 2 to have a six-point spacing underneath. So, for me, the contest between LaTeX and Microsoft Word is a knock-out in the first round. And one of the great things about Leslie is that his work is strongly academic, and provides foundations for others to build on. He did a great deal of work on the ordering of events and task synchronisation, on state machines, cryptographic signatures, and fault tolerance. LaTeX I really can't say enough about how much LaTeX — created in 1984 — helps my work. I am writing a few books just now, and it allows me to lay out the books in the way that I want to deliver the content. There's no need for further mark-up, as I work on the output that the reader will see. But the true genius of LaTeX is the way that teams can work on a paper, sync it to GitHub, and have version control embedded. Overall, we use Overleaf, but we're not tied to it, and can move to any editor we want. But the process is just so much better than Microsoft Word, especially when creating a thesis. Word is really just the same old package it was in the 1990s, and still hides lots away, which makes it really difficult to create content whose layout can easily be changed. With LaTeX, you create the content and can then apply whatever style you want. Clocks Many in the research community think that the quality measure of a paper is the impact factor of the journal it is submitted to, or the amount of maths it contains. But, in the end, it is the impact of the paper, and how it changes thinking, that matters.
For Leslie, his 1978 paper on clocks changed our scientific world and is one of the most cited papers in computer science [here]: Byzantine Generals Problem In 1982, Leslie B Lamport defined the Byzantine Generals Problem [here]: And in a research world where you can have 100s of references in a paper, Leslie used only four (which would probably not be accepted these days for having so few references): Within this paper, the generals of a Byzantine army have to agree on their battle plan, in the face of adversaries passing incorrect information. In the end, we aim to create a way of passing messages where, if at least two out of three of the generals are honest, we will end up with the correct battle plan. So why don't we build computer systems like this, where we support failures in parts of the system, or where parts of the system may be taken over for malicious purposes? And the answer is … no reason; it's just that we are stuck with our 1970s viewpoint of the computing world, where everything works perfectly, and security is someone else's problem to fix. So, we need a system where we create a number of trusted nodes to perform a computation, and then run an election process at the end to see if we have a consensus on the result. If we have three generals (Bob, Alice and Eve), we need two of them to be honest, which means we can cope with one of our generals turning bad: In this case, Eve could try to sway Trent by sending the wrong command, but Bob and Alice will build a better consensus, and so Trent will go with them. The work can then be defined as MPC (Multiparty Computation), where we have multiple nodes getting involved to produce the final result. In many cases, this result could be as simple as a Yes or No, as to whether a Bitcoin transaction is real or fake, or whether an IoT device has a certain voltage reading.
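The two-out-of-three election described above can be sketched as a simple majority vote. This is only an illustration of the final tallying step, not the full Byzantine agreement protocol (which requires rounds of message exchange between the generals); the function name is my own:

```python
from collections import Counter

def decide(orders):
    """Each general reports the order it received; take the strict majority.
    With 3 generals, a single traitor cannot outvote the 2 honest ones."""
    value, count = Counter(orders).most_common(1)[0]
    return value if count > len(orders) // 2 else None

# Bob and Alice honestly relay "attack"; Eve lies with "retreat".
print(decide(["attack", "attack", "retreat"]))  # attack
```

With no strict majority (e.g. a two-way split), the sketch returns None, which is why Byzantine fault tolerance needs more than two-thirds of the nodes to be honest.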
The Lamport Signature Sometime soon, we perhaps need to wean ourselves off our existing public key methods and look to techniques that are more challenging for quantum computers. With the implementation of Shor's algorithm [here] on quantum computers, we will see our RSA and Elliptic Curve methods being replaced by methods which are quantum robust. One such method is the Lamport signature, created by Leslie B. Lamport in 1979 [here]: At the current time, it is thought to be a quantum robust technique for signing messages. When we sign a message, we take its hash and then process it with our private key; the public key can then be used to prove that we created the signature. The Lamport signature uses 512 random 256-bit numbers for the private key, which are split into Set A and Set B. The public key is the hash of each of these values. The size of the private key is 16 KB (2×256×256 bits) and the public key size is also 16 KB (512 hashes, each of 256 bits). The basic method of creating a Lamport hash signature is: We create two data sets with 256 random 256-bit numbers (Set A and Set B). These are the private key (512 values). Next, we take the hash of each of the random numbers. This gives 512 hashes and is the public key. We then hash the message using SHA-256, and test each bit of the hash (0 … 255). If bit i is a 0, we use the ith number in Set A, else we use the ith number from Set B. The signature is then 256 random numbers (taken from either Set A or Set B), and the public key is the 512 hashes (of Set A and Set B). This process is illustrated below: We can use the Lamport method for one-time signing, but, in its core format, we would need a new key pair for each signing. The major problem with Lamport is thus that we can only sign once with each key. We can overcome this, though, by creating a hash tree which merges many public keys into a single root.
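The keygen, sign and verify steps above can be sketched in Python. This is a minimal illustration, not the site's demo code: it hashes the raw 32-byte values (the demo hashes their hex-string form), and the MSB-first bit order is an assumption:

```python
import hashlib
import secrets

def keygen():
    # Private key: 2 x 256 random 256-bit numbers (Set A and Set B).
    priv = [[secrets.token_bytes(32) for _ in range(256)] for _ in range(2)]
    # Public key: the SHA-256 hash of each private value (512 hashes).
    pub = [[hashlib.sha256(x).digest() for x in half] for half in priv]
    return priv, pub

def sign(message, priv):
    h = hashlib.sha256(message).digest()
    sig = []
    for i in range(256):
        bit = (h[i // 8] >> (7 - i % 8)) & 1  # i-th bit of the hash, MSB first
        sig.append(priv[bit][i])              # 0 -> Set A, 1 -> Set B
    return sig

def verify(message, sig, pub):
    h = hashlib.sha256(message).digest()
    for i in range(256):
        bit = (h[i // 8] >> (7 - i % 8)) & 1
        # The revealed value must hash to the published hash for that bit.
        if hashlib.sha256(sig[i]).digest() != pub[bit][i]:
            return False
    return True

priv, pub = keygen()
msg = b"The quick brown fox jumps over the lazy dog"
sig = sign(msg, priv)
print(verify(msg, sig, pub))          # True
print(verify(b"tampered", sig, pub))  # False
```

Note how signing reveals half of the private values: any second message with a different hash would expose values from the other set, which is why each key pair must only be used once.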
A sample run which just shows the first few private keys and the first public keys:
==== Private key (keep secret) =====
Priv[0][0] (SetA): 6f74f11f20953dc91af94e15b7df9ae00ef0ab55eb08900db03ebdf06d59556c
Priv[0][1] (SetB): 4b1012fc5669b45672e4ab4b659a6202dd56646371a258429ccc91cdbcf09619
Priv[1][0] (SetA): 19f0f71e913ca999a23e152edfe2ca3a94f9869ba973651a4b2cea3915e36721
Priv[1][1] (SetB): 04b05e62cc5201cafc2db9577570bf7d28c77e923610ad74a1377d64a993097e
Priv[2][0] (SetA): 15ef65eda3ee872f56c150a5eeecff8abd0457408357f2126d5d97b58fc3f24e
Priv[2][1] (SetB): 8b5e7513075ce3fbea71fbec9b7a1d43d049af613aa79c6f89c7671ab8921073
Priv[3][0] (SetA): 1c408e62f4c44d73a2fff722e6d6115bc614439fff02e410b127c8beeaa94346
Priv[3][1] (SetB): e9dcbdd63d53a1cfc4c23ccd55ce008d5a71e31803ed05e78b174a0cbaf43887
==== Public key (show everyone) =====
Pub[0][0]: 7f2c9414db83444c586c83ceb29333c550bedfd760a4c9a22549d9b4f03e9ba9
Pub[0][1]: 4bc371f8b242fa479a20f5b6b15d36c2f07f7379f788ea36111ebfaa331190a3
Pub[1][0]: 663cda4de0bf16a4650d651fc9cb7680039838d0ccb59c4300411db06d2e4c20
Pub[1][1]: 1a853fde7387761b4ea22fed06fd5a1446c45b4be9a9d14f26e33d845dd9005f
==== Message to sign ===============
Message: The quick brown fox jumps over the lazy dog
SHA-256: d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592
==== Signature =====================
Sign[0]: 4b1012fc5669b45672e4ab4b659a6202dd56646371a258429ccc91cdbcf09619
Sign[1]: 04b05e62cc5201cafc2db9577570bf7d28c77e923610ad74a1377d64a993097e
Sign[2]: 8b5e7513075ce3fbea71fbec9b7a1d43d049af613aa79c6f89c7671ab8921073
Sign[3]: 1c408e62f4c44d73a2fff722e6d6115bc614439fff02e410b127c8beeaa94346
The signature test is True
In this case, we take the random number and then convert it to a string. So the SHA-256 signature of “6f74f11f20953dc91af94e15…0db03ebdf06d59556c” is 7f2c9414db83444c586c…49d9b4f03e9ba9.
It can be seen that the hash of the message (“The quick brown fox jumps over the lazy dog”) has a hex D value at the start, which is 1101 in binary, and we see we take from SetB[0], SetB[1], SetA[2] and SetB[3]. A demonstration is given here. Conclusions The Internet we have now is built on the foundations that Leslie laid. On 18 March 2014, he received the 2013 ACM A.M. Turing Award (which is like a Nobel Prize in Computer Science). At the time, Bill Gates said: Leslie has done great things not just for the field of computer science, but also in helping make the world a safer place. Countless people around the world benefit from his work without ever hearing his name. … Leslie is a fantastic example of what can happen when the world's brightest minds are encouraged to push the boundaries of what's possible. Basically, much of our computing world is still using the amazing foundations that were created in the 1970s and 1980s. We tip our hats to Whitfield Diffie, Martin Hellman, Shafi Goldwasser, Ralph Merkle, Ron Rivest, Adi Shamir, and, of course, Leslie B Lamport. As a note, while Leslie's paper on Clocks is cited over 12,000 times, the Diffie-Hellman paper is cited over 19,300 times. We really should be integrating computer science into our school curriculum, and show that it has equal standing with physics, biology and chemistry; it will shape our future world as much as the others. Why not teach kids about public-key cryptography in the same way that we talk about Newton?
Vaughn and Greg discuss a lot of topics:
CQRS
Event Sourcing
The CQRS and Event Sourcing book Greg was completing at the time of this interview
EventStore and specific unique features
EventStore clustering complexity
Paxos (defined by Leslie Lamport in 1998)
Data Mesh
"The Old New Thing"
UDP multicast for trading messaging
His upcoming next book
His current work challenges
Gregory Young coined the term “CQRS” (Command Query Responsibility Segregation) and it was instantly picked up by the community, who have elaborated upon it ever since. He is the founder and creator of Event Store. He has 15+ years of varied experience in computer science, from embedded operating systems to business systems, and he brings a pragmatic and oftentimes unusual viewpoint to discussions. Hosted on Acast. See acast.com/privacy for more information.
Leslie Lamport is known for his fundamental contributions to the theory and practice of distributed and concurrent systems, notably the invention of concepts such as causality and logical clocks, safety and liveness, replicated state machines, and sequential consistency. Full Youtube video: https://youtu.be/rNQFPz2KSzQ
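The logical-clock idea mentioned above can be sketched in a few lines: every local event ticks a counter, and every received message fast-forwards the counter past the sender's timestamp, so causally ordered events get increasing timestamps. A minimal illustration of the rule from the 1978 time-clocks paper; the class and method names are my own:

```python
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event: advance the counter.
        self.time += 1
        return self.time

    def send(self):
        # Attach the current timestamp to an outgoing message.
        return self.tick()

    def recv(self, msg_time):
        # On receipt, jump past the sender's timestamp, then tick.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t1 = a.send()        # a: 1
t2 = b.recv(t1)      # b: 2  (receive happens after the send)
t3 = b.send()        # b: 3
t4 = a.recv(t3)      # a: 4
print(t1 < t2 < t3 < t4)  # True
```

The converse does not hold: two unrelated events may still get ordered timestamps, which is exactly the gap later filled by vector clocks.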
The cycle between research and application is often too long and can take decades to complete. It is often asked which piece of research or technology is the most important. Before we can answer that question, I think it's important to take a step back and share the story of why we believe the Dynamo paper is so essential to our modern world and how we encountered it.
Citations:
DeCandia, G., Hastorun, D., Jampani, M., Kakulapati, G., Lakshman, A., Pilchin, A., ... & Vogels, W. (2007). Dynamo: Amazon's highly available key-value store. ACM SIGOPS Operating Systems Review, 41(6), 205-220.
Karger, D., Lehman, E., Leighton, T., Panigrahy, R., Levine, M., & Lewin, D. (1997, May). Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the world wide web. In Proceedings of the twenty-ninth annual ACM symposium on Theory of computing (pp. 654-663).
Lamport, L. (2019). Time, clocks, and the ordering of events in a distributed system. In Concurrency: the Works of Leslie Lamport (pp. 179-196).
Merkle, R. C. (1987). A digital signature based on conventional encryption. In Proceedings of the USENIX Secur. Symp (pp. 369-378).
Our Team:
Host: Angelo Kastroulis
Executive Producer: Náture Kastroulis
Producer: Albert Perrotta
Communications Strategist: Albert Perrotta
Audio Engineer: Ryan Thompson
Music: All Things Grow by Oliver Worth
During a conversation in the #gotime channel of Gopher Slack, Jerod mentioned that some people paint with a blank canvas while others paint by numbers. In this 8th episode of the maintenance series, we're talking about maintaining our knowledge. With Jerod's analogy and a little help from a Leslie Lamport interview, our panel discusses the myth of incremental progress.
This week, Anna (https://twitter.com/annarrose) speaks with Adam Gągol (https://twitter.com/GagolAdam) and Matthew Niemerg (https://twitter.com/matthewniemerg) about Aleph Zero - an L1 project mixing ZKPs and MPCs with a DAG consensus algorithm. The aim of their project is to enable private smart contracts. In the conversation, they explore the underlying DAG structure and their privacy solutions that leverage zero-knowledge proofs (zk-SNARKs) and Secure Multiparty Computation (sMPC). Here are some links for this episode: * Paper - Aleph: Efficient Atomic Broadcast in Asynchronous Networks with Byzantine Nodes (https://arxiv.org/abs/1908.05156) * Time, Clocks, and the Ordering of Events in a Distributed System by Leslie Lamport (https://lamport.azurewebsites.net/pubs/time-clocks.pdf) * Episode 188: Analyzing Osmosis & Preventing MEV with Sunny and Dev (https://zeroknowledge.fm/188-2/) ZK Hack kicked off on October 26th and goes until December 7th — a multi-round online event with workshops and puzzle solving competitions. Put together by the Zero Knowledge podcast and the ZKValidator and supported by all of our fantastic sponsors. Even if you missed the first session you can still join the next ones, head to the website to check out the schedule of workshops and puzzles: https://www.zkhack.dev/ Thank you to this week's sponsor DeversiFi! DeversiFi's mission is to make the opportunities of DeFi available to everyone, and their platform enables this with an impressive user experience and simple-to-use interface. Built with some of the cutting edge StarkWare scaling tech you've heard about on the podcast, this is a great way to start using zk tech live today. Right now on the platform you can invest, trade and send tokens and manage your portfolio all without paying gas fees. Find out more about how you can make the most of DeFi at https://www.deversifi.com/.
If you like what we do: Subscribe to our podcast newsletter (https://zeroknowledge.substack.com/) to not miss any event! Follow us on Twitter - @zeroknowledgefm (https://twitter.com/zeroknowledgefm) Join us on Telegram (https://t.me/joinchat/TORo7aknkYNLHmCM) Catch us on Youtube (https://www.youtube.com/channel/UCYWsYz5cKw4wZ9Mpe4kuM_g) Read up on the r/ZKPodcast subreddit (https://www.reddit.com/r/zkpodcast) Give us feedback! -https://forms.gle/iKMSrVtcAn6BByH6A Support our Gitcoin Grant (https://gitcoin.co/grants/329/zero-knowledge-podcast-2) Support us on the ZKPatreon (https://www.patreon.com/zeroknowledge) Donate through coinbase.commerce (https://commerce.coinbase.com/checkout/f1e56274-c92b-4a99-802f-50727d651b38)
In this collaboration with ACM ByteCast and Hanselminutes, Scott welcomes 2013 ACM A.M. Turing Award laureate Leslie Lamport of Microsoft Research, best known for his seminal work in distributed and concurrent systems, and as the initial developer of the document preparation system LaTeX and the author of its first manual. Among his many honors and recognitions, Lamport is a Fellow of ACM and has received the IEEE Emanuel R. Piore Award, the Dijkstra Prize, and the IEEE John von Neumann Medal. Leslie shares his journey into computing, which started out as something he only did in his spare time as a mathematician. Scott and Leslie discuss the differences and similarities between computer science and software engineering, the math involved in Leslie’s high-level temporal logic of actions (TLA), which can help solve the famous Byzantine Generals Problem, and the algorithms Leslie himself has created. He also reflects on how the building of distributed systems has changed since the 60s and 70s. Subscribe to the ACM ByteCast at https://learning.acm.org/bytecast
Time-Clocks Paper: http://lamport.azurewebsites.net/pubs/time-clocks.pdf
Bakery Algorithm: https://en.wikipedia.org/wiki/Lamport%27s_bakery_algorithm
Mutual Exclusion Algorithm: https://en.wikipedia.org/wiki/Lamport%27s_distributed_mutual_exclusion_algorithm
In this episode of ACM ByteCast, our special guest host Scott Hanselman (of The Hanselminutes Podcast) welcomes 2013 ACM A.M. Turing Award laureate Leslie Lamport of Microsoft Research, best known for his seminal work in distributed and concurrent systems, and as the initial developer of the document preparation system LaTeX and the author of its first manual. Among his many honors and recognitions, Lamport is a Fellow of ACM and has received the IEEE Emanuel R. Piore Award, the Dijkstra Prize, and the IEEE John von Neumann Medal. Leslie shares his journey into computing, which started out as something he only did in his spare time as a mathematician. Scott and Leslie discuss the differences and similarities between computer science and software engineering, the math involved in Leslie’s high-level temporal logic of actions (TLA), which can help solve the famous Byzantine Generals Problem, and the algorithms Leslie himself has created. He also reflects on how the building of distributed systems has changed since the 60s and 70s. Links: Time-Clocks Paper Bakery Algorithm Mutual Exclusion Algorithm
Balaji Arun, a PhD student in the Systems Software Research Group at Virginia Tech, joins us today to discuss his research on distributed systems through the paper “Taming the Contention in Consensus-based Distributed Systems.” Works Mentioned: “Taming the Contention in Consensus-based Distributed Systems” by Balaji Arun, Sebastiano Peluso, Roberto Palmieri, Giuliano Losa, and Binoy Ravindran https://www.ssrg.ece.vt.edu/papers/tdsc20-author-version.pdf “Fast Paxos” by Leslie Lamport https://link.springer.com/article/10.1007/s00446-006-0005-x
Heidi Howard, a computer science research fellow at the University of Cambridge, discusses Paxos, Raft, and distributed consensus in distributed systems, drawing on her paper "Paxos vs. Raft: Have we reached consensus on distributed consensus?" She goes into detail about the leaders in Paxos and Raft and how the Raft consensus algorithm actually inspired her to pursue her PhD. Paxos vs Raft paper: https://arxiv.org/abs/2004.05074 Leslie Lamport paper "The Part-Time Parliament": https://lamport.azurewebsites.net/pubs/lamport-paxos.pdf Leslie Lamport paper "Paxos Made Simple": https://lamport.azurewebsites.net/pubs/paxos-simple.pdf Twitter: @heidiann360 Thank you to our sponsor monday.com! Their apps challenge is still accepting submissions! Find more information at monday.com/dataskeptic
On the three fundamental principles of programming: state (change), messages (objects), and referential transparency (substitution), the last of these tied to the work of the philosopher W. V. Quine in his book "Word and Object." We also discuss the distinction Leslie Lamport draws between the terms "programming" and "coding."
A podcast episode dedicated to Leslie Lamport, his work, and his vision.
In this episode of the podcast we are joined by Chris Keathley to continue our exploration of Elixir internals as he tells us about two very popular libraries that he developed, Wallaby and Raft. We start off with some background and his initial experiences with Elixir and open source projects before diving into Wallaby and some of the biggest lessons that Chris learned during and after his work on the library. Chris does a great job of explaining concurrent tests and the Sandbox, and some of the reasons he has pretty much stopped working on the front end of projects. From there we move on to another one of Chris' exciting projects, Raft! In order to introduce the library, Chris explains more about consensus algorithms and Leslie Lamport's groundbreaking work on Paxos. Raft is, in some ways, a simplified, more accessible version of Paxos, and Chris goes on to give a brief rundown of his Elixir library's inner workings. For this great conversation with a great guest, join us today! Key Points From This Episode: Chris' background, history with Elixir and his current employment. How Chris got started with open source work. Why Chris has moved away from front-end work recently. The major lessons Chris learned while building Wallaby. How the concurrent tests work on Wallaby and the Sandbox. Why Chris is still excited about Raft, even though he hasn't touched it in a while. Explaining Raft, consensus algorithms and Paxos. How the Raft library actually works; building Raft systems and processes. Where to find and connect with Chris online! And much more! 
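Both Paxos and Raft, discussed above, rest on the same quorum arithmetic: any two majorities of a cluster overlap in at least one node, so two conflicting decisions can never both be ratified. A toy sketch of that rule (function names are ours, not from either paper or from Chris' library):

```python
def majority(cluster_size):
    # Smallest quorum: any two sets of this size must intersect,
    # which is what lets Paxos and Raft survive node failures.
    return cluster_size // 2 + 1

def election_won(votes, cluster_size):
    """True if a candidate collected votes from a strict majority."""
    return len(set(votes)) >= majority(cluster_size)

# In a 5-node cluster, 3 votes win, and the cluster tolerates 2 dead nodes.
print(election_won({"n1", "n2", "n3"}, 5))  # True
print(election_won({"n1", "n2"}, 5))        # False
```

Note that an even-sized cluster buys nothing: a 4-node cluster still needs 3 votes, so it tolerates only one failure, same as a 3-node cluster.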
Links Mentioned in Today’s Episode: SmartLogic — https://www.smartlogic.io/ Chris Keathley — https://keathley.io/ Chris Keathley on github — https://github.com/keathley Bleacher Report — https://bleacherreport.com/ Wallaby — https://hexdocs.pm/wallaby/Wallaby.html Raft — https://raft.github.io/ Erlang — https://www.erlang.org/ Slack — https://slack.com/ Leslie Lamport — http://www.lamport.org/ Paxos Made Live — https://blog.acolyer.org/2015/03/05/paxos-made-live/ Elixir Outlaws Podcast — https://elixiroutlaws.com/ Special Guest: Chris Keathley.
Leslie Lamport is an American computer scientist, best known for his seminal work in distributed systems and as the initial developer of the document preparation system LaTeX. He won the 2013 Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages. He devised important algorithms and developed formal modeling and verification protocols that improve the quality of real distributed systems. These contributions have resulted in improved correctness, performance, and reliability of computer systems.
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today's episode is on Digital Equipment Corporation, or DEC. DEC was based in Maynard, Massachusetts and a major player in the computer industry from the 1950s through the 1990s. They made computers, software, and things that hooked into computers. My first real computer was a DEC Alpha. And it would be over a decade before I used 64-bit technology again. DEC was started in 1957 by Ken Olsen, Stan Olsen, and Harlan Anderson of the MIT Lincoln Laboratory using a $70,000 loan, because they could sell smaller machines than the big mainframes to users where output and realtime operation were more important than performance. Technology was changing so fast and there were so few standards for computers that investors avoided them. So they decided to first ship modules, or transistors that could be put on circuit boards, and then ship systems. They were given funds and spent the next few years building a module business to fund a computer business. IBM was always focused on big customers. In the 1960s, this gave little DEC the chance to hit the smaller customers with their PDP-8, the first successful minicomputer, at the time setting customers back around $18,500. The "Straight-8" as it was known was designed by Edson de Castro and was about the size of a refrigerator, weighing in at 250 pounds. This was the first time a company could get a computer for less than $20k, and DEC sold over 300,000 of them! The next year came the 8/s. No, that's not an iPhone model. It only set customers back $10k. Just imagine the sales team shows up at your company talking about the discrete transistors, the transistor-transistor logic, or TTL. And it wouldn't bankrupt you like that IBM. The sales pitch writes itself. Sign me up! What really sold these though, was the value engineering. 
They were simpler. Sure, programming was a little harder and took more code. Sure, sometimes that caused the code to overflow the memory. But at the cost savings, you could hire another programmer! The rise of the compiler kind of made that a negligible issue anyway. The CPU had only four 12-bit registers. But it could run programs using the FORTRAN compiler and runtime, or DEC's FOCAL interpreter. Or later you could use PAL-III Assembly, BASIC, or DIBOL. DEC also did a good job of energizing their user base. The Digital Equipment Corporation User Society was created in 1961 by Edward Fredkin and was subsidized by DEC. Here users could trade source code and documentation, with two DECUS US symposia per year - and there people would actually trade code and later tapes. It would later merge with HP and other groups during the merger era and is alive today, still independent, as the Connect User Group Community, with over 70,000 members! The User Society was an important aspect of the rise of DEC and of the development of technology and software for minicomputers. The feeling of togetherness through mutual support helped keep the costs of vendor support down while also making people feel like they weren't alone in the world. It's also important as part of the history of free software, something we'll talk about in more depth in a later episode. The PDP line continued to gain in popularity until 1977, when the VAX came along. The VAX brought with it the virtual address extension from which it derives its name. This was really the advent of on-demand paged virtual memory, although that had been initially adopted by Prime Computer without the same level of commercial success. This was a true 32-bit CISC, or Complex Instruction Set Computer. It ran Digital's VAX/VMS, which would later be called OpenVMS, although some would run BSD on it, which maintained VAX support until 2016. This thing set standards in 1970s computing. 
You know millions of instructions per second (MIPS)? The VAX was the benchmark. The performance was on par with the IBM System/360. The team at DEC was iterating through chips at a fast rate. Over the next 20 years, they got so good that Soviet engineers bought them just to try to reverse engineer the chips. In fact it got to the point that "when you care enough to steal the very best" was etched into a microprocessor die. DEC sold another 400,000 of the VAX. They must have felt on top of the world when they took the #2 computer company spot! DEC was the first computer company with a website, launching dec.com in '85. The DEC Western Research Laboratory started to build a RISC chip called Titan in 1982, meant to run Unix. Alan Kotok and Dave Orbits started designing a 64-bit chip to run VMS (maybe to run Spacewar faster). Two other chips, HR-32 and CASCADE, were being designed in 1984. And Prism began in 1985. With all of these independent development efforts, turf wars stifled the ability to execute. By 1988, DEC canceled the projects. By then Sun had SPARC and was nipping at their heels. Something else was happening. DEC made minicomputers, which were smaller than mainframes. But microcomputers showed up in the 1980s, with the first IBM PC shipping in 1981, and by the early 90s they too were 32-bit. DEC was under the gun to bring the world into 64-bit. The DEC Alpha started at about the same time (if not in the same meeting) as the termination of the Prism project. It would be released in 1992, and while it was a great advancement in computing, it came into a red ocean where vendors were competing to set the standard of the computers used at every level of the industry. The old chips could have been used to build microcomputers, but at a time when IBM was coming into the business market for desktop computers and starting to own it, DEC stayed true to the minicomputer business. 
Meanwhile Sun was growing, open architectures were becoming standard (if not standardized), and IBM was still a formidable beast in the larger markets. The hubris. Yes, DEC had some of the best tech in the market. But they'd gotten away from value engineering the solutions customers wanted. Sales slumped through the 1990s. Linus Torvalds ported Linux to the DEC Alpha in the mid-to-late 90s. Alpha chips would work with Windows and other operating systems but were very expensive. x86 chips from Intel were quickly starting to own the market (creating the term Wintel). Suddenly DEC wasn't an industry leader. When you've been through those demoralizing times at a company, it's hard to get out of a rut. Talent leaves. Great minds in computing left, like Radia Perlman, who invented the Spanning Tree Protocol. Did I mention that DEC played a key role in making Ethernet viable? They also invented clustering. More brain drain: Jim Gray (he probably invented half the database terms you use), Leslie Lamport (who wrote LaTeX), Alan Eustace (who would go on to become the Senior VP of Engineering and then Senior VP of Knowledge at Google), Ike Nassi (chief scientist at SAP), Jim Keller (who designed Apple's A4/A5), and many, many others. Fingers point in every direction. Leadership comes and goes. By 1998 it was clear that a change was needed, and DEC was acquired by Compaq in what was then the largest merger in the computer industry, in part to get the overseas markets that DEC was well entrenched in. Compaq started to cave from too many mergers that couldn't be wrangled into an actual vision, so it in turn merged with HP in 2002, which continued to make PDP, VAX, and Alpha servers. The compiler division was sold to Intel, and DEC goes down as a footnote in history. Innovative ideas are critical to a company surviving after the buying tornadoes. Strong leaders must rein in territorialism, turf wars and infighting in favor of actually shipping products. And those should be products customers want. 
Maybe even products you value-engineered to meet customers where they are, as DEC did in its early days.
Hillel Wayne is a technical writer and consultant on a variety of formal methods, including TLA+ and Alloy. In this episode, Hillel gives a whirlwind tour of the 4 main flavors of formal methods, and explains which are practical today and which we may have to wait patiently for. The episode begins with a very silly joke from Steve (about a radioactive Leslie Lamport) and if you make it to the end you're in store for a few fun tales from Twitter. https://futureofcoding.org/episodes/038
TLA+ is a formal specification language used to design, model, and verify concurrent systems; it allows a user to describe a system formally with simple, precise mathematics. TLA+ was designed by Leslie Lamport, a computer scientist and Turing Award winner, and he joins the show to talk about its purpose. The post TLA+ with Leslie Lamport appeared first on Software Engineering Daily.
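TLC, the model checker that accompanies TLA+, verifies a specification essentially by exhaustively exploring every reachable state and checking invariants on each one. The following toy Python analogue is entirely our illustration (it is not TLA+ syntax, and the two-counter "spec" is invented), but it shows the flavor of that approach:

```python
def check_invariant(init, next_states, invariant):
    """Explore all states reachable from init, TLC-style.
    Returns a violating state, or None if the invariant always holds."""
    seen, frontier = set(), [init]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        if not invariant(state):
            return state          # counterexample found
        frontier.extend(next_states(state))
    return None                   # invariant holds in every reachable state

# Toy spec: two processes each increment their own counter up to 3.
def next_states(s):
    x, y = s
    out = []
    if x < 3:
        out.append((x + 1, y))
    if y < 3:
        out.append((x, y + 1))
    return out

# The sum of the counters can never exceed 6 -- this invariant holds.
print(check_invariant((0, 0), next_states, lambda s: s[0] + s[1] <= 6))  # None
```

Tightening the invariant to `s[0] + s[1] <= 5` makes the checker return the counterexample `(3, 3)`; real TLA+ specs work the same way, just over much richer state spaces with temporal properties on top.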
Lindsey Kuper (@lindsey) spoke with us about !!Con West, being a new professor, and reading technical journals. The call for speakers for !!Con West is open until November 30, 2018. The conference will be in Santa Cruz, CA on February 23-24. Lindsey's blog is Composition.al and it has advice for !!Con proposals, advice for potential grad students, and updates on Lindsey's work. The Banana Slug is the UCSC mascot. Time, Clocks, and the Ordering of Events in a Distributed System by Leslie Lamport, 1978
Leslie Lamport is a scientist working in industry, a rara avis whose contributions have had an impact as subtle as it is important. Lamport managed to solve fundamental problems in the construction of distributed and concurrent systems in an elegant and intuitive way, which earned him the Turing Award in 2013. With the growth […] Read the full post at De la panadería a los sistemas distribuidos.
We take a look at two-faced Oracle, cover a FAMP installation, how Netflix works the complex stuff, and show you who the patron of yak shaving is. This episode was brought to you by Headlines Why is Oracle so two-faced over open source? (https://www.theregister.co.uk/2017/10/12/oracle_must_grow_up_on_open_source/) Oracle loves open source. Except when the database giant hates open source. Which, according to its recent lobbying of the US federal government, seems to be "most of the time". Yes, Oracle has recently joined the Cloud Native Computing Foundation (CNCF) to up its support for open-source Kubernetes and, yes, it has long supported (and contributed to) Linux. And, yes, Oracle has even gone so far as to (finally) open up Java development by putting it under a foundation's stewardship. Yet this same, seemingly open Oracle has actively hammered the US government to consider that "there is no math that can justify open source from a cost perspective as the cost of support plus the opportunity cost of forgoing features, functions, automation and security overwhelm any presumed cost savings." That punch to the face was delivered in a letter to Christopher Liddell, a former Microsoft CFO and now director of Trump's American Technology Council, by Kenneth Glueck, Oracle senior vice president. The US government had courted input on its IT modernisation programme. Others writing back to Liddell included AT&T, Cisco, Microsoft and VMware. In other words, based on its letter, what Oracle wants us to believe is that open source leads to greater costs and poorly secured, limply featured software. Nor is Oracle content to leave it there, also arguing that open source is exactly how the private sector does not function, seemingly forgetting that most of the leading infrastructure, big data, and mobile software today is open source. Details! Rather than take this counterproductive detour into self-serving silliness, Oracle would do better to follow Microsoft's path. 
Microsoft, too, used to Janus-face its way through open source, simultaneously supporting and bashing it. Only under chief executive Satya Nadella's reign did Microsoft realise it's OK to fully embrace open source, and its financial results have loved the commitment. Oracle has much to learn, and emulate, in Microsoft's approach. I love you, you're perfect. Now change Oracle has never been particularly warm and fuzzy about open source. As founder Larry Ellison might put it, Oracle is a profit-seeking corporation, not a peace-loving charity. To the extent that Oracle embraces open source, therefore it does so for financial reward, just like every other corporation. Few, however, are as blunt as Oracle about this fact of corporate open-source life. As Ellison told the Financial Times back in 2006: "If an open-source product gets good enough, we'll simply take it. So the great thing about open source is nobody owns it – a company like Oracle is free to take it for nothing, include it in our products and charge for support, and that's what we'll do. "So it is not disruptive at all – you have to find places to add value. Once open source gets good enough, competing with it would be insane... We don't have to fight open source, we have to exploit open source." "Exploit" sounds about right. While Oracle doesn't crack the top-10 corporate contributors to the Linux kernel, it does register a respectable number 12, which helps it influence the platform enough to feel comfortable building its IaaS offering on Linux (and Xen for virtualisation). Oracle has also managed to continue growing MySQL's clout in the industry while improving it as a product and business. As for Kubernetes, Oracle's decision to join the CNCF also came with P&L strings attached. "CNCF technologies such as Kubernetes, Prometheus, gRPC and OpenTracing are critical parts of both our own and our customers' development toolchains," said Mark Cavage, vice president of software development at Oracle. 
One can argue that Oracle has figured out the exploitation angle reasonably well. This, however, refers to the right kind of exploitation, the kind that even free software activist Richard Stallman can love (or, at least, tolerate). But when it comes to government lobbying, Oracle looks a lot more like Mr Hyde than Dr Jekyll. Lies, damned lies, and Oracle lobbying The current US president has many problems (OK, many, many problems), but his decision to follow the Obama administration's support for IT modernisation is commendable. Most recently, the Trump White House asked for feedback on how best to continue improving government IT. Oracle's response is high comedy in many respects. As TechDirt's Mike Masnick summarises, Oracle's "latest crusade is against open-source technology being used by the federal government – and against the government hiring people out of Silicon Valley to help create more modern systems. Instead, Oracle would apparently prefer the government just give it lots of money." Oracle is very good at making lots of money. As such, its request for even more isn't too surprising. What is surprising is the brazenness of its position. As Masnick opines: "The sheer contempt found in Oracle's submission on IT modernization is pretty stunning." Why? Because Oracle contradicts much that it publicly states in other forums about open source and innovation. More than this, Oracle contradicts much of what we now know is essential to competitive differentiation in an increasingly software and data-driven world. Take, for example, Oracle's contention that "significant IT development expertise is not... central to successful modernization efforts". What? 
In our "software is eating the world" existence Oracle clearly believes that CIOs are buyers, not doers: "The most important skill set of CIOs today is to critically compete and evaluate commercial alternatives to capture the benefits of innovation conducted at scale, and then to manage the implementation of those technologies efficiently." While there is some truth to Oracle's claim – every project shouldn't be a custom one-off that must be supported forever – it's crazy to think that a CIO – government or otherwise – is doing their job effectively by simply shovelling cash into vendors' bank accounts. Indeed, as Masnick points out: "If it weren't for Oracle's failures, there might not even be a USDS [the US Digital Service created in 2014 to modernise federal IT]. USDS really grew out of the emergency hiring of some top-notch internet engineers in response to the Healthcare.gov rollout debacle. And if you don't recall, a big part of that debacle was blamed on Oracle's technology." In short, blindly giving money to Oracle and other big vendors is the opposite of IT modernisation. In its letter to Liddell, Oracle proceeded to make the fantastic (by which I mean "silly and false") claim that "the fact is that the use of open-source software has been declining rapidly in the private sector". What?!? This is so incredibly untrue that Oracle should score points for being willing to say it out loud. Take a stroll through the most prominent software in big data (Hadoop, Spark, Kafka, etc.), mobile (Android), application development (Kubernetes, Docker), machine learning/AI (TensorFlow, MxNet), and compare it to Oracle's statement. One conclusion must be that Oracle believes its CIO audience is incredibly stupid. Oracle then tells a half-truth by declaring: "There is no math that can justify open source from a cost perspective." How so? 
Because "the cost of support plus the opportunity cost of forgoing features, functions, automation and security overwhelm any presumed cost savings." Which I guess is why Oracle doesn't use any open source like Linux, Kubernetes, etc. in its services. Oops. The Vendor Formerly Known As Satan The thing is, Oracle doesn't need to do this and, for its own good, shouldn't do this. After all, we already know how this plays out. We need only look at what happened with Microsoft. Remember when Microsoft wanted us to "get the facts" about Linux? Now it's a big-time contributor to Linux. Remember when it told us open source was anti-American and a cancer? Now it aggressively contributes to a huge variety of open-source projects, some of them homegrown in Redmond, and tells the world that "Microsoft loves open source." Of course, Microsoft loves open source for the same reason any corporation does: it drives revenue as developers look to build applications filled with open-source components on Azure. There's nothing wrong with that. Would Microsoft prefer government IT to purchase SQL Server instead of open-source-licensed PostgreSQL? Sure. But look for a single line in its response to the Trump executive order that signals "open source is bad". You won't find it. Why? Because Microsoft understands that open source is a friend, not foe, and has learned how to monetise it. Microsoft, in short, is no longer conflicted about open source. It can compete at the product level while embracing open source at the project level, which helps fuel its overall product and business strategy. Oracle isn't there yet, and is still stuck where Microsoft was a decade ago. It's time to grow up, Oracle. For a company that builds great software and understands that it increasingly needs to depend on open source to build that software, it's disingenuous at best to lobby the US government to put the freeze on open source. 
Oracle needs to learn from Microsoft, stop worrying and love the open-source bomb. It was a key ingredient in Microsoft's resurgence. Maybe it could help Oracle get a cloud clue, too. Install FAMP on FreeBSD (https://www.linuxsecrets.com/home/3164-install-famp-on-freebsd) The acronym FAMP refers to a set of free, open source applications commonly used in web server environments: Apache, MySQL, and PHP on the FreeBSD operating system, a server stack that provides web services, a database, and PHP. Prerequisites: sudo installed and working (please see the link above for installing sudo), Apache, PHP 5 or PHP 7, MySQL or MariaDB, and your favorite editor installed (ours is vi). Note: You don't need to upgrade FreeBSD, but make sure all patches have been installed and your ports tree is up to date if you plan to update via ports (portsnap fetch). You must use sudo for each individual command during installation. Search the available Apache versions with: pkg search apache. To install Apache 2.4 using pkg: pkg install apache24. Answer y at the confirmation prompt; this installs Apache and its dependencies. The user account managing Apache in FreeBSD is www. To enable Apache at boot, use sysrc to update the services to be started at boot time; the command below adds apache24_enable="YES" to the /etc/rc.conf file: sysrc apache24_enable=yes. Then start Apache: service apache24 start. Visit your server's public IP address in your web browser to confirm. If you do not know what your server's public IP address is, there are a number of ways you can find it. Usually, it is the address you use to connect to your server through SSH, or run: ifconfig vtnet0 | grep "inet " | awk '{ print $2 }'. Now that you have the public IP address, you may use it in your web browser's address bar to access your web server. 
Install MySQL: Now that we have our web server up and running, it is time to install MySQL, the relational database management system. The MySQL server will organize and provide access to databases where our server can store information. Install MySQL 5.7 using pkg by typing: pkg install mysql57-server. Enter y at the confirmation prompt; this installs the MySQL server and client packages. To enable the MySQL server as a service, add mysql_enable="YES" to the /etc/rc.conf file; this sysrc command will do just that: sysrc mysql_enable=yes. Now start the MySQL server: service mysql-server start. Then run the security script that removes some dangerous defaults and slightly restricts access to your database system: mysql_secure_installation. Answer all questions to secure your newly installed MySQL database (Enter current password for root (enter for none): [RETURN]). Your database system is now set up and we can move on. Install PHP: PHP is the component of our setup that will process code to display dynamic content. It can run scripts, connect to MySQL databases to get information, and hand the processed content over to the web server to display. Search available versions with: pkg search php70. To install PHP 7.0 you would type: pkg install php70-mysqli mod_php70. Note: In these instructions we are using PHP 5.6, not PHP 7.0; we will be coming out with PHP 7.0 instructions with FPM. We're going to install the mod_php, php-mysql, and php-mysqli packages. To install PHP 5.6 with pkg, run: pkg install mod_php56 php56-mysql php56-mysqli. Copy the sample PHP configuration file into place: cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini. Regenerate the system's cached information about your installed executable files: rehash. Before using PHP, you must configure it to work with Apache. Install PHP Modules (Optional): To enhance the functionality of PHP, we can optionally install some additional modules. 
To see the available options for PHP 5.6 modules and libraries, type: pkg search php56. To get more information about each module, look at the long description of the package: pkg search -f php56. Optional install example: pkg install php56-calendar. Configure Apache to Use the PHP Module: Open the Apache configuration file, vim /usr/local/etc/apache24/Includes/php.conf, and set DirectoryIndex index.php index.html. Next, we will configure Apache to process requested PHP files with the PHP processor. Add these lines to the end of the file: <FilesMatch "\.php$"> SetHandler application/x-httpd-php </FilesMatch> <FilesMatch "\.phps$"> SetHandler application/x-httpd-php-source </FilesMatch>. Now restart Apache to put the changes into effect: service apache24 restart. Test PHP Processing: By default, the DocumentRoot is set to /usr/local/www/apache24/data. We can create an info.php file under that location by typing vim /usr/local/www/apache24/data/info.php, adding the line <?php phpinfo(); ?>, and saving it. The info.php file gives you information about your server from the perspective of PHP. It's useful for debugging and to ensure that your settings are being applied correctly. If this was successful, then your PHP is working as expected. You probably want to remove info.php after testing, because it could give information about your server to unauthorized users: rm /usr/local/www/apache24/data/info.php. Note: Make sure the www user created during the Apache install owns the /usr/local/www directory structure. That explains FAMP on FreeBSD. 
IXsystems TrueNAS X10 Torture Test & Fail Over Systems In Action with the ZFS File System (https://www.youtube.com/watch?v=GG_NvKuh530) How Netflix works: what happens every time you hit Play (https://medium.com/refraction-tech-everything/how-netflix-works-the-hugely-simplified-complex-stuff-that-happens-every-time-you-hit-play-3a40c9be254b) Not long ago, House of Cards came back for its fifth season, finally ending a long wait for binge watchers across the world interested in an American politician's ruthless ascent to the presidency. For them, kicking off a marathon is as simple as reaching for your device or remote, opening the Netflix app and hitting Play. Simple, fast and instantly gratifying. What isn't as simple is what goes into running Netflix, a service that streams around 250 million hours of video per day to around 98 million paying subscribers in 190 countries. At this scale, providing quality entertainment in a matter of seconds to every user is no joke. And as much as it means building top-notch infrastructure at a scale no other Internet service has done before, it also means that a lot of participants in the experience have to be negotiated with and kept satiated, from production companies supplying the content to internet providers dealing with the network traffic Netflix brings upon them. This is, in short and in the most layman terms, how Netflix works. Let us try to understand how Netflix is structured on the technological side with a simple example. Netflix ushered in a revolution around ten years ago by rewriting the applications that run the entire service to fit a microservices architecture, which means that each application, or microservice, has code and resources that are its very own. It will not share any of them with any other app by nature. 
And when two applications do need to talk to each other, they use an application programming interface (API), a tightly controlled set of rules that both programs can handle. Developers can now make many changes, small or huge, to each application as long as they ensure that it plays well with the API. And since the one program knows the other's API properly, no change will break the exchange of information. Netflix estimates that it uses around 700 microservices to control each of the many parts of what makes up the entire Netflix service: one microservice stores what shows you watched, one deducts the monthly fee from your credit card, one provides your device with the correct video files that it can play, one takes a look at your watching history and uses algorithms to guess a list of movies that you will like, and one will provide the names and images of these movies to be shown in a list on the main menu. And that's the tip of the iceberg. Netflix engineers can make changes to any part of the application and can introduce new changes rapidly while ensuring that nothing else in the entire service breaks down. They made a courageous decision to get rid of maintaining their own servers and move all of their stuff to the cloud, i.e., run everything on the servers of someone else who deals with maintaining the hardware while Netflix engineers write hundreds of programs and deploy them rapidly. The someone else they chose for their cloud-based infrastructure is Amazon Web Services (AWS). Netflix works on thousands of devices, and each of them plays a different format of video and sound files. Another set of AWS servers takes the original film file and converts it into hundreds of files, each meant to play the entire show or film on a particular type of device, with a particular screen size or video quality. 
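The outcome of that transcoding step, one encode per device type and quality level, is a catalog the player later chooses from. The choice amounts to filtering by device and then taking the highest bitrate the connection can sustain. A toy sketch of that selection (the variant list, field names, and numbers are all invented for illustration; this is not Netflix's actual logic):

```python
def pick_variant(variants, device, bandwidth_kbps):
    """Pick the highest-bitrate encode for this device that still
    fits within the currently available bandwidth."""
    playable = [v for v in variants
                if v["device"] == device and v["bitrate_kbps"] <= bandwidth_kbps]
    if not playable:
        return None  # nothing fits: the player would have to rebuffer
    return max(playable, key=lambda v: v["bitrate_kbps"])

# Hypothetical per-device encodes produced by the transcoding pipeline.
variants = [
    {"device": "ipad", "bitrate_kbps": 1500, "res": "720p"},
    {"device": "ipad", "bitrate_kbps": 4000, "res": "1080p"},
    {"device": "tv4k", "bitrate_kbps": 16000, "res": "2160p"},
]

print(pick_variant(variants, "ipad", 5000)["res"])  # 1080p
print(pick_variant(variants, "ipad", 2000)["res"])  # 720p
```

In the real service the player also re-measures bandwidth continuously and switches variants mid-stream, which is why quality visibly steps up or down on a flaky connection.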
One file will work exclusively on the iPad, one on a full-HD Android phone, one on a Sony TV that can play 4K video and Dolby sound, one on a Windows computer, and so on. Even more of these files can be made at varying video qualities so that they are easier to load on a poor network connection. This process is known as transcoding. A special piece of code is also added to these files to lock them with what is called digital rights management, or DRM -- a technological measure that prevents piracy of films. The Netflix app or website determines what particular device you are using to watch, and fetches the exact file for that show meant to play on your particular device, at a video quality based on how fast your internet is at that moment. Here, instead of relying on AWS servers, Netflix installs its very own servers around the world, with only one purpose -- to store content smartly and deliver it to users. Netflix strikes deals with internet service providers and provides them the red box you saw above at no cost. ISPs install these alongside their own servers. These Open Connect boxes download the Netflix library for their region from the main servers in the US -- if there are several of them, each preferentially stores the content that is most popular with Netflix users in its region, to prioritise speed. So a rarely watched film might take longer to load than a Stranger Things episode. Now, when you connect to Netflix, the closest Open Connect box delivers the content you need, so videos load faster than if your Netflix app tried to load them from the main servers in the US. In a nutshell… This is what happens when you hit that Play button: Hundreds of microservices, or tiny independent programs, work together to make one large Netflix service. Content legally acquired or licensed is converted into a size that fits your screen, and protected from being copied. 
Servers across the world make a copy of it and store it so that the closest one to you delivers it at maximum quality and speed. When you select a show, your Netflix app cherry-picks which of these servers it will load the video from. You are now gripped by Frank Underwood's chilling tactics, depressed by BoJack Horseman's rollercoaster life, tickled by Dev in Master of None and made phobic of the future of technology by the stories in Black Mirror. And your lifespan decreases as your binge watching turns you into a couch potato. It looked so simple before, right? News Roundup Moving FreshPorts (http://dan.langille.org/2017/11/15/moving-freshports/) Today I moved the FreshPorts website from one server to another. My goal is for nobody to notice. In preparation for this move, I have: DNS TTL reduced to 60s Posted to Twitter Updated the status page Put the website in offline mode What was missed: I turned off commit processing on the new server, but I did not do this on the old server. I should have run: sudo svc -d /var/service/freshports That stops processing of incoming commits. No data is lost, but it keeps the two databases at the same spot in history. Commit processing could continue during the database dumping, but that does not affect the dump, which will be consistent regardless. The offline code Here is the basic stuff I used to put the website into offline mode. The main points are: header("HTTP/1.1 503 Service Unavailable"); ErrorDocument 404 /index.php I move the DocumentRoot to a new directory, containing only index.php. Every error invokes index.php, which returns a 503 code. The dump The database dump just started (Sun Nov 5 17:07:22 UTC 2017). root@pg96:~ # /usr/bin/time pg_dump -h 206.127.23.226 -Fc -U dan freshports.org > freshports.org.9.6.dump That should take about 30 minutes. I have set a timer to remind me. 
Total time was: 1464.82 real 1324.96 user 37.22 sys The MD5 is: MD5 (freshports.org.9.6.dump) = 5249b45a93332b8344c9ce01245a05d5 It is now: Sun Nov 5 17:34:07 UTC 2017 The rsync The rsync should take about 10-20 minutes. I have already done an rsync of yesterday's dump file, so the rsync today should copy over only the deltas (i.e. differences). The rsync started at about Sun Nov 5 17:36:05 UTC 2017. It took 2m9.091s. The MD5 matches. The restore The restore should take about 30 minutes. I ran this test yesterday. It is now Sun Nov 5 17:40:03 UTC 2017. $ createdb -T template0 -E SQL_ASCII freshports.testing $ time pg_restore -j 16 -d freshports.testing freshports.org.9.6.dump Done. real 25m21.108s user 1m57.508s sys 0m15.172s It is now Sun Nov 5 18:06:22 UTC 2017. Insert break here About here, I took a 30-minute break to run an errand. It was worth it. Changing DNS I'm ready to change DNS now. It is Sun Nov 5 19:49:20 EST 2017. Done. And nearly immediately, traffic started. How many misses? During this process, XXXXX requests were declined: $ grep -c '" 503 ' /usr/websites/log/freshports.org-access.log XXXXX That's it, we're done Total elapsed time: 1 hour 48 minutes. There are still a number of things to follow up on, but that was the transfer. The new FreshPorts Server (http://dan.langille.org/2017/11/17/x8dtu-3/) *** Using bhyve on top of CEPH (https://lists.freebsd.org/pipermail/freebsd-virtualization/2017-November/005876.html) Hi, Just an info point. I'm preparing for a lecture tomorrow, and thought why not do an actual demo.... 
Like to be friends with Murphy :) So after I started the cluster: 5 jails with 7 OSDs. This is what I manually needed to do to boot from a memory stick and start a bhyve instance: rbd --dest-pool rbddata --no-progress import memstick.img memstick rbd-ggate map rbddata/memstick The ggate device is available on /dev/ggate1. kldload vmm kldload nmdm kldload if_tap kldload if_bridge kldload cpuctl sysctl net.link.tap.up_on_open=1 ifconfig bridge0 create ifconfig bridge0 addm em0 up ifconfig tap11 create ifconfig bridge0 addm tap11 ifconfig tap11 up Load the GGate disk in bhyve: bhyveload -c /dev/nmdm11A -m 2G -d /dev/ggate1 FB11 and boot a single VM from it: bhyve -H -P -A -c 1 -m 2G -l com1,/dev/nmdm11A -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap11 -s 4,ahci-hd,/dev/ggate1 FB11 & bhyvectl --vm=FB11 --get-stats Connect to the VM: cu -l /dev/nmdm11B And that'll give you a bhyve VM running on an RBD image over ggate. In the installer I tested reading from the bootdisk: root@:/ # dd if=/dev/ada0 of=/dev/null bs=32M 21+1 records in 21+1 records out 734077952 bytes transferred in 5.306260 secs (138341865 bytes/sec) which is a nice 138 MB/sec. Hope the demonstration works out tomorrow. --WjW *** Donald Knuth - The Patron Saint of Yak Shaves (http://yakshav.es/the-patron-saint-of-yakshaves/) Excerpts: In 2015, I gave a talk in which I called Donald Knuth the Patron Saint of Yak Shaves. The reason is that Donald Knuth achieved the most perfect and long-running yak shave: TeX. I figured this is worth repeating. How to achieve the ultimate Yak Shave The ultimate yak shave is the combination of improbable circumstance, the privilege to be able to shave at your heart's will, and the will to follow things through to the end. Here's the way it was achieved with TeX. The retelling is purely mine, inaccurate and obviously there for fun. I'll avoid the most boring facts that everyone always tells, such as why Knuth's checks have their own Wikipedia page. 
Community Shaving is Best Shaving Since the release of TeX, the community has been busy working on using it as a platform. If you ever downloaded the full TeX distribution, please bear in mind that you are downloading the amassed work of over 40 years, done to make sure that each and every TeX document ever written still builds. We're talking about documents here. But mostly, two big projects sprang out of that. The first is LaTeX by Leslie Lamport. Lamport is a very productive researcher, famous for research in formal methods through TLA+ and also known for laying the groundwork for many distributed algorithms. LaTeX is based on the idea of separating presentation and content. It is built around document classes, which describe the way a certain document is laid out. Think Markdown, just much more complex. The second is ConTeXt, which is far more focused on fine-grained layout control. The Moral of the Story Whenever you feel like "can't we just replace this whole thing, it can't be so hard" when handling TeX, don't forget how many years of work, and especially knowledge, were poured into that system. Typesetting isn't the most popular knowledge among programmers. Especially see it in the context of the space it is in: they can't remove legacy. Ever. That would break documents. TeX is also not a programming language. It might resemble one, but mostly it should be approached as a typesetting system first. A lot of its confusing lingo gets much better then. It's not programming lingo. By approaching TeX with an understanding of its history, a lot of things can be learned from it. And yes, a replacement would be great, but it would take ages. In any case, I hope I thoroughly convinced you why Donald Knuth is the Patron Saint of Yak Shaves. Extra Credits This comes out of an enjoyable discussion with Arne from Lambda Island (https://lambdaisland.com/), who listened and said "you should totally turn this into a talk". 
Vincent's trip to EuroBSDCon 2017 (http://www.vincentdelft.be/post/post_20171016) My euroBSDCon 2017 Posted on 2017-10-16 09:43:00 from Vincent in OpenBSD Let me just share my feedback on those 2 days spent in Paris for the EuroBSDCon. My first BSDCon. I'm not a developer, contributor, ... Do not expect to improve your OpenBSD skills with this text :-) I know, we are on October 16th, and the EuroBSDCon in Paris was 3 weeks ago :( I'm not quick!!! Sorry for that. Arriving at 10h, I'm too late for the start of the keynote. The few persons behind the desk welcome me by speaking Dutch, mainly because of my name. Indeed, Delft is a city in the Netherlands, and also a well-known university. I inform them that I'm from Belgium, and the discussion moves to the fact that Fosdem is located in Brussels. I receive my nice white and blue T-shirt, a bit like a marine T-shirt, but with the nice EuroBSDCon logo. I ask where the different rooms reserved for the BSD event are. We have one big room on the 1st floor, one medium room one level below, and two small rooms one level above. All are really easy to access. In the entrance we have 4 or 5 tables with persons representing their companies. Those are mainly the big sponsors of the event, providing details about their activities and business. I chat a little with StormShield and Gandi. At other tables people are selling BSD T-shirts, and those quickly sell out. "Is it done yet?" The never-ending story of pkg tools: having already heard Antoine and Baptiste presenting the OpenBSD and FreeBSD battle at the last Fosdem, I decide to listen to Marc Espie in the medium room called Karnak. Marc explains that he has completely rewritten the pkg_add command. He explains that, in contrast with other parts of OpenBSD, the package tools must be backward compatible and stable over a period longer than 12 months (the support period for OpenBSD). On the funny side, he explains that he gets his best ideas in the bath. 
Hackathons are also used to validate ideas with other OpenBSD developers. All in all, he explains that the most time-consuming part is imagining a good solution; coding it is quite straightforward. He adds that the better an idea is, the shorter the implementation will be. A Tale of six motherboards, three BSDs and coreboot After lunch I decide to listen to the talk about Coreboot. Indeed, 1 or 2 years ago I had listened to the Libreboot project talk at Fosdem. Since they made several references to Coreboot, it's a perfect occasion to hear more about this project. Piotr and Katarzyna Kubaj explain to us how to boot a machine without the native BIOS. Indeed, Coreboot can replace the BIOS, and thereby avoid several binaries imposed by the vendor. They explain that some motherboards support their code, but they also show how difficult it is to flash a BIOS and replace it with Coreboot. They even destroyed a motherboard during an installation, apparently because the power supply they were using was not stable enough on the 3V line. It's really amazing to see that open source developers can go, by themselves, to such a deep technical level. State of the DragonFly's graphics stack After this Coreboot talk, I decide to stay in the room to follow the presentation of François Tigeot. François is now one of the core developers of DragonFlyBSD, an amazing BSD system with its own filesystem, called Hammer. Hammer offers several amazing features like snapshots, checksum data integrity, deduplication, ... François has spent the last few years integrating the video drivers developed for Linux into DragonFlyBSD. He explains that instead of adapting this video-card code to the kernel API of DragonFlyBSD, he has "simply" built an intermediate layer between the kernel of DragonFlyBSD and the video drivers. This is not said in the talk, but the effort is very impressive: it is more or less a Linux emulator inside DragonFlyBSD. 
François explains that he started with the Intel video driver (drm/i915), but now he is able to run drm/radeon quite well, and also drm/amdgpu and drm/nouveau. Discovering OpenBSD on AWS Then I move to the small room on the upper level to follow a presentation by Laurent Bernaille on OpenBSD and AWS. First, Laurent explains that he is re-using the work done by Antoine Jacoutot on integrating OpenBSD into AWS, but on top of that he has integrated several other open source solutions allowing him to build OpenBSD machines very quickly with one command. Moreover, those machines come up with the network config, the required packages, ... On top of the slides, he shows us in a live demo how this system works. An amazing presentation, which shows that, by putting the correct tools together, one machine can build and configure other machines in one go. OpenBSD Testing Infrastructure Behind bluhm.genua.de Here Jan Klemkow explains that he has set up a lab where he is able to run different OpenBSD architectures. The system has been designed to install, on demand, a given version of OpenBSD on the different available machines. On top of that, a regression test script can be triggered. This produces reports showing what is working and what is no longer working on the different machines. If I've understood well, Jan is willing to provide such a lab to the core developers of OpenBSD in order to allow them to validate their code easily and quickly. Some more effort is needed to reach this goal, but with what exists today, Jan and his colleagues are quite close. Since his company uses OpenBSD in its business, in his eyes this system is a "tit for tat" to the OpenBSD community. French story on cybercrime Then comes the second keynote of the day in the big auditorium. This talk is given by a colonel of the French gendarmerie, Mr Freyssinet, who is head of the cybercrime unit inside the Gendarmerie. 
Mr Freyssinet explains that the "bad guys" are more and more mobile across countries, and more and more organized. The lone hacker in his room is no longer the reality. As a consequence, the different national police investigators are collaborating more inside an organization called Interpol. What is striking in his talk is that Mr Freyssinet speaks of "Crime as a service": more and more hackers are selling their services to "bad and temporary organizations". Social event It's now time for the famous social event on the river: la Seine. The organizers ask us to walk, in small groups, to a station -- a 15-minute walk through Paris. Fortunately the weather is perfect. To identify themselves clearly, several organizers carry a "beastie fork" as they walk along the sidewalk, generating some amazing reactions from citizens and tourists. Some of them recognize the FreeBSD logo and ask us for details. Amazing :-) We walk along small and big sidewalks until a small stair leads under the street. There we find a train station, a bit like a metro station. Three stations later they ask us to get out. We walk a few minutes and arrive in front of a boat with a double deck: one inside, with nice tables and chairs, and one on the roof. The crew asks us to go up to the second deck, where we are welcomed with a glass of wine. The tour Eiffel is just a few hundred meters from us. Every hour the Eiffel tower blinks for 5 minutes with thousands of small lights. Brilliant :-) We also see the "statue de la liberté" (the small one), which is on a small island in the middle of the river. During the whole night the bar is open with drinks and some appetizers, snacks, ... Such a walking dinner is perfect for talking with many different persons. I've discussed with several persons who just use BSD; they are, like me, not deep and specialized developers. One was from Switzerland, another from Austria, and another from the Netherlands. 
But I've also followed a discussion with Theo de Raadt and several persons of the FreeBSD Foundation. Some are very technical guys, others just users, like me, but all with the same passion for one of the BSD systems. Amazing evening. OpenBSD's small steps towards DTrace (a tale about DDB and CTF) On the second day, I decide to sleep enough to have the resources to drive back home (3 hours by car). So I miss the first presentations, and arrive at the event around 10h30. Lots of persons are already present; some faces are less "fresh" than others. I decide to listen to DTrace in OpenBSD. After 10 minutes I am so lost in the very technical explanations that I decide to open my PC and look at it instead. My OpenBSD laptop rarely leaves my home, so I've never needed a screen-locking system -- but in a crowded environment, it is better to have one, so I was looking for a simple solution. I've looked at how to use xlock and combined it with the /etc/apm/suspend script, ... Always very easy to use, OpenBSD :-) The OpenBSD web stack Then I decide to follow the presentation of Michael W Lucas, well known for his different books: "Absolute OpenBSD", "Relayd", ... Michael talks about the httpd daemon inside OpenBSD, but he also presents its integration with CARP, Relayd, PF, FastCGI, and rules based on Lua regexps (as opposed to Perl regexps), ... For sure he emphasises the security aspects of those tools: privilege separation, chroot, ... OpenSMTPD, current state of affairs Then I follow the presentation of Gilles Chehade about the OpenSMTPD project. An amazing presentation that, on top of the technical challenges, shows how to manage such a project across the years. Gilles has been working on OpenSMTPD since 2007, thus 10 years!!! He explains the different decisions they took to make the software as simple as possible to use, but as secure as possible too: privilege separation, chroot, pledge, random malloc, ... 
Development started on BSD systems, but once the project became quite well known, they received a lot of contributions from Linux developers. Hoisting: lessons learned integrating pledge into 500 programs After a small break, I decide to listen to Theo de Raadt, the founder of OpenBSD -- in his own style, with trekking boots, shorts, backpack. Theo starts by saying that pledge is the outcome of nightmares. Theo explains that the paper "Hacking Blind", presenting BROP (blind return-oriented programming), has worried him for a few years. That's why he developed pledge as a tool that kills a process as soon as possible when there is unforeseen behavior in the program. For example, with pledge, a program which can only write to disk will be immediately killed if it tries to reach the network. By implementing pledge in the roughly 500 programs present in the "base", OpenBSD is becoming more secure and more robust. Conclusion My first EuroBSDCon was a great, interesting and cool event. I've discussed with several BSD enthusiasts. I've been using OpenBSD since 2010, but I'm not a developer, so I was worried about being "lost" in the middle of experts. In fact it was not the case; at EuroBSDCon you meet many different types of BSD enthusiasts and users. What is nice with EuroBSDCon is that the organizers foresee everything for you; you just have to sit and listen. They even plan how to spend the Saturday evening in a fun and very cool way. The small drawback is that all of this has a cost. In my case the whole weekend cost me a bit more than 500 euro. Based on what I've learned and seen, this is a very acceptable price. Nearly all the presentations I saw gave me valuable input for my daily job. For sure, the total price is also linked to my personal choices: hotel, parking. And I'm surely biased because I'm used to going to Fosdem in Brussels, which costs nothing (entrance) and is approximately 45 minutes from my home. But Fosdem does not have the same atmosphere, and its presentations are less linked to my daily job. 
I do not regret my trip to EuroBSDCon and will surely plan other ones. Beastie Bits Important munitions lawyering (https://www.jwz.org/blog/2017/10/important-munitions-lawyering/) AsiaBSDCon 2018 CFP is now open, until December 15th (https://2018.asiabsdcon.org/) ZSTD Compression for ZFS by Allan Jude (https://www.youtube.com/watch?v=hWnWEitDPlM&feature=share) NetBSD on Allwinner SoCs Update (https://blog.netbsd.org/tnf/entry/netbsd_on_allwinner_socs_update) *** Feedback/Questions Tim - Creating Multi Boot USB sticks (http://dpaste.com/0FKTJK3#wrap) Nomen - ZFS Questions (http://dpaste.com/1HY5MFB) JJ - Questions (http://dpaste.com/3ZGNSK9#wrap) Lars - Hardening Diffie-Hellman (http://dpaste.com/3TRXXN4) ***
Leslie Lamport wrote a paper describing the Paxos algorithm in plain English. This is an algorithm that guarantees that a distributed system reaches consensus on the value of an attribute. Lamport claims that the algorithm can be derived from the problem statement, which would imply that it is the only algorithm that can achieve these results. Indeed, many of the distributed systems that we use today are based at least in part on Paxos. Linq not only introduced the ability to query a database directly from C#. It also introduced a set of language features that were useful in their own right. Lambda expressions did not exist in C# prior to 3.0, but now they are preferred to their predecessor, anonymous delegates. Extension methods were added to the language specifically to make Linq work, but nevertheless provide an entirely new form of expressiveness. Linq works because its parts were designed to compose. This is not quite so true of language features that came later, like async and primary constructors. Fermat was quite fond of writing little theorems and not providing their proofs. One such instance, known as "Fermat's Little Theorem", turns out to be fairly easy to prove, but provides the basis for much of cryptography. It states that a base not divisible by a prime, raised to one less than that prime, is always equal to 1 in the prime's modulus. This is useful in generating a good trap door function, where we can easily compute the modular exponential of a number, but not the discrete logarithm.
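To make the theorem concrete, here is a quick numeric check in Python (the primes and bases are arbitrary illustrative choices); the built-in three-argument pow performs fast modular exponentiation:

```python
# Fermat's Little Theorem: for a prime p and a base a not divisible by p,
# a^(p-1) mod p == 1. pow(a, e, m) computes modular exponentiation quickly.
for p in (5, 13, 97):            # a few small primes
    for a in (2, 3, 7):          # bases not divisible by these primes
        assert pow(a, p - 1, p) == 1

# The trap-door flavor: the forward direction (modular exponentiation) is
# cheap even for large numbers, while recovering the exponent from the
# result (the discrete logarithm) has no comparably fast general method.
print(pow(3, 96, 97))  # -> 1
```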
Claude Shannon followed up one incredibly important paper with a second of even greater significance. In Communication Theory of Secrecy Systems, he analyzes cryptosystems based on the probabilities of certain plaintext messages given an intercepted ciphertext. Understanding this form of analysis will help us to design more effective systems. The Lambda Calculus computes using nothing but symbol replacement. If we are going to run programs like a computer, we need to express conditional branches. We can represent the value "true" as the function λa.λb.a: in other words, the function that returns the first of two arguments. Similarly, the value "false" is represented by the function λa.λb.b. To create a conditional "if-else" statement, capture the two branches and then apply the third argument to select between them: λa.λb.λc.c a b. Suppose that you needed to reach an agreement among several people by passing messages. Now suppose that some of those people could not be trusted. Under what conditions could you find a protocol to reach an agreement? Leslie Lamport, Robert Shostak, and Marshall Pease studied the Byzantine Generals Problem to determine how to design algorithms for distributed systems.
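The Church encodings above can be tried directly in any language with first-class functions; here is a small Python sketch mirroring the lambda terms in the text (the names TRUE, FALSE, and IF are just labels for this illustration):

```python
# Church booleans: "true" returns its first argument, "false" its second.
TRUE = lambda a: lambda b: a    # λa.λb.a
FALSE = lambda a: lambda b: b   # λa.λb.b

# The if-else combinator from the text: capture the two branches, then
# apply the third argument (the condition) to select between them.
IF = lambda a: lambda b: lambda c: c(a)(b)   # λa.λb.λc.c a b

assert IF("then-branch")("else-branch")(TRUE) == "then-branch"
assert IF("then-branch")("else-branch")(FALSE) == "else-branch"
```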
This episode is a republication from my interview with Leslie Lamport on Software Engineering Radio. Leslie Lamport won a Turing Award in 2013 for his work in distributed and concurrent systems. He also designed the document preparation tool LaTeX. Leslie is employed by Microsoft Research, and has recently been working with TLA+, a language that is useful for specifying concurrent systems from a high level. The post Distributed Systems with Leslie Lamport appeared first on Software Engineering Daily.
This episode is sponsored by 思客教学, an education service focused on remote, apprenticeship-style IT training. The episode is hosted by Terry, who invites his best friend Jan to talk about the technology behind Bitcoin: distributed systems, algorithms, and blockchain. Intridea Peatio ethfans LMAX Disruptor archlinux bspwm plan9 ranger Is bitcoin a good idea? Merkle tree Linked list Hash list Mixing Elliptic Curve Digital Signature Algorithm (ECDSA) Checksum RSA Zerocash Zero-knowledge proof The Byzantine Generals Problem Leslie Lamport LaTeX TeX Donald Knuth Lamport signature PoW's pros and cons PoS's pros and cons DAO and DAC scrypt Proof-of-stake Vitalik Buterin Ethereum gollum Nick Szabo's Smart Contracts Idea Bitcoin Script Special Guest: Jan.
Episode 88 is about a new and improved version of a really old internet service, namely the Internet Relay Chat service, IRC for short. It has existed for almost 40 years now. With RobustIRC there is now an implementation that does a number of things better while remaining backwards compatible. Michael "Secure" Stapelberg, one of the main developers, was a guest on Hackerfunk and explains what RobustIRC does differently to make it, well, more robust. Tracklist Paniq – Future Of The Internet Hooligan – Shining Level 99 – Radiance Monotron – Only You Intro Intro :: Max Kuttner - Die schöne Adrienne hat auch UKW-Wochen RobustIRC :: RobustIRC RobustIRC Github :: RobustIRC on Github RFC 2812 :: RFC 2812 Internet Relay Chat: Client Protocol IRC Networks Top 100 :: The 100 largest IRC networks RobustSession :: RobustSession protocol Raft :: Raft Consensus Algorithm Paxos :: Leslie Lamport on Paxos Go :: The Go programming language Docker :: Docker # :: The hash sign (Gartenhag, Rautezeichen, Doppelkreuz) Känguru Chroniken :: The Känguru wiki Neues vom Känguru :: Podcast at Radio Fritz Apple Watch :: Wotsch ä Watch? Buy some Apples! File Download (108:29 min / 167 MB)
Knowing that a directed graph is acyclic is useful. If you construct directed graphs in a certain way, then you can prove that they have no cycles. Design systems to produce constructable graphs, and you can take advantage of all of the theorems known to be true of DAGs. Alan Turing proved that there could not be a general procedure to determine whether an expression in first-order predicate calculus is satisfiable. This extends Gödel's Incompleteness Theorem by asserting not only that there are some true statements that cannot be proven, but also that there is no way to decide whether a statement in general even has a proof. In the process of constructing this proof, Alan Turing defined the Universal Machine, which can perform any mechanical computation. This is the genesis of the computers that we use today. Leslie Lamport finishes off his paper on Time, Clocks, and the Ordering of Events in a Distributed System by showing how to synchronize physical clocks. This is useful to avoid anomalies caused by out-of-band communication. He calculates how quickly we can expect the clocks to become synchronized, so that we can set expectations and improve our designs.
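One way to "construct graphs in a certain way" so they are acyclic by construction is to only ever add edges from a brand-new node to nodes that already exist: every edge then points backwards to an older node, so no path can loop. A minimal Python sketch (the class and method names here are invented for illustration, not from the source):

```python
class AppendOnlyDag:
    """A directed graph that cannot contain a cycle, by construction."""
    def __init__(self):
        self.edges = {}  # node -> set of earlier nodes it points to

    def add_node(self, name, points_to=()):
        if name in self.edges:
            raise ValueError("nodes may be added only once")
        for target in points_to:
            if target not in self.edges:
                raise ValueError("edges may only point to existing nodes")
        self.edges[name] = set(points_to)

g = AppendOnlyDag()
g.add_node("a")
g.add_node("b", points_to=["a"])
g.add_node("c", points_to=["a", "b"])  # every edge targets an older node
```

Because every edge goes from a newer node to a strictly older one, any path keeps moving to older nodes and can never return to its start, so the graph is a DAG and all DAG theorems apply.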
In a directed acyclic graph, we can use reachability to determine a partial order. But sometimes we want a total ordering of the nodes to accomplish some result. There are usually many possible total orderings that satisfy the partial ordering. Topological sorting algorithms can help us discover them, whether we want just one, or all possible orderings. Unit tests can only cover one scenario. Mathematical proof, while it generalizes to many scenarios, is not always feasible. Thankfully, we have tools that will search the input space to help us find problems in our code. Parameterized unit tests come in two flavors: generative tests and exploratory tests. Generative testing tools, like QuickCheck and Test.Check, try random inputs. Exploratory testing tools, like Pex and Code Digger, discover scenarios that cover as many branches in the code as possible. These tools are a good addition to the other two techniques, helping us fill the gap between them. Leslie Lamport used logical clocks in a distributed system to solve the mutual exclusion problem. He used the partial order of "happened before" to construct one possible total ordering of events. He then applied this total ordering to the mutual exclusion problem so that processes can determine whether they own the shared resource.
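The idea of turning a partial order into one valid total order can be demonstrated with Python's standard-library topological sorter (the task names below are invented for illustration):

```python
from graphlib import TopologicalSorter

# Partial order: 'compile' must precede 'test' and 'lint';
# both must precede 'release'. Each key maps a node to its predecessors.
deps = {
    "test": {"compile"},
    "lint": {"compile"},
    "release": {"test", "lint"},
}

# static_order() yields one total ordering consistent with the constraints.
order = list(TopologicalSorter(deps).static_order())
assert order.index("compile") < order.index("test") < order.index("release")
assert order.index("compile") < order.index("lint") < order.index("release")
print(order)
```

Note that 'test' and 'lint' are unordered relative to each other, so either may come first: that freedom is exactly the difference between the partial order and a total one.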
A totally ordered set is one in which we can tell, for any two members, which one comes before the other. Think integers. A partially ordered set, however, only gives us an answer for some pairs of elements. A directed acyclic graph defines a partial order over its nodes. Find out how to compute this partial order, and some of the ways in which it can be applied. The number of degrees of freedom in a system of equations is equal to the number of unknowns minus the number of equations. We can add unknowns, as long as we also add the same number of equations, without changing the behavior of the system. Then we can rewrite the equations, again without changing behavior, to solve for some of those unknowns in terms of others. Doing so, we divide the system into independent and dependent variables. The number of degrees of freedom is equal to the number of independent variables. In 1978, Leslie Lamport wrote "Time, Clocks, and the Ordering of Events in a Distributed System". In this paper, he constructs a clock that gives us a way of saying which events in a distributed system happened before which others. See how he did it, and start to imagine why this might be useful.
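A minimal sketch of the clock from that paper (an illustration, not Lamport's original presentation): each process keeps a counter, increments it on every local event and send, and on receive jumps just past the message's timestamp, so timestamps respect the "happened before" relation:

```python
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):                  # a local event or a send
        self.time += 1
        return self.time

    def receive(self, msg_time):     # merge in the sender's timestamp
        self.time = max(self.time, msg_time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t_send = p.tick()           # p sends a message stamped 1
t_recv = q.receive(t_send)  # q's receive is stamped 2
assert t_send < t_recv      # the send "happened before" the receive
```

Breaking ties between equal timestamps (say, by process id) then yields one possible total ordering of all events, which is what Lamport's mutual exclusion algorithm builds on.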
Leslie Lamport won a Turing Award in 2013 for his work in distributed and concurrent systems. He also designed the document preparation tool LaTeX. Leslie is employed by Microsoft Research, and has recently been working with TLA+, a language that is useful for specifying concurrent systems from a high level. The interview begins with a […]