Podcasts about Lamport

  • 55 podcasts
  • 84 episodes
  • 44m average duration
  • Infrequent episodes
  • Latest: Dec 16, 2024

POPULARITY

[Popularity chart, 2017-2024]



Latest podcast episodes about Lamport

Dads Lads & Kebabs
E132 - Devils Bridge, Lamport Hall & Christmas Mad Libs

Dec 16, 2024 · 34:11


Two mates, Niall & Miki, discussing the day-to-day struggles in life from a man's point of view. This week the boys discuss Miki's visit to Devils Bridge, the hell that was Lamport Hall, and the viewers' Christmas Mad Libs forms. Hope you enjoy... Support the show

Cinémaradio LE podcast cinéma
He inspired Zorro: William Lamport

Sep 8, 2024 · 13:11


Today we're taking a look at Zorro! Who was Zorro? Did he really exist? Who inspired Zorro's creator? Find out in this episode of La Petite Histoire! Find all our La Fabrik Audio podcasts on our site and listen on Cinémaradio. Feel free to support this podcast by rating it on iTunes and Apple Podcasts and by leaving comments. ;-) Zorro, it turns out, may have been Irish... In any case, it is possible that Zorro's creator drew inspiration from William Lamport, an Irishman born at the start of the 17th century whose turbulent life was simply fascinating! William Lamport studied in London. Highly educated, by the age of 21 he spoke no fewer than 14 languages. In 1627, suspected of Catholicism, Lamport was arrested in London for distributing Catholic pamphlets. He decided to flee Britain for Spain, where he took the name Guillén Lombardo. He quickly attracted the attention of a marquis, who got him a place in an Irish regiment sponsored by Spain. In that role he fought against Swedish forces in the Spanish Netherlands. During one battle he was noticed by the Count-Duke of Olivares, then chief minister to Philip IV of Spain, who offered to place him in the king's service. The king soon sent him to Mexico as the Crown's official spy, tasked with reporting on the political situation in the region. This was 1640, and political events deserved close attention, since the situation in Mexico was politically difficult: Mexico was then a viceroyalty governed by a viceroy who shamelessly plundered the state's coffers. Around 1641, Guillén Lombardo began plotting to overthrow the viceroy, trying to persuade the Indians, the Black population and the Creole merchants to join the uprising. To raise money for this costly venture, he took to trafficking tobacco by night while leading the life of a well-to-do gentleman by day. And it seems that at this point William Lamport, alias Guillén Lombardo, was signing his documents with a... Z, for "Ziza", a Hebrew term used by Freemasons to denote light, the life force. While trying to rouse the people against the viceroy of Mexico, he was denounced and arrested, and spent eight years in a dungeon before escaping on Christmas Eve 1650 with his cellmate, a certain Diego Pinto Bravo. But he was recaptured, and the Mexican Inquisition condemned him to be burned at the stake... So ends the adventure of William Lamport / Guillén Lombardo / Zorro! Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

WFAN: On-Demand
Ann Liguori with Mark Lamport

Jul 21, 2024 · 14:01


Ann talks with Mark Lamport-Stokes, long-time golf journalist, about Tiger missing the cut and why he continues to play; whether Rory McIlroy, who also missed the cut, will ever win another Major; and the R&A trying not to go overboard with the amount of prize money at the Open Championship.

ASecuritySite Podcast
World-leaders in Cryptography: Leslie Lamport

May 10, 2024 · 65:10


Please excuse the poor quality of my microphone, as the wrong microphone was selected. In research, we are all just building on the shoulders of true giants, and there are few larger giants than Leslie Lamport, the creator of LaTeX. For me, every time I open up a LaTeX document, I think of the work he did in creating it, which makes my research work so much more productive. If I were still stuck with Microsoft Office for research, I would spend half of my time in that horrible equation editor, or trying to integrate the references into the required format, or formatting Header 1 and Header 2 to have a six-point spacing underneath. So, for me, the contest between LaTeX and Microsoft Word is a knock-out in the first round. And one of the great things about Leslie is that his work is strongly academic, providing foundations for others to build on. He did a great deal of work on the ordering of events and task synchronisation, on state machines, on cryptographic signatures, and on fault tolerance.

LaTeX. I really can't say enough about how much LaTeX, created in 1984, helps my work. I am writing a few books just now, and it allows me to lay out the books in the way that I want to deliver the content. There's no need for further mark-up, as I work on the output that the reader will see. But the true genius of LaTeX is the way that teams can work on a paper, with files synced to GitHub and version control embedded.

Clocks. Many in the research community think that the quality measure of a paper is the impact factor of the journal it is submitted to, or the amount of maths it contains. But, in the end, it is the impact of the paper that counts, and how it changes thinking. Leslie's 1978 paper on clocks changed our scientific world and is one of the most cited papers in computer science.

Byzantine Generals Problem. In 1982, Leslie Lamport, with Robert Shostak and Marshall Pease, defined the Byzantine Generals Problem. And in a research world where you can have hundreds of references in a paper, the paper used only four (which would probably not be accepted these days for having so few references). Within this paper, the generals of a Byzantine army have to agree on their battle plan in the face of traitors passing on false orders. The key result is that the loyal generals can always reach the correct battle plan provided that fewer than one-third of the generals are traitors.

The Lamport Signature. Sometime soon, we perhaps need to wean ourselves off our existing public-key methods and look to techniques that are more challenging for quantum computers. With the implementation of Shor's algorithm on quantum computers, we will see our RSA and Elliptic Curve methods replaced by methods which are quantum robust. One such method is the Lamport signature, created by Leslie B. Lamport in 1979.
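
The clocks idea is easy to sketch in code. Below is a minimal illustration of the two rules from the 1978 paper (my own sketch, not anything from the episode; the name LamportClock is just a convenient label): a process increments its counter before each local event, and on receiving a message it sets its counter to one more than the maximum of its own counter and the timestamp carried by the message.

# A minimal sketch of Lamport logical clocks (the 1978 "Time, Clocks" paper).

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        # Rule 1: increment before every local event.
        self.time += 1
        return self.time

    def send(self):
        # Sending is itself an event; the message carries the new timestamp.
        return self.tick()

    def receive(self, msg_time):
        # Rule 2: jump past both our own time and the message's timestamp.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
a.tick()               # a local event on A   -> A.time == 1
t = a.send()           # A sends a message    -> t == 2
b.receive(t)           # B receives it        -> B.time == 3
print(a.time, b.time)  # prints: 2 3

The resulting timestamps respect the happened-before relation: any event that causally precedes another gets a smaller timestamp, which is exactly the ordering result the episode describes.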

Clutter Corner - Organize, Clean and Transform Your Home
Letting Go of Fashion Clutter with Imogen Lamport

Mar 29, 2024 · 7:35


Today's episode is a guide to letting go of fashion clutter and embracing your lifestyle. This enlightening conversation between Angela Brown and international image consultant Imogen Lamport explores the complexities of managing a wardrobe over time. Imogen shares valuable insights on the challenges of holding onto clothing items for too long, particularly when trends and lifestyles evolve. She emphasizes the importance of assessing whether garments still suit one's current lifestyle rather than merely focusing on their past cost or potential future use. Imogen highlights the significance of understanding personal preferences and developing criteria for smarter shopping decisions, advocating for quality over quantity to avoid accumulating unworn items. This discussion offers practical advice and thought-provoking perspectives, offering a roadmap for decluttering closets, aligning wardrobes with lifestyle needs, and making more intentional fashion choices. #HoardingWorld #AskAngelaBrown

Chapters:
00:00:00 Introduction to Image Consultation
00:00:59 Fashion Trends Over Time
00:01:51 Adapting to Climate and Lifestyle Changes
00:03:11 Assessing Winter Wardrobe Needs
00:04:32 Quality Over Quantity in Clothing Choices
00:06:09 Importance of Setting Buying Criteria

SPONSORSHIPS & BRANDS
Today's #ClutterCornerLive sponsor is #SavvyCleaner training for house cleaners and maids. - https://savvycleaner.com/join
And your host today is #AngelaBrown - https://g.page/r/CbMI6YFuLU2GEBI/review

HELPFUL RESOURCES ("Paid Link")
Organizing for Your Lifestyle: Adaptable Inspirations from Socks to Suitcases: https://amzn.to/3IDvwzH
Decluttering For Dummies: https://amzn.to/3Tj20Ea
A No Nonsense Guide to Organizing Your Home: https://amzn.to/42b0l7G
The Clutter Fix: https://amzn.to/3TKyp8u
Real Life Organizing: https://amzn.to/3TJx4yW

REPROGRAM YOUR MIND SLEEP TAPES
The New Normal Sleep Tape - https://youtu.be/ebLJJA6rUHw
The New Tape | Affirmations Of A Clean and Orderly Home | "I AM" - https://youtu.be/n13ZBvaCMjw
* When available, we use affiliate links, and as Amazon Associates, we earn on qualifying purchases.

SOCIAL MEDIA
CONNECT WITH IMOGEN LAMPORT
Website: https://insideoutstyleblog.com/
Facebook Page: https://www.facebook.com/InsideOutStyleBlog/
Instagram: https://www.instagram.com/insideoutstyleblog/
Twitter: https://twitter.com/ImogenLamport
Pinterest: https://www.pinterest.com/imogenlamport/
YouTube: https://www.youtube.com/imogenlamport
LinkedIn: https://www.linkedin.com/in/imogenlamport/

CONNECT WITH HOARDING WORLD
HoardingWorld Support Group on Facebook: https://www.facebook.com/groups/hoardingworld
YouTube: https://www.youtube.com/@HoardingWorld
Facebook Page: https://Facebook.com/HoardingWorld
Twitter: https://twitter.com/HoardingWorld
Instagram: https://Instagram.com/hoarding.world
Pinterest: https://www.pinterest.com/HoardingWorld
TikTok: https://www.tiktok.com/@HoardingWorld
Hashtags: #ClutterCorner #HoardingWorld

CONNECT WITH ANGELA BROWN
YouTube: https://www.youtube.com/@AskAngelaBrown
Facebook Page: https://Facebook.com/AskAngelaBrown
Twitter: https://Twitter.com/AskAngelaBrown
Instagram: https://Instagram.com/AskAngelaBrown
Pinterest: https://Pinterest.com/AskAngelaBrown
Linkedin: https://www.linkedin.com/in/AskAngelaBrown
TikTok: https://www.tiktok.com/@AskAngelaBrown
Amazon Store: https://amazon.com/shop/AngelaBrown

ADVERTISE WITH US
We do work with sponsors and brands. If you are interested in working with us and you have a product or service that makes sense for the decluttering or hoarding space, here's how to work with us: https://savvycleaner.com/brand-deals

SAVVY CLEANER BRANDS
SAVVY CLEANER - House Cleaner Training and Certification: https://savvycleaner.com/join
VRBO AIRBNB CLEANING - Cleaning tips and strategies for your short-term rental: https://TurnoverCleaningTips.com
HOARDING WORLD - Helping you change your relationship with stuff: https://HoardingWorld.com
REALTY SUCCESS HUB - Helping you sell your home fast: https://realtysuccesshub.com

CREDITS
Show Produced by: Savvy Cleaner: https://savvycleaner.com
Show Host: Angela Brown
Show Editors: PJ Barnes & Sally K. Naidu
Show Producer: Sally K. Naidu

DISCLAIMER: This work is not intended to substitute for professional medical or counseling advice. If you suffer from a physical or mental illness, please always seek professional help.

Clutter Corner - Organize, Clean and Transform Your Home
Organize Your Closet for Every Era with Imogen Lamport

Mar 20, 2024 · 6:00


How do you organize your closet for every era? In this enlightening conversation, Angela Brown engages with renowned international image consultant Imogen Lamport to unveil the intricacies of fashion trends and the pitfalls of falling for passing fads. Imogen delves into the distinction between timeless classics, enduring trends, and fleeting fads, shedding light on how individuals often succumb to the allure of transient styles. Through insightful examples like the evolution of the classic black pants and the rise and fall of colored jeans, she underscores the importance of discerning wearable trends from extreme fads. Imogen's advice extends to practical strategies for evaluating one's wardrobe's relevance, urging listeners to consider their body shape, coloring, and personality when making fashion choices. Focusing on empowerment through style education, Imogen equips individuals with the tools to curate wardrobes that authentically reflect their identity, enhancing confidence and comfort in personal expression. As the discussion concludes, listeners depart armed with newfound insights, ready to navigate the ever-changing landscape of fashion with savvy and self-assurance. #HoardingWorld #AskAngelaBrown

Chapters:
00:00:00 Introduction to Image Consulting
00:00:54 Fashion Classics vs. Trends vs. Fads
00:02:55 Identifying In vs. Out of Fashion
00:03:40 Choosing Trends that Suit Your Style
00:04:07 Understanding Body Shape, Color, and Personality

SPONSORSHIPS & BRANDS
Today's #ClutterCornerLive sponsor is #SavvyCleaner training for house cleaners and maids. - https://savvycleaner.com/join
And your host today is #AngelaBrown - https://g.page/r/CbMI6YFuLU2GEBI/review

HELPFUL RESOURCES ("Paid Link")
Organizing for Your Lifestyle: Adaptable Inspirations from Socks to Suitcases: https://amzn.to/3IDvwzH
Decluttering For Dummies: https://amzn.to/3Tj20Ea
A No Nonsense Guide to Organizing Your Home: https://amzn.to/42b0l7G
The Clutter Fix: https://amzn.to/3TKyp8u
Real Life Organizing: https://amzn.to/3TJx4yW

REPROGRAM YOUR MIND SLEEP TAPES
The New Normal Sleep Tape - https://youtu.be/ebLJJA6rUHw
The New Tape | Affirmations Of A Clean and Orderly Home | "I AM" - https://youtu.be/n13ZBvaCMjw
* When available, we use affiliate links, and as Amazon Associates, we earn on qualifying purchases.

SOCIAL MEDIA
CONNECT WITH IMOGEN LAMPORT
Website: https://insideoutstyleblog.com/
Facebook Page: https://www.facebook.com/InsideOutStyleBlog/
Instagram: https://www.instagram.com/insideoutstyleblog/
Twitter: https://twitter.com/ImogenLamport
Pinterest: https://www.pinterest.com/imogenlamport/
YouTube: https://www.youtube.com/imogenlamport
LinkedIn: https://www.linkedin.com/in/imogenlamport/

CONNECT WITH HOARDING WORLD
HoardingWorld Support Group on Facebook: https://www.facebook.com/groups/hoardingworld
YouTube: https://www.youtube.com/@HoardingWorld
Facebook Page: https://Facebook.com/HoardingWorld
Twitter: https://twitter.com/HoardingWorld
Instagram: https://Instagram.com/hoarding.world
Pinterest: https://www.pinterest.com/HoardingWorld
TikTok: https://www.tiktok.com/@HoardingWorld
Hashtags: #ClutterCorner #HoardingWorld

CONNECT WITH ANGELA BROWN
YouTube: https://www.youtube.com/@AskAngelaBrown
Facebook Page: https://Facebook.com/AskAngelaBrown
Twitter: https://Twitter.com/AskAngelaBrown
Instagram: https://Instagram.com/AskAngelaBrown
Pinterest: https://Pinterest.com/AskAngelaBrown
Linkedin: https://www.linkedin.com/in/AskAngelaBrown
TikTok: https://www.tiktok.com/@AskAngelaBrown
Amazon Store: https://amazon.com/shop/AngelaBrown

ADVERTISE WITH US
We do work with sponsors and brands. If you are interested in working with us and you have a product or service that makes sense for the decluttering or hoarding space, here's how to work with us: https://savvycleaner.com/brand-deals

SAVVY CLEANER BRANDS
SAVVY CLEANER - House Cleaner Training and Certification: https://savvycleaner.com/join
VRBO AIRBNB CLEANING - Cleaning tips and strategies for your short-term rental: https://TurnoverCleaningTips.com
HOARDING WORLD - Helping you change your relationship with stuff: https://HoardingWorld.com
REALTY SUCCESS HUB - Helping you sell your home fast: https://realtysuccesshub.com

CREDITS
Show Produced by: Savvy Cleaner: https://savvycleaner.com
Show Host: Angela Brown
Show Editors: PJ Barnes & Sally K. Naidu
Show Producer: Sally K. Naidu

DISCLAIMER: This work is not intended to substitute for professional medical or counseling advice. If you suffer from a physical or mental illness, please always seek professional help.

Pleb UnderGround
Can Lamport-Schemes Fix Bitcoin's Scaling Problem? | Guest: Yuri S Villas Boas | EP 75

Feb 19, 2024 · 74:59


► Can Lamport Schemes Fix Bitcoin's Scaling Problem? We are joined by fellow bitcoiner and coder Yuri S Villas Boas to discuss Yuri's solution to Bitcoin's scaling issue, the LVBsig Lamport scheme. He proposes a way to decrease overall transaction footprints, which could lead to lower on-chain fees. We also dive into the laser-eyed Joe Biden meme, Senator Warren's Satoshi endorsement, and a whole lot more!
✔ Special Guest/Fireside Chat: ► Twitter Handle @yurivillasboas ► LVBsig: https://github.com/Yuri-SVB/LVBsig/ ► Coercion-resistance project: https://linktr.ee/greatwallt3 ► email: yurisvb@pm.me
✔ Numbers: ► https://twitter.com/BitcoinMagazine/status/1758152940454330708 ► https://twitter.com/mrhodl/status/1758156506120266109?t=CKH2brGypO5fEYTgQ-EFhQ ► https://twitter.com/markgoodw_in/status/1758260558179004830?t=CKH2brGypO5fEYTgQ-EFhQ ► https://x.com/hodlmagoo/status/1758234845430284332?s=46&t=CKH2brGypO5fEYTgQ-EFhQ
✔ REKT: ► https://twitter.com/martybent/status/1756792033996366039?s=46 ► https://twitter.com/pledditor/status/1757954234211835916?s=46 ► https://twitter.com/reardencode/status/1758262607772024868?s=46 ► https://twitter.com/arthur_van_pelt/status/1757774517638717502?s=46
✔ Bitcoin Hopium: ► https://twitter.com/bitcoinundisc/status/1758121847692861553?t=CKH2brGypO5fEYTgQ-EFhQ ► https://twitter.com/leishman/status/1758169797462794702?t=CKH2brGypO5fEYTgQ-EFhQ ► https://twitter.com/ghost_prick/status/1758189028388557183?t=CKH2brGypO5fEYTgQ-EFhQ
✔ Twitter Handles: @coinicarus @AEHW1
✔ ShoutOuts: ► @BitkoYinowsky - our PlebUnderground Logo ► @WorldofRusty - our YT backgrounds and segment transitions ► @luckyredfish - outro graphic ► @plebsTaverne - intro video ► @robbieP808x - outro music
✔ Links Mentioned: ► https://www.defendingbtc.com/ SUPPORT HODLONAUT ► https://timechainstats.com/
✔ Check out our sponsors, support Bitcoin ONLY businesses: ► https://www.representltd.com/ It's your life... represent accordingly. Sweatpants, tees, hoodies, a huge variety of fresh, great-quality clothing by a Bitcoiner for Bitcoiners! USE CODE PLEBUNDERGROUND FOR 10% OFF ► https://cyphersafe.io/ We offer a full line of physical stainless steel and brass products to help you protect your bitcoin from various modes of failure. CypherSafe creates metal BIP39 / SLIP39 bitcoin seed word storage devices that back up your bitcoin wallet and protect it from physical disaster. ► http://btcpins.com/ Pick up the latest pin about the latest thing! Btcpins has it all: rare bitcoiner-themed pins, stickers, apparel and a whole lot more. Get a 5% discount on any of the iconic gear found here using code 'plebunderground' ► For awesome pleb content daily http://plebunderground.com/ GM #Bitcoin (Mon-Fri 10:00 am ET) and The #Bitcoin Council of Autism Spaces on Twitter
Timecodes:
0:00 - Intro
00:24 - Waltons Rap
02:28 - Numbers
15:34 - Fireside Chat: Yuri S Villas Boas
45:12 - REKT
1:03:08 - Hopium
#Bitcoin #crypto #cryptocurrency #weekly The information provided by Pleb Underground ("we," "us," or "our") on Youtube.com (the "Site") and our show is for general informational purposes only. All information on the show is provided in good faith, however we make no representation or warranty of any kind, express or implied, regarding the accuracy, adequacy, validity, reliability, availability, or completeness of any information on the Site.
UNDER NO CIRCUMSTANCE SHALL WE HAVE ANY LIABILITY TO YOU FOR ANY LOSS OR DAMAGE OF ANY KIND INCURRED AS A RESULT OF THE USE OF THE SHOW OR RELIANCE ON ANY INFORMATION PROVIDED ON THE SHOW. YOUR USE OF THE SHOW AND YOUR RELIANCE ON ANY INFORMATION ON THE SHOW IS SOLELY AT YOUR OWN RISK.

ASecuritySite Podcast
World-leading Computer Scientists: Leslie B Lamport (Clocks, LaTeX, Byzantine Generals and Post Quantum Crypto)

Jul 24, 2023 · 15:51


Related page: https://medium.com/asecuritysite-when-bob-met-alice/clocks-latex-byzantine-generals-and-post-quantum-crypto-meet-the-amazing-leslie-b-lamport-b2ade4b590d7
Demo: https://asecuritysite.com/hashsig/lamport

Introduction. I write this article in Medium, with its limited text editor, but I really would love to write it in LaTeX. Before the monopoly of Microsoft Word, there were document mark-up systems such as Lotus Manuscript, where we had a basic editor to produce publishing-ready content. The GUI came along, and all the back-end stuff was pushed away from the user. For many, this is fine, but for those whose output is focused on the sharing and dissemination of research, mark-up is often the only way to work. In research, LaTeX is king: a fully formed method of laying out and sharing research outputs. In the past few years, we have published over 100 research papers, and not one of them has been created in Microsoft Word. And for this, I thank Leslie Lamport. In fact, ask our kids about Newton, Faraday or Einstein, and they could probably tell you something about them. But ask them about Whitfield Diffie, Shafi Goldwasser, or Leslie B Lamport, and they would probably look quizzical. Their future world, though, is probably going to be built around some of the amazing minds that built the most amazing structure ever created: the Internet.

To Leslie Lamport. I am so privileged to be an academic researcher. For me, teaching, innovation and research go hand-in-hand: the things I research give me ideas for innovation, which I can then integrate into my teaching. The continual probing of questions from students also pushes me to think differently about things, and so the cycle goes on. But we are all just building on the shoulders of true giants, and there are few larger giants than Leslie Lamport, the creator of LaTeX. For me, every time I open up a LaTeX document, I think of the work he did in creating it, which makes my research work so much more productive. If I were still stuck with Microsoft Office for research, I would spend half of my time in that horrible equation editor, or trying to integrate the references into the required format, or formatting Header 1 and Header 2 to have a six-point spacing underneath. So, for me, the contest between LaTeX and Microsoft Word is a knock-out in the first round. And one of the great things about Leslie is that his work is strongly academic, providing foundations for others to build on. He did a great deal of work on the ordering of events and task synchronisation, on state machines, on cryptographic signatures, and on fault tolerance.

LaTeX. I really can't say enough about how much LaTeX, created in 1984, helps my work. I am writing a few books just now, and it allows me to lay out the books in the way that I want to deliver the content. There's no need for further mark-up, as I work on the output that the reader will see. But the true genius of LaTeX is the way that teams can work on a paper, with files synced to GitHub and version control embedded. Overall we use Overleaf, but we're not tied in to it, and can move to any editor we want. The process is just so much better than Microsoft Word, especially when creating a thesis. Word is really just the same old package it was in the 1990s; it still hides a lot away, which makes it really difficult to create content whose layout can easily be changed.
With LaTeX, you create the content and can then apply whatever style you want.

Clocks. Many in the research community think that the quality measure of a paper is the impact factor of the journal it is submitted to, or the amount of maths it contains. But, in the end, it is the impact of the paper that counts, and how it changes thinking. Leslie's 1978 paper on clocks changed our scientific world and is one of the most cited papers in computer science.

Byzantine Generals Problem. In 1982, Leslie Lamport, with Robert Shostak and Marshall Pease, defined the Byzantine Generals Problem. And in a research world where you can have hundreds of references in a paper, the paper used only four (which would probably not be accepted these days for having so few references). Within this paper, the generals of a Byzantine army have to agree on their battle plan in the face of traitors passing on false orders. The key result is that the loyal generals can always reach the correct battle plan provided that fewer than one-third of the generals are traitors. So why don't we build computer systems like this, where we support failures in parts of the system, or where parts of the system may be taken over for malicious purposes? And the answer is... no reason; it is just that we are stuck with our 1970s viewpoint of the computing world, where everything works perfectly and security is someone else's problem to fix. So, we need a system where we create a number of trusted nodes to perform a computation, and then run an election process at the end to see if we have a consensus on the result. If we have three voting generals (Bob, Alice and Eve), we need two of them to be honest, which means we can cope with one of our generals turning bad: Eve could try to sway Trent by sending the wrong command, but Bob and Alice will build a better consensus, and Trent will go with them. This work can be generalised as MPC (Multiparty Computation), where multiple nodes get involved in producing the final result. In many cases, this result could be as simple as a Yes or No: whether a Bitcoin transaction is real or fake, or whether an IoT device has a certain voltage reading.

The Lamport Signature. Sometime soon we perhaps need to wean ourselves off our existing public-key methods and look to techniques that are more challenging for quantum computers. With the implementation of Shor's algorithm on quantum computers, we will see our RSA and Elliptic Curve methods replaced by methods which are quantum robust. One such method is the Lamport signature, created by Leslie B. Lamport in 1979. At the current time, it is thought to be a quantum-robust technique for signing messages. The Lamport private key is 512 random numbers, split into Set A and Set B; the public key is the hash of each of these values. The size of the private key is 16 KB (2×256×256 bits), and the public key size is also 16 KB (512 hashes of 256 bits each). The basic method of creating a Lamport hash signature is: first, we create two data sets with 256 random 256-bit numbers (Set A and Set B); these are the private key (512 values). Next, we take the hash of each of the random numbers; this gives 512 hashes and is the public key.
We then hash the message using SHA-256 and test each bit of the hash (0...255). If the bit is 0, we use the i-th number from Set A; otherwise, we use the i-th number from Set B. The signature is then 256 random numbers (taken from either Set A or Set B), and the public key is the 512 hashes (of Set A and Set B). We can use the Lamport method for one-time signing, but, in its core format, we would need a new public key for each signing. The major problem with Lamport is thus that we can only sign once with each key pair. We can overcome this, though, by creating a hash tree, which merges many public keys into a single root. A sample run, showing just the first few private keys and public keys:

==== Private key (keep secret) =====
Priv[0][0] (SetA): 6f74f11f20953dc91af94e15b7df9ae00ef0ab55eb08900db03ebdf06d59556c
Priv[0][1] (SetB): 4b1012fc5669b45672e4ab4b659a6202dd56646371a258429ccc91cdbcf09619
Priv[1][0] (SetA): 19f0f71e913ca999a23e152edfe2ca3a94f9869ba973651a4b2cea3915e36721
Priv[1][1] (SetB): 04b05e62cc5201cafc2db9577570bf7d28c77e923610ad74a1377d64a993097e
Priv[2][0] (SetA): 15ef65eda3ee872f56c150a5eeecff8abd0457408357f2126d5d97b58fc3f24e
Priv[2][1] (SetB): 8b5e7513075ce3fbea71fbec9b7a1d43d049af613aa79c6f89c7671ab8921073
Priv[3][0] (SetA): 1c408e62f4c44d73a2fff722e6d6115bc614439fff02e410b127c8beeaa94346
Priv[3][1] (SetB): e9dcbdd63d53a1cfc4c23ccd55ce008d5a71e31803ed05e78b174a0cbaf43887
==== Public key (show everyone) =====
Pub[0][0]: 7f2c9414db83444c586c83ceb29333c550bedfd760a4c9a22549d9b4f03e9ba9
Pub[0][1]: 4bc371f8b242fa479a20f5b6b15d36c2f07f7379f788ea36111ebfaa331190a3
Pub[1][0]: 663cda4de0bf16a4650d651fc9cb7680039838d0ccb59c4300411db06d2e4c20
Pub[1][1]: 1a853fde7387761b4ea22fed06fd5a1446c45b4be9a9d14f26e33d845dd9005f
==== Message to sign ===============
Message: The quick brown fox jumps over the lazy dog
SHA-256: d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592
==== Signature =====================
Sign[0]: 4b1012fc5669b45672e4ab4b659a6202dd56646371a258429ccc91cdbcf09619
Sign[1]: 04b05e62cc5201cafc2db9577570bf7d28c77e923610ad74a1377d64a993097e
Sign[2]: 8b5e7513075ce3fbea71fbec9b7a1d43d049af613aa79c6f89c7671ab8921073
Sign[3]: 1c408e62f4c44d73a2fff722e6d6115bc614439fff02e410b127c8beeaa94346
The signature test is True

In this case, the implementation converts each random number to a string before hashing, so the SHA-256 hash of "6f74f11f20953dc91af94e15…0db03ebdf06d59556c" is 7f2c9414db83444c586c…49d9b4f03e9ba9. It can be seen that the hash of the message ("The quick brown fox jumps over the lazy dog") starts with the hex value D, which is 1101 in binary, so we take from SetB[0], SetB[1], SetA[2] and SetB[3]. A demonstration is given here.

Conclusions. The Internet we have now is built on the foundations that Leslie laid. On 18 March 2014, he received the 2013 ACM A.M. Turing Award (which is like a Nobel Prize in computer science). At the time, Bill Gates said: "Leslie has done great things not just for the field of computer science, but also in helping make the world a safer place. Countless people around the world benefit from his work without ever hearing his name. … Leslie is a fantastic example of what can happen when the world's brightest minds are encouraged to push the boundaries of what's possible." Basically, much of our computing world is still using the amazing foundations that were created in the 1970s and 1980s. We tip our hats to Whitfield Diffie, Martin Hellman, Shafi Goldwasser, Ralph Merkle, Ron Rivest, Adi Shamir and, of course, Leslie B Lamport.
As a note, while Leslie's paper on clocks is cited over 12,000 times, the Diffie-Hellman paper is cited over 19,300 times. We really should be integrating computer science into our school curriculum, showing that it has equal standing with physics, biology and chemistry; it will shape our future world as much as the others do. Why not teach kids about public-key cryptography in the same way that we talk about Newton?
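
To make the walkthrough above concrete, here is a minimal Python sketch of Lamport one-time signing. This is my own illustrative code, not the site's demo: it follows the 256-pair layout described in the article, but hashes the raw 32-byte secrets directly rather than the string form used in the sample run above.

import hashlib, os

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Private key: 256 pairs of random 256-bit values (Set A, Set B).
    priv = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    # Public key: the SHA-256 hash of every private value (512 hashes).
    pub = [(sha256(a), sha256(b)) for a, b in priv]
    return priv, pub

def msg_bits(msg: bytes):
    # The 256 bits of the message hash, most significant bit first.
    h = sha256(msg)
    return [(h[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, priv):
    # Reveal the Set A secret for a 0 bit, the Set B secret for a 1 bit.
    return [priv[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(msg: bytes, sig, pub) -> bool:
    # Each revealed secret must hash to the matching public-key entry.
    return all(sha256(s) == pub[i][bit]
               for i, (s, bit) in enumerate(zip(sig, msg_bits(msg))))

priv, pub = keygen()
msg = b"The quick brown fox jumps over the lazy dog"
sig = sign(msg, priv)
print(verify(msg, sig, pub))          # True
print(verify(b"tampered", sig, pub))  # False (with overwhelming probability)

Note the property the article stresses: each signature reveals half of the private values, so a key pair must be used only once; a Merkle hash tree can then merge many one-time public keys into a single root for repeated signing.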

Cover Up: Ministry of Secrets
4. Detective Lamport and the Missing Page

Mar 22, 2023 · 36:02


Giles and Sarah investigate the first few days that followed Crabb's disappearance with the help of Peter Marshall, the 90-year-old journalist who first broke the story. As they piece together what might have happened, they discover a sinister 1950s cover-up, one that extends to the very centre of power. Subscribe to The Binge to get all episodes of Cover Up: Ministry of Secrets ad-free right now. Click "Subscribe" at the top of the Cover Up show page on Apple Podcasts or visit GetTheBinge.com to get access wherever you get your podcasts. A Sony Music Entertainment production. Find more great podcasts from Sony Music Entertainment at sonymusic.com/podcasts and follow us @sonypodcasts.

Show notes - newspaper headlines:
The Times, 30 April 1956
The Times, 3 May 1956
Portsmouth Evening News, 1 May 1956
Le Monde, 7 May 1956
Le Monde, 14 May 1956
Bild Zeitung, circa June 1956
The Daily Mirror, 8 May 1956
The Daily Mirror, 9 May 1956
The Daily Mirror, 15 May 1956
Die Zeit, 10 May 1956
The New York Times, 13 June 1957
The New York Times, 29 May 1960
The Irish Times, 19 May 1956
The Irish Times, 10 May 1956
Der Spiegel, 22 May 1956
El Universal, 2006
HC Deb, Commander Crabb (Presumed Death), 09 May 1956, vol 552. Contains Parliamentary information licensed under the Open Parliament Licence v3.0.

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Fringe Network: Alien State
Detective Lamport and the Missing Page

Mar 22, 2023 · 36:02


Giles and Sarah investigate the first few days that followed Crabb's disappearance with the help of Peter Marshall, the 90-year-old journalist who first broke the story. As they piece together what might have happened, they discover a sinister 1950s cover-up, one that extends to the very centre of power. Subscribe to The Binge to get all episodes of Cover Up: Ministry of Secrets ad-free right now. Click "Subscribe" at the top of the Cover Up: Ministry of Secrets show page on Apple Podcasts or visit GetTheBinge.com to get access wherever you get your podcasts. A Somethin' Else & Sony Music Entertainment production. Find more great podcasts from Sony Music Entertainment at sonymusic.com/podcasts and follow us @sonypodcasts.

Show notes - newspaper headlines:
The Times, 30 April 1956
The Times, 3 May 1956
Portsmouth Evening News, 1 May 1956
Le Monde, 7 May 1956
Le Monde, 14 May 1956
Bild Zeitung, circa June 1956
The Daily Mirror, 8 May 1956
The Daily Mirror, 9 May 1956
The Daily Mirror, 15 May 1956
Die Zeit, 10 May 1956
The New York Times, 13 June 1957
The New York Times, 29 May 1960
The Irish Times, 19 May 1956
The Irish Times, 10 May 1956
Der Spiegel, 22 May 1956
El Universal, 2006

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Beyond the Breakers
Episode 90 - SS Vestris ft. Old Shipping Lines

Feb 5, 2023 · 47:07


This week we have special guest Jamie (from Old Shipping Lines on YouTube) to discuss the sinking of the British liner SS Vestris in 1928.

Sources:
Berchtold, William E. "More Fodder for Photomaniacs." The North American Review, vol. 239, no. 1, January 1935, pp. 19-30.
Grace, Michael. "Disaster at Sea - SS Vestris." Cruising the Past, 21 Nov 2009. https://www.cruiselinehistory.com/disaster-at-sea-ss-vestris/
"Lamport & Holt's S.S. Vestris." 3 Feb 2012. www.bluestarline.org/lamports/vestris.html
Olivier, Clint. The Last Dance of the Vestris. 2013.
Shenna, Paddy. "The sinking of SS Vestris - the shipping disaster that time forgot." Liverpool Echo, 21 Oct 2013. www.liverpoolecho.co.uk/news/nostalgia/evening-read-sinking-ss-vestris-6219504
Weiss, Holger. "Reopening Work Among Colonial Seamen." A Global Radical Waterfront: The International Propaganda Committee of Transport Workers and the International of Seamen and Harbour Workers, 1921-1937. Brill, 2021, pp. 161-179.

Check out our Patreon here! Support the show

The Express Rally Podcast
ER Podcast EP 54 - Freddy Lamport

Feb 2, 2023 · 59:19


Join us on this episode of the Express Rally Podcast as we sit down with Freddy Lamport. If you've been on recent events, you know Freddy as the absolute madman behind the wheel of a true Porsche 911 Cup Car. Tonight, we talk about Freddy's start in cars, his career, and fascinating stories about his Porsche.

El búnquer
William Lamport, the Irish pirate

Jan 4, 2023 · 51:04


Episode 3x75. It turns out that William had a very nationalist grandfather who spent all day criticizing the English. His parents didn't want William to…

Canicross Conversations
Episode 38 - Two Dog Scooter with Kevin Lamport

Oct 28, 2022 · 36:35


In this episode, Louise and Michelle chat to Kevin Lamport, British national champion in two dog scooter. Find out all about this exciting dog team sport, what training looks like, and just how fast the fastest can go!  

Zero Knowledge
Episode 246: Adversarial Machine Learning Research with Florian Tramèr

Sep 21, 2022 · 66:44


This week, Anna (https://twitter.com/annarrose) and Tarun (https://twitter.com/tarunchitra) chat with Florian Tramèr (https://twitter.com/florian_tramer), Assistant Professor at ETH Zurich (https://ethz.ch/en.html). They discuss his earlier work on side channel attacks on privacy blockchains, as well as his academic focus on Machine Learning (ML) and adversarial research. They define some key ML terms, tease out some of the nuances of ML training and models, chat zkML and other privacy environments where ML can be trained, and look at why the security around ML will be important as these models become increasingly used in production. Here are some additional links for this episode: * Episode 228: Catch-up at DevConnect AMS with Tarun, Guillermo and Brendan (https://zeroknowledge.fm/228a/) * Florian Tramèr's Github (https://github.com/ftramer) * Florian Tramèr's Publications & Papers (https://floriantramer.com/publications/) * ETH Zurich (https://ethz.ch/en.html) * DevConnect (https://devconnect.org/) * Tarun Chitra's Github (https://github.com/pluriholonomic) * Single Secret Leader Election by Dan Boneh, Saba Eskandarian, Lucjan Hanzlik, and Nicola Greco (https://eprint.iacr.org/2020/025) * GasToken: A Journey Through Blockchain Resource Arbitrage by Tramèr, Daian, Breidenbach and Juels (https://floriantramer.com/docs/slides/CESC18gastoken.pdf) * Enter the Hydra: Towards Principled Bug Bounties and Exploit-Resistant Smart Contracts by Tramèr, Daian, Breidenbach and Juels (https://eprint.iacr.org/2017/1090) * Ronin Bridge Hack – Community Alert: Ronin Validators Compromised (https://roninblockchain.substack.com/p/community-alert-ronin-validators?s=w) * InstaHide: Instance-hiding Schemes for Private Distributed Learning, Huang et al. 2020. (https://arxiv.org/abs/2010.02772) * Is Private Learning Possible with Instance Encoding? (https://arxiv.org/abs/2011.05315) * OpenAI's GPT-3 model (https://openai.com/api/) * OpenAI's GPT-2 model (https://openai.com/blog/tags/gpt-2/) * The Part-Time Parliament, Lamport, 1998. (https://lamport.azurewebsites.net/pubs/lamport-paxos.pdf) * You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion (https://arxiv.org/abs/2007.02220) ZK Whiteboard Sessions (https://zkhack.dev/whiteboard/) – as part of ZK Hack and powered by Polygon – a new series of educational videos that will help you get onboarded into the concepts and terms that we talk about on the ZK front. ZK Jobs Board (https://jobsboard.zeroknowledge.fm/) – has a fresh batch of open roles from ZK-focused projects. Find your next opportunity working in ZK! Today's episode is sponsored by Mina Protocol (https://minaprotocol.com/). With Mina's zero knowledge smart contracts – or zkApps – developers can create apps that offer privacy, security, and verifiability for your users. Head to minaprotocol.com/zkpodcast (http://minaprotocol.com/zkpodcast) to learn about their developer bootcamps and open grants. If you like what we do: * Find all our links here! @ZeroKnowledge | Linktree (https://linktr.ee/zeroknowledge) * Subscribe to our podcast newsletter (https://zeroknowledge.substack.com) * Follow us on Twitter @zeroknowledgefm (https://twitter.com/zeroknowledgefm) * Join us on Telegram (https://zeroknowledge.fm/telegram) * Catch us on Youtube (https://zeroknowledge.fm/) * Head to the ZK Community Forum (https://community.zeroknowledge.fm/) * Support our Gitcoin Grant (https://zeroknowledge.fm/gitcoin-grant-329-zkp-2)

Inside Out Style
Style Inspiration | 16 Style Types | Imogen Lamport & Jill Chivers

Aug 22, 2022 · 14:57


If You'd Like to Define Your Style and Discover Your Colours If you're sick of wasting money on clothes that don't work and you know there is a better way, then join my 7 Steps to Style program and get the right information for you and your style. Subscribe to my Podcast You can now get these…

Inside Out Style
Authentic Style | 16 Style Types | Imogen Lamport & Jill Chivers

Aug 22, 2022 · 14:49


If You'd Like to Define Your Style and Discover Your Colours If you're sick of wasting money on clothes that don't work and you know there is a better way, then join my 7 Steps to Style program and get the right information for you and your style. Subscribe to my Podcast You can now get these…

Inside Out Style
Behind the dressing room curtain | 16 Style Types | Imogen Lamport & Jill Chivers

Aug 15, 2022 · 16:24


If You'd Like to Define Your Style and Discover Your Colours If you're sick of wasting money on clothes that don't work and you know there is a better way, then join my 7 Steps to Style program and get the right information for you and your style. Subscribe to my Podcast You can now get these…

Inside Out Style
Would you rather be underdressed or overdressed | 16 Style Types | Imogen Lamport & Jill Chivers

Aug 8, 2022 · 20:50


If You'd Like to Define Your Style and Discover Your Colours If you're sick of wasting money on clothes that don't work and you know there is a better way, then join my 7 Steps to Style program and get the right information for you and your style. Subscribe to my Podcast You can now get these…

Inside Out Style
Future Focused Wardrobe | 16 Style Types | Imogen Lamport & Jill Chivers

Aug 8, 2022 · 31:57


If You'd Like to Define Your Style and Discover Your Colours If you're sick of wasting money on clothes that don't work and you know there is a better way, then join my 7 Steps to Style program and get the right information for you and your style. Subscribe to my Podcast You can now get these…

Inside Out Style
Are you in a Style Rut | 16 Style Types | Imogen Lamport & Jill Chivers

Aug 1, 2022 · 12:06


If You'd Like to Define Your Style and Discover Your Colours If you're sick of wasting money on clothes that don't work and you know there is a better way, then join my 7 Steps to Style program and get the right information for you and your style. Subscribe to my Podcast You can now get these…

Inside Out Style
Stability vs Variety | 16 Style Types | Imogen Lamport & Jill Chivers

Aug 1, 2022 · 20:15


If You'd Like to Define Your Style and Discover Your Colours If you're sick of wasting money on clothes that don't work and you know there is a better way, then join my 7 Steps to Style program and get the right information for you and your style. Subscribe to my Podcast You can now get these…

Inside Out Style
Blending in or Standing Out | 16 Style Types | Imogen Lamport & Jill Chivers

Jul 27, 2022 · 16:53


If You'd Like to Define Your Style and Discover Your Colours If you're sick of wasting money on clothes that don't work and you know there is a better way, then join my 7 Steps to Style program and get the right information for you and your style. Subscribe to my Podcast You can now get these…

House of Mystery True Crime History
Robbie Thomas - Psychic Detective

Jul 11, 2022 · 67:06


Robbie Thomas, renowned psychic and spiritual visionary, has been helping people around the world for many years, bringing them closer to the other side through contact with their loved ones. As you read Signs from Heaven, you take an enlightened journey that gives you much more perspective on life on the other side and within you. This remarkable written work will guide you in knowing more about your ability and sheds much light on hope, love and faith in spirit and God. You will witness many accounts verified by testimonials from individuals around the world. Each account is different, giving much weight to the validity within! This book has been craved for a long while, finally coming to light to share in the very essence of spirit it was meant for, and the realm of spirit in you. Robbie Thomas has been helping families and police internationally in the fight against crime, while bringing solace to those who need it. Over many years of assisting in murder and missing-persons cases, Robbie has been able to give great detail on these devastating crimes, which has led to lost people being found, arrests being made in murder cases, and closure for families who desperately need it. Robbie works closely with families in conjunction with law enforcement and has never charged for his assistance or taken any money as a reward. Highly respected in the paranormal and spiritual fields, Robbie continues to be a great presence, working alongside many integral individuals in film, television and radio, lending his ability to further his experience as a spiritual visionary. Robbie lives in Canada and is often called upon by the international community for assistance with his ability. A prolific author in the metaphysical/spiritual and horror/paranormal genres, he has published eight books of his own and co-contributed to four others. He has written television treatments, one of which is in development with Lamport and Sheppard Entertainment (Canada); other projects of Robbie's have been given consideration by other production companies (United States and Canada). Movies have also been a part of Robbie's life: he created or starred in Dead Whisper (2005-2006), The Sallie House (2007), and Paradox (Parasylum Director's Cut, 2009). Robbie has been seen on one-hour specials and news media: NBC, CBS, ABC, CKCO, Daytime Live Rogers Television, StarChoice, Bell Expressview, New RO (CTV), and many other television programs. He has been featured on CBS Radio, the Kevin Smith Show, the Barbara Mackey Show, the True Ghost Stories Show, The Eagle 103.3 FM, CHOK 1070 AM, The Fox 99.9 and more. He has been featured in many magazines worldwide, such as Taps Paranormal Magazine (United States), Visions Magazine (United States), Signs Magazine (England), Bellsprit Magazine (United States), Suspense Magazine (United States), Paranormal Magazine (England), Unexplained Paranormal Magazine (United States), Fix Magazine (Canada), Pen It Magazine (United States) and more. Support this show: http://supporter.acast.com/houseofmysteryradio. See acast.com/privacy for privacy and opt-out information. Become a member at https://plus.acast.com/s/houseofmysteryradio.

Counting Sand
Dynamo: The Research Paper that Changed the World

Jul 5, 2022 · 35:08


The cycle between research and application is often too long and can take decades to complete. It is often asked which bit of research or technology is the most important. Before we can answer that question, I think it's important to take a step back and share the story of why we believe the Dynamo paper is so essential to our modern world, and how we encountered it.

Citations:
DeCandia, G., Hastorun, D., Jampani, M., Kakulapati, G., Lakshman, A., Pilchin, A., ... & Vogels, W. (2007). Dynamo: Amazon's highly available key-value store. ACM SIGOPS Operating Systems Review, 41(6), 205-220.
Karger, D., Lehman, E., Leighton, T., Panigrahy, R., Levine, M., & Lewin, D. (1997, May). Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web. In Proceedings of the twenty-ninth annual ACM Symposium on Theory of Computing (pp. 654-663).
Lamport, L. (2019). Time, clocks, and the ordering of events in a distributed system. In Concurrency: the Works of Leslie Lamport (pp. 179-196).
Merkle, R. C. (1987). A digital signature based on conventional encryption. In Proceedings of the USENIX Secur. Symp (pp. 369-378).

Our Team:
Host: Angelo Kastroulis
Executive Producer: Náture Kastroulis
Producer: Albert Perrotta
Communications Strategist: Albert Perrotta
Audio Engineer: Ryan Thompson
Music: All Things Grow by Oliver Worth

Había una vez un algoritmo...
Paths of disillusionment: why can't programming be mathematics? | E-86

May 30, 2022 · 13:00


In this episode I'll talk about why computer science cannot be mathematics, but also why we should nonetheless follow some of Dijkstra's and Lamport's ideas, even if they lead us to a dead end.

Losers, Pretenders & Scoundrels
William Lamport, The Irish Zorro of Mexico

Apr 5, 2022 · 44:06


Was Zorro real? You bet he was! Except he was Irish. And didn't wear a mask. And probably couldn't carve a "Z" with his sword. And proclaimed himself king of Mexico. But other than that, William Lamport is the spitting image of Zorro, who was actually based on him. Heaton and Young take a look at what gave this redhead the cojones to try to take the Mexican throne, and what stopped him.

Yoga Therapy
48. Yoga & Chiropractical Work with Rob Lamport

Mar 23, 2022 · 62:54


Rob Lamport shares with Shira Cohen how he turned to yoga for back issues, how it enhanced his work and mental health, and how it slowly changed his career. We talk about how spinal alignment equals physical and physiological health, and how movement is key to our health on every level, affecting every system, vessel and cell. We touch on the power of being present in the body, and on body awareness as the basis of the safest way to create healthy movement and a correct foundation.

Work with Us:
Rob's Courses
Therapy with Shira

Resources:
T. Krishnamacharya: heart formed first
Yoga, Fascia, Anatomy & Movement - Joanne Sarah Avison
Judith Lasater - Experiential Anatomy
Invisible Women - Caroline Criado Perez
TKV Desikachar

Powerful Personal Brand
How Your Style Communicates Personal Brand: Imogen Lamport

Nov 10, 2021 · 21:46


This week I sit down with certified, award-winning image consultant Imogen Lamport as she dives into the art and science behind personal style. We discuss how you can intentionally and authentically stay on-brand with your company values through your choice of clothing color, material, and structure. Imogen Lamport is an internationally certified, award-winning image consultant and image trainer whose passion is demystifying the science and art of style so that you can define your personal style and curate a wardrobe full of clothes you love to wear as they express your personality from the inside out. Connect with Imogen on her website: http://www.insideoutstyleblog.com/ or on Instagram and Facebook: https://www.instagram.com/insideoutstyleblog/ https://www.facebook.com/InsideOutStyleBlog For more personal branding tips and strategies, follow along with Claire: FREE Personal Brand Masterclass: https://clairebahn.com/personal-branding-masterclass Website: https://clairebahn.com/ Instagram: https://www.instagram.com/clairebahn/ LinkedIn: https://www.linkedin.com/in/clairebahn/ Twitter: https://twitter.com/clairebahn YouTube Channel: https://www.youtube.com/clairebahn This podcast uses the following third-party services for analysis: Chartable - https://chartable.com/privacy

Churchfront Worship Leader Podcast
Khalia J. Williams - Theological Foundations of Worship

Nov 8, 2021 · 74:20


In today's episode, we have the privilege of hearing from author Khalia J. Williams. She introduces us to her new book, Theological Foundations of Worship, and talks about how this book can be a fantastic resource for those serving in Worship Ministry. Khalia discusses the thought behind the topics that are covered by her and the other authors that contributed to the chapters of Theological Foundations of Worship.   Theological Foundations of Worship: Biblical, Systematic, and Practical Perspectives by: Khalia J. Williams and Mark A. Lamport http://www.bakerpublishinggroup.com/ Instagram: @ladykjw Facebook: Khalia J. Williams https://worshipministryschool.com/  

The Morning Show
"Everyone's responsibility to take care of our community": Toronto lawyer offering pro bono services for those arrested at Lamport Stadium

Jul 23, 2021 · 10:06


Greg Brady guest hosts 640 Toronto's Morning Show   GUEST: Maria Rosa Muia, Criminal Defence Lawyer at Muia Criminal Law See omnystudio.com/listener for privacy information.

Kelly Cutrara
Arrests made as city officials and police move to clear Lamport Stadium encampment

Jul 22, 2021 · 11:06


Kelly talks to Global News reporter Sean O'Shea, who was on scene at Lamport Stadium where city officials and police officers cleared out a homeless encampment. See omnystudio.com/listener for privacy information.

The Morning Show
Chaos erupts again as Toronto officials, officers clear homeless encampment at Lamport Stadium

Jul 22, 2021 · 12:37


Greg Brady guest hosts 640 Toronto's Morning Show   GUEST: Diana Chan McNally, Training and engagement coordinator with Toronto Drop-in Network See omnystudio.com/listener for privacy information.

ACM ByteCast
Leslie Lamport - Episode 16

May 27, 2021 · 36:40


In this episode of ACM ByteCast, our special guest host Scott Hanselman (of The Hanselminutes Podcast) welcomes 2013 ACM A.M. Turing Award laureate Leslie Lamport of Microsoft Research, best known for his seminal work in distributed and concurrent systems, and as the initial developer of the document preparation system LaTeX and the author of its first manual. Among his many honors and recognitions, Lamport is a Fellow of ACM and has received the IEEE Emanuel R. Piore Award, the Dijkstra Prize, and the IEEE John von Neumann Medal. Leslie shares his journey into computing, which started out as something he only did in his spare time as a mathematician. Scott and Leslie discuss the differences and similarities between computer science and software engineering, the math involved in Leslie's high-level temporal logic of actions (TLA), which can help solve the famous Byzantine Generals Problem, and the algorithms Leslie himself has created. He also reflects on how the building of distributed systems has changed since the 60s and 70s.

Links:
Time-Clocks Paper
Bakery Algorithm
Mutual Exclusion Algorithm

Hanselminutes - Fresh Talk and Tech for Developers
Leslie Lamport - in partnership with ACM Bytecast

May 27, 2021 · 39:53


In this collaboration between ACM ByteCast and Hanselminutes, Scott welcomes 2013 ACM A.M. Turing Award laureate Leslie Lamport of Microsoft Research, best known for his seminal work in distributed and concurrent systems, and as the initial developer of the document preparation system LaTeX and the author of its first manual. Among his many honors and recognitions, Lamport is a Fellow of ACM and has received the IEEE Emanuel R. Piore Award, the Dijkstra Prize, and the IEEE John von Neumann Medal. Leslie shares his journey into computing, which started out as something he only did in his spare time as a mathematician. Scott and Leslie discuss the differences and similarities between computer science and software engineering, the math involved in Leslie's high-level temporal logic of actions (TLA), which can help solve the famous Byzantine Generals Problem, and the algorithms Leslie himself has created. He also reflects on how the building of distributed systems has changed since the 60s and 70s.

Subscribe to the ACM ByteCast at https://learning.acm.org/bytecast
Time-Clocks Paper: http://lamport.azurewebsites.net/pubs/time-clocks.pdf
Bakery Algorithm: https://en.wikipedia.org/wiki/Lamport%27s_bakery_algorithm
Mutual Exclusion Algorithm: https://en.wikipedia.org/wiki/Lamport%27s_distributed_mutual_exclusion_algorithm

Sew Mindful Podcast
040: How to reconnect with your style with Imogen Lamport

Sew Mindful Podcast

Play Episode Play 31 sec Highlight Listen Later May 15, 2021 44:03 Transcription Available


I can't believe we are at Episode 40 already! Thank you so much for listening and supporting the show. This week I am joined again by my very first guest on the podcast, the wonderful Imogen Lamport. If you haven't listened to the previous episodes with Imogen and you are at all interested in anything related to style, then I highly recommend going back and picking those up (episodes 2, 8 and 14). Imogen has dedicated her energy to learning about all things style and then helping other women apply that knowledge to create wardrobes that they love. I have been on a personal journey to learn about my own style preferences but feel like it is still a work in progress, so in this episode Imogen gives her take on how to start to reconnect with your style if, like me, you feel a bit lost.
In this episode you will hear:
> Some of the reasons we can feel like we have lost our style
> How to change your perception of your style
> Tips on how to have fun with the clothes you already have
> Questions to help you identify what's changed for you and how to move forward
> Practical tips to discover your own sometimes hidden values
> How to reflect your values through your clothing
> Why the connection between values and satisfaction is so important
Useful links and resources:
> Episode 2: Use your face to pick your fabric
> Episode 8: Use Style Lines to flatter your figure
> Episode 14: Tips to turn frightful photos into sexy selfies
> Developing your Style Recipe - Tips and Tricks from the Professional
> 7 Steps to Style program
I hope you enjoy this episode as much as I did, and I'd love to know what your biggest takeaways or a-has are from listening, so be sure to DM me on Instagram or email me via the Sew Much More Fun website.
Connect with Imogen
Website: https://insideoutstyleblog.com
Instagram: https://instagram.com/insideoutstyleblog
Facebook: https://facebook.com/insideoutstyleblog
Connect with me (Jacqui)
Website: https://sewmuchmorefun.co.uk
Instagram: https://instagram.com/sewmuchmorefun
Facebook: https://facebook.com/sewmuchmorefun
Thank you so much for listening!
Support the show (https://www.buymeacoffee.com/sewmuchmorefun)

Local to Legend
23 - From Local Image Consultant to Globally Recognized Style Blogger with Imogen Lamport

Local to Legend

Play Episode Listen Later Apr 13, 2021 35:00


This week's podcast guest is coming all the way from the land Down Under! Yep, that's right, Emily is chatting with Imogen Lamport, Australian native and professionally trained, internationally certified Image Consultant (aka a personal stylist) and a certified Type Practitioner. Imogen loves to share what she knows so that others can stop wasting money on clothes that don't work, and discover the ones that make them look and feel fabulous. Imogen uses her business, Inside Out Style, to fulfill her life's mission to "empower you to express your style."
Imogen discovered she had a love of the science behind style and how personality played an influential role. When she realized image consulting was something she could actually study and get certified in, she knew it was the perfect opportunity to combine her passion with her desire to one day start her own business.
In 2008 Imogen began writing 3-5 posts a week as she started creating a local community-based blog. Consistently showing up and providing content for her website greatly improved her SEO, helping her build relationships with other bloggers. Her blog posts led to the release of her first eBook, which led to more posts and more eBooks, and eventually led to her writing virtual programs. As people continued to ask Imogen for personal consultations and to be trained by her to become personal stylists themselves, she realized she needed a way to do so virtually to meet her growing demand. Leaning into technology to deepen connections, create opportunities, and reach farther than she ever imagined, Imogen has continued to aim for viability and scalability. This episode will leave you feeling inspired to create amazing content and expand your reach.
Tune in for topics like:
How Imogen grew her local styling blog to a global market
Why creating *useful* content on your website is what moves the needle
The benefit of creating a business that isn't solely dollars for time
The importance of diversifying your revenue streams
Why an online presence is so critical
Links from the episode:
Learn more from Imogen's encyclopedia of style by visiting the Inside Out Style website
Connect with Inside Out Style on Facebook
Be inspired to create your personal style by following Imogen on Instagram
Follow me (Emily Steele) (Love Local) on Instagram for a little business + a little life, and a whole lot of positive energy!
Thank you for listening, friend. See you in the next episode!

Super League Pod
SLP E293 Lamport Lane & The Abattoir Five

Super League Pod

Play Episode Listen Later Mar 22, 2021 143:21


Mark and Tim, plus a very special guest appearance, bring you our first weekly round-up of 2021. We'll look back over the off-season news and law changes, plus we want to know which Rugby League players could be Breaking at the 2024 Olympics. We recap Round 1 of the Challenge Cup, missing the traditional community clubs but featuring lots of special tries. We throw in a bit of NRL Brit Watch and a little discussion of French rugby league. Then we get our crystal balls out to make our wild guesses for Round 1 of Super League.
This episode is sponsored by Rob's Toy Shop. Find a wide range of toys, gifts, rugby league birthday cards and more at Rob's Toy Shop on eBay. Visit stores.ebay.co.uk/robstoyshop: on any order over £5 you can earn 5% cashback, and 1% of your order value will go into the SLP coffers if you put 'SLPDiscount' at checkout.
Episode running order:
News, from 12:45
Results, from 97:45
Predictions, from 111:45
Quiz & recommendations, from 124:15

The H2G Podcast
The H2G Podcast Season 4 Episode 17 "Artist Spotlight - Emily Lamport"

The H2G Podcast

Play Episode Listen Later Mar 2, 2021 34:34


H2G IS BACK AND BETTER THAN EVER!! ALL NEW ARTISTS ALL NEW GAMES ALL NEW TIME SLOT MON-FRI 7PM EST Join Marvin, Tony Ray, Chelsea, Richie, Christy, and Justo as they welcome Season 4's second Artist Spotlight: singer/songwriter Emily Lamport. Send in a voice message: https://anchor.fm/h2gpodcast/message

Microwaved Coffee
Episode 87 - William Lamport

Microwaved Coffee

Play Episode Listen Later Jan 21, 2021 25:58


We talk about Irish explorer William Lamport and how he became the inspiration for the legendary character Zorro!

Había una vez un algoritmo...
Lamport y el problema de los generales Bizantinos | S2-E16

Había una vez un algoritmo...

Play Episode Listen Later Jan 10, 2021 19:02


On one of the most popular metaphors in distributed systems.

Westminster Institute talks
Dr. Hratch Tchilingirian: Challenges Facing Christian Communities in Turkey Today

Westminster Institute talks

Play Episode Listen Later Nov 24, 2020 80:15


https://westminster-institute.org/events/challenges-facing-christian-communities-in-turkey-today/ Dr. Hratch Tchilingirian is a sociologist (with particular reference to the sociology of religion) and Associate Faculty Member of Oriental Studies, University of Oxford. He has published extensively and lectures on minorities in the contemporary Middle East; inter-ethnic conflicts in the Caucasus; the Armenian Church; the Diaspora; and Turkish-Armenian relations. From 2002 to 2012 he taught and held various positions at the University of Cambridge, and was co-founder of the Eurasia Research Centre at Judge Business School. Dr. Tchilingirian is the author of numerous studies, articles and publications and has lectured internationally at leading universities, academic institutions and international NGOs (see www.hratch.info). His television, radio and newspaper interviews have appeared in international media outlets, including the New York Times, Financial Times, BBC News, Al-Jazeera and Radio Vaticana. A transcript is unavailable for this talk. Dr. Tchilingirian's remarks will form a chapter of the Handbook of Christianity in the Middle East (eds. Mitri Raheb and Mark A. Lamport), Rowman & Littlefield Publishers, 2021.

Sew Mindful Podcast
008: Use Style Lines to Flatter your Figure - with Imogen Lamport

Sew Mindful Podcast

Play Episode Play 47 sec Highlight Listen Later Sep 19, 2020 47:08 Transcription Available


My guest is the fabulous Imogen Lamport of Inside Out Style, back by popular demand. If you haven't listened to Episode 2 where Imogen made her first appearance, then be sure to go and listen to that after this episode. If you have listened to that episode then you will know that Imogen is a walking encyclopaedia on all things style. At the time of recording we are in the transition phase between seasons and I am planning what to make and wear as the weather begins to change. I've become fascinated by the optical illusions that clothing can create, and so I decided to ask the style guru, Imogen, about the ways in which we can use style lines in our clothing and the rules to help me understand what will flatter me and what I should avoid. It's a juicy episode packed to the brim with great rules, tips and advice.
In this episode you will hear:
The four most common style lines and how they relate to our clothing
The three rules of horizontal lines – how to use them to create balance and where to avoid them
How to create height and lose pounds using verticals and why rolling up your sleeves can make your legs look longer
The best place to use a curve in your clothes and which neckline is the most flattering for you
…and sooooo much more!
Useful Links and resources:
Episode 002: Use your face to pick your fabric: https://www.sewmuchmorefun.co.uk/podcast/episode/4c8e89fb/002-use-your-face-to-pick-your-fabric-with-imogen-lamport
Seven Steps to Style program: https://insideoutstyleblog.com/7-steps-to-style-system
Imogen's Three Rules of Horizontal Lines: https://insideoutstyleblog.com/2016/11/horiztonal-lines-clothing.html
How Diagonal Lines Work in outfits: https://insideoutstyleblog.com/2016/11/how-diagonal-lines-work.html
How Curved Lines work: https://insideoutstyleblog.com/2016/12/curved-lines-work.html
Where to end tops to make your hips and tummy look slimmer: https://insideoutstyleblog.com/2018/03/where-to-end-tops-to-make-your-hips-and-tummy-look-slimmer.html
Connect with Imogen Lamport:
Website: https://insideoutstyleblog.com/
Facebook: https://www.facebook.com/InsideOutStyleBlog/
Instagram: https://www.instagram.com/insideoutstyleblog/
Twitter: https://www.instagram.com/insideoutstyleblog/
LinkedIn: https://www.linkedin.com/in/imogenlamport/
Connect with me, Jacqui Blakemore: Instagram & Facebook!

Sew Mindful Podcast
002: Use Your Face to Pick Your Fabric - with Imogen Lamport

Sew Mindful Podcast

Play Episode Play 34 sec Highlight Listen Later Aug 14, 2020 35:13 Transcription Available


In this episode I am talking to the fabulous Imogen Lamport of Inside Out Style. Imogen is based in Melbourne, Australia, and her blog and website for Inside Out Style has become regarded as the Encyclopaedia of Image Consulting. She is an innovator and leader in the style industry and created the Absolute Colour System of 18 tonal colour directions that is now used by consultants around the world.
She is also a certified Psychological Type Practitioner and co-creator of the 16 Style Types, which bring together how your personality influences your personal style using your psychological type. She has written and published four books: The Finishing Touch: Perfecting the Art of Accessorizing; Never Short on Style: Dressing and Finessing the Petite Frame; Travelling Light: Learn the Art of Packing Light; and the co-authored Svelte in Style: How to Look and Feel Great While Losing Weight. She has also been interviewed widely, including being featured in Tatler magazine. Over many years she has created a wealth of resources on her website as well as working directly with clients on all things style, and I was drawn to her because a lot of the knowledge I picked up initially from her website seemed to tackle the topic of style from angles I hadn't seen before.
One of the topics at the forefront of dressmakers' minds is that of choosing fabrics. It can be daunting for beginners, but even seasoned sewers can find it challenging to know which fabrics to choose. The fabric is such a key component of the success of the finished item that I am always keen to discover any tips and tricks that will help me narrow down the huge array of fabrics out there.
In this episode Imogen talks through the value of reflecting and repeating attributes of your own body features in the fabrics we choose and how this can ensure that the garment you create is in harmony with you and as a result looks fabulous. She explains a wide range of different aspects, including how we can use the colour tone and texture of our hair and skin to pick fabrics that will really show off our best features. And if you want to wear more boucle, then maybe ditch that wrinkle cream.
Resources:
Notes from the episode: Fabric Harmony Framework
Imogen on FB: https://www.facebook.com/InsideOutStyleBlog/
Imogen on Instagram: https://www.instagram.com/insideoutstyleblog/
Imogen on Twitter: https://www.instagram.com/insideoutstyleblog/
Imogen on LinkedIn: https://www.linkedin.com/in/imogenlamport/
7 Steps to Style: https://insideoutstyleblog.com/7-steps-to-style-system
Body Shape Calculator: https://insideoutstyleblog.com/body-shape-calculator
12 point plan for harmonious outfits: https://insideoutstyleblog.com/2016/04/create-harmony.html
SewMuchMoreFun:

The Progressive Rugby League Podcast
PRL 25 July 2020 - Wolfpack Backchat - The word from downtown Toronto with Bryan Thiel

The Progressive Rugby League Podcast

Play Episode Listen Later Jul 25, 2020 46:59


The Wolfpack are down, but are they out? Canadian broadcaster and reporter Bryan Thiel gives us the view from downtown Toronto. How did it come to this? Can something be salvaged from the 2020 ruins? Plus Bryan gives us a glimpse into the Lamport experience, the challenges of winning over UK fans, and how Wolfpack fans have taken to Rugby League in Toronto.

The RAG Podcast - Recruitment Agency Growth Podcast
Season 2 | Ep 21 - Richard Harris & Ella Lamport on their journey from falling in love as colleagues to now being co-founders of their own agency!

The RAG Podcast - Recruitment Agency Growth Podcast

Play Episode Listen Later Mar 11, 2020 70:35


This week I was lucky enough to be joined by Ella and Rich, an inspirational couple who are the founders of Park Avenue Recruitment, a niche property recruitment agency headquartered in London. Their story of dating in the workplace isn't uncommon; it's where so many of us find our significant other. However, not many of us later decide to start a business together - and make a success of it! They were extremely honest about the circumstances that led to them becoming business owners and the struggle they have faced in trying to be professional leaders during the day and leave it at the door in the evenings. In 3 years they have grown to a team of 16 and have built the foundations for a healthy future together, which is amazing to see! If you would like more information, you can connect and chat with them both directly on LinkedIn here: https://www.linkedin.com/in/harrisrichard/ or https://www.linkedin.com/in/ellalamport/

The Podlets - A Cloud Native Podcast
Learning Distributed Systems (Ep 12)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Jan 13, 2020 47:09


In this episode of The Podlets Podcast, we welcome Michael Gasch from VMware to join our discussion on the necessity (or not) of formal education in working in the realm of distributed systems. There is a common belief that studying computer science is a must if you want to enter this field, but today we talk about the various ways in which individuals can teach themselves everything they need to know. What we establish, however, is that you need a good dose of curiosity and craziness to find your feet in this world, and we discuss the many different pathways you can take to fully equip yourself. Long gone are the days when you needed a degree from a prestigious school: we give you our hit-list of top resources that will go a long way in helping you succeed in this industry. Whether you are someone who prefers learning by reading, attending Meetups or listening to podcasts, this episode will provide you with lots of new perspectives on learning about distributed systems.
Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io or https://github.com/vmware-tanzu/thepodlets/issues
Hosts: Carlisia Campos, Duffie Cooley, Michael Gasch
Key Points From This Episode:
• Introducing our new host, Michael Gasch, and a brief overview of his role at VMware.
• Duffie and Carlisia's educational backgrounds and the value of hands-on work experience.
• How they first got introduced to distributed systems and the confusion around what it involves.
• Why distributed systems are about more than simply streamlining communication and making things work.
• The importance and benefit of educating oneself on the fundamentals of this topic.
• Our top recommended resources for learning about distributed systems and their concepts.
• The practical downside of not having a formal education in software development.
• The different ways in which people learn, index and approach problem-solving.
• Ensuring that you balance reading with implementation and practical experience.
• Why it's important to expose yourself to discussions on the topic you want to learn about.
• The value of getting different perspectives around ideas that you think you understand.
• How systems thinking is applicable to things outside of computer science.
• The various factors that influence how we build systems.
Quotes:
"When people are interacting with distributed systems today, or if I were to ask like 50 people what a distributed system is, I would probably get 50 different answers." — @mauilion [0:14:43]
"Try to expose yourself to the words, because our brains are amazing. Once you get exposure, it's like your brain works in the background. All of a sudden, you go, 'Oh, yeah! I know this word.'" — @carlisia [0:14:43]
"If you're just curious a little bit and maybe a little bit crazy, you can totally get down the rabbit hole in distributed systems and get totally excited about it. There's no need for having formal education and the degree to enter this world." — @embano1 [0:44:08]
Learning resources suggested by the hosts:
Book: Designing Data-Intensive Applications, M. Kleppmann
Book: Distributed Systems, M. van Steen and A.S. Tanenbaum (free with registration)
Book: Consensus - Bridging Theory and Practice (D. Ongaro's thesis on Raft; free PDF)
Book: Enterprise Integration Patterns, G. Hohpe, B. Woolf
Book: Designing Distributed Systems, B. Burns (free with registration)
Video: Distributed Systems
Video: Architecting Distributed Cloud Applications
Video: Distributed Algorithms
Video: Operating System - IIT Lectures
Video: Intro to Database Systems (Fall 2018)
Video: Advanced Database Systems (Spring 2018)
Paper: Time, Clocks, and the Ordering of Events in a Distributed System
Post: Notes on Distributed Systems for Young Bloods
Post: Distributed Systems for Fun and Profit
Post: On Time
Post: Distributed Systems @The Morning Paper
Post: Distributed Systems @Brave New Geek
Post: Aphyr's class materials for a distributed systems lecture series
Post: The Log - What every software engineer should know about real-time data's unifying abstraction
Post: GitHub - awesome-distributed-systems
Post: Your Coffee Shop Doesn't Use Two-Phase Commit
Podcast: Distributed Systems Engineering with Apache Kafka ft. Jason Gustafson
Podcast: The Systems Bible - The Beginner's Guide to Systems Large and Small - John Gall
Podcast: Systems Programming - Designing and Developing Distributed Applications - Richard Anthony
Podcast: Distributed Systems - Design Concepts - Sunil Kumar
Links Mentioned in Today's Episode:
The Podlets on Twitter — https://twitter.com/thepodlets
Michael Gasch on LinkedIn — https://de.linkedin.com/in/michael-gasch-10603298
Michael Gasch on Twitter — https://twitter.com/embano1
Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia
Duffie Cooley on LinkedIn — https://www.linkedin.com/in/mauilion
VMware — https://www.vmware.com/
Kubernetes — https://kubernetes.io/
Linux — https://www.linux.org
Brian Grant on LinkedIn — https://www.linkedin.com/in/bgrant0607
Kafka — https://kafka.apache.org/
Lamport Article — https://lamport.azurewebsites.net/pubs/time-clocks.pdf
Designing Data-Intensive Applications — https://www.amazon.com/Designing-Data-Intensive-Applications-Reliable-Maintainable-ebook/dp/B06XPJML5D
Designing Distributed Systems — https://www.amazon.com/Designing-Distributed-Systems-Patterns-Paradigms/dp/1491983647
Papers We Love Meetup — https://www.meetup.com/papers-we-love/
The Systems Bible — https://www.amazon.com/Systems-Bible-Beginners-Guide-Large/dp/0961825170
Enterprise Integration Patterns — https://www.amazon.com/Enterprise-Integration-Patterns-Designing-Deploying/dp/0321200683
Transcript: EPISODE 12 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [00:00:41] CC: Hi, everybody. Welcome back. This is Episode 12, and we are going to talk about distributed systems without a degree or even with a degree, because who knows how much we learn in university. I am Carlisia Campos, one of your hosts. Today, I also have Duffie Cooley. Say hi, Duffie. [00:01:02] DC: Hey, everybody. [00:01:03] CC: And a new host for you, and this is such a treat. Michael Gasch, please tell us a little bit of your background. [00:01:11] MG: Hey! Hey, everyone! Thanks, Carlisia. Yes. So I'm new to the show. I just want to keep it brief because I think over the show we'll discuss our backgrounds a little bit further. So right now, I'm with VMware. So I've been with VMware almost for five years.
Currently, I'm in the office of the CTO. I'm a platform architect in the office of the CTO and I mainly use Kubernetes on a daily basis from an engineering perspective. So we build a lot of prototypes based on customer input or ideas that we have, and we work with different engineering teams. Kubernetes has become kind of my bread and butter, but lately more from a consumer perspective, like developing with Kubernetes or against Kubernetes, instead of the former way of mostly being around implementing and architecting Kubernetes. [00:01:55] CC: Nice. Very impressive. Duffie? [00:01:58] MG: Thank you. [00:01:59] DC: Yeah. [00:02:00] CC: Let's give the audience a little bit of your backgrounds. We've done this before but just to frame the episode, so people will know where we're coming from on distributed systems. [00:02:13] DC: Sure. In my experience, I spent – I don't have a formal education history. I spent most of my time kind of like in high school. Then from there, basically worked into different systems administration, network administration, network architect, and up into virtualization and now containerization. So I've got a pretty hands-on kind of bootstrap experience around managing infrastructure, both at small scale, inside of offices, and all the way up to very large scale, working for some of the larger companies here in Silicon Valley. [00:02:46] CC: All right. My turn I guess. So I do have a computer science degree but I don't feel that I really went deep at all in distributed systems. My degree is also from a long time ago. So mainly, what I do know now is almost entirely from hands-on work experience. Even so, I think I'm very much lacking and I'm very interested in this episode, because we are going to go through some great resources that I am also going to check out later. So let's get this party started. [00:03:22] DC: Awesome. So you want to just talk about kind of the general ideas behind distributed systems and like how you became introduced to them or like where you started in that journey? [00:03:32] CC: Yeah. Let's do that. [00:03:35] DC: My first experience with the idea of distributed systems was in using them before I knew that they were distributed systems, right? One of the very first distributed systems as I look back on it that I ever actually spent any real time with was DNS, which I consider to be something of a distributed system. If you think about it, they have name servers, they have a bunch of caching servers. They solve many of the same sorts of problems. In a previous episode, we talked about how networking – just the general idea of networking and handling large-scale architecting of networks – also in a way has a lot of analogues into distributed systems. For me, I think working with and helping solve the problems that are associated with them over time gave me a good foundational understanding for when we were doing distributed systems as a thing later on in my career. [00:04:25] CC: You said something that caught my interest, and it's very interesting, because obviously for people who have been writing algorithms, writing papers about distributed systems, they're going to go yawning right now, because I'm going to say the obvious. As you start your journey programming, you read job requirements. You read "must – should know distributed systems." Then I go, "What is a distributed system?
What do they really mean?" Because, yes, we understand apps talk to apps and then there is an API, but there's always for me at least a question at the back of my head: is that all there is to it? It sounds like it should be a lot more involved and complex and complicated than just having an app talking to another app. In fact, it is, because there are so many concepts and problems involved in distributed systems, right? From timing, clocks, and sequence, and networking, and failures, how do you recover. There is a whole world in how do you log this properly, how do you monitor. There's a whole world that revolves around this concept of systems residing in different places and [inaudible 00:05:34] each other. [00:05:37] DC: I think you made a very good point. I think this is sort of like there's an analog to this in containers, oddly enough. When people say, "I want a container within and then the orchestration systems," they think that that's just a thing that you can ask for. That you get a container and inside of that is going to be your file system and it's going to do all those things. In a way, I feel like that same confusion is definitely related to distributed systems. When people are interacting with distributed systems today or if I were to ask like 50 people what a distributed system is, I would probably get 50 different answers. I think that you got a pretty concise definition there in that it is a set of systems that intercommunicate to perform some function. That's it at its baseline. I feel like that's a pretty reasonable definition of what distributed systems are, and then we can figure out from there like what functions are they trying to achieve and what are some of the problems that we're trying to solve with them. [00:06:29] CC: Yeah. That's what it's all about in my head is solving the problems because at the beginning, I was thinking, "Well, it must be just about communicating and making things work." It's the opposite of that. It's like that's a given. When a job says you need to understand about distributed systems, what they are really saying is you need to know how to deal with failures, not just to make it work. Making it work is sort of the easy part, but the whole world of where the failures can happen, how do you handle it, and that, to me is where needing to know distributed systems comes in handy. In a couple different things, like at the top layer or 5% is knowing how to make things work, and 95% is knowing how to handle things when they don't work, because it's inevitable. [00:07:19] DC: Yeah, I agree. What do you think, Michael? How would you describe the context around distributed systems? What was the first one that you worked with? [00:07:27] MG: Exactly. It's kind of similar to your background, Duffie, which is no formal degree or education on computer science right after high school and jumping into kind of my first job, working with computers, computer administration. I must say that from the age of I think seven or so, I was interested in computers and all that stuff but more from a hardware perspective, less from a software development perspective. So my take always was on disassembling the pieces and building my own computers rather than writing programs. In the early days, that just was me. So I completely almost missed the whole education and principles and fundamentals of how you would write a program for a single computer and then obviously also for how to write programs that run across a network of computers.
So over time, as I progressed in my career, especially kind of in the first job, which was like seven years of different Linux systems, Linux administration, I kind of – Like you, Duffie, I dealt with distributed systems without necessarily knowing that I was dealing with distributed systems. I knew that it was mostly storage systems, Linux file servers, but distributed file servers. Samba, if some of you recall that project. So I knew that things could fail. I knew it could fail, for example, or I knew it could not be writable, and so a client might be stuck – but not necessarily, I think, directly related to the fundamentals of how distributed systems work or don't work. Over time, and this is really why I appreciate the Kubernetes project and community, I got more questions, especially when this whole container movement came up. I got so many questions around how does that thing work. How does scheduling work? Because scheduling kind of was close to my interest in hardware design and low-level details. But I was looking at Kubernetes like, "Okay. There is the scheduler." In the beginning, the documentation was pretty scarce around the implementation and all the controllers and what's going on. So I had to – I listened to a lot of podcasts and Brian Grant's great talks and different shows that he gave from the Kubernetes space and other people there as well. In the end, I had more questions than answers. So I had to dig deeper. Eventually, that led me to a path of wanting to understand more formal theory behind distributed systems by reading the papers, reading books, taking some online classes just to get a basic understanding of those issues. So I got interested in resource scheduling in distributed systems and consensus. So those were two areas that kind of caught my eye: "What is it? How do machines agree in a distributed system if so many things can go wrong?" Maybe we can explore this later on. So I'm going to park this for a bit. But back to your question, which was kind of a long-winded answer or road to answering your question, Duffie. For me, a distributed system is like this kind of coherent network of computer machines that from the outside, to an end user or to another client, looks like one gigantic big machine that is [inaudible 00:10:31] to run as fast, and that performs efficiently. It has a lot of characteristics and properties that we want from our systems that a single machine usually can't handle. But it looks like it's a big single machine to a client. [00:10:46] DC: I think that – I mean, it is interesting like, I don't want to get into – I guess this is probably not just a distributed systems talk. But obviously, one of the questions that falls out for me when I hear that answer is then what is the difference between a microservice architecture and distributed systems, because I think it's – I mean, to your point, the way that a lot of people learn to develop software, it's like we're going to develop a monolithic application just by nature. We're going to solve a software problem using code. Then later on, when we decide to actually scale this thing or understand how to better operate it under a significant load, then we start thinking about, "Okay. Well, how do we have to architect this differently in such a way that it can support that load?" That's where I feel like the beams cut across, right? We're suddenly in a world where you're not only just talking about microservices.
You’re also talking about distributed systems because you’re going to start thinking about how to understand transactionality throughout that system, how to understand all of those consensus things that you're referring to. How do they affect it when I add mister network in there? That’s cool. [00:11:55] MG: Just one comment on this, Duffie, which took me a very long time to realize, which is coming – From my definition of what a distributed system is like this group of machines that they perform work in a certain sense or maybe even more abstracted like at a bunch of computers network together. What I kind of missed most of the time, and this goes back to the DNS example that you gave in the beginning, was the client or the clients are also part of this distributed system, because they might have caches, especially in DNS. So you always deal with this kind of state that is distributed everywhere. Maybe you don't even know where it kind of is distributed, and the client kind of works with a local stale data. So that is also part of a distributed system, and something I want to give credit to the Kafka community and some of the engineers on Kafka, because there was a great talk lately that I heard. It’s like, “Right. The client is also part of your distributed system, even though usually we think it's just the server. That those many server machines, all those microservices.” At least I missed that a long time. [00:12:58] DC: You should put a link to that talk in our [inaudible 00:13:00]. That would be awesome. It sounds great. So what do you think, Carlisia? [00:13:08] CC: Well, one thing that I wanted to mention is that Michael was saying how he’s been self-teaching distributed systems, and I think if we want to be competent in the area, we have to do that. I’m saying this to myself even. It’s very refreshing when you read a book or you read a paper and you really understand the fundamentals of an aspect of distributed system. A lot of things fall into place in your hands. I’m saying this because even prioritizing reading about and learning about the fundamentals is really hard for me, because you have your life. You have things to do. You have the minutiae in things to get done. But so many times, I struggle. In the rare occasions where I go, “Okay. Let me just learn this stuff trial and error,” it makes such a difference. Then once you learn, it stays with you forever. So it’s really good. It’s so refreshing to read a paper and understand things at a different level, and that is what this episode is. I don’t know if this is the time to jump in into, “So there are our recommendations.” I don't know how deep, Michael, you’re going to go. You have a ton of things listed. Everything we mention on the show is going to be on our website, on the show notes. So nobody needs to be necessarily taking notes. Anything thing I wanted to say is it would be lovely if people would get back to us once you listened to this. Let us know if you want to add anything to this list. It would be awesome. We can even add it to this list later and give a shout out to you. So it’d be great. [00:14:53] MG: Right. I don’t want to cover this whole list. I just wanted to be as complete as possible about a stuff that I kind of read or watched. So I just put it in and I just picked some highlights there if you want. [00:15:05] CC: Yeah. Go for it. [00:15:06] MG: Yeah. Okay. Perfect. 
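[Editor's note: a minimal sketch, in Python, of the point Michael makes above – a caching client, as in DNS, is itself part of the distributed system, because it can keep serving stale state after the authority has changed. The class and names here are illustrative, not taken from any real DNS library.]

    import time

    class CachingClient:
        """A client that caches lookups with a TTL, like a DNS stub resolver."""

        def __init__(self, server, ttl_seconds):
            self.server = server     # dict standing in for the authoritative server
            self.ttl = ttl_seconds
            self.cache = {}          # name -> (value, expiry time)

        def lookup(self, name):
            value, expires = self.cache.get(name, (None, 0.0))
            if time.time() < expires:
                return value         # may be stale if the authority changed meanwhile
            value = self.server[name]                           # the "network" fetch
            self.cache[name] = (value, time.time() + self.ttl)
            return value

    server = {"example.com": "1.1.1.1"}
    client = CachingClient(server, ttl_seconds=60)
    print(client.lookup("example.com"))  # 1.1.1.1, fetched and cached
    server["example.com"] = "2.2.2.2"    # the authority changes the record
    print(client.lookup("example.com"))  # still 1.1.1.1 - the client holds stale state

[The second lookup succeeds, but with data the rest of the system no longer agrees on, which is exactly why the client has to be counted as a participant.]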
Honestly, even though it's not the first in the list, the first thing that I read – so maybe from kind of my history of how I approach things – was searching for how do computers work and what are some of the issues and how do computers and machines agree. Obviously, the classic paper that I read was the Lamport paper on "Time, Clocks, and the Ordering of Events in a Distributed System". I want to be honest. The first time I read it, I didn't really get the full essence of the paper, because of the proof in there. The mathematical proof for me didn't click immediately, and there were so many things and concepts and physics and time that were thrown at me where I was looking for answers and I had more questions than answers. But this is not on Leslie. This is more like at the time I just wasn't prepared for how deep the rabbit hole goes. So I thought, if someone asked me for – I only have time to read one book out of this huge list that I have there and all the other resources. Which one would it be? Which one would I recommend? I would recommend Designing Data-Intensive Applications by Martin Kleppmann. I've been following his blog posts and some partial releases that he's done before fully releasing that book, which took him more than four years to release. It's kind of almost the Bible, the state-of-the-art Bible when it comes to all concepts in distributed systems. Obviously, consensus, network failures, and all that stuff, but then also leading into modern data streaming and data platform architectures inspired by, for example, LinkedIn and other communities. So that would be the book that I would recommend to someone who only has time to read one book. [00:16:52] DC: That's a neat approach. I like the idea of like if you had one thing, if you have one way to help somebody ramp on distributed systems and stuff, what would it be? For me, it's actually I don't think I would recommend a book, oddly enough. I feel like I would actually – I'd probably drive them toward a project, like the Kind [inaudible 00:17:09] project, and say, "This is a distributed system all by itself." Start tearing it apart to pieces and seeing how they work and breaking them and then exploring and kind of just playing with the parts. You can do a lot of really interesting things. There is actually another book in your list, written by Brendan Burns – Designing Distributed Systems, I think it's called. In that book, I think he actually uses Kubernetes as a model for how to go about achieving these things, which I think is incredibly valuable, because it really gets into some of the more stable distributed systems patterns that are around. I feel like that's a great entry point. So if I had one thing, if I had to pick one way to help somebody or to push somebody in the direction of trying to learn distributed systems, I would say identify those distributed systems that maybe you're already aware of and really explore how they work and what the problems with them are and how they went about solving those problems. Really dig into the idea of it. It's something you could put your hands on and play with. I mean, Kubernetes is a great example of this, and this is actually why I referred to it. [00:18:19] CC: The way that works for me when I'm learning something like that is to really think about where the boundaries are, where the limitations are, where the tradeoffs are. If you can take a smaller system, maybe something like the Kind project, and identify what those things are.
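[Editor's note: since the "Time, Clocks" paper comes up here, this is a minimal sketch of its central device, the Lamport logical clock: increment before every local event, and on receipt jump to the maximum of your own clock and the message's timestamp, plus one. The Process class and message shape are illustrative, not taken from the paper.]

    class Process:
        """One process carrying a Lamport logical clock (Lamport, 1978)."""

        def __init__(self, name):
            self.name = name
            self.clock = 0

        def local_event(self):
            self.clock += 1              # rule 1: tick on every local event
            return self.clock

        def send(self):
            self.clock += 1              # sending is an event too
            return self.clock            # the timestamp travels with the message

        def receive(self, msg_timestamp):
            # rule 2: jump past the sender's timestamp, then tick
            self.clock = max(self.clock, msg_timestamp) + 1
            return self.clock

    a, b = Process("A"), Process("B")
    a.local_event()          # A's clock: 1
    t = a.send()             # A's clock: 2; the message carries timestamp 2
    b.local_event()          # B's clock: 1
    b.receive(t)             # B's clock: max(1, 2) + 1 = 3
    print(a.clock, b.clock)  # 2 3

[The guarantee runs one way: if event x happened before event y, x's timestamp is lower – not the converse – and no shared physical clock is needed.]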
If you can't, then ask around. Ask someone. Google it. I don't know. Maybe it will be a good episode topic for us to do that – this kind of exercise of mapping things out. So maybe we can understand better and help people understand things better. So mainly, yeah, try to identify what the distributed systems are. But for people who don't even know what they could be, it's harder to identify them. I don't know what a good strategy for that would be, because you can read about distributed systems and then you can go and look at a project. How do you map the concept to learning to what you're seeing in the code base? For me, that's the hardest thing. [00:19:26] MG: Exactly. Something where I had a related experience was when I went into software development, without having formal education on algorithms and data structures. Sometimes in your head, you have the problem statement and you're like, "Okay. I would do it like that." But you don't know the word that describes, for example, a heap structure or a queue, because no one ever told you that is a heap, that is a queue, or that is a stack. So, for me, reading the book was a bit easier. Even though I have done distributed systems, if you will, administration for many years, many years ago, I didn't realize that it was a distributed system because I never had this definition or I never had those failure scenarios in mind and I never had a word for consensus. So how would I search for something like how do machines agree? I mean, if you put that on Google, then likely you will get a lot of stuff. But if you put in consensus algorithm, likely you get a good hit on what the answer should be. [00:20:29] CC: It is really problematic when we don't know the names of things because – What you said is so right, because we are probably doing a lot of distributed systems without even knowing that that's what it is. Then we go in the job interview, and people are, "Oh! Have you done a distributed system?" No. You have but you just don't know how to name things. But that's one – [00:20:51] DC: Yeah, exactly. [00:20:52] CC: Yeah. Right? That's one issue. Another issue, which is a bigger issue though, is at least that's how it is for me. I don't want to speak for anybody else but for me definitely. If I can't name things and I face a problem and I solve it, every time I face that problem it's a one-off thing because I can't map it to a higher concept. So every time I face that problem, it's like, "Oh!" It's not like, "Oh, yeah!" If this is this kind of problem, I have a pattern. I'm going to use that for this problem. So that's what I'm saying. Once you learn the concept, you need to be able to name it. Then you can map that concept to problems you have. All of a sudden, if you have like three things [inaudible 00:21:35] use to solve this problem, because as you work with computers, coding, it's like you see the same thing over and over again. But when you don't understand the fundamentals, things are just like – It's a bunch of different one-offs. It's like when you have an argument with your spouse or girlfriend or boyfriend. Sometimes, it's like you're arguing 10 times in a month and you thought, "Oh! I had 10 arguments." But if you stop and think about it, no. We had one argument 10 times. It's very different than having 10 problems versus having 1 problem 10 times, if that makes sense. [00:22:12] MG: It does. [00:22:11] DC: I think it does, right? [00:22:12] MG: I just want to agree.
[00:22:16] DC: I think it does make sense. I think it’s interesting. You’ve highlighted kind of an interesting pattern around the way that people learn, which I think is really interesting. That is like some people are able to read about patterns or software patterns or algorithms or architectures and have that suddenly be an index of their heads. They can actually then later on correlate what they've read with the experience that they’re having around the things they're working on. For some, it needs to be hands-on. They need to actually be able to explore that idea and understand and manipulate it and be able to describe how it works or functions in person, in reality. They need to have that hands-on like, “I need to touch it to understand it,” kind of experience. Those people also, as they go through those experiences, start building this index of patterns or algorithms in their head. They have this thing that they can correlate to, right, like, “Oh! This is a time problem,” or, “This is a consensus problem,” or what have you, right? [00:23:19] CC: Exactly. [00:23:19] DC: You may not know the word for that saying but you're still going to develop a pattern in your mind like the ability to correlate this particular problem with some pattern that you’ve seen before. What's interesting is I feel like people have taken different approaches to building that index, right? For me, it’s been troubleshooting. Somebody gives me a hard problem, and I dig into it and I figure out what the problem is, regardless of whether it's to do with distributed systems or cooking. It could be anything, but I always want to get right in there and figure out what that problem and start building a map in my mind of all of the players that are involved. For others, I feel like with an educational background, if you have an education background, I think that sometimes you end up coming to this with a set of patterns already instilled that you understand and you're just trying to apply those patterns to the experience you’re having instead. It’s just very – It’s like horse before the cart or cart before the horse. It’s very interesting when you think about it. [00:24:21] CC: Yes. [00:24:22] MG: The recommendation that I just want to give to people that are like me who like reading is that I went overboard a bit in the beginnings because I was so fascinated by all the stuff, and it went down the rabbit hole deeper, deeper, deeper, deeper. Reading and reading and reading. At some point, even coming to weird YouTube channels that talk about like, “Is time real and where does time emerge from?” It became philosophical even like the past where I went to. Now, the thing is, and this is why I like Duffie’s approach with like breaking things and then undergo like trying to break things and understanding how they work and how they can fail is that immediately you practice. You’re hands-on. So that would be my advice to people who are more like me who are fascinated by reading and all the theory that your brain and your mind is not really capable of kind of absorbing all the stuff and then remembering without practicing. Practicing can be breaking things or installing things or administrating things or even writing software. But for me, that was also a late realization that I should have maybe started doing things earlier than the time I spent reading. [00:25:32] CC: By doing, you mean, hands-on? [00:25:35] MG: Yeah. [00:25:35] CC: Anything specific that you would have started with? [00:25:38] MG: Yes. 
On Kubernetes – So going back those 15 years to my early days of Linux and Samba, which is a project. At the time, I think it was written in C or C++. But the problem was I wasn't able to read the code. So the only thing that I had by then was some mailing lists and asking questions and not even knowing which questions to ask because of the lack of words of understanding. Now, fast-forward into Kubernetes' time, which got me deeper into distributed systems, I still couldn't read the code because I didn't know [inaudible 00:26:10]. But I forced myself to read the code, which helped a little bit for myself to understand what was going on because the documentation by then was lacking. These days, it's easier, because you can just install [inaudible 00:26:20] way easier today. The hands-on piece, I mean. [00:26:23] CC: You said something interesting, Michael, and I have given this advice before because I use this practice all the time. It's so important to have a vocabulary. Like you just said, I didn't know what to ask because I didn't know the words. I practice this all the time. To people who are in this position of distributed systems or whatever it is or something more specific that you are trying to learn, try to expose yourself to the words, because our brains are amazing. Once you get exposure, it's like your brain works in the background. All of a sudden, you go, "Oh, yeah! I know this word." So podcasts are great for me. If I don't know something, I will look for a podcast on the subject and I start listening to it. As the words get repeated, just contextually – I don't have to go and get a degree or anything – just by listening to the words being spoken in context, you absorb the meaning of them. So podcasting is great, or YouTube, or anything that you can listen to. The same goes for reading too, of course. The best thing is talking to people. But, again, it's really – Sometimes, it's not trivial to put yourself in positions where people are discussing these things. [00:27:38] DC: There are actually a number of Meetups here in the Bay Area, and there's a number of Meetups – That whole Meetup thing is sort of nationwide across the entire US and around the world it seems like now lately. Those Meetups, I feel like there are a number of Meetups in different subject areas. There's one here in the Bay Area called Papers We Love, where they actually do explore interesting technical papers, which are obviously a great place to learn the words for things, right? This is actually where those words are being defined, right? When you get into the consensus stuff, they really get into – One example is Raft. There are many papers on Raft and many papers on multiple things that get into consensus. So definitely, whether you explore a Meetup on a distributed system or in a particular application or in a particular theme like Kubernetes, those things are great places just to kind of get more exposure to what people are thinking about in these problems. [00:28:31] CC: That is such a great tip. [00:28:34] MG: Yeah. The podcast tip is twice as good as well, because for people, non-natives – English speakers, I mean. Oh, people. Not speakers. People. The thing is that the word you're looking for might be totally different than the English word. For example, consensus in German has this totally different meaning. So if I would look that up in German, likely I would find nothing or not really related at all. So you have to go through translation and then finding the stuff.
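[Editor's note: "how do machines agree" is the consensus question named above. A full algorithm like Raft doesn't fit in a note, but the majority-quorum rule at its heart does: a value counts as committed only once a strict majority of replicas has acknowledged it, so any two majorities overlap and two conflicting values can never both commit. A sketch with invented names – real Raft adds terms, leaders, and logs on top of this.]

    def is_committed(acks, cluster_size):
        """A write is committed once a strict majority has acknowledged it."""
        return len(acks) >= cluster_size // 2 + 1

    replicas = ["r1", "r2", "r3", "r4", "r5"]
    print(is_committed({"r1", "r3", "r4"}, len(replicas)))  # True: 3 of 5 is a majority
    print(is_committed({"r2"}, len(replicas)))              # False: one ack can be lost in a partition

[The overlap property is why a 5-node cluster tolerates 2 failures: the surviving 3 still form a majority that intersects every past majority.]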
So what you said, Duffie, with PWL, Papers We Love, or podcasts – those words often are in English in those podcasts, and they say words like consensus or sharding or partitioning. Those are the words that you can at least look up, like what does it mean. That's what I did as well thus far. [00:29:16] CC: Yes. I also wanted to do a plus one for Papers We Love. They are everywhere and they also have an online version of the Papers We Love Meetup, and a lot of the local ones film their meetups. So you can go through the history and see if they talked about any paper that you are interested in. Probably, I'm sure multiple locations talk about the same paper, so you can get different takes too. It's really, really cool. Sometimes, it's completely obscure like, "I didn't get a word of what they were saying. Not one. What am I doing here?" But sometimes, they talk about things where you at least know what the thing is and you get like 10% of it. But some papers you don't. People who deal with papers day in and day out, it's very much – I don't know. [00:30:07] DC: It's super easy when going through a paper like that to have the imposter syndrome wash over you, right, because you're like – [00:30:13] CC: Yes. Thank you. That's what I wanted to say. [00:30:15] DC: I feel like I've been in this for 20 years. I probably know a few things, right? But in talking about reading this consensus paper going, "Can I buy a vowel? What is happening?" [00:30:24] CC: Yeah. Can I buy a vowel? That's awesome, Duffie. [00:30:28] DC: But the other piece I want to call out to your point, which I think is important, is that some people don't want to go out and be there in person. They don't feel comfortable or safe exploring those things in person. So there are tons of resources, like you have just pointed out, like the online version of Papers We Love. You can also sign into Slack and just interact with people via text messaging, right? There's a lot of really great resources out there for people of all types, including the amount of time that you have. [00:30:53] CC: For Papers We Love, it's like going to language class. If you go and take a class in Italian, your first day, even though that is going to be super basic, you're going to be like, "What?" You'll go back in your third week. You start, "Oh! I'm getting this." Then a month, three months, "Oh! I'm starting to be competent." So you go once. You're going to feel lost and experience imposter syndrome. But you keep going, because that is a format. First, you start absorbing what the format is, and that helps you understand the content. So once your mind absorbs the format, you're like, "Okay. Now I have – I know how to navigate this. I know what's coming next." So you don't have to focus on that. You start focusing on the content. Then little by little, you become more proficient in understanding. Very soon, you're going to be willing to write a paper. I'm not there yet. [00:31:51] DC: That's awesome. [00:31:52] CC: At least that's how I think it goes. I don't know. [00:31:54] MG: I agree. [00:31:55] DC: It's also changed over time. It's fascinating. If you read papers from like 20 years ago and you read papers that are written more recently, it's interesting. The papers have changed their language when considering competition. When you're introducing a new idea with a paper, frequently you are introducing it into a market full of competition.
You're being very careful about the language, almost in a way to complicate the idea rather than to make it clear, which is challenging. There are definitely some papers that I've read where I was like, "Why are you using so many words to describe this simple idea?" It makes no sense, but yeah. [00:32:37] CC: I don't want to make this episode all about Papers We Love. It was so good that you mentioned that, Duffie. It's really good to be in a room, or to be watching something online, where you see people asking questions and people go, "Oh! Why is this thing like this? Why is X like this," or, "Why is Y doing like this?" Then you go, "Oh! I didn't even think that X was important. I didn't even know that Y was important." So you start picking up what the important things are, and that's what makes it click: now you're hooking into the important concepts, because people who know more than you are pointing them out and asking questions. So you start paying attention to what the main things you should be paying attention to are, which is different from reading the paper by yourself. It's just a ton of content that you need to sort through. [00:33:34] DC: Yeah. I frequently self-describe as a perspective junkie, because I feel like for any of us really to learn more about a subject that we feel we understand, we need the perspective of others to really engage, to expand our understanding of that thing. I feel like I know how to make a peanut butter and jelly sandwich. I've done it a million times. It's a solid thing. But then I watch my kid do it and I'm like, "I hadn't thought of that problem." [inaudible 00:33:59], right? This is a great example of that. Those communities like Papers We Love are a great opportunity to understand the perspective of others around these hard ideas. When we're trying to understand complex things like distributed systems, this is where it's at. This is actually how we go about achieving this. There is a lot that you can do on your own but there is always going to be more that you can do together, right? You can always do more. You can always understand this idea faster. You can understand the complexity of a system and how to break it down into these things by exploring it with other people. That's I feel like – [00:34:40] CC: That is so well said, so well said, and it's the reason for this show to exist, right? We come on a show and we give our perspectives, and people get to learn from people with different backgrounds, what their takes are on distributed systems, cloud native. So this was such a major plug for the show. Keep coming back. You're going to learn a ton. Also, it was funny that you – It was the second time you mentioned cooking, made a cooking reference, Duffie, which brings me to something I want to make sure I say on this episode. I added a few things for reference, three books. But the one that I definitely would recommend starting with is The Systems Bible by John Gall. This book is so cool, because it helps you see everything through systems. Everything is a system. A conversation can be a system. An interaction between two people can be a system. I'm not saying this book says that. It's just my translation, and that you can look – Cooking is a system. There is a process. There is a sequence. It's really, really cool and it really helps to have things framed in this way and then go out and read the other books on systems. I think it helps a lot.
This is definitely what I am starting with and what I would recommend people start with, The Systems Bible. Did you two know this book? [00:36:15] MG: I did not. I don't. [00:36:17] DC: I'm not aware of it either, but I really appreciate the idea. I do think that that's true. If you develop a skill for understanding systems as they are, what you're frequently developing is the ability to recognize patterns, right? [00:36:32] CC: Exactly. [00:36:32] DC: You could recognize those patterns in anything. [00:36:37] MG: Yeah. That's a good segue to something that came to my mind. Recently, I gave a talk on event-driven architectures. For someone who's not a software developer or architect, it can be really hard to grasp all those concepts of asynchrony and eventual consistency and idempotency. There are so many words where it's like, "What is this all – It sounds weird, way too complex." But I was reading a book some years ago by Gregor Hohpe. He's the guy behind Enterprise Integration Patterns. That's also a book that I have on my list here. He said, "Your barista doesn't use two-phase commit." So he was basically making this analogy: he was in a coffee shop, and he was just looking at the process of how the barista makes the coffee, how you pay for it, and all the things that can go wrong while your coffee is brewed and served to you. So he was making this relation between the real world, life and human society, and computer systems. That's where it clicked for me, and I was like, "So many problems we solve every day, for example, agreeing on a time when we should meet for dinner, or cooking, are consensus problems, and we solve them." We even solve them in the case of failure. I might not be able to call Duffie, because he is not available right now, so somehow, we figure it out. I always thought that those problems just existed in computer science and distributed systems, but I realized actually that's just a subset of the real world as it is. Looking at those problems through the lens of your daily life, when you get up and all that stuff, there are so many things that are related to computer systems. [00:38:13] CC: Michael, I missed it. Was it an article you read? [00:38:16] MG: Yes. I need to put that in there as well. Yeah. It's a plug. [00:38:19] CC: Please put that in there. Absolutely. I am far from being any kind of expert in distributed systems, but I have noticed that I have caught myself using systems thinking even for complicated conversations. Even in my personal life, I started approaching things in a systems-oriented way. Just a high-level example: when I am working with systems, I can approach them from the beginning or the end. It's like a puzzle, putting the puzzle together, right? Sometimes, it starts from the middle. Sometimes, it starts from the edges. When I'm having conversations where I need to be very strategic, like I have one shot – let's say maybe I'm in a school meeting and I have to reach a consensus or have a solution or have a plan of action – I have to ask the right questions. My prior self would do things linearly. Historically, like, "Let's go from the beginning and work through to the end." Now, I don't do that anymore. Not necessarily. Sometimes, I'm like, "Let me maybe ask the last question I would ask and see where it leads," and just approach things in a different way. I don't know if this is making sense. [00:39:31] MG: It does. It does. [00:39:32] CC: But my thinking has changed. The way I see the possibilities is not a linear thing anymore. 
I see how you can truly switch things around. I use this in programming a lot and also in writing. Sometimes, when you're a beginner writer, you start at the top and you go down to the conclusion. Sometimes, I start in the middle and go up, right? So you can start anywhere. It's beautiful; it just gives you so many more options. Or maybe I'm just crazy. Don't listen to me. [00:40:03] DC: I don't think you're crazy. I was going to say, one of the funny things about Michael's point and your point both is that, in a way, they have kind of referred to Conway's law, the idea that people will build systems in the way that they communicate. So this is actually – It totally brings it back to that same point, right? We by nature will build systems that we can understand, because that is the constraint in which we have to work, right? So it's very interesting. [00:40:29] CC: Yeah. But it's an interesting thing, because we are [inaudible 00:40:32] by the way we are forced to work. For example, I work with constraints, and what I'm saying is that that has been influencing my way of thinking. So, yes, I build systems in the way I think, but also, because of the constraints that I'm dealing with and the tradeoffs I need to make, that also turns around and influences the way I think, the way I see the world and the rest of the systems and all the rest of the world. Of course, as I change my thinking, possibly you can theorize that you go back and apply that – apply things that you learn outside of your work back to your work. It's a beautiful back-and-forth, I think. [00:41:17] MG: I had the same experience when I had to design my first API and think of, "Okay. What would the consumer contract be, and what would a consumer expect me to deliver in response, and so on?" I was forcing myself to be explicit in communicating, not throwing everything back at the client and confusing them, but being very explicit and precise. Also, in everyday communication, when you talk to people, being explicit and precise really helps to avoid a lot of problems and trouble, be it in a partnership, or amongst friends, or at work. This is what I took from computer science back into my real world: taking all those perceptions, perceiving things from a different perspective, and being more precise and explicit in how I respond or communicate. [00:42:07] CC: My take on what you just said, Michael, is that we design systems thinking about how they are going to fail. We know they are going to fail. We're going to design for that. We're going to implement for that. In real life, for example, if I need to get an agreement from someone, I try to understand the person's thinking and just go – I just had this huge thing this week, so this is on my mind; I'm not constantly thinking about this, I'm not crazy like that, just a little bit crazy – it's like, "How does this person think? What do they need to know? How far can I push?" Right? We need to make a decision quickly, so the approach is everything, and sometimes you only get one shot, so yeah. I mean, correct me if I'm wrong. That's how I heard or interpreted what you just said. [00:42:52] MG: Yeah, absolutely. Spot on. Spot on. So I'm not crazy either. [00:42:55] CC: Basically, I think we ended up turning this episode into a little bit of, "Here are great references," and also a huge endorsement for really going deep into distributed systems, because it's going to be good for your job. It's going to be good for your life. 
It's going to be good for your health. We are crazy. [00:43:17] DC: I'm definitely crazy. You guys might be. I'm not. All right. So we started this episode with the idea of coming to learn distributed systems, perhaps without a degree or without a formal education in it. We talked about a range of different ideas on that subject, like the different approaches that each of us took and how each of us sees the problem. Is there any important point that either of you wants to throw back into the mix here or bring up in relation to that? [00:43:48] MG: Well, what I take from this episode, being my first episode and getting to know your backgrounds, Duffie and Carlisia, is that whoever is going to listen to this episode, whatever background you have, even though you might not be in computer systems or the industry at all, I think we three have all proved that whatever background you have, if you're just a little bit curious and maybe a little bit crazy, you can totally go down the rabbit hole in distributed systems and get totally excited about it. There's no need to have a formal education or a degree to enter this world. It might help, but it's not the high bar that I perceived it to be 10 years ago, for example. [00:44:28] CC: Yeah. That's a good point. My takeaway is: it always puzzled me how some people are so good and experienced and such experts in distributed systems. I always look at myself. It's like, "How am I lacking?" It's like, "What memo did I miss? What class did I miss? What project did I not work on to get the experience?" What I'm seeing is you just need to put yourself in that place. You need to do the work. But the good news is, achieving competency in distributed systems is doable. [00:45:02] DC: My takeaway is, as we discussed before, I think that there is no one thing that comprises a distributed system. It is a number of things, right, basically a number of behaviors or patterns that we see that comprise what a distributed system is. So when I hear people say, "I'm not an expert in distributed systems," I think, "Well, perhaps you are and maybe you just don't know it yet." Maybe there's some particular set of patterns with which you are incredibly familiar. Like, you understand DNS better than the other 20 people in the room. That exposes you to a set of patterns that certainly gives you the capability of saying that you are an expert in that particular set of patterns. So I think, to both of your points, you can enter this space where you want to learn about distributed systems from pretty much any direction. You can learn it from a CIS background. You can come at it with no computer experience whatsoever, and it will obviously take a bit more work. But this is really just about developing an understanding of how these things communicate and the patterns with which they accomplish that communication. I think that's the important part. [00:46:19] CC: All right, everybody. Thank you, Michael Gasch, for being with us now. I hope to – [00:46:25] MG: Thank you. [00:46:25] CC: To see you in more episodes [inaudible 00:46:27]. Thank you, Duffie. [00:46:30] DC: My pleasure. [00:46:31] CC: Again, I'm Carlisia Campos. With us were Duffie Cooley and Michael Gasch. This was episode 12, and I hope to see you next time. Bye. [00:46:41] DC: Bye. [00:46:41] MG: Goodbye. [END OF EPISODE] [00:46:43] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. 
Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]
See omnystudio.com/listener for privacy information.

Journeyman
13: Journey Man - George Lamport

Journeyman

Play Episode Listen Later Dec 15, 2019 25:33


George talks about being bullied at school, what boxing has done for him, and much more.

Había una vez un algoritmo...
Leslie Lamport, distributed systems, and the importance of mathematics in programming | E2

Había una vez un algoritmo...

Play Episode Listen Later Dec 9, 2019 15:54


A podcast dedicated to Leslie Lamport, about his work and vision.

LA PETITE HISTOIRE
The Man Who Inspired Zorro (Part 1): William Lamport

LA PETITE HISTOIRE

Play Episode Listen Later Dec 1, 2019 13:11


For this episode, we thank our friends at Binouze USA, who contributed to enriching this podcast!

Howlin' Hour
Se01.Ep36: Heading to the Grand Final!

Howlin' Hour

Play Episode Listen Later Sep 25, 2019 50:56


Episode 36: The Two Howlers, Rob & Gareth, break down the Wolfpack's comprehensive victory over promotion contenders Toulouse Olympique at Lamport this past weekend. With some excellent performances across the squad, in particular from marquee players like Ricky Leutele, it appears that the Grand Final is very much Toronto's to lose. That being said, Featherstone may very well have something to say after impressing with a big win over the York City Knights. Fev came into the playoffs showing good form, and with every game being a do-or-die situation for them, they have acquitted themselves very handily. The Community Spotlight shines solely and brightly on Canada this week, as the Canada Wolverines & Ravens finished their tour of Bosnia & Serbia with mixed fortunes. Congrats to both, though, for going 4 for 5 on the tour and the efforts they put in. Everyone can be rightly proud of themselves. The Super League playoffs have kicked off in fine fashion, and the guys give their predictions on who they expect to progress to the next round, along with who they think will take on Toronto in the Grand Final on October 5th. So tune in and come run with the Pack this Hunting SZN!

Game Day
Wolfpack CEO Bob Hunter on the excitement of attending a game at Lamport

Game Day

Play Episode Listen Later Aug 29, 2019 8:28


The Chairman and CEO of the Toronto Wolfpack, Bob Hunter, joins Andy McNamara on Game Day. The guys get into how Bob got into rugby after a great career with MLSE, what the most exciting part of attending a Wolfpack game is, and this weekend's special Military Day festivities.

5 Minute Mentor
Think and Write, with Leslie Lamport

5 Minute Mentor

Play Episode Listen Later Aug 14, 2019


Leslie Lamport is an American computer scientist, best known for his seminal work in distributed systems and as the initial developer of the document preparation system LaTeX. He won the 2013 Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages. He devised important algorithms and developed formal modeling and verification protocols that improve the quality of real distributed systems. These contributions have resulted in improved correctness, performance, and reliability of computer systems.

Howlin' Hour
Se01.Ep25: Pepé le Pew visits 'The Den'

Howlin' Hour

Play Episode Listen Later Jul 10, 2019 67:53


Episode 25: A surprise visit from a furry friend just minutes before kick-off provided some entertainment for the crowd at Lamport, and Jefferson Wolf further asserted his dominance over the much-maligned mascot Gordo as Medieval Times made their return for the half-time show. The weather was oppressive, with both teams feeling the effects of the heat and humidity in what was at times a sloppy affair by both squads. Ultimately though, the Wolfpack came away the victors. The spotlight is shone both far and wide on the Community game this week as Paul Buchanan, founder of the Haldimand Wolfpack, joins the Two Howlers for a chat. The Ontario provincial championship concluded over the weekend with the Toronto City Saints; Spanish RL is set for its first 'Origin' game; and south of the border, giant strides are being taken towards the establishment of West Coast RL in California, with a fledgling match-up later this year. The round-up of the Super League & Championship sees some shaking and moving in both competitions. And the mysterious and oft-confusing Championship playoff format comes under scrutiny from Rob & Gareth as they try to make heads or tails of what exactly is going on. So listen in and come run with the Pack this Hunting SZN!

The Progressive Rugby League Podcast
PRL 28/05/19 - Lamport of Call

The Progressive Rugby League Podcast

Play Episode Listen Later May 27, 2019 68:00


Big Al and Johnno (with a cameo from The Slug) are back for another romp around the world of Progressive Rugby League: NRL local derbies, Super League Magic Weekend, PLUS a very special edition of Grounds for Optimism, as we're given the Lamport Stadium experience via Toronto Wolfpack expert Nicholas Mew.

Rugby League in America
Ep. 100, The Pack Is Back!!!

Rugby League in America

Play Episode Listen Later May 1, 2019 24:41


Iron your plaid, wash your mullets & pop open a crisp Molson because the Pack are back...at Lamport!!!

Howlin' Hour
Se01.Ep15: The Home Opener

Howlin' Hour

Play Episode Listen Later May 1, 2019 50:12


Rob is flying solo this episode as Gareth is away on the other side of the world. Recapping an eventful week for the Toronto Wolfpack, including the emergence of a cult hero, "Gordo"; the long-awaited return of Jefferson Wolf; and the near-sellout, record-breaking crowd at Lamport, Rob dissects the big home-opening win vs. the Swinton Lions. After, Rob takes a look at the rest of the League results and previews upcoming opponents Bradford Bulls in next week's home game. Listen in and come run with the Pack this Hunting SZN!

What Bitcoin Did
Whitfield Diffie on the History of Cryptography

What Bitcoin Did

Play Episode Listen Later Feb 1, 2019


Interview location: Palo Alto
Interview date: Thursday 31st Jan, 2019
Company: Cryptomathic
Role: Cryptographer and Security Expert

A number of pioneers developed the cryptographic and technical foundations on which Bitcoin is built. Nick Szabo highlighted a number of these people in his Tweet: "Inventors of the most important technologies in Bitcoin: digital signatures and Merkle trees (Merkle), elliptic curve crypto (Koblitz), malicious-fault-tolerant consensus (Lamport), elliptic curve crypto (independent inventor: Miller)." Many others along the way have contributed to the success of Bitcoin, either through early work or attempts at creating a digital currency, names such as Adam Back, David Chaum and Szabo himself. Whitfield Diffie sits alongside all of these people. His work on cryptography should not be forgotten in the history of Bitcoin, and I had the pleasure of sitting down with him to discuss this, how he was introduced to Bitcoin, and other such topics as privacy and security.

-----

If you enjoy The What Bitcoin Did Podcast you can help support the show by doing the following:
Become a Patron and get access to shows early or help contribute
Make a tip:
Bitcoin: 3FiC6w7eb3dkcaNHMAnj39ANTAkv8Ufi2S
QR Codes: Bitcoin | Ethereum | Litecoin | Monero | ZCash | Ripplecoin
If you do send a tip then please email me so that I can say thank you
Subscribe on iTunes | Spotify | Stitcher | SoundCloud | YouTube | TuneIn | RSS Feed
Leave a review on iTunes
Share the show and episodes with your friends and family
Subscribe to the newsletter on my website
Follow me on Twitter Personal | Twitter Podcast | Instagram | Medium | YouTube
If you are interested in sponsoring the show, you can read more about that here or please feel free to drop me an email to discuss options.

The Bitcoin Game
The Bitcoin Game #60: Dr. Adam Back Part 2 - Liquid

The Bitcoin Game

Play Episode Listen Later Nov 4, 2018 71:04


Welcome to episode 60 of The Bitcoin Game, I'm Rob Mitchell. I'm happy to bring you part two of my interview with Cypherpunk and CEO of Blockstream, Dr. Adam Back. In this episode, we take a deep dive into Liquid, Blockstream's new federated sidechain. There's a lot more to Liquid than I realized, and it's fascinating to hear tons of details about the protocol. I led us astray with some of my questions, but Dr. Back never fails to drop tons of crypto-knowledge. And if you missed the first part of my interview, give episode 59 a listen to hear a great Cypherpunk history lesson.

EPISODE LINKS
First half of our interview (The Bitcoin Game #59): https://letstalkbitcoin.com/blog/post/the-bitcoin-game-59-dr-adam-back
Adam Back (Adam3us) on Twitter: https://twitter.com/adam3us
Dr. Back's Info Page: http://www.cypherspace.org/adam
Blockstream: https://blockstream.com
Lightning Network White Paper by Joseph Poon & Thaddeus Dryja: https://lightning.network/lightning-network-paper.pdf
Duplex Micropayment Channels by Christian Decker & Roger Wattenhofer: https://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf
ETHZ (where Christian Decker earned his Bitcoin PhD): https://www.ethz.ch/de.html
Rusty Russell: https://en.wikipedia.org/wiki/Rusty_Russell
C-Lightning: https://github.com/ElementsProject/lightning
Liquid: https://blockstream.com/liquid
Liquid Assets: https://blockstream.com/2018/07/02/liquid-issued-assets
Confidential Transactions: https://bitcoinmagazine.com/articles/confidential-transactions-how-hiding-transaction-amounts-increases-bitcoin-privacy-1464892525
ERC-20: https://en.wikipedia.org/wiki/ERC-20
Counterparty: https://counterparty.io/docs/faq-xcp
UTXO: https://en.wikipedia.org/wiki/Unspent_transaction_output
Bulletproofs: https://crypto.stanford.edu/bulletproofs https://eprint.iacr.org/2017/1066.pdf
Tether: https://en.wikipedia.org/wiki/Tether_(cryptocurrency)
Proof of Burn: https://www.coinbureau.com/education/proof-of-burn-explained
Public Key vs. Public Address: https://www.reddit.com/r/Bitcoin/comments/3filud/whats_the_difference_between_public_key_and
Lamport Signatures: https://en.wikipedia.org/wiki/Lamport_signature
The Byzantine Generals Problem: https://people.eecs.berkeley.edu/~luca/cs174/byzantine.pdf
Schnorr Signatures: https://en.wikipedia.org/wiki/Schnorr_signature
ECDSA: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm
Denial-of-Service Attack: https://en.wikipedia.org/wiki/Denial-of-service_attack
Tor: https://www.torproject.org
Liquid Block Explorer: https://blockstream.com/2018/08/02/accelerating-liquid-adoption-liquid-block-explorer
NBitcoin by Nicolas Dorier: https://www.codeproject.com/Articles/768412/NBitcoin-The-most-complete-Bitcoin-port-Part-Crypt
Green Address Wallet: https://greenaddress.it
KYC Compliance: https://en.wikipedia.org/wiki/Know_your_customer
Paul Sztorc talks about Sidechains, Drivechain, Liquid: https://letstalkbitcoin.com/blog/post/lets-talk-bitcoin-377-sidechains-drivechains-and-the-apple-store
RootStock: https://en.wikipedia.org/wiki/RootStock
Spark Lightning Wallet by Nadav Ivgi: https://bitcoinmagazine.com/articles/spark-new-gui-lightning-wallet-bitcoin-now-available-download
Lightning Splice-In, Splice-Out (and more): https://medium.com/@pierre_rochard/day-2-of-the-chaincode-labs-lightning-residency-669aecab5f16
SPV Wallet: https://en.bitcoinwiki.org/wiki/Simplified_Payment_Verification
Neutrino Lite Bitcoin Client: https://github.com/lightninglabs/neutrino/blob/master/README.md
Lightning Watchtower: https://www.coindesk.com/laolu-building-watchtower-fight-bitcoin-lightning-fraud/
ABCore by Lawrence Nahum (BTC full node on Android): https://play.google.com/store/apps/details?id=com.greenaddress.abcore
Hardware Wallet: https://en.bitcoin.it/wiki/Hardware_wallet

STAY IN TOUCH
Thanks so much for taking the time to listen to The Bitcoin Game!
https://Twitter.com/TheBTCGame
http://TheBitcoinGame.com
Rob@TheBitcoinGame.com

SPONSORS
BTC Inc is excited to announce its upcoming conference, Distributed Health, November 5 & 6 in Nashville, TN. This is the first conference to bridge the gap between blockchain technology and the healthcare industry. Now in its third year, this two-day event is an opportunity for all members of the ecosystem, including payers, providers, law makers, retailers, investors and innovators, to reshape the future of healthcare. For more information, visit health.distributed.com and use the promo code BTCGAME20 to secure a 20% discount!

While much of a Bitcoiner's time is spent in the world of digital assets, sometimes it's nice to own a physical representation of the virtual things you care about. For just the price of a cup of coffee or two (at Starbucks), you can own the world famous Bitcoin Keychain.
As Seen On: The Guardian • TechCrunch • Engadget • Ars Technica • Popular Mechanics • Infowars • Maxim • Inc. • Vice • RT • Bitcoin Magazine • VentureBeat • PRI • CoinDesk • Washington Post • Forbes • Fast Company
Bitcoin Keychains - BKeychain.com

CREDITS
All music in this episode of The Bitcoin Game was created by Rob Mitchell. The Bitcoin Game box art was created from an illustration by Rock Barcellos.
Bitcoin (Segwit) tipping address: 3AYvXZseExRn3Dum8z9tFUk9jtQK6KMU4g
Note: We've recently migrated our RSS feed (and primary content host) from Soundcloud to Libsyn. So if you notice the Soundcloud numbers have dropped off recently, that's the reason.
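Since Lamport signatures appear in the episode links above, here is a minimal, illustrative sketch of the scheme in TypeScript, using Node's built-in crypto module. The function names are ours, and a real system should use a vetted library; this is just to show the idea.

import { createHash, randomBytes } from "node:crypto";

const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

// Key generation: two random 32-byte secrets per digest bit.
// The public key is the hash of every secret.
function keygen(): { secret: Buffer[][]; pub: Buffer[][] } {
  const secret: Buffer[][] = [];
  const pub: Buffer[][] = [];
  for (let i = 0; i < 256; i++) {
    const pair = [randomBytes(32), randomBytes(32)];
    secret.push(pair);
    pub.push([sha256(pair[0]), sha256(pair[1])]);
  }
  return { secret, pub };
}

function bitAt(digest: Buffer, i: number): number {
  return (digest[i >> 3] >> (7 - (i & 7))) & 1;
}

// Signing reveals one secret per pair, chosen by each bit of the digest.
function sign(message: Buffer, secret: Buffer[][]): Buffer[] {
  const d = sha256(message);
  return secret.map((pair, i) => pair[bitAt(d, i)]);
}

// Verifying re-hashes each revealed secret and checks it against the
// matching half of the public key.
function verify(message: Buffer, sig: Buffer[], pub: Buffer[][]): boolean {
  const d = sha256(message);
  return sig.length === 256 &&
    sig.every((s, i) => sha256(s).equals(pub[i][bitAt(d, i)]));
}

Each key pair can sign exactly one message: a signature reveals half of the secrets, so signing twice with the same key leaks enough material for forgeries. That one-time property, and the fact that security rests only on a hash function, is why Lamport signatures resurface in hash-based, post-quantum designs.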

Un BIT de memoria
From the Bakery to Distributed Systems

Un BIT de memoria

Play Episode Listen Later Sep 21, 2018 58:56


Leslie Lamport is a scientist working in industry, a rara avis whose contributions have had an impact as subtle as it is important. Lamport managed to solve fundamental problems in the construction of distributed and concurrent systems in an elegant and intuitive way, which earned him the Turing Award in 2013. With the growth […] Read the full entry at De la panadería a los sistemas distribuidos.

The Frontside Podcast
104: Blockchain Development with Chris Martin

The Frontside Podcast

Play Episode Listen Later Jun 28, 2018 35:49


In this episode, Chris Martin of Type Classes and Joe LaSala of The Frontside talk about blockchain development. Do you have opinions on this show? Want to hear about a specific topic in the future? Reach out to us at contact@frontside.io or on Twitter at @thefrontside. This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC. TRANSCRIPT: JOE: Hey, everybody. Welcome to Episode 104 of The Frontside Podcast. I'm Joe LaSala. I'm a developer here at the Frontside and I'm going to be hosting today's episode. Today we're going to be talking about blockchains and other cryptographically secure technologies, and everything that has to do with the web and the future of the web. We have with us Chris Martin, and he's currently with Type Classes. What do you do over there, Chris? CHRIS: Our goal is to teach functional programming with types, specifically with Haskell and a little bit of Nix. We do a subscription video service. JOE: There seems to be, I guess, a bit of an overlap between people who are into functional programming and people who are involved in this new space that has opened up, this new web, I guess, and that's something that I want to talk about based on a tweet that I saw you made recently. You mentioned that there's a big section of the Haskell community that is being drawn into whatever the hot ICO is at that moment, something along those lines. CHRIS: Some of them are bitcoin people or something else, but there's definitely a weird overlap that I can't fully explain. JOE: It seems like strange bedfellows, right? CHRIS: Well, there's a couple of things that make sense, which is, I think, that distributed systems and cryptography are these notoriously hard problems. I think when somebody wants to convince their boss that they really need to use Haskell for this problem, they can make a persuasive argument in this case. JOE: That's interesting. There's actually a lot of technology around blockchains, around bitcoin specifically, being written in Haskell. I didn't know they were technologically overlapped like that. I guess I just thought they were two very kind of passionate communities, but you're saying that a lot of the bitcoin startups that you might see coming out in any given week are actually being written with an eye towards functional programming. Is that accurate? CHRIS: I don't know about bitcoin specifically, but I think some of the people who are working for banks and trying to develop their own sort of novel internal blockchains and stuff, I think those are the people who see this. Although in the case of banks, we don't necessarily see what's coming out of them, so we can't verify whether they're actually shipping things or not. JOE: Yeah. I mean, there's a lot to touch on there. I would agree with you on your initial sentiment, and just to extend it, I think personally that both communities are really evangelical. Functional programmers, people who are into functional programming -- for me it hasn't clicked yet, but I know that it will come into my heart. I've absorbed enough functional programming that things are starting to fall into line, where I'm starting to see the world in that way. But people who have seen the light fully are sure believers; once monads and functors kind of enter the conversation, they don't leave. It's similar to when bitcoin first started and everybody was ranting about the gold standard. Really, it was hard to find resources on it that didn't mostly amount to screaming. 
CHRIS: Yeah, you're absolutely right that, culturally, they're going to attract the same group of people: the people who are willing to adopt something that's not fully fleshed out yet, people who want to take what they believe and sit in this community and try and spread it to the rest of the world. I think it's the same kind of people. JOE: The early adoption, I think, is something to consider too. I guess it's a very risk-oriented group. CHRIS: Yeah, kind of. I mean, Haskell is pretty old, I guess but -- JOE: That's fair, yeah. CHRIS: -- Some of the changes that really make it great and usable lately are pretty [inaudible]. JOE: That's interesting. You mentioned this idea -- we kind of skipped over it a little bit -- of banks having their own blockchains, and that's something that I think maybe people not actively following this space -- which is, I will say, a very hard space to keep up with for those of us who are actively following it -- those who may just know blockchain through the name of an iced tea company changing, or some sensational news article, or what have you, or just through bitcoin even, may not know that it's not "the" blockchain. It's not a singular blockchain. It's very easy to implement the fundamental structure. It's a linked list, essentially, with a kind of cryptographic seal that keeps you from breaking those links or inserting new history, I guess, the further you go back. I guess people are even exploring different data structures like directed acyclic graphs and stuff, and how those could be used to map other domains, but the reality is it's a linked list, and you can spin up as many of them as you want, and you can mine blocks based on all these different criteria. Bitcoin has a proof of work associated with the minting of a new block, and that's been a problem for them as they scale as a currency, but it could be a history of anything, and the minting of those blocks can be based on anything. You mentioned banks; the financial sector is certainly interested in these smaller private chains, but do you think there's a use for the consumer market as well? How do you think that your personal blockchain, or set of blockchains, might be a factor in the hobbyist's or the futurist's life? CHRIS: Oh, wow. That's a different question than I thought. [inaudible] where you're going with that -- JOE: Where do you think I was going? CHRIS: Well, we're talking about banks, and so the question is now everybody other than banks -- JOE: Well, it could be everybody, including banks too, however you want to take it. CHRIS: Yeah. There's a much harder question, I think, of what in the world we're actually saying when we're talking about blockchain, right? The notion obviously started with bitcoin, but if what you want to do is bitcoin, then you should just use bitcoin. So what are we talking about when we want something similar to bitcoin? The general phrase people like to throw in here is Byzantine fault tolerance. I'm talking about any kind of system that can have multiple participants. We're used to talking about clusters of computers and making systems that can work if one of them fails, if one of them just stops working, but now we're starting to talk about how we make systems work if one of them gets hacked -- how we still have some assurances that the whole system works together as a whole. JOE: Would you consider Byzantine fault tolerance to be the defining factor of a blockchain? Because I feel like there's the timestamping element that goes along with it. 
I feel like they're kind of part and parcel, right? CHRIS: Kind of, but if you're not considering Byzantine faults, if you're only talking about systems where you have benign faults, which is that a machine goes down sometimes, then timestamping isn't really a problem, because we can just use NTP and we all have a pretty sensible idea of what time it is. JOE: Time specifically, even just, I guess, order. I always considered sequence to be a massive part of what a blockchain fundamentally was. You have the distributed aspect of the network that gives this sort of resilience to malicious intent, but not only is it protected, I guess, against demolition and malicious intent by this crowd strength, but also just fundamentally, through the cryptographic side of it, you can't go in and insert things that didn't happen. Once that order has been set, it's been written in stone, basically, right? Because the way I understood it, there were papers coming out of Bell Labs in the early 90s, and those two things existed as approaches to this independently, and it wasn't until the internet advanced that we put them together and were able to achieve Byzantine fault tolerance through that. Is that, I mean...? CHRIS: It does help a lot, I think, to back up and think about what the state of research was in the 90s, because I think that's something that a lot of people in the blockchain space kind of lose sight of. You have a whole lot of people writing papers now who weren't academics until a couple of years ago. It was the early 90s when we started having Paxos, and we started having what later turned into what's kind of known as Raft. Like you said, they solved the ordering problem. Even something as simple as what we call Lamport clocks, which is: you have sort of a virtual timestamp, and as long as nobody's malicious, if you only ever move the timestamp forward, then we can all have something that resembles the deterministic forward flow of time. Then, the milestone that I'd like to remind people of is in 1999, when we had the paper "Practical Byzantine Fault Tolerance." JOE: That was '99. You're talking about the... was it Castro and --? CHRIS: Liskov, yeah. JOE: Okay. I didn't know it was '99. CHRIS: Interestingly, the same Liskov that the Liskov substitution principle is named for, Barbara Liskov. She's also a distributed systems researcher. JOE: That's swell as well. I'd kind of heard the concept of Byzantine fault tolerance, but I never read this paper. I'm also surprised to find that it didn't come out of that same period of the early 90s, and that it was as late as '99. I haven't read it in its entirety, but I did fall asleep reading it last night. You mentioned this specifically, I guess, when we were talking today, as a paper that was important to the work you were doing at... was it Hijro, I think? CHRIS: Yeah. JOE: Yeah, so what kind of work were you doing there, and what is important to you, I guess, about this paper specifically, when you look at all the research that went into priming the community for the space that we are now in? CHRIS: When I joined Hijro, I got kind of a difficult and nebulous mission, which was that for everyone in and around that space trying to sell to banks, if you said the word blockchain, you could get your foot in the door, because all the banks were looking at bitcoin and saying, "Well, look, this is clearly something that's going to be big and we don't want to be missing out, so we have to figure out how this applies to us." 
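An aside for readers: the Lamport clock Chris mentioned a moment ago is small enough to sketch in full. Here is a minimal, illustrative TypeScript version; the class and method names are ours, not from any library.

class LamportClock {
  private time = 0;

  // A local event, or stamping an outgoing message: just advance the clock.
  tick(): number {
    return ++this.time;
  }

  // On receiving a message, jump past the sender's timestamp so the clock
  // only ever moves forward, as Chris describes.
  receive(senderTime: number): number {
    this.time = Math.max(this.time, senderTime) + 1;
    return this.time;
  }
}

If every node stamps its messages with tick() and folds incoming stamps in with receive(), causally related events end up ordered by their timestamps, which gives that "deterministic forward flow of time" as long as nobody lies about their clock.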
JOE: What year is this? You were working on this in 2014-ish, is that right? CHRIS: '15 or '16, I think. The question was trying to figure out what aspect of it was actually what they wanted here. What Hijro was trying to sell them -- the details aren't even important for this conversation -- but we needed an interbank solution. We needed a ledger of accounts, but we weren't a bank, so we couldn't be the one holding everyone's money and keeping track of the flow of money in our network. We wanted something that the banks were truly in charge of, but we didn't necessarily want our platform to be owned by a particular bank. We wanted it to be a sort of consortium of all of our partners. JOE: Consortium is a keystone word here, I think; we should definitely come back to that. CHRIS: Yeah, and people use the phrase consortium blockchain, I think, sometimes in contrast with the public blockchain, the "free, anyone can join" blockchain. JOE: Yeah. I'm particularly fascinated by this concept. That is a term that is used; I can confirm this. But you were doing that pretty early then, because I feel like that concept didn't make it out into, I guess, the public understanding until recently. Or maybe I'm just behind the times. CHRIS: Yeah, I guess so. I don't know. When I started working on this, I just spent a couple of months trying to read papers about what was in the space, and I guess the only big name that was trying to do something like this was Tendermint. JOE: Tendermint? Interesting. CHRIS: You can pick out technologies like this because the magic number is always one-third: they can tolerate Byzantine failure in up to one-third of the nodes. That was a theoretical result that was reached; it's just sort of the best you can do. PBFT is one of the solutions in that category, and Tendermint does something similar. JOE: That, I guess, is sort of the background to this paper and how it impacted your life. I guess what is put forth in this paper is to solve for higher tolerance. Would that be the right way to put it? CHRIS: Did you say higher tolerance? JOE: Yes. You're talking about the Byzantine tolerance being one-third, right? With Tendermint? But you're saying that they're doing something similar to PBFT in the paper? CHRIS: The most interesting thing to me, which I think is probably, hopefully, possible to convey concisely, is the rationale behind the one-third number, because that took a while for me to really appreciate, but it really clicked when it did. One of the hardest intuitions to get people to break -- I don't know, ways of thinking to shift, I guess -- is convincing people that consensus is even a hard problem, because I had this conversation a lot with people who'd say, "I've got this JavaScript library here, for instance, that just lets me broadcast a message to all the nodes in a cluster, so why can't I just do that? Why can't we just use that, and if I detect someone's trying to cheat -- if I get two different messages from someone that are conflicting -- maybe I can just ignore them?" JOE: Not in finance. That's kind of ironic, I guess, that you found it difficult to get people to come to a consensus about the importance of consensus. CHRIS: Right. The basic flow of all these things is we describe them as voting systems. 
We have voting rounds where, each time -- like you said, the blockchain or the ledger or whatever it is, is just a linked list -- so to build a database using consensus, we're just going to iteratively try to vote, or come to consensus, on what the next block is, what the next ledger entry should be. Obviously, since we don't have a synchronized wall clock to go by, we have to assume messages can come in any order. We might all sort of speak up simultaneously and propose different blocks as the next one, at which point we have to start over and retry. But furthermore, I can send different votes to different people if I'm trying to be malicious, and that's where the tricky part comes in. The rationale for the one-third number -- maybe I can just try to come around to that and say it directly then -- is that when we take a vote for what the next block is going to be, we need a supermajority. We need two-thirds of the participants to have all said the same thing, and the rationale for that is it's actually easier to think of it backwards. Rather than saying two-thirds of the total, what we say is, "If we're going to allow some fixed number of nodes to fail, to behave maliciously" -- you know, we traditionally call that number "F" in the paper -- then what we say is we need 3F+1 total nodes to be participating. JOE: I didn't know that was sort of codified into how conflict is resolved on things like the bitcoin blockchain. It's inherent, I guess. CHRIS: No. This is the total opposite of what bitcoin and Ethereum are going to do. JOE: Because I always thought it was just going to be like a majority, I guess, but what you are talking about is more like how the Senate would pass an amendment to the constitution; it has to be an exceptional majority. I'm starting to understand why one-third, specifically. It's 3F+1, I guess. CHRIS: The reason is because, for each vote -- every time I look at the results of a vote -- I have to be able to assume that some number, which we call F, of the people that I've heard back from are trying to cheat me. It turns out I need to be sure that the majority of the votes that I've heard back are from people who are actually following the protocol correctly and not lying. We need to be tolerant to two kinds of failure. One is that a node simply goes down, and we don't hear from them and we don't receive a vote from them. The other kind of failure is the Byzantine failure: they're not following protocol in some way. The reason I need 3F+1 nodes is because we need to be able to make progress even if F of those nodes we didn't hear from at all because they're down, and then I need 2F+1 votes because I need to take into account the possibility that some F of those votes were from cheaters, and we need to have more honest votes than lying votes. 
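For readers who want Chris's arithmetic spelled out, it fits in a few lines. A minimal, illustrative TypeScript sketch (the function names are ours, not from any real BFT implementation):

// Sizing a PBFT-style cluster that tolerates f Byzantine nodes.
function clusterSize(f: number): number {
  return 3 * f + 1; // f may be silent, f may lie, and honest votes must still win
}

function quorumSize(f: number): number {
  return 2 * f + 1; // of the replies we wait for, up to f can be lies,
                    // leaving f + 1 honest votes, a strict majority
}

// Worked example: f = 1 means 4 nodes total, and a vote needs 3 matching
// replies. One node may never answer, and even if 1 of the 3 replies is a
// lie, the 2 honest replies still outnumber it.
console.log(clusterSize(1), quorumSize(1)); // 4 3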
JOE: That's pretty profound. I'm definitely going to finish the rest of this paper while conscious later today. I guess we're a little off the math at this point, but when you said that you spent, I guess, a month or so just reading papers around the time you started with Hijro -- did you ever stop? Because I feel like I've read more white papers than I ever thought I would outside of the academic setting, just trying to keep pace with what's been going on, particularly with regard to the web. I don't know if you're familiar with IPFS, but these sorts of directed acyclic graph things are popping up all over the place, and platforms are even now being built on this concept. I guess Ethereum feels impractical in a lot of ways. These dime-a-dozen tutorials, where they start talking about the global computer that is Ethereum and the blockchain, and how it's going to change everything on the internet, and you won't have to pay Comcast like some central authority, you'll just pay for each transaction. The reality of it is, every time you do a write against a data store, you have, first of all, thousands of computers going and verifying it, and also, you don't want to store your information in a linked list. It's not feasible for storing large data structures, and it becomes very expensive for the user and, if you're maintaining a smart contract, for the contract itself. These are all volatile little points of value. It's impractical. CHRIS: It's definitely a cost that you don't want to incur. In all cases, just the confirmation time is a cost you don't want to incur. JOE: Absolutely. CHRIS: There is one nice thing that you can do in some cases, which is what people are talking about with piggybacking on these blockchains: if I have a system and I just want some extra assurance to keep it honest, then I can do things like periodically publish a hash of my database onto something like bitcoin or Ethereum. JOE: Yeah. That actually happened in finance. They would publish stuff in the paper -- and this was before cryptographic ledgers -- to basically prove that this was the state of something. I remember seeing this somewhere: in financial news, there'd be some crazy number or string at the top, to verify the state of things. CHRIS: Yes. Of course, the irony there is that you really don't need some kind of blockchain if you want to do that, given the fact that we were doing it before the blockchain existed. And doubly, it's funny because the first block of the bitcoin blockchain, the genesis block, includes in it a newspaper headline, which was intended as proof that Satoshi or whoever didn't spend years mining bitcoin prior to releasing it. It's supposed to be a proof of when the genesis block actually was created. It's funny that we actually already had this verification system, and what that demonstrates is sort of a principle of consensus that I like to talk about, which is that as you increase the time scale, consensus becomes an easier and easier problem. I think the reason why something like a newspaper headline is a reliable means of timestamping is mostly because newspapers are big and slow, because there's only one every day. Like you said, a lot of the challenge boils down to this: the white paper for bitcoin describes bitcoin as a distributed timestamp server, something along those lines. The reason why you need a new technology to do that, I think, is so that you can have timestamps every couple of minutes, rather than every 24 hours. JOE: That's a very interesting take on it. I guess the more time there is, the easier it is to reach a consensus. It's just interesting to think about. It's funny: as humans, the longer time passes, the less reliable memory is, I guess -- less reliable history, as we conceive of it. It's different when you record something than when you hold it in the brain, and sometimes I wonder how much impact that's had on us. It's a little ephemeral, I guess, but it's interesting. CHRIS: Yeah. I guess my statement is limited to the timescales we can actually fit into memory. 
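The periodic-hash trick Chris describes is also easy to sketch. A minimal, illustrative TypeScript version using Node's built-in crypto module (the record shape and function names are ours):

import { createHash } from "node:crypto";

interface Anchor {
  seq: number;
  timestamp: string;
  prevHash: string;  // links each anchor to the last, like a tiny blockchain
  stateHash: string; // fingerprint of the database snapshot
}

function sha256Hex(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Periodically fingerprint a snapshot of your data and chain it to the
// previous anchor. Publishing each anchor somewhere public and hard to
// rewrite (a newspaper, bitcoin, Ethereum) makes silent edits detectable.
function makeAnchor(snapshot: string, prev: Anchor | null): Anchor {
  return {
    seq: prev ? prev.seq + 1 : 0,
    timestamp: new Date().toISOString(),
    prevHash: prev ? sha256Hex(JSON.stringify(prev)) : "0".repeat(64),
    stateHash: sha256Hex(snapshot),
  };
}

Anyone holding an earlier anchor can recompute the hashes and notice if history was quietly rewritten; the public medium, whether a newspaper or a blockchain, only has to carry one small record per interval.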
JOE: Right. Most of the time, it's the only relevant scale, I guess; a blockchain doesn't have use outside of our use of it, so it's going to be seen through that lens, I guess, of our use of it. I think that is kind of a profound thing to think about, and one I've definitely considered. You mentioned using blockchains as adding a little bit of... how do you put it? Like truthiness, I guess, we'll say. I know that's not how you put it, but adding a little bit of security, maybe, around something else; but the reality is you can get that on a number of other levels. I think that's important and interesting to think about. There seems to be this trend now of talking about a blockchain as part of a bigger picture, or a consortium blockchain, or consortia of blockchains, right? Because consortia would be multiple and then a consortium would be... No, a consortium would be a single grouping; consortia would be multiple groups. Basically, going back to the problem you were trying to solve with Hijro, you have multiple banks, and I believe eventually -- I don't know if you worked on it -- there was a protocol that came out of that company to unify these blockchains, like a few of them. They demoed it and everything. That, I think, gives you some power with regard to access control, but again, I guess, that's not a thing that you really need consensus for. So, where does it fit in? Aside from things like voting and transparent finance for maybe a political cause or, in the case of bitcoin, just finance in general. With blockchain, I feel like we got MongoDB'd super hard, in the sense that it just got applied to every domain, and it applies to very, very few. CHRIS: My boss at Hijro, Lamar Wilson, really liked to say that people talked about blockchain like it was hot sauce, and they'd sort of sprinkle it on everything to make it better. JOE: That's sad. CHRIS: I guess there are two answers to that one: the places where it absolutely captivates people's imaginations too far and doesn't work, and then the places where it does work, so I want to start with the first here. The biggest mistake that people make is this notion of tokenization that came out of Ethereum, where anyone could make a smart contract that represented something that I can now also trade digitally, just like it's money or some kind of digital asset. So people want to talk about putting your car, putting your house on the blockchain, or selling it there. But it's just shocking how many times I had to remind people that if I make a smart contract that represents cars, and I put my VIN number on it, and I transfer you my car as an Ethereum contract in exchange for a bitcoin, if I then call the police and report my car stolen, they're not going to look at the Ethereum contract, right? JOE: Yeah. Man, you're really right. People don't think about that enough. If your car is on the blockchain, your car is still on the block. CHRIS: What we had to realize when we were selling solutions like this is that they're great for some reasons, but you need actual legal agreements to underpin things when you actually make a connection to the real world. The magic of bitcoin that can't really be replicated is that the coin actually didn't need a pinning to the real world, because the thing bitcoin was tracking was itself. It just depended on hoping that people were going to find the coin and ledger intrinsically valuable, and bitcoin never really purported to control things in the real world. JOE: I guess, definitely not in the paper. 
There was some buy-in from some very specific elements of society that sort of cemented its place as useful, but we don't really need to go down that road, I guess. I don't know. You know, my roommate is a lawyer and we have this conversation often, and I feel like if we go down the road of law and cryptography, we're going to be talking for too long, given where we are currently. CHRIS: Right, and that wasn't your question anyway. It was just what I respond to easiest, because being a critic is always the easier thing to do. JOE: I can feel you there. CHRIS: One of the interesting things that I never found too much about, but noticed in a couple of passing references as I was reading stuff about Byzantine fault tolerance in general, is that it seems to have some application in things like flight control systems and spaceships, because when you think about a computer that you're going to send into space, you have two things that Byzantine fault tolerance applies to directly. One is you need a lot of redundancy. You need these control systems -- maybe you have a dozen things computing the same result -- because you can't replace the hardware once you've shot something into space. The second thing is, once you've sent something outside the atmosphere, all of a sudden you're being bombarded with a lot more cosmic rays than you were before. Now, this idea that computers can fail not just by stopping but by producing wrong results really does apply. All of a sudden, it becomes a lot more real, because you actually have physics flipping bits in your computers. JOE: I don't even think you have to go as far as space if you're talking about just a fleet of something, like self-driving cars. I suppose, in a domain where there is an interplanetary file system, it's good to specify the planet we're talking about. Just having worked a little bit with robots in college -- they lie all the time and they produce bad data constantly. So not even bad actors, just incompetent actors, I guess, could definitely... This is something that has to be, I guess, on our minds as we move forward as a society that has more connected devices, which, as much as I would love to have left this conversation off in outer space, I think bringing it around to the internet of things, which is sort of where this all began months and months ago, is probably a good place to stop meandering through these cryptographic weeds. You can probably put a pin in this. I think we've been talking about it for a while now, I guess, just kind of trying to see what it is and where the applications are. It's constantly changing and never clear, I think, is the conclusion that I've come to. I don't know. I think just kind of shooting the breeze about it is a fitting end to a series of Frontside engagements in this space, for the time being. CHRIS: I've seen several people try to tackle the space of how to stop relying on things like Google Drive to store our data, because I think a lot of us have realized that we're tired of losing all of our family photos every time a hard drive dies, but a lot of people are uncomfortable with trusting Google with everything. This to me seems like a perfect opportunity for people to start building redundant systems among their homes and friends. JOE: Yeah, I completely agree. I'm actively trying to do exactly that right now. 
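The redundancy-plus-voting idea from the spaceship discussion above also fits in a small sketch. A minimal, illustrative TypeScript version (our own names, not any real flight-control API): run the same computation on several machines and accept a result only when a strict majority agrees.

// Accept a result only when more than half of the redundant units agree.
// A unit that crashed contributes nothing; a unit hit by a cosmic ray
// (or hacked) may contribute a wrong value. Voting masks both failures.
function majorityVote<T>(results: T[]): T | undefined {
  const counts = new Map<T, number>();
  for (const r of results) {
    counts.set(r, (counts.get(r) ?? 0) + 1);
  }
  for (const [value, n] of counts) {
    if (n > results.length / 2) return value;
  }
  return undefined; // no majority: treat the whole reading as faulty
}

// e.g. five redundant computers, one flipped bit:
majorityVote([42, 42, 42, 41, 42]); // => 42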
CHRIS: Oh, cool. And you don't necessarily want your cluster of machines that's running on all of your family's computers to be able to go down if your 10-year-old gets some virus, right? JOE: Right, and also, there are definitely things that you want just within your home, or even just within your section of the home. I guess you could layer chains to kind of manage those interactions? CHRIS: Sure. I'm not exactly sure what you mean by layering chains. JOE: You could have consortia in this case. If you had like a hypervisor, almost like a control node, essentially, or some type of view from above of this situation, you could say -- think of it as a family scenario. We have three different houses on this call that all belong to our immediate family and cousins and whatever, and it's like, me and my siblings, we have information that we all want just within the siblings. We don't want Mom and Dad to know. We don't want the cousins to know. So you could basically use a blockchain to kind of gate access to data that is held within that consortium, and then the consortia could communicate amongst each other -- only the pertinent information that they wanted to allow access to at that time -- and then, internally, of course, you could have all these different mechanisms for how you actually store that data or how you actually serve it up. It's pretty complicated. CHRIS: Yeah, I think you made a lot of sense, though. JOE: Yeah, cool. I'm hoping so. There's been some work on it out of Microsoft, actually. CHRIS: On the file storage problem, specifically? JOE: I guess this is like with a smart home, and kind of just teaching devices to cooperate and ask each other. If you had a section of connected devices that maybe were related to the workflow that a human being might go through to get groceries or something, and then a section that's related to doing laundry or whatever, eventually they would learn to communicate, and the laundry grouping could say, "Hey, grocery people. We're out of soap," or something like that. It sort of almost happens organically, I guess. I never actually found that paper; I've only found references to it. This is where I need to get something like academic access. But that was interesting stuff. I don't know how I ended up here, either. This always happens when you're talking about this domain. Anyway -- CHRIS: It's just sort of a generally inspiring concept, so it follows you everywhere. JOE: Yeah, it's heartwarming. You know, with my iced tea, I could look back and see exactly where it came from [inaudible], the name of the farmer who grew it. I don't know. It'd be so much easier to fake most things, really, when you think about it. On that note, I hope that this conversation was... I know that there was no JavaScript and I apologize for that, but I hope that our audience finds it interesting on some level, and I want to thank you for your time. Chris, it was really great talking to you and getting your take on these things as somebody who's been in the industry for a while. Definitely some fascinating points to consider, and definitely, I will finish that white paper, probably this evening, because it's pretty cool. If anybody in the audience has anything they'd like to ask you about, pertinent to this conversation or anything else, where is a good place to get a hold of you? CHRIS: For me, it's mostly Twitter. I'm @chris__martin. I'm also at Type Classes, if you want to talk to me about our new business. JOE: Cool. 
This has been Episode 104, I believe, of The Frontside Podcast. We're Frontside, a consultancy based in Austin, Texas, and we love writing elegant, sustainable code and just producing good stuff, really. I think that's what we're all about. I think we can agree, at least, that's a core tenet of what we do, and if you would like us to produce some good stuff for you, feel free to get in touch with us. Also, feel free to reach out via email if you have any ideas for future topics or any feedback about this episode. I also want to thank Mandy for producing this episode. You can catch us next week, I believe, for our talk with Brian Douglas on Probot; Robert will be hosting that one, as far as I know. Thank you all for your time, and feel free to reach out. This has been The Frontside Podcast. I'm Joe LaSala. Chris Martin, thank you for joining us, and have a good day, everybody.

Inside the Artists Shanty
Episode 23 - Michael Lamport

Inside the Artists Shanty

Play Episode Listen Later Mar 14, 2018 31:55


Originally from England, Michael Lamport has carved out a career in Toronto, Canada, by dipping his toe into just about every aspect of film and television production. Michael is co-owner of LAMPORT-SHEPPARD ENTERTAINMENT and the producer and narrator of Rescue Mediums, which was seen in 28 countries around the world before its long run ended in 2011. He has also produced programming for Discovery Channel USA and the series Curious and Unusual Deaths for Discovery Channel Canada and the Crime and Investigation channel in the UK. Michael was also the producer and host of the travel series Suite & Simple, which aired on networks in Canada and the USA. His documentary film Offstage, which followed the trials and tribulations of the residents of a small town as they put on an Xmas stage production, won a GOLD HUGO at the Chicago International Television Festival and was nominated for Best Documentary and Best Editing at the Yorkton Film Festival, as well as earning an Honourable Mention at the Chris Awards. He has also executive produced the A&E documentary The Disciples and produced the film The Right To Play for CBC. He was also a script consultant and directing consultant for the highly successful MOCDOCS series on CBC. In addition to producing and directing, he is the voice of Life’s A Zoo.TV, The Wombles, Bob & Margaret, Ace Lightning, Upstairs Downstairs, Bears and Maggie and the Ferocious Beast. His acting credits include guest-starring in many stage productions and television series and starring in the eight-hour miniseries The Adventures of Smoke Belliou, which aired in Canada, the USA and France.

BSD Now
222: How Netflix works

BSD Now

Play Episode Listen Later Nov 29, 2017 127:25


We take a look at two-faced Oracle, cover a FAMP installation, how Netflix works the complex stuff, and show you who the patron of yak shaving is. This episode was brought to you by Headlines Why is Oracle so two-faced over open source? (https://www.theregister.co.uk/2017/10/12/oracle_must_grow_up_on_open_source/) Oracle loves open source. Except when the database giant hates open source. Which, according to its recent lobbying of the US federal government, seems to be "most of the time". Yes, Oracle has recently joined the Cloud Native Computing Foundation (CNCF) to up its support for open-source Kubernetes and, yes, it has long supported (and contributed to) Linux. And, yes, Oracle has even gone so far as to (finally) open up Java development by putting it under a foundation's stewardship. Yet this same, seemingly open Oracle has actively hammered the US government to consider that "there is no math that can justify open source from a cost perspective as the cost of support plus the opportunity cost of forgoing features, functions, automation and security overwhelm any presumed cost savings." That punch to the face was delivered in a letter to Christopher Liddell, a former Microsoft CFO and now director of Trump's American Technology Council, by Kenneth Glueck, Oracle senior vice president. The US government had courted input on its IT modernisation programme. Others writing back to Liddell included AT&T, Cisco, Microsoft and VMware. In other words, based on its letter, what Oracle wants us to believe is that open source leads to greater costs and poorly secured, limply featured software. Nor is Oracle content to leave it there, also arguing that open source is exactly how the private sector does not function, seemingly forgetting that most of the leading infrastructure, big data, and mobile software today is open source. Details! Rather than take this counterproductive detour into self-serving silliness, Oracle would do better to follow Microsoft's path. Microsoft, too, used to Janus-face its way through open source, simultaneously supporting and bashing it. Only under chief executive Satya Nadella's reign did Microsoft realise it's OK to fully embrace open source, and its financial results have loved the commitment. Oracle has much to learn, and emulate, in Microsoft's approach. I love you, you're perfect. Now change Oracle has never been particularly warm and fuzzy about open source. As founder Larry Ellison might put it, Oracle is a profit-seeking corporation, not a peace-loving charity. To the extent that Oracle embraces open source, therefore it does so for financial reward, just like every other corporation. Few, however, are as blunt as Oracle about this fact of corporate open-source life. As Ellison told the Financial Times back in 2006: "If an open-source product gets good enough, we'll simply take it. So the great thing about open source is nobody owns it – a company like Oracle is free to take it for nothing, include it in our products and charge for support, and that's what we'll do. "So it is not disruptive at all – you have to find places to add value. Once open source gets good enough, competing with it would be insane... We don't have to fight open source, we have to exploit open source." "Exploit" sounds about right. While Oracle doesn't crack the top-10 corporate contributors to the Linux kernel, it does register a respectable number 12, which helps it influence the platform enough to feel comfortable building its IaaS offering on Linux (and Xen for virtualisation). 
Oracle has also managed to continue growing MySQL's clout in the industry while improving it as a product and business. As for Kubernetes, Oracle's decision to join the CNCF also came with P&L strings attached. "CNCF technologies such as Kubernetes, Prometheus, gRPC and OpenTracing are critical parts of both our own and our customers' development toolchains," said Mark Cavage, vice president of software development at Oracle. One can argue that Oracle has figured out the exploitation angle reasonably well. This, however, refers to the right kind of exploitation, the kind that even free software activist Richard Stallman can love (or, at least, tolerate). But when it comes to government lobbying, Oracle looks a lot more like Mr Hyde than Dr Jekyll. Lies, damned lies, and Oracle lobbying The current US president has many problems (OK, many, many problems), but his decision to follow the Obama administration's support for IT modernisation is commendable. Most recently, the Trump White House asked for feedback on how best to continue improving government IT. Oracle's response is high comedy in many respects. As TechDirt's Mike Masnick summarises, Oracle's "latest crusade is against open-source technology being used by the federal government – and against the government hiring people out of Silicon Valley to help create more modern systems. Instead, Oracle would apparently prefer the government just give it lots of money." Oracle is very good at making lots of money. As such, its request for even more isn't too surprising. What is surprising is the brazenness of its position. As Masnick opines: "The sheer contempt found in Oracle's submission on IT modernization is pretty stunning." Why? Because Oracle contradicts much that it publicly states in other forums about open source and innovation. More than this, Oracle contradicts much of what we now know is essential to competitive differentiation in an increasingly software and data-driven world. Take, for example, Oracle's contention that "significant IT development expertise is not... central to successful modernization efforts". What? In our "software is eating the world" existence Oracle clearly believes that CIOs are buyers, not doers: "The most important skill set of CIOs today is to critically compete and evaluate commercial alternatives to capture the benefits of innovation conducted at scale, and then to manage the implementation of those technologies efficiently." While there is some truth to Oracle's claim – every project shouldn't be a custom one-off that must be supported forever – it's crazy to think that a CIO – government or otherwise – is doing their job effectively by simply shovelling cash into vendors' bank accounts. Indeed, as Masnick points out: "If it weren't for Oracle's failures, there might not even be a USDS [the US Digital Service created in 2014 to modernise federal IT]. USDS really grew out of the emergency hiring of some top-notch internet engineers in response to the Healthcare.gov rollout debacle. And if you don't recall, a big part of that debacle was blamed on Oracle's technology." In short, blindly giving money to Oracle and other big vendors is the opposite of IT modernisation. In its letter to Liddell, Oracle proceeded to make the fantastic (by which I mean "silly and false") claim that "the fact is that the use of open-source software has been declining rapidly in the private sector". What?!? This is so incredibly untrue that Oracle should score points for being willing to say it out loud. 
Take a stroll through the most prominent software in big data (Hadoop, Spark, Kafka, etc.), mobile (Android), application development (Kubernetes, Docker), machine learning/AI (TensorFlow, MxNet), and compare it to Oracle's statement. One conclusion must be that Oracle believes its CIO audience is incredibly stupid. Oracle then tells a half-truth by declaring: "There is no math that can justify open source from a cost perspective." How so? Because "the cost of support plus the opportunity cost of forgoing features, functions, automation and security overwhelm any presumed cost savings." Which I guess is why Oracle doesn't use any open source like Linux, Kubernetes, etc. in its services. Oops. The Vendor Formerly Known As Satan The thing is, Oracle doesn't need to do this and, for its own good, shouldn't do this. After all, we already know how this plays out. We need only look at what happened with Microsoft. Remember when Microsoft wanted us to "get the facts" about Linux? Now it's a big-time contributor to Linux. Remember when it told us open source was anti-American and a cancer? Now it aggressively contributes to a huge variety of open-source projects, some of them homegrown in Redmond, and tells the world that "Microsoft loves open source." Of course, Microsoft loves open source for the same reason any corporation does: it drives revenue as developers look to build applications filled with open-source components on Azure. There's nothing wrong with that. Would Microsoft prefer government IT to purchase SQL Server instead of open-source-licensed PostgreSQL? Sure. But look for a single line in its response to the Trump executive order that signals "open source is bad". You won't find it. Why? Because Microsoft understands that open source is a friend, not foe, and has learned how to monetise it. Microsoft, in short, is no longer conflicted about open source. It can compete at the product level while embracing open source at the project level, which helps fuel its overall product and business strategy. Oracle isn't there yet, and is still stuck where Microsoft was a decade ago. It's time to grow up, Oracle. For a company that builds great software and understands that it increasingly needs to depend on open source to build that software, it's disingenuous at best to lobby the US government to put the freeze on open source. Oracle needs to learn from Microsoft, stop worrying and love the open-source bomb. It was a key ingredient in Microsoft's resurgence. Maybe it could help Oracle get a cloud clue, too. Install FAMP on FreeBSD (https://www.linuxsecrets.com/home/3164-install-famp-on-freebsd) The acronym FAMP refers to a set of free open source applications which are commonly used in Web server environments called Apache, MySQL and PHP on the FreeBSD operating system, which provides a server stack that provides web services, database and PHP. Prerequisites sudo Installed and working - Please read Apache PHP5 or PHP7 MySQL or MariaDB Install your favorite editor, ours is vi Note: You don't need to upgrade FreeBSD but make sure all patches have been installed and your port tree is up-2-date if you plan to update by ports. Install Ports portsnap fetch You must use sudo for each indivdual command during installations. Please see link above for installing sudo. Searching Available Apache Versions to Install pkg search apache Install Apache To install Apache 2.4 using pkg. The apache 2.4 user account managing Apache is www in FreeBSD. 
```
pkg install apache24
```

A confirmation prompt appears; hit y for yes to install Apache 2.4. This installs Apache and its dependencies.

Enable Apache: use sysrc to update the services started at boot time. The command below adds apache24_enable="YES" to the /etc/rc.conf file (for sysrc commands, please read the link above). Then start Apache:

```
sysrc apache24_enable=yes
service apache24 start
```

Visit the web address by accessing your server's public IP address in your web browser. How to find your server's public IP address: if you do not know what it is, there are a number of ways to find it. Usually, this is the address you use to connect to your server through SSH.

```
ifconfig vtnet0 | grep "inet " | awk '{ print $2 }'
```

Now that you have the public IP address, you may use it in your web browser's address bar to access your web server.

Install MySQL: now that we have our web server up and running, it is time to install MySQL, the relational database management system. The MySQL server will organize and provide access to databases where our server can store information. Install MySQL 5.7 using pkg by typing:

```
pkg install mysql57-server
```

Enter y at the confirmation prompt. This installs the MySQL server and client packages. To enable the MySQL server as a service, add mysql_enable="YES" to the /etc/rc.conf file; the sysrc command below will do just that. Then start the MySQL server and run the security script, which removes some dangerous defaults and slightly restricts access to your database system:

```
sysrc mysql_enable=yes
service mysql-server start
mysql_secure_installation
```

Answer all questions to secure your newly installed MySQL database. Enter current password for root (enter for none): [RETURN] Your database system is now set up and we can move on.

Install PHP 5.6 or PHP 7.0. To see what is available:

```
pkg search php70
```

To install PHP 7.0 you would type:

```
pkg install php70-mysqli mod_php70
```

Note: in these instructions we are using PHP 5.6, not PHP 7.0 (we will be coming out with PHP 7.0 instructions with FPM). PHP is the component of our setup that will process code to display dynamic content. It can run scripts, connect to MySQL databases to get information, and hand the processed content over to the web server to display. We're going to install the mod_php, php-mysql and php-mysqli packages. To install PHP 5.6 with pkg, run this command:

```
pkg install mod_php56 php56-mysql php56-mysqli
```

Copy the sample PHP configuration file into place, then regenerate the system's cached information about your installed executable files:

```
cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini
rehash
```

Before using PHP, you must configure it to work with Apache.

Install PHP modules (optional): to enhance the functionality of PHP, we can optionally install some additional modules. To see the available options for PHP 5.6 modules and libraries:

```
pkg search php56
```

To get more information about each module, look at the long description of the package:

```
pkg search -f php56
```

Optional install example:

```
pkg install php56-calendar
```

Configure Apache to use the PHP module. Open the Apache configuration file and make sure it sets the directory index:

```
vim /usr/local/etc/apache24/Includes/php.conf
```

```
DirectoryIndex index.php index.html
```

Next, we will configure Apache to process requested PHP files with the PHP processor. 
Add these lines to the end of the file:

```
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
</FilesMatch>
```

Now restart Apache to put the changes into effect:

```
service apache24 restart
```

Test PHP processing: by default, the DocumentRoot is set to /usr/local/www/apache24/data. We can create the info.php file under that location by typing:

```
vim /usr/local/www/apache24/data/info.php
```

Add the following line to info.php and save it:

```
<?php phpinfo(); ?>
```

Details on info.php: the info.php file gives you information about your server from the perspective of PHP. It's useful for debugging and to ensure that your settings are being applied correctly. If this was successful, then your PHP is working as expected. You probably want to remove info.php after testing, because it could actually give information about your server to unauthorized users. Remove the file by typing:

```
rm /usr/local/www/apache24/data/info.php
```

Note: make sure the root of the Apache tree, /usr/local/www, is owned by the www user that was created during the Apache install. That explains FAMP on FreeBSD.

IXsystems IXsystems TrueNAS X10 Torture Test & Fail Over Systems In Action with the ZFS File System (https://www.youtube.com/watch?v=GG_NvKuh530)

How Netflix works: what happens every time you hit Play (https://medium.com/refraction-tech-everything/how-netflix-works-the-hugely-simplified-complex-stuff-that-happens-every-time-you-hit-play-3a40c9be254b)

Not long ago, House of Cards came back for the fifth season, finally ending a long wait for binge watchers across the world who are interested in an American politician's ruthless ascent to the presidency. For them, kicking off a marathon is as simple as reaching out for your device or remote, opening the Netflix app and hitting Play. Simple, fast and instantly gratifying. What isn't as simple is what goes into running Netflix, a service that streams around 250 million hours of video per day to around 98 million paying subscribers in 190 countries. At this scale, providing quality entertainment in a matter of a few seconds to every user is no joke. And as much as it means building top-notch infrastructure at a scale no other Internet service has done before, it also means that a lot of participants in the experience have to be negotiated with and kept satiated, from production companies supplying the content to internet providers dealing with the network traffic Netflix brings upon them. This is, in short and in the most layman terms, how Netflix works. Let us just try to understand how Netflix is structured on the technological side with a simple example. Netflix literally ushered in a revolution around ten years ago by rewriting the applications that run the entire service to fit into a microservices architecture, which means that each application, or microservice, has code and resources that are its very own. It will not share any of it with any other app by nature. And when two applications do need to talk to each other, they use an application programming interface (API), a tightly-controlled set of rules that both programs can handle. Developers can now make many changes, small or huge, to each application as long as they ensure that it plays well with the API. And since the one program knows the other's API properly, no change will break the exchange of information. 
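A minimal in-process sketch of that API-contract idea (the service and show names are invented; real Netflix microservices talk over the network, not through method calls):

```
from dataclasses import dataclass

# The API contract both services agree on; nothing else is shared.
@dataclass
class ViewingRecord:
    member_id: str
    show_id: str
    seconds_watched: int

class HistoryService:
    # Owns its storage outright; no other microservice touches it.
    def __init__(self):
        self._store = []

    def record(self, rec: ViewingRecord) -> None:
        self._store.append(rec)

    def history(self, member_id: str) -> list:
        return [r for r in self._store if r.member_id == member_id]

class RecommendationService:
    # Talks to HistoryService only through its public API; the storage
    # behind that API can change without breaking this service.
    def __init__(self, history: HistoryService):
        self._history = history

    def suggest(self, member_id: str) -> list:
        watched = {r.show_id for r in self._history.history(member_id)}
        catalogue = ["house-of-cards", "bojack-horseman", "black-mirror"]
        return [s for s in catalogue if s not in watched]

history = HistoryService()
history.record(ViewingRecord("m1", "house-of-cards", 3600))
print(RecommendationService(history).suggest("m1"))
# -> ['bojack-horseman', 'black-mirror']
```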
Netflix estimates that it uses around 700 microservices to control each of the many parts of what makes up the entire Netflix service: one microservice stores what shows you watched, one deducts the monthly fee from your credit card, one provides your device with the correct video files that it can play, one takes a look at your watching history and uses algorithms to guess a list of movies that you will like, and one provides the names and images of these movies to be shown in a list on the main menu. And that's the tip of the iceberg. Netflix engineers can make changes to any part of the application and can introduce new changes rapidly while ensuring that nothing else in the entire service breaks down. They made a courageous decision to get rid of maintaining their own servers and move all of their stuff to the cloud, i.e. run everything on the servers of someone else who deals with maintaining the hardware while Netflix engineers write hundreds of programs and deploy them rapidly. The someone else they chose for their cloud-based infrastructure is Amazon Web Services (AWS). Netflix works on thousands of devices, and each of them plays a different format of video and sound files. Another set of AWS servers takes the original film file and converts it into hundreds of files, each meant to play the entire show or film on a particular type of device and a particular screen size or video quality. One file will work exclusively on the iPad, one on a full HD Android phone, one on a Sony TV that can play 4K video and Dolby sound, one on a Windows computer, and so on. Even more of these files can be made with varying video qualities so that they are easier to load on a poor network connection. This is a process known as transcoding. A special piece of code is also added to these files to lock them with what is called digital rights management, or DRM, a technological measure which prevents piracy of films. The Netflix app or website determines what particular device you are using to watch, and fetches the exact file for that show meant to play on your particular device, with a particular video quality based on how fast your internet is at that moment. Here, instead of relying on AWS servers, Netflix installs its very own servers around the world, with only one purpose: to store content smartly and deliver it to users. Netflix strikes deals with internet service providers and provides them its red Open Connect boxes at no cost. ISPs install these along with their servers. These Open Connect boxes download the Netflix library for their region from the main servers in the US; if there are multiple of them, each will preferentially store the content that is more popular with Netflix users in its region, to prioritise speed. So a rarely watched film might take more time to load than a Stranger Things episode. Now, when you connect to Netflix, the closest Open Connect box to you delivers the content you need, so videos load faster than if your Netflix app tried to load them from the main servers in the US. In a nutshell… this is what happens when you hit that Play button:

- Hundreds of microservices, or tiny independent programs, work together to make one large Netflix service.
- Content legally acquired or licensed is converted into a size that fits your screen, and protected from being copied.
- Servers across the world make a copy of it and store it so that the closest one to you delivers it at max quality and speed. 
- When you select a show, your Netflix app cherry-picks which of these servers it will load the video from.
- You are now gripped by Frank Underwood's chilling tactics, given depression by BoJack Horseman's rollercoaster life, tickled by Dev in Master of None and made phobic to the future of technology by the stories in Black Mirror. And your lifespan decreases as your binge watching turns you into a couch potato.

It looked so simple before, right?

News Roundup Moving FreshPorts (http://dan.langille.org/2017/11/15/moving-freshports/)

Today I moved the FreshPorts website from one server to another. My goal is for nobody to notice. In preparation for this move, I have: reduced the DNS TTL to 60s, posted to Twitter, updated the status page, and put the website in offline mode.

What was missed: I turned off commit processing on the new server, but I did not do this on the old server. I should have run:

```
sudo svc -d /var/service/freshports
```

That stops processing of incoming commits. No data is lost, but it keeps the two databases at the same spot in history. Commit processing could continue during the database dumping, but that does not affect the dump, which will be consistent regardless.

The offline code: here is the basic stuff I used to put the website into offline mode. The main points are:

```
header("HTTP/1.1 503 Service Unavailable");
ErrorDocument 404 /index.php
```

I move the DocumentRoot to a new directory, containing only index.php. Every error invokes index.php, which returns a 503 code.

The dump: the database dump just started (Sun Nov 5 17:07:22 UTC 2017).

```
root@pg96:~ # /usr/bin/time pg_dump -h 206.127.23.226 -Fc -U dan freshports.org > freshports.org.9.6.dump
```

That should take about 30 minutes; I have set a timer to remind me. Total time was: 1464.82 real, 1324.96 user, 37.22 sys. The MD5 is:

```
MD5 (freshports.org.9.6.dump) = 5249b45a93332b8344c9ce01245a05d5
```

It is now: Sun Nov 5 17:34:07 UTC 2017.

The rsync: the rsync should take about 10-20 minutes. I have already done an rsync of yesterday's dump file; the rsync today should copy over only the deltas (i.e. differences). The rsync started at about Sun Nov 5 17:36:05 UTC 2017 and took 2m9.091s. The MD5 matches.

The restore: the restore should take about 30 minutes (I ran this test yesterday). It is now Sun Nov 5 17:40:03 UTC 2017.

```
$ createdb -T template0 -E SQL_ASCII freshports.testing
$ time pg_restore -j 16 -d freshports.testing freshports.org.9.6.dump
```

Done:

```
real 25m21.108s
user 1m57.508s
sys 0m15.172s
```

It is now Sun Nov 5 18:06:22 UTC 2017.

Insert break here: about here, I took a 30-minute break to run an errand. It was worth it.

Changing DNS: I'm ready to change DNS now. It is Sun Nov 5 19:49:20 EST 2017. Done. And nearly immediately, traffic started.

How many misses? During this process, XXXXX requests were declined:

```
$ grep -c '" 503 ' /usr/websites/log/freshports.org-access.log
XXXXX
```

That's it, we're done. Total elapsed time: 1 hour 48 minutes. There are still a number of things to follow up on, but that was the transfer.

The new FreshPorts Server (http://dan.langille.org/2017/11/17/x8dtu-3/) ***

Using bhyve on top of CEPH (https://lists.freebsd.org/pipermail/freebsd-virtualization/2017-November/005876.html)

Hi, just an info point. I'm preparing for a lecture tomorrow, and thought why not do an actual demo.... 
Like to be friends with Murphy :) So after I started the cluster (5 jails with 7 OSDs), this is what I manually needed to do to boot a memory stick.

Start a bhyve instance:

```
rbd --dest-pool rbddata --no-progress import memstick.img memstick
rbd-ggate map rbddata/memstick
```

The ggate device is available on /dev/ggate1.

```
kldload vmm
kldload nmdm
kldload if_tap
kldload if_bridge
kldload cpuctl
sysctl net.link.tap.up_on_open=1
ifconfig bridge0 create
ifconfig bridge0 addm em0 up
ifconfig tap11 create
ifconfig bridge0 addm tap11
ifconfig tap11 up
```

Load the GGate disk in bhyve and boot a single VM from it:

```
bhyveload -c /dev/nmdm11A -m 2G -d /dev/ggate1 FB11
bhyve -H -P -A -c 1 -m 2G -l com1,/dev/nmdm11A -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap11 -s 4,ahci-hd,/dev/ggate1 FB11 &
bhyvectl --vm=FB11 --get-stats
```

Connect to the VM:

```
cu -l /dev/nmdm11B
```

And that'll give you a bhyve VM running on an RBD image over ggate. In the installer I tested reading from the boot disk:

```
root@:/ # dd if=/dev/ada0 of=/dev/null bs=32M
21+1 records in
21+1 records out
734077952 bytes transferred in 5.306260 secs (138341865 bytes/sec)
```

which is a nice 138 MB/sec. Hope the demonstration does work out tomorrow. --WjW ***

Donald Knuth - The Patron Saint of Yak Shaves (http://yakshav.es/the-patron-saint-of-yakshaves/)

Excerpts: In 2015, I gave a talk in which I called Donald Knuth the Patron Saint of Yak Shaves. The reason is that Donald Knuth achieved the most perfect and long-running yak shave: TeX. I figured this is worth repeating.

How to achieve the ultimate yak shave: the ultimate yak shave is the combination of improbable circumstance, the privilege to be able to shave at your heart's will, and the will to follow things through to the end. Here's the way it was achieved with TeX. The recounting is purely mine, inaccurate and obviously there for fun. I'll avoid the most boring facts that everyone always tells, such as why Knuth's checks have their own Wikipedia page.

Community shaving is best shaving: since the release of TeX, the community has been busy working on using it as a platform. If you ever downloaded the full TeX distribution, please bear in mind that you are downloading the amassed work of over 40 years, meant to make sure that each and every TeX document ever written still builds. We're talking about documents here. But mostly, two big projects sprung out of that. The first is LaTeX by Leslie Lamport. Lamport is a very productive researcher, famous for research in formal methods through TLA+ and also known for laying the groundwork for many distributed algorithms. LaTeX is based on the idea of separating presentation and content. It is based around the idea of document classes, which describe the way a certain document is laid out. Think Markdown, just much more complex. The second is ConTeXt, which is far more focused on fine-grained layout control.

The moral of the story: whenever you feel like "can't we just replace this whole thing, it can't be so hard" when handling TeX, don't forget how many years of work and especially knowledge were poured into that system. Typesetting isn't the most popular knowledge around programmers. Especially see it in the context of the space it is in: they can't remove legacy. Ever. That would break documents. TeX is also not a programming language. It might resemble one, but mostly, it should be approached as a typesetting system first. A lot of its confusing lingo gets much better then: it's not programming lingo. 
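To make the document-class idea concrete, a minimal LaTeX sketch (an invented example, not from the talk): the writer supplies only structure, and swapping the class on the first line re-lays out the same source.

```
\documentclass{article} % swap in report, book or beamer: same content, new layout
\title{Separating Content from Presentation}
\author{A. N. Author}
\begin{document}
\maketitle
\section{The idea}
The writer marks up structure (title, sections, emphasis);
the document class alone decides fonts, margins and numbering.
\end{document}
```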
By approaching TeX with an understanding of its history, a lot of things can be learned from it. And yes, a replacement would be great, but it would take ages. In any case, I hope I thoroughly convinced you why Donald Knuth is the Patron Saint of Yak Shaves.

Extra credits: this comes out of an enjoyable discussion with [Arne from Lambda Island](https://lambdaisland.com/), who listened and said "you should totally turn this into a talk".

Vincent's trip to EuroBSDCon 2017 (http://www.vincentdelft.be/post/post_20171016)

My EuroBSDCon 2017. Posted on 2017-10-16 09:43:00 from Vincent in OpenBSD. Let me just share my feedback on those two days spent in Paris for EuroBSDCon, my first BSDCon. I'm not a developer, contributor, ... Do not expect to improve your OpenBSD skills with this text :-) I know, we are on October 16th and the EuroBSDCon in Paris was three weeks ago :( I'm not quick!!! Sorry for that.

Arriving at 10h, I'm too late for the start of the keynote. The few people behind the desk welcome me by talking in Dutch, mainly because of my name; indeed, Delft is a city in the Netherlands, but also a well-known university. I inform them that I'm from Belgium, and the discussion moves to the fact that FOSDEM is located in Brussels. I receive my nice white and blue T-shirt, a bit like a marine T-shirt but with the nice EuroBSDCon logo. I ask where the different rooms reserved for the BSD event are. We have one big room on the first floor, one medium room one level below, and two small rooms one level above. All are really easy to access. In the entrance hall we have four or five tables with people representing their companies. These are mainly the big sponsors of the event, providing details about their activities and business. I discuss a little bit with StormShield and Gandi. At other tables people are selling BSD T-shirts, and they will quickly sell out.

"Is it done yet?" The never-ending story of pkg tools: at the last FOSDEM I had already heard Antoine and Baptiste presenting the OpenBSD and FreeBSD battle, so I decide to listen to Marc Espie in the medium room called Karnak. Marc explains that he has completely rewritten the pkg_add command. He explains that, in contrast with other elements of OpenBSD, the package tools must be backward compatible and stable over a longer period than 12 months (the support period for OpenBSD). On the funny side, he explains that he has his best ideas in the bath. Hackathons are also used to validate some ideas with other OpenBSD developers. All in all, he explains that the most time-consuming part is to imagine a good solution; coding it is quite straightforward. He adds that the better an idea is, the shorter the implementation will be.

A Tale of six motherboards, three BSDs and coreboot: after lunch I decide to listen to the talk about coreboot. Indeed, one or two years ago I had listened to the Libreboot project at FOSDEM, and since they made several references to coreboot, it's a perfect occasion to listen more carefully to this project. Piotr and Katarzyna Kubaj explain to us how to boot a machine without the native BIOS. Indeed, coreboot can replace the BIOS, and de facto avoid several binaries imposed by the vendor. They explain that some motherboards support their code, but they also show how difficult it is to flash a BIOS and replace it with coreboot. They even destroyed a motherboard during the installation, apparently because the power supply they were using was not stable enough on the 3V line. 
It's really amazing to see that open source developers can go, by themselves, to such a deep technical level.

State of the DragonFly's graphics stack: after this coreboot talk, I decide to stay in the room to follow the presentation of François Tigeot. François is now one of the core developers of DragonFly BSD, an amazing BSD system with its own filesystem, called HAMMER. HAMMER offers several amazing features like snapshots, checksummed data integrity, deduplication, ... François has spent the last few years integrating the video drivers developed for Linux into DragonFly BSD. He explains that instead of adapting this video-card code to the kernel API of DragonFly BSD, he has "simply" built an intermediate layer between the DragonFly BSD kernel and the video drivers. This is not said in the talk, but the effort is very impressive; indeed, it is more or less a Linux emulator inside DragonFly BSD. François explains that he started with the Intel video driver (drm/i915), but he is now able to run drm/radeon quite well, and also drm/amdgpu and drm/nouveau.

Discovering OpenBSD on AWS: then I move to the small room at the upper level to follow a presentation by Laurent Bernaille on OpenBSD and AWS. First Laurent explains that he is re-using the work done by Antoine Jacoutot on integrating OpenBSD into AWS, but on top of that he has integrated several other open source solutions allowing him to build OpenBSD machines very quickly with one command. Moreover, those machines come up with the network config, the required packages, ... On top of the slides presented, he shows us, in a live demo, how the system works. An amazing presentation, which shows that, by putting the correct tools together, a machine can build and configure other machines in one go.

OpenBSD Testing Infrastructure Behind bluhm.genua.de: here Jan Klemkow explains that he has set up a lab where he is able to run different OpenBSD architectures. The system has been designed to install, on demand, a certain version of OpenBSD on the different available machines. On top of that, a regression test script can be triggered. This provides reports showing what is working and what is no longer working on the different machines. If I've understood correctly, Jan would like to provide such a lab to the core developers of OpenBSD in order to allow them to validate their code easily and quickly. Some more effort is needed to reach this goal, but with what exists today, Jan and his colleagues are quite close. Since his company uses OpenBSD in its business, in his eyes this system is a "tit for tat" to the OpenBSD community.

French story on cybercrime: then comes the second keynote of the day in the big auditorium, given by a colonel of the French gendarmerie, Mr Freyssinet, who is head of the cybercrime unit inside the Gendarmerie. Mr Freyssinet explains that the "bad guys" are more and more mobile across countries, and more and more organized; the lone hacker in his room is no longer the reality. As a consequence, the different national police investigators are collaborating more inside an organization called Interpol. What is striking in his talk is that Mr Freyssinet speaks of "crime as a service": indeed, more and more hackers are selling their services to "bad and temporary organizations".

Social event: it's now time for the famous social event on the river Seine. The organizers ask us to go, in small groups, to a station; it is a 15-minute walk through Paris. 
Luckily, the weather is perfect. To identify themselves clearly, several organizers carry a "beastie fork" in their hands as they walk on the sidewalk, generating some amazing reactions from citizens and tourists; some of them recognize the FreeBSD logo and ask us for details. Amazing :-) We walk along small and big sidewalks until we reach a small stair going under the street, where we find a train station, a bit like a metro station. Three stations later they ask us to get out. We walk a few minutes and arrive in front of a boat with a double deck: one inside, with nice tables and chairs, and one on the roof. The crew asks us to go up to the second deck, where we are welcomed with a glass of wine. The Eiffel Tower is just a few hundred meters from us, and every hour it blinks for five minutes with thousands of small lights. Brilliant :-) We also see the "Statue de la Liberté" (the small one), which is on a small island in the middle of the river. During the whole night the bar is open with drinks and some appetizers, snacks, ... Such a walking dinner is perfect for talking with many different people. I've talked with several people who just use BSD; like me, they are not deep, specialized developers. One was from Switzerland, another from Austria, and another from the Netherlands. But I've also followed a discussion with Theo de Raadt and several people from the FreeBSD Foundation. Some are very technical guys, others just users, like me, but all with the same passion for one of the BSD systems. An amazing evening.

OpenBSD's small steps towards DTrace (a tale about DDB and CTF): on the second day, I decide to sleep in, in order to have enough resources to drive back home (three hours by car). So I miss the first presentations and arrive at the event around 10h30. Lots of people are already present; some faces are less "fresh" than others. I decide to listen to DTrace in OpenBSD. After 10 minutes I am so lost in the highly technical explanations that I decide to open my PC and look at something else. My OpenBSD laptop rarely leaves my home, so I've never needed a screen-locking system, but in a crowded environment it's better to have one. So I was looking for a simple solution: I've looked at how to use xlock and combined it with the /etc/apm/suspend script, ... It is always very easy to use OpenBSD :-)

The OpenBSD web stack: then I decide to follow the presentation of Michael W Lucas, well known for his books "Absolute OpenBSD", "Relayd", ... Michael talks about the httpd daemon in OpenBSD, but he also presents its integration with CARP, relayd, PF, FastCGI, and rules based on Lua regexes (as opposed to Perl regexes), ... For sure, he emphasizes the security aspects of those tools: privilege separation, chroot, ...

OpenSMTPD, current state of affairs: then I follow the presentation of Gilles Chehade about the OpenSMTPD project. An amazing presentation that, on top of the technical challenges, shows how to manage such a project across the years. Gilles has been working on OpenSMTPD since 2007, thus 10 years!!! He explains the different decisions they took to make the software as simple as possible to use, but as secure as possible, too: privilege separation, chroot, pledge, randomized malloc, etc. The development started on BSD systems, but once the project became quite well known they received lots of contributions from Linux developers.

Hoisting: lessons learned integrating pledge into 500 programs: after a small break, I decide to listen to Theo de Raadt, the founder of OpenBSD. 
In his own style, with trekking boots, shorts and backpack, Theo starts by saying that pledge is the outcome of nightmares. Theo explains that the paper called "Hacking Blind", presenting BROP (blind return-oriented programming), has worried him for a few years. That's why he developed pledge, as a tool that kills a process as soon as possible when there is unforeseen behavior in the program. For example, with pledge a program which can only write to disk will be immediately killed if it tries to reach the network. By implementing pledge in the roughly 500 programs present in the base system, OpenBSD is becoming more secure and more robust.

Conclusion: my first EuroBSDCon was a great, interesting and cool event. I talked with several BSD enthusiasts. I've been using OpenBSD since 2010, but I'm not a developer, so I was worried about being "lost" in the middle of experts. In fact, that was not the case: at EuroBSDCon you find many different types of enthusiastic BSD users. What is nice about EuroBSDCon is that the organizers take care of everything for you; you just have to sit and listen. They even plan how to spend Saturday evening, in a fun and very cool way. The small drawback is that all of this has a cost: in my case the whole weekend cost me a bit more than 500 euros. Based on what I learned and saw, this is a very acceptable price; nearly all the presentations I saw gave me valuable input for my daily job. For sure, the total price is also linked to my personal choices: hotel, parking. And I'm surely biased because I'm used to going to FOSDEM in Brussels, which costs nothing (entrance) and is approximately 45 minutes from my home. But FOSDEM does not have the same atmosphere, and its presentations are less linked to my daily job. I do not regret my trip to EuroBSDCon and will surely plan other ones.

Beastie Bits Important munitions lawyering (https://www.jwz.org/blog/2017/10/important-munitions-lawyering/) AsiaBSDCon 2018 CFP is now open, until December 15th (https://2018.asiabsdcon.org/) ZSTD Compression for ZFS by Allan Jude (https://www.youtube.com/watch?v=hWnWEitDPlM&feature=share) NetBSD on Allwinner SoCs Update (https://blog.netbsd.org/tnf/entry/netbsd_on_allwinner_socs_update) ***

Feedback/Questions Tim - Creating Multi Boot USB sticks (http://dpaste.com/0FKTJK3#wrap) Nomen - ZFS Questions (http://dpaste.com/1HY5MFB) JJ - Questions (http://dpaste.com/3ZGNSK9#wrap) Lars - Hardening Diffie-Hellman (http://dpaste.com/3TRXXN4) ***

Mexico Unexplained
William Lamport, Mexico’s Irish Would-Be King

Mexico Unexplained

Play Episode Listen Later Feb 19, 2017 18:35


A brilliant Irish adventurer almost won Mexico's independence from Spain in the 1640s.

Aussie Bloggers Podcast
Imogen Lamport Talks Style Blogging

Aussie Bloggers Podcast

Play Episode Listen Later Nov 9, 2016 21:28


Marketing your blog as a style blogger | The science of style through blogging | Where her blog first originated | How she grew an international audience before her Australian audience | Her experiences about blogging and how readers connect with her | Blogging and her personal relationships with others | How having a mailing list can help monetise your blog | Why Pinterest and Facebook work for her | Best tip for growing your subscriber base | What sort of content is the best content to create when using options | How to build and retain a community through Facebook Groups | How to use Facebook groups to connect your readers together

Q.E.D. Code
Q.E.D. 18: Paxos

Q.E.D. Code

Play Episode Listen Later Sep 18, 2016 16:49


Leslie Lamport wrote a paper describing the Paxos algorithm in plain English. This is an algorithm that guarantees that a distributed system reaches consensus on the value of an attribute. Lamport claims that the algorithm can be derived from the problem statement, which would imply that it is the only algorithm that can achieve these results. Indeed, many of the distributed systems that we use today are based at least in part on Paxos. Linq not only introduced the ability to query a database directly from C#; it also introduced a set of language features that were useful in their own right. Lambda expressions did not exist in C# prior to 3.0, but now they are preferred to their predecessor, anonymous delegates. Extension methods were added to the language specifically to make Linq work, but nevertheless provide an entirely new form of expressiveness. Linq works because its parts were designed to compose. This is not quite so true of language features that came later, like async and primary constructors. Fermat was quite fond of writing little theorems and not providing their proofs. One such instance, known as "Fermat's Little Theorem", turns out to be fairly easy to prove, and provides the basis for much of cryptography. It states that a base that is not a multiple of a prime, raised to one less than that prime, is always equal to 1 in the prime's modulus. This is useful in building a good trapdoor function: we can easily compute a modular exponentiation, but not the discrete logarithm that reverses it.
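A quick check of the theorem, and of the easy direction of that trapdoor, using Python's built-in modular exponentiation (the numbers are arbitrary):

```
# Fermat's little theorem: for a prime p and any a not divisible by p,
# a**(p-1) % p == 1. pow(a, e, p) is fast modular exponentiation, the
# easy direction of the trapdoor; recovering e from pow(a, e, p) is the
# discrete logarithm, the hard direction.
p = 101                         # a small prime
for a in (2, 3, 57, 100):
    assert pow(a, p - 1, p) == 1
print(pow(7, 13, p))            # easy: 13 is the secret exponent
# Given only 7, p and that output, finding 13 is a discrete log problem.
```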

Teahour
#82 - A chat about the technology behind Bitcoin and Blockchain

Teahour

Play Episode Listen Later Dec 26, 2015 131:19


This episode is sponsored by 思客教学, an education company focused on remote, apprenticeship-style IT training. This episode is hosted by Terry, who invited his best friend Jan to chat about the technology behind Bitcoin: distributed systems, algorithms, and the blockchain. Intridea Peatio ethfans LMAX Disruptor archlinux bspwm plan9 ranger Is Bitcoin a good idea? Merkle tree Linked list Hash List Mixing Elliptic Curve Digital Signature Algorithm (ECDSA) Checksum RSA Zerocash Zero-knowledge proof The Byzantine Generals Problem Leslie Lamport LaTeX TeX Donald Knuth Lamport signature PoW’s pros and cons PoS’s pros and cons DAO and DAC scrypt Proof-of-stake Vitalik Buterin Ethereum gollum Nick Szabo’s Smart Contracts Idea Bitcoin Script Special Guest: Jan.
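The link list above mentions the Lamport signature, Leslie Lamport's hash-based one-time signature scheme. A minimal sketch in Python (illustrative only: a real deployment signs each message with a fresh key pair, since revealing preimages burns the key):

```
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(bits: int = 256):
    # Secret key: two random preimages per message-digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    # Public key: the hashes of every preimage.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(message: bytes, sk):
    digest = H(message)
    # For each bit of the digest, reveal one of the two preimages.
    return [sk[i][(digest[i // 8] >> (7 - i % 8)) & 1] for i in range(len(sk))]

def verify(message: bytes, sig, pk) -> bool:
    digest = H(message)
    return all(H(sig[i]) == pk[i][(digest[i // 8] >> (7 - i % 8)) & 1]
               for i in range(len(pk)))

sk, pk = keygen()
sig = sign(b"pay Jan 1 BTC", sk)
print(verify(b"pay Jan 1 BTC", sig, pk))   # True
print(verify(b"pay Jan 2 BTC", sig, pk))   # False
```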

Algorithmes, machines et langages
05 - Proving programs: why, when, how? - PDF

Algorithmes, machines et langages

Play Episode Listen Later Apr 2, 2015 63:20


Gérard Berry, Algorithms, Machines and Languages, year 2014-2015. Proving programs: why, when, how? Sixth lesson: Model checking.

This lecture concludes the general presentation of formal verification methods with model checking. This method is quite different from the previous ones because it essentially concerns finite-state programs, those for which one can, at least conceptually, completely unroll all possible executions in finite time and space. Moreover, unlike the previously described methods, model checking is mainly concerned with parallel programs. The parallelism may be synchronous, as in the circuits and synchronous languages presented in previous years, or asynchronous, as in communication protocols, networks and distributed algorithms. Model checking was born in the early 1980s, almost simultaneously in two places: Grenoble, with J-P. Queille and J. Sifakis, who developed the CESAR system and its temporal logic, and the USA, with E. Clarke and E. Emerson, who developed the temporal logic CTL and the EMV system. This work earned Clarke, Emerson and Sifakis the 2007 Turing Award. It itself built on the work of Amir Pnueli (Turing Award 1996) on temporal logic. Model checking has since grown considerably, and is certainly the formal method most used in industry, particularly in circuit CAD.

The basic idea is to build the graph of all possible executions of a program, which is called its model. This model may take the form of a Kripke structure (after the logician and philosopher of modal logic), that is, a graph whose states are labeled with elementary predicates, or of a transition structure, where the labels are carried by the transitions between states. Once built, the model becomes independent of the language that generated it. To reason about a model, a widespread approach is to use temporal logics, which define temporal properties from elementary properties of states or transitions together with quantifiers over the states or paths of the graph. One can thus express and verify safety properties (absence of bugs), such as "at no time can the elevator travel with the door open", absence of deadlocks, and liveness properties, such as "the elevator will eventually answer all passenger requests" or "each process will obtain the shared resource infinitely often if it requests it infinitely often".

We first present CTL*, the most general of these logics, which allows state and path quantifications over Kripke structures to be nested arbitrarily. But this very expressive logic is hard to use, and its computations are prohibitively expensive. Two different sub-logics are used instead: LTL (Linear Temporal Logic), which does not quantify over states and therefore considers only linear traces, and CTL, a branching-time logic that allows quantification over paths but with restrictions relative to CTL*. These two logics differ in expressiveness, and each has advantages and disadvantages that we briefly discuss. LTL is the logic best suited to verifying liveness properties, as shown by L. Lamport (Turing Award 2014) with his TLA+ system. But, conversely, it cannot express predicates about the existence of particular computations. Modeling by transition systems, systematized by R. Milner (Turing Award 1992) in his study of calculi of communicating processes, makes it far easier to compose the executions of such parallel processes. A fundamental notion introduced by Milner is bisimulation, a behavioral equivalence that allows a fine comparison of what processes can and cannot do. We show that reduction by bisimulation provides a very interesting and intuitive alternative to temporal logics for model checking, in particular in connection with synchronous languages.

A final way to conduct model checking is to replace temporal formulas with observer programs, which take as input the inputs and outputs of the program under verification and are responsible for raising a bug signal if they detect an anomaly. This method is ideal for synchronous languages such as Esterel and Lustre, studied in previous years, because the observers can be written in those same languages more simply than in temporal logic, at least for the safety properties that matter most in their application domain. This method is in fact not disjoint from the previous ones, since temporal formulas are often translated into observer automata for verification. Note that, in all cases, the program to be verified evolves in an environment that it is important, and often indispensable, to model with the same techniques. Modeling the environment is not necessarily simpler than modeling the program itself, and, particularly in temporal logic, one must make sure that the environment model built is not empty, lest all the properties to be verified become trivially true. This requires studying the satisfiability of the environment formulas, which is not necessarily simple.

We end the lecture with a brief presentation of the algorithmics of model checking, which divides into two large classes of methods and associated systems. Explicit methods systematically enumerate the possible states and transitions. As the size of the model can be gigantic, these methods use massively parallel machines as well as many ways of reducing the complexity of the analysis: on-the-fly computation of the model and properties, random exploration of the graph, reduction by symmetry or by commuting independent actions, various abstractions, etc. Implicit methods, introduced later, use symbolic representations of the state and transition spaces, for instance characteristic functions of sets. Their advantage is that the size of the formulas used to explore the model is no longer tied to the size of the state space; their drawbacks are that their computation time is hard to predict and their implementation on parallel machines problematic. Binary Decision Diagrams (BDDs), the oldest implicit representation in Boolean computation, will be studied in the next lecture. Boolean satisfiability (SAT) and satisfiability modulo theories (SMT) are increasingly becoming the methods of choice for implicit model checking. We illustrate this with the well-known example of Sudoku, relevant here even if solving it is not really temporal model checking. Like explicit methods, implicit methods call on many techniques to fight the exponential explosion of computation time or memory size; they will be studied in the following years. Finally, we stress two essential properties of model checkers that make them attractive to users: they do not require the user to know precisely the techniques they employ, and they can produce minimal counterexamples for false properties, which is essential both for debugging and for test generation.
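To make the explicit-state approach concrete, a minimal sketch in Python (an invented toy, not the course's tooling): breadth-first exploration of a tiny elevator model, returning a shortest counterexample trace when the safety property "never moving with the door open" fails, exactly the kind of minimal counterexample the lecture praises model checkers for.

```
from collections import deque

def check_invariant(initial, transitions, invariant):
    # Tiny explicit-state model checker: breadth-first exploration of
    # every reachable state; returns a shortest trace to a violating
    # state, or None if the safety property holds everywhere.
    queue = deque([(initial, [initial])])
    seen = {initial}
    while queue:
        state, trace = queue.popleft()
        if not invariant(state):
            return trace
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return None

# Toy elevator model: state = (floor, door_open, moving).
def transitions(state):
    floor, door_open, moving = state
    succs = []
    if not moving:
        succs.append((floor, not door_open, False))   # open/close the door
        succs.append((floor, door_open, True))        # start moving (bug: even with the door open)
    else:
        succs.append((min(floor + 1, 3), door_open, True))  # travel up
        succs.append((floor, door_open, False))             # stop at a floor
    return succs

# Safety property: the elevator never moves while the door is open.
trace = check_invariant((0, False, False), transitions,
                        lambda s: not (s[1] and s[2]))
print(trace)   # [(0, False, False), (0, True, False), (0, True, True)]
```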

Algorithmes, machines et langages
05 - Proving programs: why, when, how?

Algorithmes, machines et langages

Play Episode Listen Later Apr 2, 2015 63:20


Gérard Berry Algorithmes, machines et langages Année 2014-2015 Prouver les programmes : pourquoi, quand, comment ? Cinquième leçon : La vérification de modèles (model-checking) Ce cours termine la présentation générale des méthodes de vérification formelle par la vérification de modèles, plus connue sous son nom anglais original de model-checking. Cette méthode est bien différente des précédentes car elle s’intéresse essentiellement aux programmes d’états finis, ceux dont on peut au moins conceptuellement dérouler complètement toutes les exécutions possibles en temps et espace fini. De plus, contrairement aux méthodes précédemment décrites, le model-checking s’intéresse principalement aux programmes parallèles. Le parallélisme peut y être synchrone comme dans les circuits ou les langages synchrones présentés les années précédentes, ou asynchrones comme dans les protocoles de communication, les réseaux et les algorithmes distribués. Le model-checking est né au début des années 1980, quasi-simultanément en deux endroits : Grenoble avec J-P. Queille et J. Sifakis, qui ont développé le système CESAR et sa logique temporelle, et les USA avec E. Clarke et E. Emerson qui ont développé la logique temporelle CTL et le système EMV. Ces travaux ont donné le prix Turing 2007 à Clarke, Emerson et Sifakis. Ils s’appuyaient eux-mêmes sur les travaux d’Amir Pnueli (prix Turing en 1996) sur la logique temporelle. Le model-checking s’est considérablement développé ensuite, et constitue certainement la méthode formelle la plus utilisée dans l’industrie, en particulier dans la CAO de circuits. L’idée de base est de construire le graphe de toutes les exécutions possibles d’un programme, qu’on appelle son modèle. Ce modèle peut prendre la forme d’une structure de Kripke (logicien et philosophe de la logique modale), c’est-à-dire d’un graphe où les états sont étiquetés par des prédicats élémentaires, ou encore d’une structure de transitions, où les étiquettes sont portées par les transitions entre états. Une fois construit, le modèle devient indépendant du langage qui l’a engendré. Pour raisonner sur un modèle, un moyen très répandu est l’utilisation de logiques temporelles, définissent les propriétés temporelles à l’aide de propriétés élémentaires des états ou transitions et de quantificateurs sur les états ou les chemins de son graphe. On peut ainsi exprimer et vérifier des propriétés de sûreté (absence de bugs), comme « à aucun moment l’ascenseur ne peut voyager la porte ouverte », d’absence de blocages de l’exécution, ou de vivacité, comme « l’ascenseur finira par répondre à toutes les demandes des passagers » ou encore « chaque processus obtiendra infiniment souvent la ressource partagée s’il la demande infiniment souvent ». Nous présenterons d’abord la logique CTL*, la plus générale, qui permet d’imbriquer arbitrairement les quantifications d’états et de chemin sur les structures de Kripke. Mais cette logique très expressive est difficile à utiliser et les calculs y sont d’un coût prohibitif. Deux sous-logiques différentes sont utilisées : LTL (Linear Temporal Logic), qui ne quantifie pas sur les états et considère donc seulement des traces linéaires, et CTL, logique arborescente qui permet de quantifier sur les chemins mais avec des restrictions par rapport à CTL*. Ces deux logiques sont d’expressivités différentes et ont chacune des avantages et des inconvénients que nous discuterons brièvement. LTL est la logique la mieux adaptées pour la vérification de propriétés de vivacité, comme le montre L. 
But, conversely, LTL cannot express predicates about the existence of particular computations. Modeling with transition systems, systematized by R. Milner (1991 Turing Award) in his study of calculi of communicating processes, makes it much easier to compose the executions of such parallel processes. A fundamental notion introduced by Milner is bisimulation, a behavioral equivalence that allows a fine-grained comparison of what processes can and cannot do. We show that reduction by bisimulation provides a very interesting and intuitive alternative to temporal logics for model checking, in particular in connection with synchronous languages. A final way of carrying out model checking is to replace temporal formulas with observer programs, which take as inputs the inputs and outputs of the program under verification and are in charge of emitting a bug signal whenever they detect an anomaly. This method is ideally suited to synchronous languages such as Esterel and Lustre, studied in previous years, because the observers can be written in those same languages more simply than in temporal logic, at least for the safety properties that matter most in their application domain. The method is in fact not disjoint from the previous ones, since temporal formulas are often translated into observer automata for verification. Note that, in all cases, the program under verification evolves within an environment that it is important, and often indispensable, to model with the same techniques. Modeling the environment is not necessarily simpler than modeling the program itself, and, particularly in temporal logic, one must make sure that the environment model is not empty, lest all the properties to be verified become trivially true. This requires studying the satisfiability of the environment formulas, which is not necessarily simple. We end the lecture with a brief presentation of the algorithmics of model checking, which divides into two large classes of methods and associated systems. Explicit methods systematically enumerate the possible states and transitions. Since the model can be gigantic, these methods use massively parallel machines as well as many ways of reducing the complexity of the analysis: on-the-fly computation of the model and of the properties, random exploration of the graph, reductions exploiting symmetry or the commutation of independent actions, various abstractions, etc. Implicit methods, introduced later, use symbolic representations of the state and transition spaces, for instance characteristic functions of sets. Their advantage is that the size of the formulas used to explore the model is no longer tied to the size of the state space; their drawbacks are that their computation time is hard to predict and that implementing them on parallel machines is problematic. Binary Decision Diagrams (BDDs), the oldest implicit representation for Boolean computation, will be studied in the next lecture. Boolean satisfiability (SAT) and satisfiability modulo theories (SMT) are increasingly becoming the methods of choice for implicit model checking.
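As a rough illustration of the explicit-state methods described above, the following self-contained Python sketch, ours rather than the lecture's, enumerates the reachable states of a toy two-floor elevator modeled as a transition system and checks the safety property "never moving with the door open" by breadth-first search; because the search proceeds breadth-first, a violation, if one existed, would be reported as a shortest counterexample trace.

    from collections import deque

    # A tiny transition system: a state is a (floor, moving, door_open) triple,
    # implicitly labeled by its components; successors are given explicitly.
    def successors(state):
        floor, moving, door_open = state
        if door_open:
            return [(floor, False, False)]            # close the door
        if moving:
            return [(1 - floor, False, False),        # arrive at the other floor
                    (1 - floor, False, True)]         # arrive and open the door
        return [(floor, True, False),                 # start moving
                (floor, False, True)]                 # open the door in place

    def bad(state):
        _, moving, door_open = state
        return moving and door_open                   # safety violation

    def check_safety(initial):
        """Explicit-state BFS: None if safe, else a shortest trace to a bad state."""
        parent = {initial: None}
        queue = deque([initial])
        while queue:
            s = queue.popleft()
            if bad(s):
                trace = []
                while s is not None:
                    trace.append(s)
                    s = parent[s]
                return list(reversed(trace))          # shortest counterexample
            for t in successors(s):
                if t not in parent:
                    parent[t] = s
                    queue.append(t)
        return None                                   # all reachable states are safe

    print(check_safety((0, False, False)))            # prints None: the model is safe

The same loop scaled to real models is where the techniques listed above (on-the-fly construction, symmetry reduction, abstraction) earn their keep.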
We illustrate this on the well-known example of Sudoku, relevant here even though solving it is not really temporal model checking. Like explicit methods, implicit methods call on many techniques to fight the exponential blow-up of computation time or memory size; these will be studied in the following years. We finally insist on two essential properties of model checkers that make them attractive to users: the fact that they do not require their users to know precisely the techniques they employ, and their ability to produce minimal counterexamples for false properties, which is crucial both for debugging and for test generation.
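To give an idea of the SAT encoding behind the Sudoku example mentioned above, here is a minimal sketch, again ours and not the lecture's code: it generates the classical CNF constraints for a 9x9 grid, with var(r, c, d) true exactly when digit d sits in row r, column c; the clauses use DIMACS-style signed integers and could be handed to any off-the-shelf SAT solver.

    from itertools import combinations

    def var(r, c, d):
        """Map (row, column, digit), each in 1..9, to a DIMACS variable number."""
        return 81 * (r - 1) + 9 * (c - 1) + d

    def sudoku_cnf(givens=()):
        """Return CNF clauses (lists of signed ints) encoding the Sudoku rules."""
        clauses = []
        rng = range(1, 10)
        # Each cell holds at least one digit, and at most one.
        for r in rng:
            for c in rng:
                clauses.append([var(r, c, d) for d in rng])
                for d1, d2 in combinations(rng, 2):
                    clauses.append([-var(r, c, d1), -var(r, c, d2)])
        # Each digit appears at most once per row, per column, and per 3x3 block.
        for d in rng:
            for i in rng:
                for j1, j2 in combinations(rng, 2):
                    clauses.append([-var(i, j1, d), -var(i, j2, d)])   # row i
                    clauses.append([-var(j1, i, d), -var(j2, i, d)])   # column i
            for br in (0, 3, 6):
                for bc in (0, 3, 6):
                    cells = [(br + i, bc + j) for i in (1, 2, 3) for j in (1, 2, 3)]
                    for (r1, c1), (r2, c2) in combinations(cells, 2):
                        clauses.append([-var(r1, c1, d), -var(r2, c2, d)])
        # Unit clauses fix the given digits of a concrete puzzle instance.
        for r, c, d in givens:
            clauses.append([var(r, c, d)])
        return clauses

    print(len(sudoku_cnf()))   # size of the base encoding, before any givens

A satisfying assignment decodes directly into a completed grid, which is why the example is a good warm-up for implicit methods even though no temporal operator is involved.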

Algorithmes, machines et langages
05 - Prouver les programmes : pourquoi, quand, comment ? - PDF

Algorithmes, machines et langages

Play Episode Listen Later Apr 2, 2015 63:20

Collège de France (Sciences et technologies)
05 - Prouver les programmes : pourquoi, quand, comment ? - PDF

Collège de France (Sciences et technologies)

Play Episode Listen Later Mar 25, 2015 63:20


Gérard Berry, Algorithmes, machines et langages, année 2014-2015, Prouver les programmes : pourquoi, quand, comment ? Sixth lecture: model checking (la vérification de modèles).

This lecture concludes the general presentation of formal verification methods with model checking. This method differs markedly from the previous ones in that it essentially targets finite-state programs, those whose possible executions can, at least conceptually, be unfolded completely in finite time and space. Moreover, unlike the methods described earlier, model checking is mainly concerned with parallel programs. The parallelism may be synchronous, as in circuits or the synchronous languages presented in previous years, or asynchronous, as in communication protocols, networks, and distributed algorithms. Model checking was born in the early 1980s, almost simultaneously in two places: Grenoble, where J-P. Queille and J. Sifakis developed the CESAR system and its temporal logic, and the USA, where E. Clarke and E. Emerson developed the temporal logic CTL and the EMC system. This work earned Clarke, Emerson, and Sifakis the 2007 Turing Award. It built in turn on Amir Pnueli's work on temporal logic (1996 Turing Award). Model checking has developed considerably since then, and is certainly the formal method most widely used in industry, in particular in circuit CAD.

The basic idea is to build the graph of all possible executions of a program, called its model. The model can take the form of a Kripke structure (after the logician and philosopher of modal logic), that is, a graph whose states are labeled with elementary predicates, or of a transition structure, where the labels are carried by the transitions between states. Once built, the model is independent of the language that generated it. To reason about a model, a very widespread approach is to use temporal logics, which define temporal properties from elementary properties of states or transitions together with quantifiers over the states or paths of the graph. One can thus express and verify safety properties (absence of bugs), such as "at no time can the elevator move with its door open", absence of deadlocks, and liveness properties, such as "the elevator will eventually serve every passenger request" or "each process will obtain the shared resource infinitely often if it requests it infinitely often".

We first present CTL*, the most general of these logics, which allows state and path quantifications over Kripke structures to be nested arbitrarily. But this very expressive logic is hard to use, and computation in it is prohibitively expensive. Two distinct sublogics are used instead: LTL (Linear Temporal Logic), which does not quantify over states and therefore considers only linear traces, and CTL, a branching-time logic that allows quantification over paths, but with restrictions compared to CTL*. The two logics differ in expressive power, and each has advantages and drawbacks that we discuss briefly. LTL is the logic best suited to verifying liveness properties, as L. Lamport (2013 Turing Award) shows with his TLA+ system. On the other hand, it cannot express predicates about the existence of particular computations. Modeling with transition systems, systematized by R. Milner (1991 Turing Award) in his study of calculi of communicating processes, is much better suited to composing the executions of parallel processes. A fundamental notion introduced by Milner is bisimulation, a behavioral equivalence that makes it possible to compare finely what processes can and cannot do. We show that reduction by bisimulation provides a very interesting and intuitive alternative to temporal logics for model checking, in particular in connection with synchronous languages.

A final way to carry out model checking is to replace temporal formulas with observer programs, which take as input the inputs and outputs of the program under verification and are responsible for emitting a bug signal whenever they detect an anomaly. This method is ideal in particular for synchronous languages such as Esterel and Lustre, studied in previous years, because the observers can be written in those same languages more simply than in temporal logic, at least for the safety properties that matter most in their application domain. The method is in fact not disjoint from the previous ones, because temporal formulas are often translated into observer automata for verification. Note that, in all cases, the program under verification evolves within an environment that it is important, and often indispensable, to model with the same techniques. Modeling the environment is not necessarily simpler than modeling the program itself, and, particularly in temporal logic, one must make sure that the environment model is not empty, otherwise all the properties to be verified become trivially true. This requires studying the satisfiability of the environment formulas, which is not necessarily easy.

We end the lecture with a brief presentation of model-checking algorithmics, which divides into two broad classes of methods and associated systems. Explicit methods systematically enumerate the possible states and transitions. Since the model can be gigantic, these methods use massively parallel machines as well as many techniques for reducing the complexity of the analysis: on-the-fly construction of the model and the properties, random exploration of the graph, reduction by exploiting symmetry or the commutation of independent actions, various abstractions, and so on. Implicit methods, introduced later, use symbolic representations of the state and transition spaces, for example characteristic functions of sets. Their advantage is that the size of the formulas used to explore the model is no longer tied to the size of the state space; their drawbacks are that their running time is hard to predict and their implementation on parallel machines problematic. Binary Decision Diagrams (BDDs), the oldest implicit representation for Boolean computation, will be studied in the next lecture. Boolean satisfiability (SAT) and satisfiability modulo theories (SMT) are increasingly becoming the methods of choice for implicit model checking. We illustrate them with the well-known example of Sudoku, relevant here even though solving it is not really temporal model checking. Like explicit methods, implicit methods call on many techniques to fight the exponential blow-up of computation time or memory; these will be studied in the following years. Finally, we stress two essential properties of model checkers that make them attractive to users: they do not require the user to know precisely which techniques they employ, and they can produce minimal counterexamples for false properties, which is crucial both for debugging and for test generation.
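To make the lecture's basic idea concrete — unfold every reachable state of a finite-state parallel program, then check a property on each — here is a minimal explicit-state sketch in Python. It is an illustration under our own assumptions, not one of the systems named above; the toy mutual-exclusion protocol and every identifier in it are invented for this example.

```python
from collections import deque

# Toy protocol: two processes sharing one lock, each cycling
# idle -> trying -> critical -> idle. A global state is the tuple
# (pc0, pc1, lock_free); the lock is acquired atomically.

def successors(state):
    """Yield every state reachable in one step (interleaving semantics)."""
    pc = list(state[:2])
    lock_free = state[2]
    for i in (0, 1):
        if pc[i] == "idle":
            yield _with(pc, lock_free, i, "trying")
        elif pc[i] == "trying" and lock_free:
            yield _with(pc, False, i, "critical")
        elif pc[i] == "critical":
            yield _with(pc, True, i, "idle")

def _with(pc, lock_free, i, new_pc):
    pc = pc[:]  # copy so each successor is independent
    pc[i] = new_pc
    return (pc[0], pc[1], lock_free)

def check_safety(initial, bad):
    """Build the model (all reachable states) by breadth-first search.
    Return a shortest counterexample trace if `bad` ever holds,
    or None if the safety property is verified."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if bad(state):
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return trace[::-1]  # BFS => minimal-length counterexample
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

initial = ("idle", "idle", True)
both_critical = lambda s: s[0] == "critical" and s[1] == "critical"
print(check_safety(initial, both_critical))  # None: mutual exclusion holds
```

The `bad` predicate plays the role of an observer watching each state, and because the exploration is breadth-first, any counterexample trace comes back with minimal length, the debugging-friendly behavior the summary emphasizes.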

Collège de France (Sciences et technologies)
05 - Prouver les programmes : pourquoi, quand, comment ?

Collège de France (Sciences et technologies)

Play Episode Listen Later Mar 25, 2015 63:20


Algorithmes, machines et langages
05 - Prouver les programmes : pourquoi, quand, comment ? - PDF

Algorithmes, machines et langages

Play Episode Listen Later Mar 25, 2015 63:20


Algorithmes, machines et langages
05 - Prouver les programmes : pourquoi, quand, comment ?

Algorithmes, machines et langages

Play Episode Listen Later Mar 25, 2015 63:20


Q.E.D. Code
Partial Order

Q.E.D. Code

Play Episode Listen Later Dec 3, 2014 14:59


A totally ordered set is one in which we can tell, for any two members, which one comes before the other. Think integers. A partially ordered set, however, only gives us an answer for some pairs of elements. A directed acyclic graph defines a partial order over its nodes. Find out how to compute this partial order, and some of the ways in which it can be applied.

The number of degrees of freedom in a system of equations is equal to the number of unknowns minus the number of equations. We can add unknowns, as long as we also add the same number of equations, without changing the behavior of the system. Then we can rewrite the equations, again without changing behavior, to solve for some of those unknowns in terms of others. Doing so, we divide the system into independent and dependent variables. The number of degrees of freedom is equal to the number of independent variables.

In 1978, Leslie Lamport wrote "Time, Clocks, and the Ordering of Events in a Distributed System". In this paper, he constructs a clock that gives us a way of telling whether one event in a distributed system happened before another. See how he did it, and start to imagine why this might be useful.
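The clock rule from the 1978 paper is short enough to sketch. Below is a minimal Python rendering (the class and method names are ours, not the paper's): each process keeps a counter, bumps it on local events and sends, and on receipt advances it past the incoming timestamp, which guarantees that if one event happened before another, its clock value is strictly smaller.

```python
class LamportClock:
    """Logical clock following the rule in Lamport's 1978 paper:
    increment on every local event and send, and on receipt jump
    past the timestamp carried by the message. If event a happened
    before event b, then C(a) < C(b); the converse need not hold,
    since the clock embeds the partial 'happened-before' order
    into a total order on timestamps."""

    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time  # timestamp to attach to the outgoing message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two processes exchanging a single message:
p, q = LamportClock(), LamportClock()
p.local_event()    # p's clock: 1
stamp = p.send()   # p's clock: 2; message carries timestamp 2
q.local_event()    # q's clock: 1
q.receive(stamp)   # q's clock: max(1, 2) + 1 = 3, so send < receive
```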

Guys on FOSS
LaTeX Part 1 - History

Guys on FOSS

Play Episode Listen Later Mar 3, 2011


Throughout history there have been people who identify a problem and, instead of idly accepting the situation, produce a solution. These are people like Henry Ford, who noticed the inefficiency of automobile production and introduced the assembly line, now a standard in modern manufacturing. Or consider Bette Nesmith Graham, painter and, more importantly, inventor of white-out. She saw a common problem and, instead of just chucking out entire sheets of typing, set out to fix it. Often these innovators do not realize the full potential of their solution to one problem. Who doesn't have a can or two of WD-40 in their house? It was originally a product designed to solve a specific space-related problem, but it is now used on everything from zippers to distributor caps. There is another case in which a project designed to solve a daily irritation took off quite dramatically. Of course, I am talking about the subject of this article: TeX and, more specifically, LaTeX.

To understand where LaTeX came from, we must first examine its foundation, TeX (pronounced tɛk). TeX was invented by Donald E. Knuth, who was growing frustrated with the typesetting practices of the 70s. In fact, when his own book, The Art of Computer Programming, was republished in the 70s, he found the typesetting hideous. A few months later Knuth decided that he would not accept the situation idly and set out to produce TeX. The project was started in 1977, and Knuth predicted that he'd be able to finish it in one year. His estimate was off by about 10 years.

However, Knuth's system did not really take off until the invention of a macro package called LaTeX. LaTeX was created by Leslie Lamport as a way for the user to concentrate on the writing instead of the formatting. Think C instead of Assembly. Today, if you were to start writing a book in TeX, you'd most likely use LaTeX. It does a great deal of automatic formatting by section, chapter, etc. in order to make the document as readable as possible. This system really lets the author focus on the content, and then make stylistic changes later.

Next article: Plain TeX vs LaTeX

Very useful links: Big Online LaTeX Beginner's Guide · Nice PDF for LaTeX Beginners · Nice PDF for Plain TeX Beginners · WikiBook on LaTeX · History · Wiki pages: LaTeX, TeX

-The Thoth-

Open Society Foundations Podcast
Fighting Impunity in Guatemala: The Experience of CICIG

Open Society Foundations Podcast

Play Episode Listen Later May 4, 2010 94:40


Featured at this Open Society Institute event is Carlos Castresana, the Spanish prosecutor who currently leads the International Commission against Impunity in Guatemala (CICIG), an unprecedented entity that seeks to assist Guatemalan institutions in investigating and ultimately dismantling domestic illegal security apparatuses and clandestine security organizations. Speakers: Carlos Castresana, Roberto Alejos, Rigoberta Menchu, Peter Lamport, Aryeh Neier, Robert Varenik. (Recorded: April 20, 2010)