Podcasts about Dennis Ritchie

American computer scientist

  • 46 podcasts
  • 75 episodes
  • 47m avg duration
  • Infrequent episodes
  • Latest: Jan 7, 2025
[Popularity chart: Dennis Ritchie, 2017–2024]


Best podcasts about Dennis Ritchie

Latest podcast episodes about Dennis Ritchie

David Bombal
#485: FREE Programming courses (Python, C, SQL and more)

Jan 7, 2025 · 70:28


Change your life in 2025! You have access to fantastic training from the amazing Dr Chuck - no excuses!! // Python for Everybody // Python for Everybody: https://www.py4e.com/ Python for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: • Python for Everybody - Full Universit... Free Python Book: http://do1.dr-chuck.com/pythonlearn/E... Dr Chuck's Website: https://www.dr-chuck.com/ Free Python Book options: https://www.py4e.com/book // C for Everybody Course // Free C Programming Course https://www.cc4e.com/ Free course on YouTube (freeCodeCamp): • Dr. Chuck reads C Programming (the cl... C Programming for Everybody on Coursera: https://www.coursera.org/specializati... // C book Audio by Dr Chuck // https://www.cc4e.com/podcast // Django for Everybody // Django for Everybody: https://www.dj4e.com/ Django for Everybody for on Coursera: https://www.coursera.org/specializati... YouTube: • Django For Everybody - Full Python Un... // PostgreSQL for Everybody // PostgreSQL for Everybody: https://www.pg4e.com/ PostgreSQL for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: • Welcome to PostgreSQL for Everybody -... // Web Applications for Everybody // YouTube: • Web Applications for Everybody Course... Web Applications for Everybody: https://www.wa4e.com/ Web Applications for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: • Welcome to Web Applications for Every... // Books // The C Programming Language by Brian Kernighan and Dennis Ritchie (the 1984 Second Ed and 1978 First Ed): https://amzn.to/3G0HSkU // MY STUFF // https://www.amazon.com/shop/davidbombal // SOCIAL // Discord: discord.com/invite/usKSyzb Twitter: www.twitter.com/davidbombal Instagram: www.instagram.com/davidbombal LinkedIn: www.linkedin.com/in/davidbombal Facebook: www.facebook.com/davidbombal.co TikTok: tiktok.com/@davidbombal // Dr Chuck Social // Website: https://www.dr-chuck.com/ Twitter: / drchuck YouTube: / csev Coursera: https://www.coursera.org/instructor/d... // MENU // 0:00 - Coming up 01:33 - How A.I. is affecting education 04:25 - Using A.I. to help students learn 08:11 - A.I. will fail you // Using A.I. to cheat in the real-world 19:40 - The Golden Age of A.I. and how it will get worse 24:51 - Is it worth it becoming a programmer in 2025 27:15 - Will A.I. replace programmers? 29:12 - Programming as a career choice 36:52 - A.I. is becoming a hardware problem 40:28 - Expectations of the younger generation 44:40 - The Master Programmer explained // Higher education is changing 52:03 - The Master Programmer courses and how to get started 56:23 - Learning JavaScript 01:09:37 - Conclusion Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!

Hacking Humans
Encore: Unix (noun) [Word Notes]

Apr 16, 2024 · 5:15


A family of multitasking, multi-user computer operating systems that derive from the original Unix system built by Ken Thompson and Dennis Ritchie in the 1960s.

Word Notes
Encore: Unix (noun)

Apr 16, 2024 · 5:15


A family of multitasking, multi-user computer operating systems that derive from the original Unix system built by Ken Thompson and Dennis Ritchie in the 1960s. Learn more about your ad choices. Visit megaphone.fm/adchoices

David Bombal
#462: AI just replaced us with Devin... seriously? Dr Chuck!

Mar 22, 2024 · 34:03


Did the Devin AI just replace us and become the first fully autonomous AI software engineer? Dr Chuck tells us if this is fact or hype. // C for Everybody Course // Free C Programming Course https://www.cc4e.com/ Free course on YouTube (freeCodeCamp): • Learn C Programming with Dr. Chuck (f... C Programming for Everybody on Coursera: https://www.coursera.org/specializati... // C book Audio by Dr Chuck // https://www.cc4e.com/podcast // Python for Everybody // Python for Everybody: https://www.py4e.com/ Python for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: • Python for Everybody - Full Universit... Free Python Book: http://do1.dr-chuck.com/pythonlearn/E... Dr Chuck's Website: https://www.dr-chuck.com/ Free Python Book options: https://www.py4e.com/book // Django for Everybody // Django for Everybody: https://www.dj4e.com/ Django for Everybody for on Coursera: https://www.coursera.org/specializati... YouTube: • Django For Everybody - Full Python Un... // PostgreSQL for Everybody // PostgreSQL for Everybody: https://www.pg4e.com/ PostgreSQL for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: • Welcome to PostgreSQL for Everybody -... // Web Applications for Everybody // YouTube: • Web Applications for Everybody Course... Web Applications for Everybody: https://www.wa4e.com/ Web Applications for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: • Welcome to Web Applications for Every... // Books // The C Programming Language by Brian Kernighan and Dennis Ritchie (the 1984 Second Ed and 1978 First Ed): https://amzn.to/3G0HSkU // MY STUFF // https://www.amazon.com/shop/davidbombal // SOCIAL // Discord: / discord Twitter: / davidbombal Instagram: / davidbombal LinkedIn: / davidbombal Facebook: / davidbombal.co TikTok: / davidbombal YouTube: / davidbombal // Dr Chuck Social // Website: https://www.dr-chuck.com/ Twitter: / drchuck YouTube: / csev Coursera: https://www.coursera.org/instructor/d... // David's SOCIAL // Discord: discord.com/invite/usKSyzb Twitter: www.twitter.com/davidbombal Instagram: www.instagram.com/davidbombal LinkedIn: www.linkedin.com/in/davidbombal Facebook: www.facebook.com/davidbombal.co TikTok: tiktok.com/@davidbombal ai devin devin ai nvidia the first AI agent software engineer AI Agent Software Engineer gpu nvidia chatgpt artificial intelligence bard ai jobs lamda c dr chuck dr chuck master programmer python neural network machine learning deep learning sentient google ai artificial intelligence google ai sentient google ai lamda google ai sentient conversation google ai alive ai jobs Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel! #ai #devin #nvidia

Geeks in Space
The Chairs of Star Trek, X-Wing Records, Blasting Kidney Stones, VCV NES Rack, Suitable Flesh GIS811

Oct 31, 2023 · 35:26


RobChrisRob returned after weeks apart to pretend to be in low Earth orbit and talk about VCV Rack and a new NES emulator module for the synthesizer rack simulator. We talked about a screen-used X-Wing model that sold at auction for $3.1M, played a game where you guess classic keyboards via their shift keys, and covered a site that listed chairs used in Star Trek, an ultrasonic kidney stone blaster, Henry Cavill & Highlander, estimating the age of the Moon, making oxygen on Mars, the new hottest chili pepper, the asteroid retrieval mission capsule that is kinda stuck shut, a Penn & Teller prank from the 80s with Rob Pike & Dennis Ritchie, and finally a whole mess of Halloween spooktober movies including Suitable Flesh, The Conference, The Ritual, Five Nights at Freddy's, and whatever else we've been keeping track of lately... Join our discord to talk along or the Subreddit where you will find all the links: https://discord.gg/YZMTgpyhB https://www.reddit.com/r/TacoZone/

EmacsTalk
015. A Casual Talk About Vim: A Tribute to Bram Moolenaar

Aug 19, 2023 · 84:15


The Stephen Wolfram Podcast
History of Science & Technology Q&A (September 7, 2022)

Jun 16, 2023 · 91:58


Stephen Wolfram answers questions from his viewers about the history of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa Questions include: Did any ancient unit systems use base 10, or did they all use more easily dividable bases like 12, 20, 60, etc.? - What is the history of design patterns in software engineering? How did people come to them? - Did you ever meet Niklaus Wirth, Dennis Ritchie, Brian Kernighan, Alan Kay and/or Paul Allen? - Have Julia sets and Fatou sets played a significant role in the development of computer programming languages? - Agreed. Computer programming languages should be object oriented for the language and structure to make sense instead of coming off as abstract and convoluted, and also so they are easier to work with and learn. - Did eighteenth-century engineers/craftsmen make use of the paradigm of Newtonian mechanics? - Why is it that Isaac Newton spent most of his time trying to prove theological ideas? - When will Moore's law expire? Apple announced four-nanometer chip technology, and there has to be a limit. - I wonder whether the future will be multicomputational, but to be honest, computers nowadays are more than powerful enough for the average user.

Hacker Public Radio
HPR3874: 2022-2023 New Years Show Episode 9

Jun 8, 2023


Episode #9 wikipedia: MS-DOS is an operating system for x86-based personal computers mostly developed by Microsoft. freedos: FreeDOS is a complete, free, DOS-compatible operating system. While we provide some utilities, you should be able to run any program intended for MS-DOS. wikipedia: Linux (/ˈliːnʊks/ (listen) LEE-nuuks or /ˈlɪnʊks/ LIN-uuks) is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. wikipedia: Token Ring is a computer networking technology used to build local area networks. It was introduced by IBM in 1984, and standardized in 1989 as IEEE 802.5. wikipedia: The BNC connector (initialism of "Bayonet Neill–Concelman") is a miniature quick connect/disconnect radio frequency connector used for coaxial cable. wikipedia: GPRS core network. wikipedia: Novell, Inc. /noʊˈvɛl/ was an American software and services company headquartered in Provo, Utah, that existed from 1980 until 2014. wikipedia: BITNET. wikipedia: DECnet. wikipedia: 3Com. realtek: realtek. tp: TP-Link Vastly Expands Smart Home Lineup With Tapo Full Home Security Solutions, Tapo Robot Vacuums and Various Matter Compatible Products. cisco: Cisco Systems, Inc., commonly known as Cisco, is an American-based multinational digital communications technology conglomerate corporation headquartered in San Jose, California. wikipedia: The International Business Machines Corporation (IBM), nicknamed Big Blue, is an American multinational technology corporation headquartered in Armonk, New York, with operations in over 175 countries. It specializes in computer hardware, middleware and software and provides hosting and consulting services in areas ranging from mainframe computers to nanotechnology. duckduckgo: Bootleg stuff search. wikipedia: VM (often: VM/CMS) is a family of IBM virtual machine operating systems used on IBM mainframes System/370, System/390, zSeries, System z and compatible systems, including the Hercules emulator for personal computers. wikipedia: Disk partitioning or disk slicing is the creation of one or more regions on secondary storage, so that each region can be managed separately. wikipedia: The IBM System/360 is a family of mainframe computer systems that was announced by IBM on April 7, 1964, and delivered between 1965 and 1978. wikipedia: The IBM System/370 (S/370) is a model range of IBM mainframe computers announced on June 30, 1970, as the successors to the System/360 family. cisco: What Is Routing? wikipedia: The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. wikipedia: The Open Systems Interconnection protocols are a family of information exchange standards developed jointly by the ISO and the ITU-T. The standardization process began in 1977. perl: Perl is a highly capable, feature-rich programming language with over 30 years of development. wikipedia: An FTP server is computer software consisting of one or more programs that can execute commands given by remote client(s) such as receiving, sending, deleting files, creating or removing directories, etc. wikipedia: The Defense Advanced Research Projects Agency (DARPA) is a research and development agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. 
wikipedia: The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first networks to implement the TCP/IP protocol suite. wikipedia: A modulator-demodulator or modem is a computer hardware device that converts data from a digital format into a format suitable for an analog transmission medium such as telephone or radio. wikipedia: Telnet (short for "teletype network") is a client/server application protocol that provides access to virtual terminals of remote systems on local area networks or the Internet. wikipedia: Remote Function Call is a proprietary SAP interface. icannwiki: BBN (Bolt, Beranek and Newman Inc.), now Raytheon BBN Technologies, is one of the leading Research and Development companies in the United States, dedicated to providing high-technology products and services to consumers. wikipedia: A punched card (also punch card or punched-card) is a piece of stiff paper that holds digital data represented by the presence or absence of holes in predefined positions. wikipedia: Punched tape or perforated paper tape is a form of data storage that consists of a long strip of paper in which holes are punched. wikipedia: A teleprinter (teletypewriter, teletype or TTY) is an electromechanical device that can be used to send and receive typed messages through various communications channels, in both point-to-point and point-to-multipoint configurations. wikipedia: Teletype Model 33. wikipedia: Teletype Model 37. wikipedia: Unix (/ˈjuːnɪks/; trademarked as UNIX) is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in 1969 at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. wikipedia: Wang Laboratories was a US computer company founded in 1951 by An Wang and G. Y. Chu. wikipedia: Library (computing). wikipedia: Magnetic-core memory was the predominant form of random-access computer memory for 20 years between about 1955 and 1975. BASIC BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. wikipedia: Microsoft BASIC is the foundation software product of the Microsoft company and evolved into a line of BASIC interpreters and compiler(s) adapted for many different microcomputers. It first appeared in 1975 as Altair BASIC, which was the first version of BASIC published by Microsoft as well as the first high-level programming language available for the Altair 8800 microcomputer. wikipedia: A floppy disk or floppy diskette (casually referred to as a floppy, or a diskette) is an obsolescent type of disk storage composed of a thin and flexible disk of a magnetic storage medium in a square or nearly square plastic enclosure lined with a fabric that removes dust particles from the spinning disk. wikipedia: A tape drive is a data storage device that reads and writes data on a magnetic tape. wikipedia: In computer engineering, microarchitecture, also called computer organization and sometimes abbreviated as µarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor. wikipedia: A microsleep is a sudden temporary episode of sleep or drowsiness which may last for a few seconds where an individual fails to respond to some arbitrary sensory input and becomes unconscious. 
clevo: We offer over 50 models from CLEVO. wikipedia: Clevo is a Taiwanese OEM/ODM computer manufacturer which produces laptop computers exclusively. wikipedia: Rapid transit or mass rapid transit (MRT), also known as heavy rail or metro, is a type of high-capacity public transport generally found in urban areas. wikipedia: Cracker Jack is an American brand of snack food that consists of molasses-flavored, caramel-coated popcorn, and peanuts, well known for being packaged with a prize of trivial value inside. gov: UK Driver's Licence. gov: Legal obligations of drivers and riders. sheilaswheels: We keep our Sheilas happy by supplying fabulous 5 Star Defaqto rated car and home insurance, and that's helped us to become one of the UK's leading direct insurers. nestle: Yorkie was launched in 1976 by Rowntree's of York hence the name. wikipedia: Joyriding refers to driving or riding in a stolen vehicle, most commonly a car, with no particular goal other than the pleasure or thrill of doing so or to impress other people. oggcamp: OggCamp is an unconference celebrating Free Culture, Free and Open Source Software, hardware hacking, digital rights, and all manner of collaborative cultural activities and is committed to creating a conference that is as inclusive as possible. ubuntu: Ubuntu is a Linux distribution based on Debian and composed mostly of free and open-source software. wikipedia: Ubuntu. wikipedia: Mark Shuttleworth. ubuntu: Ubuntu tablet press pack. stallman: Richard Stallman's Personal Site. elementary: The thoughtful, capable, and ethical replacement for Windows and macOS. slackware: The Slackware Linux Project. wikipedia: identi.ca was a free and open-source social networking and blogging service based on the pump.io software, using the Activity Streams protocol. wikipedia: GNU social (previously known as StatusNet and once known as Laconica) is a free and open source software microblogging server written in PHP that implements the OStatus standard for interoperation between installations. wikipedia: Friendica (formerly Friendika, originally Mistpark) is a free and open-source software distributed social network. lugcast: We are an open Podcast/LUG that meets every first and third Friday of every month using mumble. toastmasters Toastmasters International is a nonprofit educational organization that teaches public speaking and leadership skills through a worldwide network of clubs. wikipedia: Motorola, Inc. (/ˌmoʊtəˈroʊlə/) was an American multinational telecommunications company based in Schaumburg, Illinois, United States. volla: Volla Phone. ubports: We are building a secure & private operating system for your smartphone. sailfishos: The mobile OS with built-in privacy. calyxos: CalyxOS is an operating system for smartphones based on Android with mostly free and open-source software. wikipedia: WhatsApp. IRC IRC is short for Internet Relay Chat. It is a popular chat service still in use today. zoom: Unified communication and collaboration platform. jitsi: Jitsi Free & Open Source Video Conferencing Projects. joinmastodon: Mastodon is free and open-source software for running self-hosted social networking services. wikipedia: Karen Sandler is the executive director of the Software Freedom Conservancy, former executive director of the GNOME Foundation, an attorney, and former general counsel of the Software Freedom Law Center. fosdem: FOSDEM is a free event for software developers to meet, share ideas and collaborate. 
southeastlinuxfest: The SouthEast LinuxFest is a community event for anyone who wants to learn more about Linux and Open Source Software. olfconference: OLF (formerly known as Ohio LinuxFest) is a grassroots conference for the GNU/Linux/Open Source Software/Free Software community that started in 2003 as a large inter-LUG (Linux User Group) meeting and has grown steadily since. linuxfests: A home for educational programs focused on free and open source software & culture. wikipedia: Notacon (pronounced "not-a-con") was an art and technology conference which took place annually in Cleveland, Ohio from 2003 to 2014. penpalworld: a place where you can meet over 3,000,000 pen pals from every country on the planet. redhat: Red Hat Enterprise Linux. openssl: The OpenSSL Project develops and maintains the OpenSSL software - a robust, commercial-grade, full-featured toolkit for general-purpose cryptography and secure communication. STEM wikipedia: Obsessive–compulsive disorder. cdc: Autism. wikipedia: Asperger syndrome. askubuntu: Manual partitioning during installation. wikipedia: Colon cancer staging. cdc: Get Vaccinated Before You Travel. sqlite: SQLite is a C-language library that implements a small, fast, self-contained, high-reliability, full-featured, SQL database engine. wikipedia: Facial recognition system. wikipedia: Tribalism is the state of being organized by, or advocating for, tribes or tribal lifestyles. wikipedia: Southern hospitality. wikipedia: The Kroger Company, or simply Kroger, is an American retail company that operates (either directly or through its subsidiaries) supermarkets and multi-department stores throughout the United States. wikipedia: Prosopagnosia, more commonly known as face blindness, is a cognitive disorder of face perception in which the ability to recognize familiar faces, including one's own face, is impaired, while other aspects of visual processing and intellectual functioning remain intact. wikipedia: T-Mobile is the brand name used by some of the mobile communications subsidiaries of the German telecommunications company Deutsche Telekom AG in the Czech Republic, Poland, the United States and by the former subsidiary in the Netherlands. stackexchange: Where did the phrase "batsh-t crazy" come from? wikipedia: A conspiracy theory is an explanation for an event or situation that asserts the existence of a conspiracy by powerful and sinister groups, often political in motivation, when other explanations are more probable. brigs: At Brigs, we want everyone to get exactly what they're craving! papajohns: Papa Johns. dominos: Domino's Pizza, Inc., trading as Domino's, is a Michigan-based multinational pizza restaurant chain founded in 1960 and led by CEO Russell Weiner. wikipedia: Loitering is the act of remaining in a particular public place for a prolonged amount of time without any apparent purpose. wikipedia: Psychiatric hospitals, also known as mental health hospitals, behavioral health hospitals, are hospitals or wards specializing in the treatment of severe mental disorders, such as schizophrenia, bipolar disorder, eating disorders, dissociative identity disorder, major depressive disorder and many others. wikipedia: Therapist is a person who offers any kinds of therapy. Thanks To: Mumble Server: Delwin HPR Site/VPS: Joshua Knapp - AnHonestHost.com Streams: Honkeymagoo EtherPad: HonkeyMagoo Shownotes by: Sgoti and hplovecraft

OsProgramadores
E75 (EN) - Brian Kernighan - Professor of Computer Science at Princeton University

Mar 6, 2023 · 44:13


From Brian's Wikipedia page: Brian Wilson Kernighan is a Canadian computer scientist who worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language (The C Programming Language) with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language ("it's entirely Dennis Ritchie's work"). He authored many Unix programs, including ditroff. Kernighan is coauthor of the AWK and AMPL programming languages. The "K" of K&R C and of AWK both stand for "Kernighan". In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan has been a Professor of Computer Science at Princeton University since 2000 and is the Director of Undergraduate Studies in the Department of Computer Science. In 2015, he co-authored the book The Go Programming Language. Links: IBM 7094, Multics, University of Toronto, CTSS, B programming language, BCPL, Go, ChatGPT, PDP-7, PL/1, Python. Books: AWK, The C Programming Language, The Go Programming Language, The Mythical Man-Month, How to Lie with Statistics, The Elements of Style. OsProgramadores: OsProgramadores website, OsProgramadores Telegram group

David Bombal
#411: 2023 Path to Master Programmer (for free)

Jan 5, 2023 · 19:50


This is your FREE path to becoming a master programmer! Use the links below to learn Python, C, Django, PostgreSQL and web programming for free! :) // Menu // 00:00 - Intro 01:24 - Dr Chuck's Courses 02:24 - Path to Master Programmer 07:00 - Path to Master Programmer Languages 10:41 - How to Get the Courses 14:24 - How Python Changed the World 15:42 - Do You Need a Degree? 18:01 - Financial Aid 19:39 - Conclusion // Previous video // Computer Science isn't programming: https://youtu.be/z3o6yEzcnLc Best programming language ever: https://youtu.be/aQ_XTBmCXS8 // C for Everybody Course // Free C Programming Course https://www.cc4e.com/ Free course on YouTube: https://www.youtube.com/watch?v=XteaW... // C book Audio by Dr Chuck // https://www.cc4e.com/podcast // Python for Everybody // Python for Everybody: https://www.py4e.com/ Python for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: https://youtu.be/8DvywoWv6fI Free Python Book: http://do1.dr-chuck.com/pythonlearn/E... Dr Chuck's Website: https://www.dr-chuck.com/ Free Python Book options: https://www.py4e.com/book // Django for Everybody // Django for Everybody: https://www.dj4e.com/ Django for Everybody for on Coursera: https://www.coursera.org/specializati... YouTube: https://youtu.be/o0XbHvKxw7Y // PostgreSQL for Everybody // PostgreSQL for Everybody: https://www.pg4e.com/ PostgreSQL for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: https://www.youtube.com/watch?v=flRUu... // Web Applications for Everybody // YouTube: https://youtu.be/xr6uZDRTna0 Web Applications for Everybody: https://www.wa4e.com/ Web Applications for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: https://www.youtube.com/watch?v=tuXyS... // Books // The C Programming Language by Brian Kernighan and Dennis Ritchie (the 1984 Second Ed and 1978 First Ed): https://amzn.to/3G0HSkU // MY STUFF // https://www.amazon.com/shop/davidbombal // SOCIAL // Discord: https://discord.com/invite/usKSyzb Twitter: https://www.twitter.com/davidbombal Instagram: https://www.instagram.com/davidbombal LinkedIn: https://www.linkedin.com/in/davidbombal Facebook: https://www.facebook.com/davidbombal.co TikTok: http://tiktok.com/@davidbombal YouTube: https://www.youtube.com/davidbombal // Dr Chuck Social // Website: https://www.dr-chuck.com/ Twitter: https://twitter.com/drchuck/ YouTube: https://www.youtube.com/user/csev Coursera: https://www.coursera.org/instructor/d... c rust c vs rust c course free c course Python C Django SQL PostgreSQL PHP MySQL jQuery CSS best programming language python python course python for beginners master programmer dr chuck dr chuck master programmer python mentorship google code interview google interview computer science python best course dr chuck python dr chuck python course learn to code software development software developer computer science software engineer software engineering how to learn programming free python course free python course online free python class free python tutorial free python training how to learn to code coding tutorials how to code learning to code learn to code for free learn to code python python jobs coding bootcamp google code interview python for beginners python full course python tutorial python projects python basic tutorial python programming python interview questions python course python basics open source #python #javascript #drchuck

David Bombal
#412: Best Programming Language Ever? (Free Course)

Jan 5, 2023 · 44:31


Is this the best programming language ever created? How did it change the world in 1978 and affect developments such as the Apple M1? // Menu // 00:00 - Intro 00:46 - Dr Chuck's Courses 02:18 - C Program 04:40 - C Programming vs Rust Programming 06:58 - C Programming Language Book 08:52 - CC4E.com / Fair Use 13:01 - Amazon 18:58 - Learning Different Languages 24:58 - Garbage Collection 27:40 - C Programming Language Backstory 36:12 - Power PC to Intel 42:13 - Why You Need Master Programmer 42:57 - Did C Change the World? // Previous video // Computer Science isn't programming: https://youtu.be/z3o6yEzcnLc // C for Everybody Course // Free C Programming Course https://www.cc4e.com/ Free course on YouTube (freeCodeCamp): https://youtu.be/j-_s8f5K30I // C book Audio by Dr Chuck // https://www.cc4e.com/podcast // Python for Everybody // Python for Everybody: https://www.py4e.com/ Python for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: https://youtu.be/8DvywoWv6fI Free Python Book: http://do1.dr-chuck.com/pythonlearn/E... Dr Chuck's Website: https://www.dr-chuck.com/ Free Python Book options: https://www.py4e.com/book // Django for Everybody // Django for Everybody: https://www.dj4e.com/ Django for Everybody for on Coursera: https://www.coursera.org/specializati... YouTube: https://youtu.be/o0XbHvKxw7Y // PostgreSQL for Everybody // PostgreSQL for Everybody: https://www.pg4e.com/ PostgreSQL for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: https://www.youtube.com/watch?v=flRUu... // Web Applications for Everybody // YouTube: https://youtu.be/xr6uZDRTna0 Web Applications for Everybody: https://www.wa4e.com/ Web Applications for Everybody on Coursera: https://www.coursera.org/specializati... YouTube: https://www.youtube.com/watch?v=tuXyS... // Books // The C Programming Language by Brian Kernighan and Dennis Ritchie (the 1984 Second Ed and 1978 First Ed): https://amzn.to/3G0HSkU // MY STUFF // https://www.amazon.com/shop/davidbombal // SOCIAL // Discord: https://discord.com/invite/usKSyzb Twitter: https://www.twitter.com/davidbombal Instagram: https://www.instagram.com/davidbombal LinkedIn: https://www.linkedin.com/in/davidbombal Facebook: https://www.facebook.com/davidbombal.co TikTok: http://tiktok.com/@davidbombal YouTube: https://www.youtube.com/davidbombal // Dr Chuck Social // Website: https://www.dr-chuck.com/ Twitter: https://twitter.com/drchuck/ YouTube: https://www.youtube.com/user/csev Coursera: https://www.coursera.org/instructor/d... c rust c vs rust c course free c course best programming language python python course python for beginners master programmer dr chuck dr chuck master programmer python mentorship google code interview google interview computer science python best course dr chuck python dr chuck python course learn to code software development software developer computer science software engineer software engineering how to learn programming free python course free python course online free python class free python tutorial free python training how to learn to code coding tutorials how to code learning to code learn to code for free learn to code python python jobs coding bootcamp google code interview python for beginners python full course python tutorial python projects python basic tutorial python programming python interview questions python course python basics open source #c #rust #drchuck

Podcast de CreadoresDigitales
Dennis Ritchie, electric airplanes, and cutbacks at Intel

Oct 14, 2022 · 84:36


Dennis Ritchie, electric airplanes, and cutbacks at Intel, along with several other news items that this time turned our show into something quite futuristic, at times encouraging and at times somber. Choose the kind of future you want to have!

The Hacks
Super Nerd Spotlight! "The Origin Guys"

Sep 27, 2022 · 42:58


Tom is losing his mind. Chunga has been bugging him to do a new Super Nerd Spotlight episode of The Hacks. Tom has been insisting that the two of them have done one of these recently. Much to Chunga's delight, Tom is wrong! For today's episode, they've decided to focus on a group of men that Tom calls "The Origin Guys". They're a small group of dudes who are largely credited with creating modern computing as we know it. Tom and Chunga will look all the way back to the 1960s and '70s and talk about guys like Dennis Ritchie, Rob Pike, Ken Thompson, and many more! It'll be fun to take a look at how far we've come from the first day until now! Believe it or not, it's a really short period of time, and most of these guys are still alive and well! Can they still be credited for the way we're using computers today? Listen now to find out! Check out the brand new Idem Project! Learn more about Salt!

The Bike Shed
354: The History of Computing

Sep 13, 2022 · 31:16


Why does the history of computing matter? Joël and Developer at thoughtbot Sara Jackson, ponder this and share some cool stories (and trivia!!) behind the tools we use in the industry. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Frictionless error monitoring and performance insight for your app stack. Sara on Twitter (https://twitter.com/csarajackson) UNIX philosophy (https://en.wikipedia.org/wiki/Unix_philosophy) Hillel Wayne on why we ask linked list questions (https://www.hillelwayne.com/post/linked-lists/) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by fellow thoughtboter, Team Lead, and Developer Sara Jackson. SARA: Hello, happy to be here. JOËL: Together, we're here to share a little bit of what we've learned along the way. So, Sara, what's new in your world? SARA: Well, Joël, you might know that recently our team had a small get-together in Toronto. JOËL: And our team, for those who are not aware, is fully remote distributed across multiple countries. So this was a chance to get together in person. SARA: Yes, correct. This was a chance for those on the Boost team to get together and work together as if we had a physical office. JOËL: Was this your first time meeting some members of the team? SARA: It was my second, for the most part. So I joined thoughtbot, but after thoughtbot had already gotten remote. Fortunately, I was able to meet many other thoughtboters in May at our summit. JOËL: Had you worked at a remote company before coming to thoughtbot? SARA: Yes, I actually started working remotely in 2019, but even then, that wasn't my first time working remotely. I actually had a full year of internship in college that was remote. JOËL: So you were a pro at this long before the pandemic made us all try it out. SARA: I don't know about that, but I've certainly dealt with the idiosyncrasies that come with remote work for longer. JOËL: What do you think are some of the challenges of remote work as opposed to working in person in an office? SARA: I think definitely growing and maintaining a culture. When you're in an office, it's easy to create ad hoc conversations and have events that are small that build on the culture. But when you're remote, it has to be a lot more intentional. JOËL: That definitely rings true for me. One of the things that I really appreciated about in-person office culture was the serendipity that you have those sort of random meetings at the water cooler, those conversations, waiting for coffee with people who are not necessarily on the same team or the same project as you are. SARA: I also really miss being able to have lunch in person with folks where I can casually gripe about an issue I might be having, and almost certainly, someone would have the answer. Now, if I'm having an issue, I have to intentionally seek help. [chuckles] JOËL: One of the funny things that often happened, at least the office where I worked at, was that lunches would often devolve into taxonomy conversations. SARA: I wish I had been there for that. [laughter] JOËL: Well, we do have a taxonomy channel on Slack to somewhat continue that legacy. SARA: Do you have a favorite taxonomy lunch discussion that you recall? JOËL: I definitely got to the point where I hated the classifying a sandwich. 
That one has been way overdone. SARA: Absolutely. JOËL: There was an interesting one about motorcycles, and mopeds, and bicycles, and e-bikes, and trying to see how do you distinguish one from the other. Is it an electric motor? Is it the power of the engine that you have? Is it the size? SARA: My brain is already turning on those thoughts. I feel like I could get lost down that rabbit hole very easily. [laughter] JOËL: Maybe that should be like a special anniversary episode for The Bike Shed, just one long taxonomy ramble. SARA: Where we talk about bikes. JOËL: Ooh, that's so perfect. I love it. One thing that I really appreciated during our time in Toronto was that we actually got to have lunch in person again. SARA: Yeah, that was so wonderful. Having folks coming together that had maybe never worked together directly on clients just getting to sit down and talk about our day. JOËL: Yeah, and talk about maybe it's work-related, maybe it's not. There's a lot of power to having some amount of deeper interpersonal connection with your co-workers beyond just the we work on a project together. SARA: Yeah, it's like camaraderie beyond the shared mission of the company. It's the shared interpersonal mission, like you say. Did you have any in-person pairing sessions in Toronto? JOËL: I did. It was actually kind of serendipitous. Someone was stuck with a weird failing test because somehow the order factories were getting created in was not behaving in the expected way, and we herd on it, dug into it, found some weird thing with composite primary keys, and solved the issue. SARA: That's wonderful. I love that. I wonder if that interaction would have happened or gotten solved as quickly if we hadn't been in person. JOËL: I don't know about you, but I feel like I sometimes struggle to ask for help or ask for a pair more when I'm online. SARA: Yeah, I agree. It's easier to feel like you're not as big of an impediment when you're in person. You tap someone on the shoulder, "Hey, can you take a look at this?" JOËL: Especially when they're on the same team as you, they're sitting at the next desk over. I don't know; it just felt easier. Even though it's literally one button press to get Tuple to make a call, somehow, I feel like I'm interrupting more. SARA: To combat that, I've been trying to pair more frequently and consistently regardless of if I'm struggling with a problem. JOËL: Has that worked pretty well? SARA: It's been wonderful. The only downside has been pairing fatigue. JOËL: Pairing fatigue is real. SARA: But other than that, problems have gotten solved quickly. We've all learned something for those that I've paired with. It goes faster. JOËL: So it was really great that we had this experience of doing our daily work but co-located in person; we have these experiences of working together. What would you say has been one of the highlights for you of that time? SARA: 100% karaoke. JOËL: [laughs] SARA: Only two folks did not attend. Many of the folks that did attend told me they weren't going to sing, but they were just going to watch. By the end of the night, everyone had sung. We were there for nearly three and a half hours. [laughs] JOËL: It was a good time all around. SARA: I saw a different side to Chad. JOËL: [laughs] SARA: And everyone, honestly. Were there any musical choices that surprised you? JOËL: Not particularly. Karaoke is always fun when you have a group of people that you trust to be a little bit foolish in front of to put yourself out there. 
I really appreciated the style that we went for, where we have a private room for just the people who were there as opposed to a stage in a bar somewhere. I think that makes it a little bit more accessible to pick up the mic and try to sing a song. SARA: I agree. That style of karaoke is a lot more popular in Asia, having your private room. Sometimes you can find it in major cities. But I also prefer it for that reason. JOËL: One of my highlights of this trip was this very sort of serendipitous moment that happened. Someone was asking a question about the difference between a Mac and Linux operating systems. And then just an impromptu gathering happened. And you pulled up a chair, and you're like, gather around, everyone. In the beginning, there was Multics. It was amazing. SARA: I felt like some kind of historian or librarian coming out from the deep. Let me tell you about this random operating system knowledge that I have. [laughs] JOËL: The ancient lore. SARA: The ancient lore in the year 1969. JOËL: [laughs] And then yeah, we had a conversation walking the history of operating systems, and why we have macOS and Linux, and why they're different, and why Windows is a totally different kind of family there. SARA: Yeah, macOS and Linux are sort of like cousins coming from the same tree. JOËL: Is that because they're both related through Unix? SARA: Yes. Linux and macOS are both built based off of different versions of Unix. Over the years, there's almost like a family tree of these different Nix operating systems as they're called. JOËL: I've sometimes seen asterisk N-I-X. This is what you're referring to as Nix. SARA: Yes, where the asterisk is like the RegEx catch-all. JOËL: So this might be Unix. It might be Linux. It might be... SARA: Minix. JOËL: All of those. SARA: Do you know the origin of the name Unix? JOËL: I do not. SARA: It's kind of a fun trivia piece. So, in the beginning, there was Multics spelled M-U-L-T-I-C-S, standing for the Multiplexed Information and Computing Service. Dennis Ritchie and Ken Thompson of Bell Labs famous for the C programming language... JOËL: You may have heard of it. SARA: You may have heard of it maybe on a different podcast. They were employees at Bell Labs when Multics was being created. They felt that Multics was very bulky and heavy. It was trying to do too many things at once. It did have a few good concepts. So they developed their own smaller Unix originally, Unics, the Uniplexed Information and Computing Service, Uniplexed versus Multiplexed. We do one thing really well. JOËL: And that's the Unix philosophy. SARA: It absolutely is. The Unix philosophy developed out of the creation of Unix and C. Do you know the four main points? JOËL: No, is it small sharp tools? It's the main one I hear. SARA: Yes, that is the kind of quippy version that has come out for sure. JOËL: But there is a formal four-point manifesto. SARA: I believe it's evolved over the years. But it's interesting looking at the Unix philosophy and seeing how relevant it is today in web development. The four points being make each program do one thing well. To this end, don't add features; make a new program. I feel like we have this a lot in encapsulation. JOËL: Hmm, maybe even the open-closed principle. SARA: Absolutely. JOËL: Similar idea. SARA: Another part of the philosophy is expecting output of your program to become input of another program that is yet unknown. The key being don't clutter your output; don't have extraneous text. 
This feels very similar to how we develop APIs. JOËL: With a focus on composability. SARA: Absolutely. Being able to chain commands together like you see in Ruby all the time. JOËL: I love being able to do this, for example, the enumerable API in Ruby and just being able to chain all these methods together to just very nicely do some pretty big transformations on an array or some other data structure. SARA: 100% agree there. That ability almost certainly came out of following the tenets of this philosophy, maybe not knowingly so but maybe knowingly so. [chuckles] JOËL: So is that three or four? SARA: So that was two. The third being what we know as agile. JOËL: Really? SARA: Yeah, right? The '70s brought us agile. Design and build software to be tried early, and don't hesitate to throw away clumsy parts and rebuild. JOËL: Hmmm. SARA: Even in those days, despite waterfall style still coming on the horizon. It was known for those writing software that it was important to iterate quickly. JOËL: Wow, I would never have known. SARA: It's neat having this history available to us. It's sort of like a lens at where we came from. Another piece of this history that might seem like a more modern concept but was a very big part of the movement in the '70s and the '80s was using tools rather than unskilled help or trying to struggle through something yourself when you're lightening a programming task. We see this all the time at thoughtbot. Folks do this many times there is an issue on a client code. We are able to generalize the solution, extract into a tool that can then be reused. JOËL: So that's the same kind of genesis as a lot of thoughtbot's open-source gems, so I'm thinking of FactoryBot, Clearance, Paperclip, the old-timey file upload gem, Suspenders, the Rails app generator, and the list goes on. SARA: I love that in this last point of the Unix philosophy, they specifically call out that you should create a new tool, even if it means detouring, even if it means throwing the tools out later. JOËL: What impact do you think that has had on the way that tooling in the Unix, or maybe I should say *Nix, ecosystem has developed? SARA: It was a major aspect of the Nix environment community because Unix was available, not free, but very inexpensively to educational institutions. And because of how lightweight it was and its focus on single-use programs, programs that were designed to do one thing, and also the way the shell was allowing you to use commands directly and having it be the same language as the shell scripting language, users, students, amateurs, and I say that in a loving way, were able to create their own tools very quickly. It was almost like a renaissance of Homebrew. JOËL: Not Homebrew as in the macOS package manager. SARA: [laughs] And also not Homebrew as in the alcoholic beverage. JOËL: [laughs] So, this kind of history is fun trivia to know. Is it really something valuable for us as a jobbing developer in 2022? SARA: I would say it's a difficult question. If you are someone that doesn't dive into the why of something, especially when something goes wrong, maybe it wouldn't be important or useful. But what sparked the conversation in Toronto was trying to determine why we as thoughtbot tend to prefer using Macs to develop on versus Linux or Windows. There is a reason, and the reason is in the history. Knowing that can clarify decisions and can give meaning where it feels like an arbitrary decision. JOËL: Right. We're not just picking Macs because they're shiny. 
SARA: They are certainly shiny. And the first thing I did was to put a matte case on it. JOËL: [laughs] So no shiny in your office. SARA: If there were too many shiny things in my office, boy, I would never get work done. The cats would be all over me. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: So we've talked a little bit about Unix or *Nix, this evolution of systems. I've also heard the term POSIX thrown around when talking about things that seem to encompass both macOS and Linux. How does that fit into this history? SARA: POSIX is sort of an umbrella of standards around operating systems that was based on Unix and the things that were standard in Unix. It stands for the Portable Operating System Interface. This allowed for compatibility between OSs, very similar to USB being the standard for peripherals. JOËL: So, if I was implementing my own Unix-like operating system in the '80s, I would try to conform to the POSIX standard. SARA: Absolutely. Now, not every Nix operating system is POSIX-compliant, but most are or at least 90% of the way there. JOËL: Are any of the big ones that people tend to think about not compliant? SARA: A major player in the operating system space that is not generally considered POSIX-compliant is Microsoft Windows. JOËL: [laughs] It doesn't even try to be Unix-like, right? It's just its own thing, SARA: It is completely its own thing. I don't think it even has a standard necessarily that it conforms to. JOËL: It is its own standard, its own branch of the family tree. SARA: And that's what happens when your operating system is very proprietary. This has caused folks pain, I'm sure, in the past that may have tried to develop software on their computers using languages that are more readily compatible with POSIX operating systems. JOËL: So would you say that a language like Ruby is more compatible with one of the POSIX-compatible operating systems? SARA: 100% yes. 
In fact, to even use Ruby as a development tool in Windows, prior to Windows 10, you needed an additional tool. You needed something like Cygwin or MinGW, which were POSIX-compliant programs that it was almost like a shell in your Windows computer that would allow you to run those commands. JOËL: Really? For some reason, I thought that they had some executables that you could run just on Windows by itself. SARA: Now they do, fortunately, to the benefit of Ruby developers everywhere. As of Windows 10, we now have WSL, the Windows Subsystem for Linux that's built-in. You don't have to worry about installing or configuring some third-party software. JOËL: I guess that kind of almost cheats by just having a POSIX system embedded in your non-POSIX system. SARA: It does feel like a cheat, but I think it was born out of demand. The Windows NT kernel, for example, is mostly POSIX-compliant. JOËL: Really? SARA: As a result of it being used primarily for servers. JOËL: So you mentioned the Ruby tends and the Rails ecosystem tends to run better and much more frequently on the various Nix systems. Did it have to be that way? Or is it just kind of an accident of history that we happen to end up with Ruby and Rails in this ecosystem, but just as easily, it could have evolved in the Windows world? SARA: I think it is an amalgam of things. For example, Unix and Nix operating systems being developed earlier, being widely spread due to being license-free oftentimes, and being widely used in the education space. Also, because it is so lightweight, it is the operating system of choice. For most servers in the world, they're running some form of Unix, Linux, or macOS. JOËL: I don't think I've ever seen a server that runs macOS; exclusively seen it on dev machines. SARA: If you go to an animation company, they have server farms of macOS machines because they're really good at rendering. This might not be the case anymore, but it was at one point. JOËL: That's a whole other world that I've not interacted with a whole lot. SARA: [chuckles] JOËL: It's a fun intersection between software, and design, and storytelling. That is an important part for the software field. SARA: Yeah, it's definitely an aspect that deserves its own deep dive of sorts. If you have a server that's running a Windows-based operating system like NT and you have a website or a program that's designed to be served under a Unix-based server, it can easily be hosted on the Windows server; it's not an issue. The reverse is not true. JOËL: Oh. SARA: And this is why programming on a Nix system is the better choice. JOËL: It's more broadly compatible. SARA: Absolutely. Significantly more compatible with more things. JOËL: So today, when I develop, a lot of the tooling that I use is open source. The open-source movement has created a lot of the languages that we know and love, including Ruby, including Rails. Do you think there's some connection between a lot of that tooling being open source and maybe some of the Unix family of operating systems and movements that came out of that branch of the operating system family tree? SARA: I think that there is a lot of tie-in with today's open-source culture and the computing history that we've been talking about, for example, people finding something that they dislike about the tools that are available and then rolling their own. That's what Ken Thompson and Dennis Ritchie did. Unix was not an official Bell development. It was a side project for them. JOËL: I love that. 
SARA: You see this happen a lot in the software world where a program gets shared widely, and due to this, it gains traction and gains buy-in from the community. If your software is easily accessible to students, folks that are learning, and breaking things, and rebuilding, and trying, and inventing, it's going to persist. And we saw that with Unix. JOËL: I feel like this background on where a lot of these operating systems came but then also the ecosystems, the values that evolved with them has given me a deeper appreciation of the tooling, the systems that we work with today. Are there any other advantages, do you think, to trying to learn a little bit of computing history? SARA: I think the main benefit that I mentioned before of if you're a person that wants to know why, then there is a great benefit in knowing some of these details. That being said, you don't need to deep dive or read multiple books or write papers on it. You can get enough information from reading or skimming some Wikipedia pages. But it's interesting to know where we came from and how it still affects us today. Ruby was written in C, for example. Unix was written in C as well, originally Assembly Language, but it got rewritten in C. And understanding the underlying tooling that goes into that that when things go wrong, you know where to look. JOËL: I guess that that is the next question is where do you look if you're kind of interested? Is Wikipedia good enough? You just sort of look up operating system, and it tells you where to go? Or do you have other sources you like to search for or start pulling at those threads to understand history? SARA: That's a great question. And Wikipedia is a wonderful starting point for sure. It has a lot of the abbreviated history and links to better references. I don't have them off the top of my head. So I will find them for you for the show notes. But there are some old esoteric websites with some of this history more thoroughly documented by the people that lived it. JOËL: I feel like those websites always end up being in HTML 2; your very basic text, horizontal rules, no CSS. SARA: Mm-hmm. And those are the sites that have many wonderful kernels of knowledge. JOËL: Uh-huh! Great pun. SARA: [chuckles] Thank you. JOËL: Do you read any content by Hillel Wayne? SARA: I have not. JOËL: So Hillel produces a lot of deep dives into computing history, oftentimes trying to answer very particular questions such as when and why did we start using reversing a linked list as the canonical interview question? And there are often urban legends around like, oh, it's because of this. And then Hillel will do some research and go through actual archives of messages on message boards or...what is that protocol? SARA: BBS. JOËL: Yes. And then find the real answer, like, do actual historical methodology, and I love that. SARA: I had not heard of this before. I don't know how. And that is all I'm going to be doing this weekend is reading these. That kind of history speaks to my heart. I have a random fun fact along those lines that I wanted to bring to the show, which was that the echo command that we know and love in the terminal was first introduced by the Multics operating system. JOËL: Wow. So that's like the most common piece of Multics that as an everyday user of a modern operating system that we would still touch a little bit of that history every day when we work. SARA: Yeah, it's one of those things that we don't think about too much. Where did it come from? How long has it been around? 
I'm sure the implementation today is very different. But it's like etymology, and like taxonomy, pulling those threads. JOËL: Two fantastic topics. On that wonderful little nugget of knowledge, let's wrap up. Sara, where can people find you online? SARA: You can find me on Twitter at @csarajackson. JOËL: And we will include a link to that in the show notes. SARA: Thank you so much for having me on the show and letting me nerd out about operating system history. JOËL: It's been a pleasure. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed or reach me @joelquen on Twitter or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeee!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
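Since the conversation above closes on the echo command and how different its implementation must be today, here is a rough sketch added for illustration; it is written for this page and is not taken from Multics, any Unix, or any other system's actual source.

```c
#include <stdio.h>

/* echo.c - a bare-bones take on the echo command discussed above.
 * It prints its arguments separated by spaces, followed by a newline.
 * Real implementations add option handling (-n, -e) and more. */
int main(int argc, char *argv[])
{
    for (int i = 1; i < argc; i++)
        printf("%s%s", argv[i], i + 1 < argc ? " " : "");
    putchar('\n');
    return 0;
}
```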

C Programming for Everybody (cc4e.com)
Welcome to C Programming for Everybody - www.cc4e.com

C Programming for Everybody (cc4e.com)

Play Episode Listen Later Jul 10, 2022 2:53


This www.cc4e.com website is dedicated to learning the "classic" version of the C programming language from the 1978 book written by Brian Kernighan and Dennis Ritchie. This K&R book places the reader in the middle of the 1970s transition from a hardware-centered computer science to a focus on writing portable and efficient software. C was used to develop operating systems like Unix, Minix, and Linux.

C Programming for Everybody (cc4e.com)
C Programming - Chapter 0 - Preface

C Programming for Everybody (cc4e.com)

Play Episode Listen Later Jul 9, 2022 18:03


The preface to C Programming by Brian Kernighan and Dennis Ritchie places the C programming language in the context of the other popular programming languages of the 1960s and 1970s: FORTRAN, COBOL, Pascal, Algol, and PL/I. Many concepts like separation of concerns and the use of provided run-time libraries versus language syntax are introduced and described.
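A small sketch, not taken from the book or the course, of the "run-time libraries versus language syntax" point mentioned above: in C, input/output and string handling are ordinary library functions rather than statements built into the language.

```c
#include <stdio.h>   /* printf, fgets: I/O comes from the standard library */
#include <string.h>  /* strlen: string handling is a library service too */

int main(void)
{
    char line[128];

    /* Unlike contemporaries with built-in I/O statements (FORTRAN's WRITE,
     * Pascal's writeln, COBOL's DISPLAY), C has no I/O in the language
     * itself; everything below is just a function call. */
    if (fgets(line, sizeof line, stdin) != NULL)
        printf("read %zu characters (including the newline)\n", strlen(line));
    return 0;
}
```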

The History of Computing
Colossal Cave Adventure

The History of Computing

Play Episode Listen Later Jun 2, 2022 11:28


Imagine a game that begins with a printout that reads: You are standing at the end of a road before a small brick building. Around you is a forest. A small stream flows out of the building and down a gully. In the distance there is a tall gleaming white tower. Now imagine typing some information into a teletype and then reading the next printout. And then another. A trail of paper lists your every move. This is interactive gaming in the 1970s. Later versions had a monitor, so a screen could just show a cursor and the player needed to know what to type. Type N and hit enter and the player travels north. “Search” doesn't work but “look” does. “Take water” works as does “Drink water” but it takes hours to find dwarves and dragons and figure out how to battle or escape. This is one of the earliest games we played and it was marvelous. The game was called Colossal Cave Adventure and it was one of the first conversational adventure games. Many came after it in the 70s and 80s, in an era before good graphics were feasible. But the imagination was strong. The Oregon Trail was written before it, in 1971, and Trek73 came in 1973, both written for HP minicomputers. Dungeon was written in 1975 for a PDP-10. The author, Don Daglow, went on to work on games like Utopia and Neverwinter Nights. Another game called Dungeon showed up in 1975 as well, on the PLATO network at the University of Illinois at Champaign-Urbana. As the computer monitor spread, so spread games. William Crowther got his degree in physics at MIT and then went to work at Bolt Beranek and Newman during the early days of the ARPANET. He was on the IMP team, the people who developed the Interface Message Processor, the first nodes of the packet-switching ARPANET, the ancestor of the Internet. They were long hours, but when he wasn't working, he and his wife Pat explored caves. She was a programmer as well. Or he played the new Dungeons & Dragons game that was popular with other programmers. The two got divorced in 1975 and like many suddenly single fathers he searched for something for his daughters to do when they were at the house. Crowther combined exploring caves, Dungeons & Dragons, and FORTRAN to get Colossal Cave Adventure, often just called Adventure. And since he worked on the ARPANET, the game found its way out onto the growing computer network. Crowther moved to Palo Alto and went to work for Xerox PARC in 1976 before going back to BBN and eventually retiring from Cisco. Crowther loosely based the game mechanics on the ELIZA natural language processing work done by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the 1960s. That had been a project to show how computers could be made to understand text provided to them. It was most notably used in tests to have a computer provide therapy sessions. And writing software for the kids or gaming can be therapeutic as well. As can replaying happier times. Crowther explored Mammoth Cave National Park in Kentucky in the early 1970s. The characters in the game follow along his notes about the caves, exploring the area around it using natural language while the computer looked for commands in what was entered. It took about 700 lines to do the original Fortran code for the PDP-10 he had at his disposal at BBN. When he was done he went off on vacation, and the game spread. Programmers in that era just shared code. Source needed to be recompiled for different computers, so they had to. Another programmer was Don Woods, who also used a PDP-10.
He went to Princeton in the 1970s and was working at the Stanford AI Lab, or SAIL, at the time. He came across the game and asked Crowther if it would be OK to add a few features, and did. His version got distributed through DECUS, or the Digital Equipment Computer Users Society. A lot of people went there for software at the time. The game was up to 3,000 lines of code when it left Woods. The adventurer could now enter the mysterious cave in search of the hidden treasures. The concept of the computer as a narrator began with Colossal Cave Adventure and is now widely used, although we now have vast scenery rendered and can point and click where we want to go, so we don't need to type commands as often. The interpreter looked for commands like “move”, “interact” with other characters, “get” items for the inventory, etc. Woods went further and added more words and the ability to interpret punctuation as well. He also added over a thousand lines of text used to identify and describe the 40 locations. Woods continued to update that game until the mid-1990s. James Gillogly of RAND ported the code to C so it would run on the newer Unix architecture in 1977, and it's still part of many a BSD distribution. Microsoft published a version of Adventure in 1979 that was distributed for the Apple II and TRS-80 and followed that up in 1981 with a version for Microsoft DOS or MS-DOS. Adventure was now a commercial product. Kevin Black wrote a version for IBM PCs. Peter Gerrard ported it to the Amiga. Bob Supnik rose to Vice President at Digital Equipment, not because he ported the game, but it didn't hurt. And throughout the 1980s, the game spread to other devices as well. Peter Gerrard implemented the version for the Tandy 1000. The Original Adventure was a version that came out of Aventuras AD in Spain. They gave it one of the biggest updates of all. Colossal Cave Adventure was never forgotten, even though it was eventually replaced by Zork. Zork came along in 1977 and Adventureland in 1979. Ken and Roberta Williams played the game in 1979. Ken had bounced around the computer industry for a while and had a teletype terminal at home when he came across Colossal Cave Adventure in 1979. The two became transfixed and opened their own company to make the game they released the next year, called Mystery House. And the text adventure genre moved to a new level when they sold 15,000 copies and it became the first hit. Rogue and others followed, increasingly interactive, until fully immersive graphical games replaced the adventure genre in general. That process began when Warren Robinett of Atari created the 1980 game, Adventure. Robinett saw Colossal Cave Adventure when he visited the Stanford Artificial Intelligence Laboratory in 1977. He was inspired into a life of programming by a professor he had in college, Ken Thompson, who was on sabbatical from Bell Labs at the time. That's where Thompson, with Dennis Ritchie and one of the most amazing teams of programmers ever assembled, gave the world Unix and the C programming language. The Adventure game went on to sell over a million copies, and the genre of fantasy action-adventure games moved from text to video.
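The episode's description of an interpreter that scanned typed commands like "take water" for known verbs and nouns can be made concrete with a small sketch. The vocabulary and matching here are invented for illustration; they are not Crowther's or Woods' actual code (which was Fortran, later ported to C).

```c
#include <stdio.h>
#include <string.h>

/* A toy two-word command parser in the spirit of early text adventures:
 * read a line, split it into a verb and an optional noun, and match the
 * verb against a small table. The word list is purely illustrative. */
static const char *verbs[] = { "look", "take", "drink", "go" };

int main(void)
{
    char line[80], verb[40], noun[40];

    printf("> ");
    while (fgets(line, sizeof line, stdin) != NULL) {
        int n = sscanf(line, "%39s %39s", verb, noun);
        int known = 0;
        for (size_t i = 0; i < sizeof verbs / sizeof verbs[0]; i++)
            if (n >= 1 && strcmp(verb, verbs[i]) == 0)
                known = 1;
        if (!known)
            printf("I don't know how to \"%s\".\n", n >= 1 ? verb : "");
        else if (n == 2)
            printf("You %s the %s.\n", verb, noun);
        else
            printf("You %s around.\n", verb);
        printf("> ");
    }
    return 0;
}
```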

The History of Computing
Project MAC and Multics

The History of Computing

Play Episode Listen Later Feb 15, 2022 11:31


Welcome to the history of computing podcast. Today we're going to cover a Cold War-era project called Project MAC that bridged MIT with GE and Bell Labs. The Russians beat the US to space when they launched Sputnik in 1957. Many in the US felt the nation was falling behind, and so later that year President Dwight D. Eisenhower appointed then-president of MIT James Killian as the Presidential Assistant for Science and created ARPA. The office was lean and funded a few projects without much oversight. One was Project MAC at MIT, which helped cement the university as one of the top in the field of computing as it grew. Project MAC, short for Project on Mathematics and Computation, was a 1960s collaborative endeavor to develop a workable timesharing system. The concept of timesharing initially emerged during the late 1950s. Scientists and researchers finally went beyond batch processing with Whirlwind and its spiritual successors, the TX-0 through TX-2 computers at MIT. We had computer memory now and so had interactive computing. That meant we could explore different ways to connect directly with the machine. In 1959, British mathematician Christopher Strachey gave the first public presentation on timesharing at a UNESCO meeting, and John McCarthy distributed an internal letter regarding timesharing at MIT. Timesharing was initially demonstrated at the MIT Computational Center in November 1961, under the supervision of Fernando Corbato, an MIT professor. J.C.R. Licklider at ARPA had been involved with MIT for most of his career in one way or another and helped provide vision and funding along with contacts and guidance, including getting the team to work with Bolt, Beranek & Newman (BBN). Yuri Alekseyevich Gagarin went to space in 1961. The Russians were still lapping us. Money. Governments spend money. Let's do that. Licklider assisted in the development of Project MAC, machine-assisted cognition, led by Professor Robert M. Fano. He then funded the project with $3 million per year. That would become the most prominent initiative in timesharing. In 1967, the Information Processing Techniques Office invested more than $12 million in over a dozen timesharing programs at colleges and research institutions. Timesharing then enabled the development of new software and hardware separate from that used for batch processing. Thus, one of the most important innovations to come out of the project was an operating system capable of supporting multiple parallel users - all of whom could have complete control of the machine. The operating system they created would be known as Multics, short for Multiplexed Information and Computing Service. It was created for a GE 645 computer but was modular in nature and could be ported to other computers. The project was a collaborative effort between MIT, GE, and Bell Labs. Multics was the first time we really split files away from objects read in memory and wrote them into memory for processing then back to disk. They developed the concepts of dynamic linking, daemons, procedural calls, hierarchical file systems, process stacks, a split between user land and the system, and much more. Within six months of Project MAC's creation, 200 users in 10 different MIT departments had secured access to the system. By 1967 the Project MAC laboratory had separated from the Department of Electrical Engineering and evolved into an interdepartmental laboratory.
Multics progressed from computer timesharing to a networked computer system, integrating file sharing and administration capabilities and security mechanisms into its architecture. The sophisticated design, which could serve 300 daily active users on 1,000 MIT terminal computers within a couple more years, inspired engineers Ken Thompson and Dennis Ritchie to create their own at Bell Labs, which evolved into the C programming language and the Unix operating system. See, all the stakeholders with all the things they wanted in the operating system had built something slow and fragile. Solo developers don't tend to build amazing systems, but neither do large intracompany bureaucracies. GE never did commercialize Multics because they ended their computer hardware business in 1970. Bell Labs dropped out of the project as well. So Honeywell acquired the General Electric computer division and so rights to the Multics project. In addition, Honeywell possessed several other operating systems, each supported by its internal organizations. In 1976, Project MAC was renamed the Laboratory for Computer Science (LCS) at MIT, broadening its scope. Michael L. Dertouzos, the lab's director, advocated developing intelligent computer programs. To increase computer use, the laboratory analyzed how to construct cost-effective, user-friendly systems and the theoretical underpinnings of computer science to recognize space and time constraints. Some of their project ran for decades afterwards. In 2000, several Multics sites were shut down. The concept of buying corporate “computer utilities” was a large area of research in the late 60s to 70s. Scientists bought time on computers that universities purchased. Companies did the same. The pace of research at both increased dramatically. Companies like Tymeshare and IBM made money selling time or processing credits, and then after an anti-trust case, IBM handed that business over to Control Data Corporation, who developed training centers to teach people how to lease time. These helped prepare a generation of programmers when the microcomputers came along, often taking people who had spent their whole careers on CDC Cybers or Burroughs mainframes by surprise. That seems to happen with the rapid changes in computing. But it was good to those who invested in the concept early. And the lessons learned about scalable architectures were skills that transitioned nicely into a microcomputer world. In fact, many environments still run on applications built in this era. The Laboratory for Computer Science (LCS) accomplished other ground-breaking work, including playing a critical role in advancing the Internet. It was often larger but less opulent than the AI lab at MIT. And their role in developing applications that would facilitate online processing and evaluation across various academic fields, such as engineering, medical, and library sciences led to advances in each. In 2004, LCS merged with MIT's AI laboratory to establish the Computer Science and Artificial Intelligence Laboratory (CSAIL), one of the flagship research labs at MIT. And in the meantime countless computer scientists who contributed at every level of the field flowed through MIT - some because of the name made in those early days. And the royalties from patents have certainly helped the universities endowment. The Cold War thawed. The US reduced ARPA spending after the Mansfield Amendment was passed in 1969. 
The MIT hackers flowed out to the world, changing not only how people thought of automating business processes, but how they thought of work and collaboration. And those hackers were happy to circumvent all the security precautions put on Multics, and so cultural movements evolved from there. And the legacy of Multics lived on in Unix, which evolved to influence Linux and is in some way now a part of iOS, Mac OS, Android, and Chrome OS.

Command Line Heroes en español
La revolución de C

Command Line Heroes en español

Play Episode Listen Later Feb 1, 2022 27:18


C and UNIX are the foundation of modern computing. Many of the languages we have discussed this season are related to C or were at least influenced by it. The remarkable thing is that C and UNIX emerged thanks to four Bell Labs developers who held on to their dreams and built them as a project of their own. Bell Labs was a center of innovation in the mid-twentieth century. Jon Gertner describes it as an "idea factory." One of its most important projects in the 1960s was helping to develop a time-sharing operating system called Multics. Dr. Joy Lisi Rankin explains that at the time there was considerable hype around time-sharing: it was described as something that would make computing accessible as if it were a public utility. Large teams spent many years developing Multics, but the result was not what they had hoped for. Bell Labs officially walked away from time-sharing in 1969. But, as Andrew Tanenbaum recounts, a small team of heroes kept going, and C and UNIX were the fruit of their efforts. At the time, they could not even imagine that their work would shape the course of technology.

Hôm nay ngày gì?
12 Tháng 10 Là Ngày Gì? Hôm Nay Là Ngày Sinh Của Vua Trần Thánh Tông

Hôm nay ngày gì?

Play Episode Listen Later Oct 12, 2021 2:35


What day is October 12? Today is the birthday of King Trần Thánh Tông. EVENTS: 2017 – The United States announced its decision to withdraw from UNESCO. 1901 – President Theodore Roosevelt officially renamed the "Executive Mansion" the White House. 1847 – Werner von Siemens founded Siemens & Halske, which later became Siemens AG. 1492 – Christopher Columbus's first expedition made landfall in the Caribbean, specifically in the Bahamas. 1994 – The Magellan spacecraft burned up in the atmosphere of Venus. 2005 – China's second crewed spacecraft, Shenzhou 6, was launched. 1810 – The people of Munich held the first Oktoberfest. 2019 – Eliud Kipchoge of Kenya became the first person to run a marathon in under two hours, finishing in 1:59:40 in Vienna. BIRTHS: 1240 – Trần Thánh Tông, second emperor of the Trần dynasty. DEATHS: 2011 – Dennis Ritchie, American computer scientist, creator of the C programming language (b. 1941). 2011 – Hồ Thị Bi, "Bà Năm chính sách," female Vietnamese military commander during the Resistance War against France (b. 1916). 2007 – Kisho Kurokawa, Japanese architect, designer of the Nakagin Capsule Tower (b. 1934). The program "Hôm nay ngày gì" is now available on Youtube, Facebook and Spotify: - Facebook: https://www.facebook.com/aweekmedia - Youtube: https://www.youtube.com/c/AWeekTV - Spotify: https://open.spotify.com/show/6rC4CgZNV6tJpX2RIcbK0J - Apple Podcast: https://podcasts.apple.com/.../h%C3%B4m-nay.../id1586073418 #aweektv #12thang10 #UNESCO #SiemensAG #Oktoberfest #DennisRitchie All videos are the property of Adwell jsc (adwell.vn); any reuse of our content is not permitted. --- Send in a voice message: https://anchor.fm/aweek-tv/message

Hôm nay ngày gì?
9 Tháng 9 Là Ngày Gì? Hôm Nay Là Ngày Sinh Của Ca Sĩ Như Quỳnh

Hôm nay ngày gì?

Play Episode Listen Later Sep 9, 2021 1:48


What day is September 9? Today is the birthday of singer Như Quỳnh. EVENTS: 1948 – The Democratic People's Republic of Korea was established in the northern part of the Korean peninsula, with Kim Il-sung taking office as premier of the cabinet. 1791 – The capital of the United States was named after President George Washington. BIRTHS: 1987 – Jung Il Woo, South Korean actor. 1985 – Luka Modrić, Croatian footballer. 1970 – singer Như Quỳnh. Although she is mainly active overseas, she has a large following of fans and admirers in Vietnam. The songs she performs are widely loved; most notably, the two songs "Vùng lá me bay" and "Duyên phận," which she performed in 2010, became a phenomenon in 2016–2017, when Bolero music boomed strongly in Vietnam. 1941 – Dennis Ritchie, American computer scientist, creator of the C programming language (d. 2011). 1890 – Harland Sanders, American businessman, founder of KFC (d. 1980). DEATHS: 1978 – Jack Warner, Canadian film producer, co-founder of Warner Bros. (b. 1892). The program "Hôm nay ngày gì" is now available on Youtube, Facebook and Spotify: - Facebook: https://www.facebook.com/aweekmedia - Youtube: https://www.youtube.com/c/AWeekTV - Spotify: https://open.spotify.com/show/6rC4CgZNV6tJpX2RIcbK0J #aweektv #9thang9 #LukaModrić #NhưQuỳnh #KFC #WarnerBros All videos are the property of Adwell jsc; any reuse of our content is not permitted. --- Send in a voice message: https://anchor.fm/aweek-tv/message

The History of Computing
The Innovations Of Bell Labs

The History of Computing

Play Episode Listen Later Aug 15, 2021 22:18


What is the nature of innovation? Is it overhearing a conversation as with Morse and the telegraph? Working with the deaf as with Bell? Divine inspiration? Necessity? Science fiction? Or given that the answer to all of these is yes, is it really more the intersectionality between them and multiple basic and applied sciences with deeper understandings in each domain? Or is it being given the freedom to research? Or being directed to research? Few have as storied a history of innovation as Bell Labs and few have had anything close to the impact. Bell Labs gave us 9 Nobel Prizes and 5 Turing awards. Their alumni have even more, but those were the ones earned while at Bell. And along the way they gave us 26,000 patents. They researched, automated, and built systems that connected practically every human around the world - moving us all into an era of instant communication. It's a rich history that goes back in time from the 2018 Ashkin Nobel for applied optical tweezers and 2018 Turing award for Deep Learning to an almost steampunk era of tophats and the dawn of the electrification of the world. Those late 1800s saw a flurry of applied and basic research. One reason was that governments were starting to fund that research. Alessandro Volta had come along and given us the battery and it was starting to change the world. So Napolean's nephew, Napoleon III, during the second French Empire gave us the Volta Prize in 1852. One of those great researchers to receive the Volta Prize was Alexander Graham Bell. He invented the telephone in 1876 and was awarded the Volta Prize, getting 50,000 francs. He used the money to establish the Volta Laboratory, which would evolve or be a precursor to a research lab that would be called Bell Labs. He also formed the Bell Patent Association in 1876. They would research sound. Recording, transmission, and analysis - so science. There was a flurry of business happening in preparation to put a phone in every home in the world. We got the Bell System, The Bell Telephone Company, American Bell Telephone Company patent disputes with Elisha Gray over the telephone (and so the acquisition of Western Electric), and finally American Telephone and Telegraph, or AT&T. Think of all this as Ma' Bell. Not Pa' Bell mind you - as Graham Bell gave all of his shares except 10 to his new wife when they were married in 1877. And her dad ended up helping build the company and later creating National Geographic, even going international with International Bell Telephone Company. Bell's assistant Thomas Watson sold his shares off to become a millionaire in the 1800s, and embarking on a life as a Shakespearean actor. But Bell wasn't done contributing. He still wanted to research all the things. Hackers gotta' hack. And the company needed him to - keep in mind, they were a cutting edge technology company (then as in now). That thirst for research would infuse AT&T - with Bell Labs paying homage to the founder's contribution to the modern day. Over the years they'd be on West Street in New York and expand to have locations around the US. Think about this: it was becoming clear that automation would be able to replace human efforts where electricity is concerned. The next few decades gave us the vacuum tube, flip flop circuits, mass deployment of radio. The world was becoming ever so slightly interconnected. And Bell Labs was researching all of it. From physics to the applied sciences. 
By the 1920s, they were doing sound synchronized with motion and shooting that over long distances and calculating the noise loss. They were researching encryption, because people wanted their calls to be private. That began with things like one-time pad cyphers but would evolve into speech synthesizers and even SIGSALY, the first encrypted (or scrambled) speech transmission, which led to the invention of the first computer modem. They had engineers like Harry Nyquist, whose name is on dozens of theories, frequencies, even noise. He arrived in 1917 and stayed until he retired in 1954. One of his most important contributions was to move beyond printing telegraph to paper tape and to helping transmit pictures over electricity - and Herbert Ives from there sent color photos, thus the fax was born (although it would be Xerox who commercialized the modern fax machine in the 1960s). Nyquist and others like Ralph Hartley worked on making audio better, able to transmit over longer lines, reducing feedback, or noise. While there, Hartley gave us the oscillator, developed radio receivers, parametric amplifiers, and then got into servomechanisms before retiring from Bell Labs in 1950. The scientists who'd been in their prime between the two world wars were titans and left behind commercializable products, even if they didn't necessarily always mean to. By the 40s a new generation was there and building on the shoulders of these giants. Nyquist's work was extended by Claude Shannon, who we devoted an entire episode to. He did a lot of mathematical analysis, like writing “A Mathematical Theory of Communication” to birth Information Theory as a science. They were researching radio because secretly I think they all knew those leased lines would some day become 5G. But also because the tech giants of the era included radio, and many could see a day coming when radio, telephony, and computing would intersect. They were researching how electrons diffracted, leading to George Paget Thomson receiving the Nobel Prize and beginning the race for solid state storage. Much of the work being done was statistical in nature. And they had William Edwards Deming there, whose work on statistical analysis when he was in Japan following World War II inspired a global quality movement that continues to this day in the form of frameworks like Six Sigma and TQM. Imagine a time when Japanese manufacturing was of such low quality that he couldn't stay on a phone call for a few minutes or use a product for a time. His work in Japan's reconstruction, paired with dedicated founders like Akio Morita, who co-founded Sony, led to one of the greatest productivity increases, without sacrificing quality, of any time in the world. Deming would change the way Ford worked, giving us the “quality culture.” Their scientists had built mechanical calculators going back to the 30s (Shannon had built a differential analyzer while still at MIT) - first for calculating the numbers they needed to science better, then for ballistic trajectories, then with the Model V in 1946, general computing. But these were slow; electromechanical at best. Mary Torrey was another statistician of the era who, along with Harold Dodge, gave us the theory of acceptance sampling and thus quality control for electronics. And basic electronics research to do flip-flop circuits fast enough to establish a call across a number of different relays was where much of this was leading. We couldn't use mechanical computers for that, and tubes were too slow.
And so in 1947 John Bardeen, Walter Brattain, and William Shockley invented the transistor at Bell Labs, which would be paired with Shannon's work to give us the early era of computers as we began to weave Boolean logic in ways that allowed us to skip moving parts and move to a purely transistorized world of computing. In fact, they all knew one day soon, everything that monster ENIAC and its bastard stepchild UNIVAC was doing would be done on a single wafer of silicon. But there was more basic research to get there. The types of wires we could use, the Karnaugh map from Maurice Karnaugh, zone melting so we could do level doping. And by 1959 Mohamed Atalla and Dawon Kahng gave us metal-oxide semiconductor field-effect transistors, or MOSFETs - which was a step on the way to large-scale integration, or LSI chips. Oh, and they'd started selling those computer modems as the Bell 101 after perfecting the tech for the SAGE air-defense system. And the research to get there gave us the basic science for the solar cell, electronic music, and lasers - just in the 1950s. The 1960s saw further work on microphones and communication satellites like Telstar, which saw Bell Labs outsource launching satellites to NASA. Those transistors were coming in handy, as were the solar panels. The 14 watts produced certainly couldn't have moved a mechanical computer wheel. Blaise Pascal would be proud of the research his country's funds inspired, and Volta would have been perfectly happy to have his name still on the lab, I'm sure. Again, shoulders and giants. Telstar relayed its first television signal in 1962. The era of satellites was born later that year when Cronkite televised coverage of Kennedy manipulating world markets on this new medium for the first time and IBM 1401 computers encrypted and decrypted messages, ushering in an era of encrypted satellite communications. Sputnik may have beaten the US into orbit, but the Telstar program has been an enduring system through to the Telstar 19V launched in 2018 - now outsourced to a Falcon 9 rocket from SpaceX. It might seem like Bell Labs had done enough for the world. But they still had a lot of the basic wireless research to bring us into the cellular age. In fact, they'd plotted out what the cellular age would look like all the way back in 1947! The increasing use of computers to do all the acoustics and physics meant they were working closely with research universities during the rise of computing. They were involved in a failed experiment to create an operating system in the late 60s. Multics influenced so much but wasn't what we might consider a commercial success. It was the result of yet another of DARPA's J.C.R. Licklider's wild ideas in the form of Project MAC, which had Marvin Minsky and John McCarthy. Big names in the scientific community collided with cooperation, and GE, Bell Labs and Multics would end up inspiring many a feature of a modern operating system. The crew at Bell Labs knew they could do better and so set out to take the best of Multics and implement a lighter, easier operating system. So they got to work on Uniplexed Information and Computing Service, or Unics, which was a pun on Multics. Ken Thompson, Dennis Ritchie, Doug McIlroy, Joe Ossanna, Brian Kernighan, and many others wrote Unix originally in assembly and then rewrote it in C once Dennis Ritchie wrote that to replace B.
Along the way, Alfred Aho, Peter Weinberger, and Kernighan gave us AWK, and with all this code they needed a way to keep the source under control, so Marc Rochkind gave us SCCS, or the Source Code Control System, first written for an IBM S/370 and then ported to C - which would be how most environments maintained source code until CVS came along in 1986. And Robert Fourer, David Gay, and Brian Kernighan wrote A Mathematical Programming Language, or AMPL, while there. Unix began as a bit of a shadow project but would eventually go to market as Research Unix when Don Gillies left Bell to go to the University of Illinois at Champaign-Urbana. From there it spread, and after it fragmented, System V led to the rise of IBM's AIX, HP-UX, SunOS/Solaris, BSD, and many other variants - including those that have evolved into macOS through Darwin, and Android through Linux. But Unix wasn't all they worked on - it was a tool to enable other projects. They gave us the charge-coupled device, which resulted in yet another Nobel Prize. That is an image sensor built on the MOS technologies. While fiber optics goes back to the 1800s, they gave us attenuation over fiber and thus could stretch cables to only need repeaters every few dozen miles - again reducing the cost to run the ever-growing phone company. All of this electronics allowed them to finally start reducing their reliance on electromechanical and human-based relays to transistor-to-transistor logic, and less mechanical meant less energy, less labor to repair, and faster service. Decades of innovation gave way to decades of profit - in part because of automation. The 5ESS was a switching system that went online in 1982, and some of what it did, its descendants still do today. Long distance billing, switching modules, digital line trunk units, line cards - the grid could run with less infrastructure because the computer managed distributed switching. The world was ready for packet switching. 5ESS was 100 million lines of code, mostly written in C. All that source was managed with SCCS. Bell continued with innovations. They produced that modem up into the 70s but allowed Hayes, Rockwell, and others to take it to a larger market - coming back in from time to time to help improve things, like when Bell Labs, branded as Lucent after the breakup of AT&T, helped bring the 56k modem to market. The presidents of Bell Labs were as integral to the success and innovation as the researchers. Frank Baldwin Jewett from 1925 to 1940, Oliver Buckley from 40 to 51, the great Mervin Kelly from 51 to 59, James Fisk from 59 to 73, William Oliver Baker from 73 to 79, and a few others since gave people like Bishnu Atal the space to develop speech processing algorithms and predictive coding and thus codecs. And they let Bjarne Stroustrup create C++, and Eric Schmidt, who would go on to become the CEO of Google, and the list goes on. Nearly every aspect of technology today is touched by the work they did. All of this research. Jon Gertner wrote a book called The Idea Factory: Bell Labs and the Great Age of American Innovation. He chronicles the journey of multiple generations of adventurers from Germany, Ohio, Iowa, Japan, and all over the world to the Bell campuses. The growth and contraction of the basic and applied research and the amazing minds that walked the halls. It's a great book, and a short episode like this couldn't touch the aspects he covers. He doesn't end the book as hopeful as I remain about the future of technology, though.
But since he wrote the book, plenty has happened. After the hangover from the breakup of Ma Bell, they're now back to being called Nokia Bell Labs - following a $16.6 billion acquisition by Nokia. I sometimes wonder if the world has the stomach for the same level of basic research. And then Alfred Aho and Jeffrey Ullman from Bell end up sharing the Turing Award for their work on compilers. And other researchers hit terabit-per-second speeds. A storied history that will be a challenge for Marcus Weldon's successor. He was there as a post-doc in 1995 and rose to lead the labs and become the CTO of Nokia - he said the next regeneration of a Doctor Who doctor would come in after him. We hope they are as good stewards as those who came before them. The world is looking around after these decades of getting used to the technology they helped give us. We're used to constant change. We're accustomed to speed increases from 110 bits a second to now terabits. The nature of innovation isn't likely to be something their scientists can uncover. My guess is Prometheus is guarding that secret - if only to keep others from suffering the same fate after giving us the fire that sparked our imaginations. For more on that, maybe check out Hesiod's Theogony. In the meantime, think about the places where various sciences and disciplines intersect and think about the wellspring of each and the vast supporting casts that gave us our modern life. It's pretty phenomenal when ya' think about it.

Minister's Toolbox
EP 206: The Subtle Strategy Aimed At Destroying The Next Generation

Minister's Toolbox

Play Episode Listen Later Aug 10, 2021 16:55


The Church largely ignores the subtle strategy destroying young people today. While we prepare sermons to inspire, Satan is busy infecting the next generation with misinformation designed to damn their souls. How can we combat this demonic strategy in our churches? Today, I interview Dennis Ritchie from Oasis Church in Cheshire, CT, who shares insights from the Book of Daniel. Connect With Dennis Ritchie Like my Facebook page to follow us in Zimbabwe Subscribe to My YouTube Channel to watch Videos From Zimbabwe.

Hacking Humans
Unix (noun) [Word Notes]

Hacking Humans

Play Episode Listen Later Jan 5, 2021 4:45


A family of multitasking, multi-user computer operating systems that derive from the original Unix system built by Ken Thompson and Dennis Ritchie in the 1960s.

Continuous Delivery
Special Storia dell'Informatica: le origini, da Ada Lovelace a Dennis Ritchie

Continuous Delivery

Play Episode Listen Later Jan 5, 2021 17:27


First History Special of Continuous Delivery: we tell, without technical jargon, the fascinating story of the birth of computing, from the first algorithm on a rudimentary mechanical machine to the enormous wartime computers of the Second World War, carrying on all the way to the birth of C. With: Edoardo Dusi /* Newsletter */ https://landing.sparkfabrik.com/continuous-delivery-newsletter/ /* Links and Social */ https://www.sparkfabrik.com/ - @sparkfabrik

Word Notes
Unix (noun)

Word Notes

Play Episode Listen Later Dec 8, 2020 4:45


A family of multitasking, multi-user computer operating systems that derive from the original Unix system built by Ken Thompson and Dennis Ritchie in the 1960s.

Lex Fridman Podcast
#109 – Brian Kernighan: UNIX, C, AWK, AMPL, and Go Programming

Lex Fridman Podcast

Play Episode Listen Later Jul 18, 2020 103:37


Brian Kernighan is a professor of computer science at Princeton University. He co-authored The C Programming Language with Dennis Ritchie (creator of C) and has written many books on programming, computers, and life, including The Practice of Programming, The Go Programming Language, and his latest, UNIX: A History and a Memoir. He co-created AWK, the text processing language used by Linux folks like myself. He co-designed AMPL, an algebraic modeling language for large-scale optimization. Support this podcast by supporting our sponsors: – Eight Sleep: https://eightsleep.com/lex – Raycon: http://buyraycon.com/lex If you would like to get more information about this podcast

Dear Discreet Guide
Brian Kernighan on Dennis Ritchie: Unix, C and a Legacy

Dear Discreet Guide

Play Episode Listen Later Jul 10, 2020 59:28


Computer scientist Brian Kernighan joins us to talk about Dennis Ritchie, creator of the C programming language and co-developer of the Unix operating system along with Ken Thompson. Ritchie never sought the limelight, but his impact on the computer world is still visible everywhere from a multitude of operating systems to programming languages, search engines, car engines, and websites. Brian takes us through what made Ritchie's work novel, profound, inspiring, and long lasting. In a wide-ranging conversation, we talk about writing, Bell Labs, office configurations, motivation, and legacy. Brian even quotes a bit of Latin and offers some great takeaways about finding success and happiness at work. A historical and timely episode. Brian's book "Unix: A History and a Memoir" that we discuss: https://bookshop.org/books/unix-a-history-and-a-memoir/9781695978553 The Penn & Teller prank we discuss: https://www.youtube.com/watch?v=fxMKuv0A6z4 Thoughts? Comments? Potshots? Contact the show at: https://www.discreetguide.com/ Follow or like us on podomatic.com (it raises our visibility :) https://www.podomatic.com/podcasts/deardiscreetguide Support us on Patreon: https://www.patreon.com/discreetguide Follow the host on Twitter: @DiscreetGuide The host on LinkedIn: https://www.linkedin.com/in/jenniferkcrittenden/

AdminDev Labs
The Unix Philosophy

AdminDev Labs

Play Episode Listen Later Jun 2, 2020 24:00


### Doug McIlroy
- Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.
- Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information.
- Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
- Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.

### Peter H. Salus
- Write programs that do one thing and do it well.
- Write programs to work together.
- Write programs to handle text streams, because that is a universal interface.

### Rob Pike
- Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.
- Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.
- Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.)
- Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.
- Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

### Dennis Ritchie and Ken Thompson
- Make it easy to write, test, and run programs.
- Interactive use instead of batch processing.
- Economy and elegance of design due to size constraints.
- Self-supporting system: all Unix software is maintained under Unix.

### ESR
- Modularity: Write simple parts connected by clean interfaces.
- Readable: Programs that are clean and clear.
- Composition: Programs connected to programs.
- Separation: Separate policy from mechanism; separate interfaces from engines.
- Simplicity: Design for simplicity; add complexity only where you must.
- Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.
- Transparency: Design for visibility to make inspection and debugging easier.
- Robust: Robustness is the child of transparency and simplicity.
- Representation: Fold knowledge into data so program logic can be stupid and robust.
- Least Surprise: In interface design, always do the least surprising thing.
- Silence: When a program has nothing surprising to say, it should say nothing.
- Repair: When you must fail, fail noisily and as soon as possible.
- Economy: Programmer time is expensive; conserve it in preference to machine time.
- Generation: Avoid hand-hacking; write programs to write programs when you can.
- Optimization: Prototype before polishing. Get it working before you optimize it.
- Diversity: Distrust all claims for “one true way”.
- Extensibility: Design for the future, because it will be here sooner than you think.

### Does The Unix Philosophy Still Matter?
- Yes?
- We can still learn
- Do what makes sense
- Simplify everything
- Abstraction isn't the answer
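A minimal illustration of the "do one thing well, handle text streams" idea in the lists above, written here as an invented example rather than anything from the episode: a tiny C filter that lowercases its input, so it can sit in a pipeline between any two other programs.

```c
#include <stdio.h>
#include <ctype.h>

/* lower.c - a deliberately small Unix-style filter.
 * It does one thing (lowercases its input), reads a text stream on
 * stdin, and writes a text stream to stdout, so it composes with
 * other programs via pipes, e.g.:
 *     cat README | ./lower | sort | uniq -c
 */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(tolower(c));
    return 0;
}
```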

BSD Now
325: Cracking Rainbows

BSD Now

Play Episode Listen Later Nov 21, 2019 57:40


FreeBSD 12.1 is here, A history of Unix before Berkeley, FreeBSD development setup, HardenedBSD 2019 Status Report, DNSSEC, compiling RainbowCrack on OpenBSD, and more. Headlines FreeBSD 12.1 (https://www.freebsd.org/releases/12.1R/announce.html) Some of the highlights: BearSSL has been imported to the base system. The clang, llvm, lld, lldb, compiler-rt utilities and libc++ have been updated to version 8.0.1. OpenSSL has been updated to version 1.1.1d. Several userland utility updates. For a complete list of new features and known problems, please see the online release notes and errata list, available at: https://www.FreeBSD.org/releases/12.1R/relnotes.html A History of UNIX before Berkeley: UNIX Evolution: 1975-1984. (http://www.darwinsys.com/history/hist.html) Nobody needs to be told that UNIX is popular today. In this article we will show you a little of where it was yesterday and over the past decade. And, without meaning in the least to minimise the incredible contributions of Ken Thompson and Dennis Ritchie, we will bring to light many of the others who worked on early versions, and try to show where some of the key ideas came from, and how they got into the UNIX of today. Our title says we are talking about UNIX evolution. Evolution means different things to different people. We use the term loosely, to describe the change over time among the many different UNIX variants in use both inside and outside Bell Labs. Ideas, code, and useful programs seem to have made their way back and forth - like mutant genes - among all the many UNIXes living in the phone company over the decade in question. Part One looks at some of the major components of the current UNIX system - the text formatting tools, the compilers and program development tools, and so on. Most of the work described in Part One took place at Research'', a part of Bell Laboratories (now AT&T Bell Laboratories, then as nowthe Labs''), and the ancestral home of UNIX. In planned (but not written) later parts, we would have looked at some of the myriad versions of UNIX - there are far more than one might suspect. This includes a look at Columbus and USG and at Berkeley Unix. You'll begin to get a glimpse inside the history of the major streams of development of the system during that time. News Roundup My FreeBSD Development Setup (https://adventurist.me/posts/00296) I do my FreeBSD development using git, tmux, vim and cscope. I keep a FreeBSD fork on my github, I have forked https://github.com/freebsd/freebsd to https://github.com/adventureloop/freebsd OPNsense 19.7.6 released (https://opnsense.org/opnsense-19-7-6-released/) As we are experiencing the Suricata community first hand in Amsterdam we thought to release this version a bit earlier than planned. Included is the latest Suricata 5.0.0 release in the development version. That means later this November we will releasing version 5 to the production version as we finish up tweaking the integration and maybe pick up 5.0.1 as it becomes available. LDAP TLS connectivity is now integrated into the system trust store, which ensures that all required root and intermediate certificates will be seen by the connection setup when they have been added to the authorities section. The same is true for trusting self-signed certificates. On top of this, IPsec now supports public key authentication as contributed by Pascal Mathis. HardenedBSD November 2019 Status Report. 
(https://hardenedbsd.org/article/shawn-webb/2019-11-09/hardenedbsd-status-report) We at HardenedBSD have a lot of news to share. On 05 Nov 2019, Oliver Pinter resigned amicably from the project. All of us at HardenedBSD owe Oliver our gratitude and appreciation. This humble project, named by Oliver, was born out of his thesis work and the collaboration with Shawn Webb. Oliver created the HardenedBSD repo on GitHub in April 2013. The HardenedBSD Foundation was formed five years later to carry on this great work. DNSSEC enabled in default unbound(8) configuration. (https://undeadly.org/cgi?action=article;sid=20191110123908) DNSSEC validation has been enabled in the default unbound.conf(5) in -current. The relevant commits were from Job Snijders (job@) How to Install Shopware with NGINX and Let's Encrypt on FreeBSD 12 (https://www.howtoforge.com/how-to-install-shopware-with-nginx-and-lets-encrypt-on-freebsd-12/) Shopware is the next generation of open source e-commerce software. Based on bleeding edge technologies like Symfony 3, Doctrine2 and Zend Framework Shopware comes as the perfect platform for your next e-commerce project. This tutorial will walk you through the Shopware Community Edition (CE) installation on FreeBSD 12 system by using NGINX as a web server. Requirements Make sure your system meets the following minimum requirements: + Linux-based operating system with NGINX or Apache 2.x (with mod_rewrite) web server installed. + PHP 5.6.4 or higher with ctype, gd, curl, dom, hash, iconv, zip, json, mbstring, openssl, session, simplexml, xml, zlib, fileinfo, and pdo/mysql extensions. PHP 7.1 or above is strongly recommended. + MySQL 5.5.0 or higher. + Possibility to set up cron jobs. + Minimum 4 GB available hard disk space. + IonCube Loader version 5.0.0 or higher (optional). How to Compile RainbowCrack on OpenBSD (https://cromwell-intl.com/open-source/compiling-rainbowcrack-on-openbsd.html) Project RainbowCrack was originally Zhu Shuanglei's implementation, it's not clear to me if the project is still just his or if it's even been maintained for a while. His page seems to have been last updated in August 2007. The Project RainbowCrack web page now has just binaries for Windows XP and Linux, both 32-bit and 64-bit versions. Earlier versions were available as source code. The version 1.2 source code does not compile on OpenBSD, and in my experience it doesn't compile on Linux, either. It seems to date from 2004 at the earliest, and I think it makes some version-2.4 assumptions about Linux kernel headers. You might also look at ophcrack, a more modern tool, although it seems to be focused on cracking Windows XP/Vista/7/8/10 password hashes Feedback/Questions Reese - Amature radio info (http://dpaste.com/2RDG9K4#wrap) Chris - VPN (http://dpaste.com/2K4T2FQ#wrap) Malcolm - NAT (http://dpaste.com/138NEMA) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)

The Dan York Report
TDYR 371 - Celebrating 50 Years of UNIX

The Dan York Report

Play Episode Listen Later Nov 8, 2019 8:27


It was 50 years ago in the summer of 1969 that the Unix operating system was created at Bell Labs by Ken Thompson, Dennis Ritchie and their colleagues. In this episode I talk about the large effect Unix has had on my own life, in all of its many variations, and celebrate the 50 years of this remarkable creation. More info: https://www.bell-labs.com/var/articles/celebrating-50-years-unix/ https://www.bell-labs.com/unix50/ https://en.wikipedia.org/wiki/Unix

BSD Now
323: OSI Burrito Guy

BSD Now

Play Episode Listen Later Nov 7, 2019 49:22


The earliest Unix code, how to replace fail2ban with blacklistd, OpenBSD crossed 400k commits, how to install Bolt CMS on FreeBSD, optimized hammer2, appeasing the OSI 7-layer burrito guys, and more. Headlines The Earliest Unix Code: An Anniversary Source Code Release (https://computerhistory.org/blog/the-earliest-unix-code-an-anniversary-source-code-release/) What is it that runs the servers that hold our online world, be it the web or the cloud? What enables the mobile apps that are at the center of increasingly on-demand lives in the developed world and of mobile banking and messaging in the developing world? The answer is the operating system Unix and its many descendants: Linux, Android, BSD Unix, MacOS, iOS—the list goes on and on. Want to glimpse the Unix in your Mac? Open a Terminal window and enter “man roff” to view the Unix manual entry for an early text formatting program that lives within your operating system. 2019 marks the 50th anniversary of the start of Unix. In the summer of 1969, that same summer that saw humankind’s first steps on the surface of the Moon, computer scientists at the Bell Telephone Laboratories—most centrally Ken Thompson and Dennis Ritchie—began the construction of a new operating system, using a then-aging DEC PDP-7 computer at the labs. This man sent the first online message 50 years ago (https://www.cbc.ca/radio/thecurrent/the-current-for-oct-29-2019-1.5339212/this-man-sent-the-first-online-message-50-years-ago-he-s-since-seen-the-web-s-dark-side-emerge-1.5339244) As many of you have heard in the past, the first online message ever sent between two computers was "lo", just over 50 years ago, on Oct. 29, 1969. It was supposed to say "log," but the computer sending the message — based at UCLA — crashed before the letter "g" was typed. A computer at Stanford 560 kilometres away was supposed to fill in the remaining characters "in," as in "log in." The CBC Radio show, “The Current” has a half-hour interview with the man who sent that message, Leonard Kleinrock, distinguished professor of computer science at UCLA "The idea of the network was you could sit at one computer, log on through the network to a remote computer and use its services there," 50 years later, the internet has become so ubiquitous that it has almost been rendered invisible. There's hardly an aspect in our daily lives that hasn't been touched and transformed by it. Q: Take us back to that day 50 years ago. Did you have the sense that this was going to be something you'd be talking about a half a century later? A: Well, yes and no. Four months before that message was sent, there was a press release that came out of UCLA in which it quotes me as describing what my vision for this network would become. Basically what it said is that this network would be always on, always available. Anybody with any device could get on at anytime from any location, and it would be invisible. Well, what I missed ... was that this is going to become a social network. People talking to people. Not computers talking to computers, but [the] human element. Q: Can you briefly explain what you were working on in that lab? Why were you trying to get computers to actually talk to one another? A: As an MIT graduate student, years before, I recognized I was surrounded by computers and I realized there was no effective [or efficient] way for them to communicate. I did my dissertation, my research, on establishing a mathematical theory of how these networks would work. But there was no such network existing. 
AT&T said it won't work and, even if it does, we want nothing to do with it. So I had to wait around for years until the Advanced Research Projects Agency within the Department of Defence decided they needed a network to connect together the computer scientists they were supervising and supporting. Q: For all the promise of the internet, it has also developed some dark sides that I'm guessing pioneers like yourselves never anticipated. A: We did not. I knew everybody on the internet at that time, and they were all well-behaved and they all believed in an open, shared free network. So we did not put in any security controls. When the first spam email occurred, we began to see the dark side emerge as this network reached nefarious people sitting in basements with a high-speed connection, reaching out to millions of people instantaneously, at no cost in time or money, anonymously until all sorts of unpleasant events occurred, which we called the dark side. But in those early days, I considered the network to be going through its teenage years. Hacking to spam, annoying kinds of effects. I thought that one day this network would mature and grow up. Well, in fact, it took a turn for the worse when nation states, organized crime and extremists came in and began to abuse the network in severe ways. Q: Is there any part of you that regrets giving birth to this? A: Absolutely not. The greater good is much more important.

News Roundup
How to use blacklistd(8) with NPF as a fail2ban replacement (https://www.unitedbsd.com/d/63-how-to-use-blacklistd8-with-npf-as-a-fail2ban-replacement) blacklistd(8) provides an API that can be used by network daemons to communicate with a packet filter via a daemon to enforce opening and closing ports dynamically based on policy. The interface to the packet filter is in /libexec/blacklistd-helper (this is currently designed for npf) and the configuration file (inspired by inetd.conf) is in /etc/blacklistd.conf. (A small, hedged configuration sketch appears at the end of these show notes.) Now, blacklistd(8) will require bpfjit(4) (Just-In-Time compiler for Berkeley Packet Filter) in order to work properly, in addition to, naturally, npf(7) as the frontend and syslogd(8) as a backend to print diagnostic messages. Also remember that npf relies on the npflog* virtual network interface to provide logging for tcpdump(8) to use. Unfortunately (don't ask me why), in NetBSD 8.1 all the required kernel components are still not compiled by default in the GENERIC kernel (though they are in HEAD), and are instead provided as modules. Enabling the NPF and blacklistd services would normally result in them being automatically loaded as root, but predictably on securelevel=1 this is not going to happen.
FreeBSD's handbook chapter on blacklistd (https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/firewalls-blacklistd.html)

OpenBSD crossed 400,000 commits (https://marc.info/?l=openbsd-tech&m=157059352620659&w=2) Sometime in the last week OpenBSD crossed 400,000 commits (*) upon all our repositories since starting at 1995/10/18 08:37:01 Canada/Mountain. That's a lot of commits by a lot of amazing people. (*) by one measure. Since the repository is so large and old, there are a variety of quirks, including ChangeLog missing entries and branches not convertible to other repo forms, so measuring is hard. If you think you've got a great way of measuring, don't be so sure of yourself -- you may have overcounted or undercounted.
Subject to the notes Theo made about under and over counting, FreeBSD should hit 1 million commits (base + ports + docs) some time in 2020. NetBSD + pkgsrc are approaching 600,000, but of course pkgsrc covers other operating systems too.

How to Install Bolt CMS with Nginx and Let's Encrypt on FreeBSD 12 (https://www.howtoforge.com/how-to-install-bolt-cms-nginx-ssl-on-freebsd-12/) Bolt is a sophisticated, lightweight and simple CMS built with PHP. It is released under the open-source MIT license and its source code is hosted as a public repository on GitHub. Bolt is a tool for content management which strives to be as simple and straightforward as possible. It is quick to set up, easy to configure, and uses elegant templates. Bolt is created using modern open-source libraries and is best suited to building sites in HTML5 with modern markup. In this tutorial, we will go through the Bolt CMS installation on a FreeBSD 12 system by using Nginx as the web server and MySQL as the database server, and optionally you can secure the transport layer by using the acme.sh client and the Let's Encrypt certificate authority to add SSL support. Requirements The system requirements for Bolt are modest, and it should run on any fairly modern web server: PHP version 5.5.9 or higher with the following common PHP extensions: pdo, mysqlnd, pgsql, openssl, curl, gd, intl, json, mbstring, opcache, posix, xml, fileinfo, exif, zip. Access to SQLite (which comes bundled with PHP), or MySQL or PostgreSQL. Apache with mod_rewrite enabled (.htaccess files) or Nginx (virtual host configuration covered below). A minimum of 32MB of memory allocated to PHP.

hammer2 - Optimize hammer2 support threads and dispatch (http://lists.dragonflybsd.org/pipermail/commits/2019-September/719632.html) Refactor the XOP groups in order to be able to queue strategy calls, whenever possible, to the same CPU as the issuer. This optimizes several cases and reduces unnecessary IPI traffic between cores. The next best thing to do would be to not queue certain XOPs to an H2 support thread at all, but I would like to keep the threads intact for later clustering work. The best scaling case for this is when one has a large number of user threads doing I/O. One instance of a single-threaded program on an otherwise idle machine might see a slight reduction in performance, but at the same time we completely avoid unnecessarily spamming all cores in the system on behalf of a single program, so overhead is also significantly lower. This will tend to increase the number of H2 support threads since we need a certain degree of multiplication for domain separation. This should significantly increase I/O performance for multi-threaded workloads.

You know, we might as well just run every network service over HTTPS/2 and build another six layers on top of that to appease the OSI 7-layer burrito guys (http://boston.conman.org/2019/10/17.1) I've seen the writing on the wall, and while for now you can configure Firefox not to use DoH, I'm not confident enough to think it will remain that way. To that end, I've finally set up my own DoH server for use at Chez Boca. It only involved setting up my own CA to generate the appropriate certificates, installing my CA certificate into Firefox, configuring Apache to run over HTTP/2 (THANK YOU SO VERY XXXXXXX MUCH GOOGLE FOR SHOVING THIS HTTP/2 XXXXXXXX DOWN OUR THROATS!—no, I'm not bitter) and writing a 150 line script that just queries my own local DNS, because, you know, it's more XXXXXXX secure or some XXXXXXXX reason like that. Sigh.
Beastie Bits
An Oral History of Unix (https://www.princeton.edu/~hos/Mahoney/unixhistory)
NUMA Siloing in the FreeBSD Network Stack [pdf] (https://people.freebsd.org/~gallatin/talks/euro2019.pdf)
EuroBSDCon 2019 videos available (https://www.youtube.com/playlist?list=PLskKNopggjc6NssLc8GEGSiFYJLYdlTQx)
Barbie knows best (https://twitter.com/eksffa/status/1188638425567682560)
For the #OpenBSD #e2k19 attendees. I did a pre visit today. (https://twitter.com/bob_beck/status/1188226661684301824)
Drawer Find (https://twitter.com/pasha_sh/status/1187877745499561985)
Slides - Removing ROP Gadgets from OpenBSD - AsiaBSDCon 2019 (https://www.openbsd.org/papers/asiabsdcon2019-rop-slides.pdf)
Feedback/Questions
Bostjan - Open source doesn't mean secure (http://dpaste.com/1M5MVCX#wrap)
Malcolm - Allan is Correct. (http://dpaste.com/2RFNR94)
Michael - FreeNAS inside a Jail (http://dpaste.com/28YW3BB#wrap)
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
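As a follow-up to the blacklistd(8) item above, here is a minimal sketch of what the pieces look like on NetBSD. The blacklistd.conf column layout is recalled from the blacklistd.conf(5) man page and the FreeBSD handbook chapter linked above, and the rc.conf variable names are assumptions from memory, so verify both against your own system before relying on them.

    # /etc/blacklistd.conf - block an address for 24 hours after 3 failed
    # ssh logins (columns: location, type, proto, owner, name, nfail, disable)
    [local]
    ssh             stream  *       *       *       3       24h

    # /etc/rc.conf - assumed service knobs on NetBSD
    blacklistd=YES
    npf=YES

Once it is running, blacklistctl dump -b should list the addresses currently being blocked.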

BSD Now
322: Happy Birthday, Unix

BSD Now

Play Episode Listen Later Oct 31, 2019 67:30


Unix is 50, Hunting down Ken's PDP-7, OpenBSD and OPNSense have new releases, Clarification on what GhostBSD is, sshuttle - VPN over SSH, and more. Headlines Unix is 50 (https://www.bell-labs.com/unix50/) In the summer of 1969 computer scientists Ken Thompson and Dennis Ritchie created the first implementation of Unix with the goal of designing an elegant and economical operating system for a little-used PDP-7 minicomputer at Bell Labs. That modest project, however, would have a far-reaching legacy. Unix made large-scale networking of diverse computing systems — and the Internet — practical. The Unix team went on to develop the C language, which brought an unprecedented combination of efficiency and expressiveness to programming. Both made computing more "portable". Today, Linux, the most popular descendent of Unix, powers the vast majority of servers, and elements of Unix and Linux are found in most mobile devices. Meanwhile C++ remains one of the most widely used programming languages today. Unix may be a half-century old but its influence is only growing. Hunting down Ken's PDP-7: video footage found (https://bsdimp.blogspot.com/2019/10/video-footage-of-first-pdp-7-to-run-unix.html) In my prior blog post, I traced Ken's scrounged PDP-7 to SN 34. In this post I'll show that we have actual video footage of that PDP-7 due to an old film from Bell Labs. this gives us almost a minute of footage of the PDP-7 Ken later used to create Unix. News Roundup OpenBSD 6.6 Released (https://openbsd.org/66.html) Announce: https://marc.info/?l=openbsd-tech&m=157132024225971&w=2 Upgrade Guide: https://openbsd.org/faq/upgrade66.html Changelog: https://openbsd.org/plus66.html OPNsense 19.7.5 released (https://opnsense.org/opnsense-19-7-5-released/) Hello friends and followers, Lots of plugin and ports updates this time with a few minor improvements in all core areas. Behind the scenes we are starting to migrate the base system to version 12.1 which is supposed to hit the next 20.1 release. Stay tuned for more infos in the next month or so. 
Here are the full patch notes:
+ system: show all swap partitions in system information widget
+ system: flatten services_get() in preparation for removal
+ system: pin Syslog-ng version to specific package name
+ system: fix LDAP/StartTLS with user import page
+ system: fix a PHP warning on authentication server page
+ system: replace most subprocess.call use
+ interfaces: fix devd handling of carp devices (contributed by stumbaumr)
+ firewall: improve firewall rules inline toggles
+ firewall: only allow TCP flags on TCP protocol
+ firewall: simplify help text for direction setting
+ firewall: make protocol log summary case insensitive
+ reporting: ignore malformed flow records
+ captive portal: fix type mismatch for timeout read
+ dhcp: add note for static lease limitation with lease registration (contributed by Northguy)
+ ipsec: add margintime and rekeyfuzz options
+ ipsec: clear $dpdline correctly if not set
+ ui: fix tokenizer reorder on multiple saves
+ plugins: os-acme-client 1.26[1]
+ plugins: os-bind will reload bind on record change (contributed by blablup)
+ plugins: os-etpro-telemetry minor subprocess.call replacement
+ plugins: os-freeradius 1.9.4[2]
+ plugins: os-frr 1.12[3]
+ plugins: os-haproxy 2.19[4]
+ plugins: os-mailtrail 1.2[5]
+ plugins: os-postfix 1.11[6]
+ plugins: os-rspamd 1.8[7]
+ plugins: os-sunnyvalley LibreSSL support (contributed by Sunny Valley Networks)
+ plugins: os-telegraf 1.7.6[8]
+ plugins: os-theme-cicada 1.21 (contributed by Team Rebellion)
+ plugins: os-theme-tukan 1.21 (contributed by Team Rebellion)
+ plugins: os-tinc minor subprocess.call replacement
+ plugins: os-tor 1.8 adds dormant mode disable option (contributed by Fabian Franz)
+ plugins: os-virtualbox 1.0 (contributed by andrewhotlab)

Dealing with the misunderstandings of what is GhostBSD (http://ghostbsd.org/node/194) Since the release of 19.09, I have seen a lot of misunderstandings about what GhostBSD is and about the future of GhostBSD. GhostBSD is based on TrueOS with FreeBSD 12 STABLE, with our own twist to it. We are still continuing to use TrueOS for OpenRC, and the new package system for the base system that is built from ports. GhostBSD is becoming a slow-moving rolling release based on the latest TrueOS with FreeBSD 12 STABLE. When FreeBSD 13 STABLE gets released, GhostBSD will be upgraded to TrueOS with FreeBSD 13 STABLE. Our official desktop is MATE, which means that the lead developer of GhostBSD does not officially support XFCE. Community releases are maintained by the community and for the community. The GhostBSD project will provide help to build and to host community releases. If anyone wants to have a particular desktop supported, it is up to the community. Sure, I will help where I can, answer questions and guide new community members that contribute to a community release. There is some effort going on for a Plasma5 desktop. If anyone is interested in helping with XFCE and Plasma5, or in creating another community release, you are welcome to contribute. Also, contributions to the GhostBSD base system, to ports and new ports, and to in-house software are welcome. We are mostly active on Telegram https://t.me/ghostbsd, but you can also reach us on the forum.

SSHUTTLE – VPN over SSH | VPN Alternative (https://www.terminalbytes.com/sshuttle-vpn-over-ssh-vpn-alternative/) Looking for a lightweight VPN client, but not ready to spend a monthly recurring amount on a VPN? VPNs can be expensive depending upon the quality of service and amount of privacy you want.
A good VPN plan can easily set you back $10 a month, and even that doesn't guarantee your privacy. There is no way to be sure whether the VPN is storing your confidential information and traffic logs or not. sshuttle is the answer to your problem: it provides a VPN over SSH, and in this article we're going to explore this cheap yet powerful alternative to expensive VPNs. By using open source tools you can control your own privacy. VPN over SSH – sshuttle sshuttle is an awesome program that allows you to create a VPN connection from your local machine to any remote server that you have ssh access on. The tunnel established over the ssh connection can then be used to route all your traffic from the client machine through the remote machine, including all the DNS traffic. At its bare bones, sshuttle is just a proxy server which runs on the client machine and forwards all the traffic through an ssh tunnel. Since it's open source, it holds quite a lot of major advantages over a traditional VPN. (A one-line usage sketch appears a little further down, after the OpenSSH notes.)

OpenSSH 8.1 Released (http://www.openssh.com/txt/release-8.1) Security ssh(1), sshd(8), ssh-add(1), ssh-keygen(1): an exploitable integer overflow bug was found in the private key parsing code for the XMSS key type. This key type is still experimental and support for it is not compiled by default. No user-facing autoconf option exists in portable OpenSSH to enable it. This bug was found by Adam Zabrocki and reported via SecuriTeam's SSD program. ssh(1), sshd(8), ssh-agent(1): add protection for private keys at rest in RAM against speculation and memory side-channel attacks like Spectre, Meltdown and Rambleed. This release encrypts private keys when they are not in use with a symmetric key that is derived from a relatively large "prekey" consisting of random data (currently 16KB). This release includes a number of changes that may affect existing configurations: ssh-keygen(1): when acting as a CA and signing certificates with an RSA key, default to using the rsa-sha2-512 signature algorithm. Certificates signed by RSA keys will therefore be incompatible with OpenSSH versions prior to 7.2 unless the default is overridden (using "ssh-keygen -t ssh-rsa -s ..."). New Features ssh(1): Allow %n to be expanded in ProxyCommand strings ssh(1), sshd(8): Allow prepending a list of algorithms to the default set by starting the list with the '^' character, E.g. "HostKeyAlgorithms ^ssh-ed25519" ssh-keygen(1): add an experimental lightweight signature and verification ability. Signatures may be made using regular ssh keys held on disk or stored in a ssh-agent and verified against an authorized_keys-like list of allowed keys. Signatures embed a namespace that prevents confusion and attacks between different usage domains (e.g. files vs email). ssh-keygen(1): print key comment when extracting public key from a private key. ssh-keygen(1): accept the verbose flag when searching for host keys in known hosts (i.e. "ssh-keygen -vF host") to print the matching host's random-art signature too. All: support PKCS8 as an optional format for storage of private keys to disk. The OpenSSH native key format remains the default, but PKCS8 is a superior format to PEM if interoperability with non-OpenSSH software is required, as it may use a less insecure key derivation function than PEM's.
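Going back to the sshuttle item above: in practice the whole "poor man's VPN" boils down to a single command. A minimal sketch, with an obviously illustrative host name:

    # Route all TCP traffic (0/0 is shorthand for 0.0.0.0/0) plus DNS
    # lookups through an SSH tunnel to a server you have shell access on.
    # sshuttle needs local root to install its redirection rules, so it
    # will prompt for sudo on the client side.
    sshuttle --dns -r user@example.com 0/0

You can narrow 0/0 to specific subnets if you only want some of your traffic to take the tunnel.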
Beastie Bits
Say goodbye to the 32 CPU limit in NetBSD/aarch64 (https://twitter.com/jmcwhatever/status/1185584719183962112)
vBSDcon 2019 videos (https://www.youtube.com/channel/UCvcdrOSlYOSzOzLjv_n1_GQ/videos)
Browse the web in the terminal - W3M (https://www.youtube.com/watch?v=3Hfda0Tjqsg&feature=youtu.be)
NetBSD 9 and GSoC (http://netbsd.org/~kamil/GSoC2019.html#slide1)
BSDCan 2019 Videos (https://www.youtube.com/playlist?list=PLeF8ZihVdpFegPoAKppaDSoYmsBvpnSZv)
NYC*BUG Install Fest: Nov 6th 18:45 @ Suspenders (https://www.nycbug.org/index?action=view&id=10673)
FreeBSD Miniconf at linux.conf.au 2020 Call for Sessions Now Open (https://www.freebsdfoundation.org/blog/freebsd-miniconf-at-linux-conf-au-2020-call-for-sessions-now-open/)
FOSDEM 2020 - BSD Devroom Call for Participation (https://people.freebsd.org/~rodrigo/fosdem20/)
University of Cambridge looking for Research Assistants/Associates (https://twitter.com/ed_maste/status/1184865668317007874)
Feedback/Questions
Trenton - Beeping Thinkpad (http://dpaste.com/0ZEXNM6#wrap)
Alex - Per user ZFS Datasets (http://dpaste.com/1K31A65#wrap)
Allan's old patch from 2015 (https://reviews.freebsd.org/D2272)
Javier - FBSD 12.0 + ZFS + encryption (http://dpaste.com/1XX4NNA#wrap)
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)

Tech Aloud
C, the Enduring Legacy of Dennis Ritchie - Alfred V. Aho - 07 Sep 2012

Tech Aloud

Play Episode Listen Later Oct 15, 2019 16:44


Source: http://www.cs.columbia.edu/~aho/Talks/12-09-07_DMR.pdf (via HN) A tribute to the late Dennis Ritchie delivered at Dennis Ritchie Day at Bell Labs, Murray Hill, NJ, September 7, 2012. I think it's important to remember some of the great thinkers and creators whose work underpins so much of what we rely upon today. Dennis is one of those people. I remember that Dennis passed away only a few days after Steve Jobs, and his passing was somewhat eclipsed, which made me a little sad. But C and Unix, Dennis's legacy, strongly endure.

Command Line Heroes
The C Change

Command Line Heroes

Play Episode Listen Later Oct 1, 2019 25:35


C and UNIX are at the root of modern computing. Many of the languages we've covered this season are related to or at least influenced by C. But C and UNIX only happened because a few developers at Bell Labs created both as a skunkworks project. Bell Labs was a mid-twentieth century center for innovation. Jon Gertner describes it as an "idea factory." One of their biggest projects in the 1960s was helping build a time-sharing operating system called Multics. Dr. Joy Lisi Rankin explains the hype around time-sharing at the time—it was described as potentially making computing accessible as a public utility. Large teams devoted years of effort to build Multics—and it wasn't what they had hoped for. Bell Labs officially moved away from time-sharing in 1969. But as Andrew Tanenbaum recounts, a small team of heroes pushed on anyways. C and UNIX were the result. Little did they know how much their work would shape the course of technology.

That's all for Season 3. If you want to dive deeper into C and UNIX, you can check out all our bonus material over at redhat.com/commandlineheroes. You'll find extra content for every episode. Follow along with the episode transcript. Subscribe to the newsletter for more stories and to be among the first to see announcements about the podcast. See you soon for Season 4.

BSD Now
315: Recapping vBSDcon 2019

BSD Now

Play Episode Listen Later Sep 12, 2019 76:55


vBSDcon 2019 recap, Unix at 50, OpenBSD on fan-less Tuxedo InfinityBook, humungus - an hg server, how to configure a network dump in FreeBSD, and more.

Headlines vBSDcon Recap Allan and Benedict attended vBSDcon 2019, which ended last week. It was held again at the Hyatt Regency Reston and the main conference was organized by Dan Langille of BSDCan fame. The two day conference was preceded by a one day FreeBSD hackathon, where FreeBSD developers had the chance to work on patches and PRs. In the evening, a reception was held to welcome attendees and give them a chance to chat and get to know each other over food and drinks. The first day of the conference was opened with a keynote by Paul Vixie about DNS over HTTPS (DoH). He explained how we got to the current state and what challenges (technical and social) this entails. If you missed this talk and are dying to see it, it will also be presented at EuroBSDCon next week. John Baldwin followed up by giving an overview of the work on "In-Kernel TLS Framing and Encryption for FreeBSD" abstract (https://www.vbsdcon.com/schedule/2019-09-06.html#talk:132615) and the recent commit we covered in episode 313. Meanwhile, Brian Callahan was giving a separate session in another room about "Learning to (Open)BSD through its porting system: an attendee-driven educational session" where people had the chance to learn how to create ports for the BSDs. David Fullard's talk about "Transitioning from FreeNAS to FreeBSD" was his first talk at a BSD conference and described how he built his own home NAS setup trying to replicate FreeNAS' functionality on FreeBSD, and why he transitioned from using an appliance to using vanilla FreeBSD. Shawn Webb followed with his overview talk about the "State of the Hardened Union". Benedict's talk about "Replacing an Oracle Server with FreeBSD, OpenZFS, and PostgreSQL" was well received, as people are interested in how we liberated ourselves from the clutches of Oracle without compromising functionality. Entertaining and educational at the same time, Michael W. Lucas's talk about "Twenty Years in Jail: FreeBSD Jails, Then and Now" closed the first day. Lucas also had a table in the hallway with his various tech and non-tech books for sale. People formed small groups and went into town for dinner. Some returned later that night to do some work in the hacker lounge or talk amongst fellow BSD enthusiasts. Colin Percival was the keynote speaker for the second day and had an in-depth look at "23 years of software side channel attacks". Allan reprised his "ELI5: ZFS Caching" talk, explaining how the ZFS adaptive replacement cache (ARC) works and how it can be tuned for various workloads. "By the numbers: ZFS Performance Results from Six Operating Systems and Their Derivatives" by Michael Dexter followed, with his approach to benchmarking OpenZFS on various platforms. Conor Beh was also a new speaker at vBSDcon. His talk was about "FreeBSD at Work: Building Network and Storage Infrastructure with pfSense and FreeNAS". Two OpenBSD talks closed the talk session: Kurt Mosiejczuk with "Care and Feeding of OpenBSD Porters" and Aaron Poffenberger with "Road Warrior Disaster Recovery: Secure, Synchronized, and Backed-up". A dinner and reception was enjoyed by the attendees and gave more time to discuss the talks given and other things until late at night. We want to thank the vBSDcon organizers and especially Dan Langille for running such a great conference.
We are grateful to Verisign as the main sponsor and The FreeBSD Foundation for sponsoring the tote bags. Thanks to all the speakers and attendees! humungus - an hg server (https://humungus.tedunangst.com/r/humungus) Features View changes, files, changesets, etc. Some syntax highlighting. Read only. Serves multiple repositories. Allows cloning via the obvious URL. Supports go get. Serves files for downloads. Online documentation via mandoc. Terminal based admin interface. News Roundup OpenBSD on fan-less Tuxedo InfinityBook 14″ v2. (https://hazardous.org/archive/blog/openbsd/2019/09/02/OpenBSD-on-Infinitybook14) The InfinityBook 14” v2 is a fanless 14” notebook. It is an excellent choice for running OpenBSD - but order it with the supported wireless card (see below.). I’ve set it up in a dual-boot configuration so that I can switch between Linux and OpenBSD - mainly to spot differences in the drivers. TUXEDO allows a variety of configurations through their webshop. The dual boot setup with grub2 and EFI boot will be covered in a separate blogpost. My tests were done with OpenBSD-current - which is as of writing flagged as 6.6-beta. See Article for breakdown of CPU, Wireless, Video, Webcam, Audio, ACPI, Battery, Touchpad, and MicroSD Card Reader Unix at 50: How the OS that powered smartphones started from failure (https://arstechnica.com/gadgets/2019/08/unix-at-50-it-starts-with-a-mainframe-a-gator-and-three-dedicated-researchers/) Maybe its pervasiveness has long obscured its origins. But Unix, the operating system that in one derivative or another powers nearly all smartphones sold worldwide, was born 50 years ago from the failure of an ambitious project that involved titans like Bell Labs, GE, and MIT. Largely the brainchild of a few programmers at Bell Labs, the unlikely story of Unix begins with a meeting on the top floor of an otherwise unremarkable annex at the sprawling Bell Labs complex in Murray Hill, New Jersey. It was a bright, cold Monday, the last day of March 1969, and the computer sciences department was hosting distinguished guests: Bill Baker, a Bell Labs vice president, and Ed David, the director of research. Baker was about to pull the plug on Multics (a condensed form of MULTiplexed Information and Computing Service), a software project that the computer sciences department had been working on for four years. Multics was two years overdue, way over budget, and functional only in the loosest possible understanding of the term. Trying to put the best spin possible on what was clearly an abject failure, Baker gave a speech in which he claimed that Bell Labs had accomplished everything it was trying to accomplish in Multics and that they no longer needed to work on the project. As Berk Tague, a staffer present at the meeting, later told Princeton University, “Like Vietnam, he declared victory and got out of Multics.” Within the department, this announcement was hardly unexpected. The programmers were acutely aware of the various issues with both the scope of the project and the computer they had been asked to build it for. Still, it was something to work on, and as long as Bell Labs was working on Multics, they would also have a $7 million mainframe computer to play around with in their spare time. Dennis Ritchie, one of the programmers working on Multics, later said they all felt some stake in the success of the project, even though they knew the odds of that success were exceedingly remote. 
Cancellation of Multics meant the end of the only project that the programmers in the Computer science department had to work on—and it also meant the loss of the only computer in the Computer science department. After the GE 645 mainframe was taken apart and hauled off, the computer science department’s resources were reduced to little more than office supplies and a few terminals. Some of Allan’s favourite excerpts: In the early '60s, Bill Ninke, a researcher in acoustics, had demonstrated a rudimentary graphical user interface with a DEC PDP-7 minicomputer. Acoustics still had that computer, but they weren’t using it and had stuck it somewhere out of the way up on the sixth floor. And so Thompson, an indefatigable explorer of the labs’ nooks and crannies, finally found that PDP-7 shortly after Davis and Baker cancelled Multics. With the rest of the team’s help, Thompson bundled up the various pieces of the PDP-7—a machine about the size of a refrigerator, not counting the terminal—moved it into a closet assigned to the acoustics department, and got it up and running. One way or another, they convinced acoustics to provide space for the computer and also to pay for the not infrequent repairs to it out of that department’s budget. McIlroy’s programmers suddenly had a computer, kind of. So during the summer of 1969, Thompson, Ritchie, and Canaday hashed out the basics of a file manager that would run on the PDP-7. This was no simple task. Batch computing—running programs one after the other—rarely required that a computer be able to permanently store information, and many mainframes did not have any permanent storage device (whether a tape or a hard disk) attached to them. But the time-sharing environment that these programmers had fallen in love with required attached storage. And with multiple users connected to the same computer at the same time, the file manager had to be written well enough to keep one user’s files from being written over another user’s. When a file was read, the output from that file had to be sent to the user that was opening it. It was a challenge that McIlroy’s team was willing to accept. They had seen the future of computing and wanted to explore it. They knew that Multics was a dead-end, but they had discovered the possibilities opened up by shared development, shared access, and real-time computing. Twenty years later, Ritchie characterized it for Princeton as such: “What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form.” Eventually when they had the file management system more or less fleshed out conceptually, it came time to actually write the code. The trio—all of whom had terrible handwriting—decided to use the Labs’ dictating service. One of them called up a lab extension and dictated the entire code base into a tape recorder. And thus, some unidentified clerical worker or workers soon had the unenviable task of trying to convert that into a typewritten document. Of course, it was done imperfectly. Among various errors, “inode” came back as “eye node,” but the output was still viewed as a decided improvement over their assorted scribbles. In August 1969, Thompson’s wife and son went on a three-week vacation to see her family out in Berkeley, and Thompson decided to spend that time writing an assembler, a file editor, and a kernel to manage the PDP-7 processor. This would turn the group’s file manager into a full-fledged operating system. 
He generously allocated himself one week for each task. Thompson finished his tasks more or less on schedule. And by September, the computer science department at Bell Labs had an operating system running on a PDP-7—and it wasn't Multics. By the summer of 1970, the team had attached a tape drive to the PDP-7, and their blossoming OS also had a growing selection of tools for programmers (several of which persist down to this day). But despite the successes, Thompson, Canaday, and Ritchie were still being rebuffed by labs management in their efforts to get a brand-new computer. It wasn't until late 1971 that the computer science department got a truly modern computer. The Unix team had developed several tools designed to automatically format text files for printing over the past year or so. They had done so to simplify the production of documentation for their pet project, but their tools had escaped and were being used by several researchers elsewhere on the top floor. At the same time, the legal department was prepared to spend a fortune on a mainframe program called "AstroText." Catching wind of this, the Unix crew realized that they could, with only a little effort, upgrade the tools they had written for their own use into something that the legal department could use to prepare patent applications. The computer science department pitched lab management on the purchase of a DEC PDP-11 for document production purposes, and Max Mathews offered to pay for the machine out of the acoustics department budget. Finally, management gave in and purchased a computer for the Unix team to play with. Eventually, word leaked out about this operating system, and businesses and institutions with PDP-11s began contacting Bell Labs about their new operating system. The Labs made it available for free—requesting only the cost of postage and media from anyone who wanted a copy. The rest has quite literally made tech history. See the link for the rest of the article

How to configure a network dump in FreeBSD? (https://www.oshogbo.vexillium.org/blog/68/) A network dump can be very useful for collecting kernel crash dumps from embedded machines and from machines with a larger amount of RAM than the available swap partition size. Besides net dumps, we can also try to compress the core dump; however, often there may still not be enough swap to hold the whole core dump. In such a situation, using a network dump is a convenient and reliable way of collecting a kernel dump. So, first, let's talk a little bit about history. The first implementation of network dumps appeared around 2000 for FreeBSD 4.x as a kernel module. The code was implemented in 2010 with the intention of being part of FreeBSD 9.0; however, that code never landed in FreeBSD. Finally, in 2018, with commit r333283 by Mark Johnston, the netdump client code landed in FreeBSD. Subsequently, many other commits added support for different drivers (for example r333289). The first official release of FreeBSD which supports netdump is FreeBSD 12.0. Now, let's get back to the main topic: how to configure the network dump? Two machines are needed. One machine is used to collect the core dump; let's call it the server. We will use the second one to send us the core dump - the client.
See the link for the rest of the article
Beastie Bits
Sudo Mastery 2nd edition is not out (https://mwl.io/archives/4530)
Empirical Notes on the Interaction Between Continuous Kernel Fuzzing and Development (http://users.utu.fi/kakrind/publications/19/vulnfuzz_camera.pdf)
soso (https://github.com/ozkl/soso)
GregKH - OpenBSD was right (https://youtu.be/gUqcMs0svNU?t=254)
Game of Trees (https://gameoftrees.org/faq.html)
Feedback/Questions
BostJan - Another Question (http://dpaste.com/1ZPCCQY#wrap)
Tom - PF (http://dpaste.com/3ZSCB8N#wrap)
JohnnyK - Changing VT without keys (http://dpaste.com/3QZQ7Q5#wrap)
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)

The History of Computing
The Advent Of The Cloud

The History of Computing

Play Episode Listen Later Sep 5, 2019 14:55


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to look at the emergence of the cloud. As with everything evil, the origin of the cloud began with McCarthyism. From 1950 to 1954 Joe McCarthy waged a war against communism. Wait, wrong McCarthyism. Crap. After Joe McCarthy was condemned and run out of Washington, **John** McCarthy made the world a better place in 1955 with a somewhat communistic approach to computing. The 1950s were the peak of the military industrial complex. The SAGE air defense system needed to process data coming in from radars and perform actions based on that data. This is when McCarthy stepped in. John, not Joe. He proposed things like allocating memory automatically between programs, quote "Programming techniques can be encouraged which make destruction of other programs unlikely" and modifying FORTRAN to trap programs into specified areas of the storage. While a person was loading cards or debugging code, the computer could be doing other things. To use his words: "The only way quick response can be provided at a bearable cost is by time-sharing. That is, the computer must attend to other customers while one customer is reacting to some output." He posited that this could go from a three-hour to day-and-a-half turnaround to seconds. Remember, back then these things were huge and expensive. So people worked shifts and ran them continuously. McCarthy had been at MIT, and Professor Fernando Corbato from there actually built it between 1961 and 1963. But at about the same time, Professor Jack Dennis from MIT started doing about the same thing with a PDP-1 from DEC - he's probably one of the most influential people that many of the people I talk to have never heard of. He called this APEX and hooked up multiple terminals on TX-2. Remember John McCarthy? He and some students actually did the same thing in 1962 after moving on to become a professor at Stanford. 1965 saw Alan Kotok sell a similar solution for the PDP-6, and then, as the 60s rolled on and people in the Bay Area got really creative and free lovey, Corbato, Jack Dennis of MIT, a team from GE, and another from Bell Labs started to work on Multics, or Multiplexed Information and Computing Service for short, for the GE-645 mainframe. Bell Labs pulled out and Multics was finished by MIT and GE, who then sold their computer business to Honeywell so they wouldn't be out there competing with some of their customers. Honeywell sold Multics until 1985 and it included symmetric multiprocessing, paging, a supervisor program, command programs, and a lot of the things we now take for granted in Linux, Unix, and macOS command lines. But we're not done with the 60s yet. ARPAnet gave us a standardized communications platform, and distributed computing started in the 60s and then became a branch of computer science later in the late 1970s. This is really a software system that has components stored on different networked computers. Oh, and Telnet came at the tail end of 1969 in RFC 15, allowing us to remotely connect to those teletypes. People wanted time-sharing systems, which led to Project Genie at Berkeley, TOPS-10 for the PDP-10, and IBM's failed TSS/360 for the System/360. To close out the 60s, Ken Thompson, Dennis Ritchie, Doug McIlroy, Mike Lesk, Joe Ossanna, and of course Brian Kernighan at Bell Labs hid a project to throw out the fluff from Multics and build a simpler system. This became Unix.
Unix was originally developed in assembly, but Ritchie would write C in 72 and the team would eventually refactor Unix in C. Pretty sure management wasn't at all pissed when they found out. Pretty sure the Uniplexed Information and Computing Services, or eunuchs for short, wasn't punny enough for the Multics team to notice. BSD would come shortly thereafter. Over the coming years you could create multiple users and design permissions in a way that users couldn't step on each other's toes (or more specifically delete each other's files). IBM did something interesting in 1972 as well. They invented the Virtual Machine, which allowed them to run an operating system inside an operating system. At this point, time-sharing options were becoming commonplace in mainframes. Enter Moore's Law. Computers got cheaper and smaller. Altair and hobbyists became a thing. Bill Joy released BSD at Berkeley in 77, and it later made its way onto Sun workstations. Computers kept getting smaller. CP/M shows up on early microcomputers at about the same time, up until 1983. Apple arrives on the scene. Microsoft DOS appears in 1981. And in 1983, with all this software you have to pay for really starting to harsh his calm, Richard Stallman famously set out to make software free. Maybe this was in response to Gates' 1976 Open Letter to Hobbyists asking pc hobbyists to actually pay for software. Maybe they forgot they wrote most of Microsoft BASIC on DARPA gear. Given that computers were so cheap for a bit, we forgot about multi-user operating systems for a while. By 1991, Linus Torvalds, who also believed in free software, by then known as open source, developed a Unix-like operating system he called Linux. Computers continued to get cheaper and smaller. Now you could have them on multiple desks in an office. Companies like Novell brought us utility computers we now refer to as servers. You had one computer to just host all the files so users could edit them. CERN gave us the first web server in 1990. The University of Minnesota gave us Gopher in 1991. NTP 3 came in 1992. The 90s also saw the rise of virtual private networks and client-server networks. You might load a Delphi-based app on every computer in your office and connect that fat client with a shared database on a server to, for example, have a shared system to enter accounting information into, or access customer information to do sales activities and report on them. Napster had mainstreamed distributed file sharing. Those same techniques were being used in clusters of servers that were all controlled by a central IT administration team. Remember those virtual machines IBM gave us: you could now cluster and virtualize workloads and have applications that were served from a large number of distributed computing systems. But as workloads grew, the fault tolerance and performance necessary to support them became more and more expensive. By the mid-2000s it was becoming more acceptable to move to a web-client architecture, which meant large companies wouldn't have to bundle up software and automate the delivery of that software and could instead use an intranet to direct users to a series of web pages that allowed them to perform business tasks. Salesforce was started in 1999. They are the poster child for software as a service, and founder/CEO Marc Benioff coined the term platform as a service, allowing customers to build their own applications using the Salesforce development environment.
But it wasn't until we started breaking web applications up and developed methods to authenticate and authorize parts of them to one another, using technologies like SAML (introduced in 2002) and OAuth (2006), that we were able to move into a more micro-service oriented paradigm for programming. Amazon and Google had been experiencing massive growth, and in 2006 Amazon created Amazon Web Services and offered virtual machines on demand to customers, using a service called Elastic Compute Cloud. Google launched G Suite in 2006, providing cloud-based mail, calendar, contacts, documents, and spreadsheets. Google then offered a cloud offering to help developers with apps in 2008 with Google App Engine. In both cases, the companies had invested heavily in developing infrastructure to support their own workloads, and renting some of that out to customers just… made sense. Microsoft, seeing the emergence of Google as not just a search engine but a formidable opponent on multiple fronts, also joined the Infrastructure as a Service party in 2008, offering virtual machines for pennies per minute of compute time. Google, Microsoft, and Amazon still account for a large percentage of cloud services offered to software developers. Over the past 10 years the technologies have evolved, mostly just by incrementing a number, like OAuth 2.0 or HTML 5. Web applications have gotten broken up into smaller and smaller parts due to mythical programmer months, meaning you need smaller teams who have contracts with other teams that their service, or micro-service, can perform specific tasks. Amazon, Google, and Microsoft see these services and build more workload-specific services, like database as a service, putting a REST front-end on a database, or data lakes as a service. Standards like OAuth even allow vendors to provide Identity as a service, linking up all the things. The cloud, as we've come to call hosting services, has been maturing for 55 years, from shared compute time on mainframes to shared file storage space on a server to very small shared services like payment processing using Stripe. Consumers love paying a small monthly fee for access to an online portal or app rather than having to deploy large amounts of capital to bring in an old-school JDS Uniphase style tool to automate tasks in a company. Software developers love importing an SDK or calling a service to get a process for free, allowing them to go to market much faster and look like magicians in the process. And we don't have teams at startups running around with fire extinguishers to keep gear humming along. This reduces the barrier to building new software and apps and democratizes software development. App stores and search engines then make it easier than ever to put those web apps and apps in front of people to make money. In 1959, John McCarthy had said "The cooperation of IBM is very important but it should be to their advantage to develop this new way of using a computer." Like many new philosophies, it takes time to set in and evolve. And it takes a combination of advances to make something so truly disruptive possible. The time-sharing philosophy gave us Unix and Linux, which today are the operating systems running on a lot of these cloud servers. But we don't know or care about those because the web provides a layer on top of them that obfuscates the workload, much as the operating system obfuscated the workload of the components of the system.
Today those clouds obfuscate various layers of the stack, so you can enter at any part of the stack you want, whether it's a virtual computer, a service, or just a web app to consume. And this has led to an explosion of diverse and innovative ideas. Apple famously said "there's an app for that", but without the cloud there certainly wouldn't be. And without you, my dear listeners, there wouldn't be a podcast. So thank you so very much for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!

TechStuff
Techstuff Classic: Spotlight on Dennis Ritchie

TechStuff

Play Episode Listen Later Jun 21, 2019 37:39


Who was Dennis Ritchie? Why did Ritchie create the C programming language? What is the story of Ritchie’s involvement with UNIX? In this episode, Jonathan and Chris delve into the life and work of Dennis Ritchie. Learn more about your ad-choices at https://news.iheart.com/podcast-advertisers

Minister's Toolbox
EP 129: Not-So-Instant Replay: Dennis Ritchie

Minister's Toolbox

Play Episode Listen Later Jun 19, 2019 24:27


How does someone go from dabbling in drug abuse and various eastern philosophies to becoming the Sr. Pastor of a thriving church community? Listen to Dennis describe his journey. You can reach him @celebratethejourney.org

BSD Now
295: Fun with funlinkat()

BSD Now

Play Episode Listen Later Apr 25, 2019 61:02


Introducing funlinkat(), an OpenBSD Router with AT&T U-Verse, using NetBSD on a Raspberry Pi, ZFS encryption is still under development, Rump kernel servers and clients tutorial, Snort on OpenBSD 6.4, and more.

Headlines Introducing funlinkat It turns out that every file you have ever deleted on a Unix machine was probably susceptible to a race condition. One of the first syscalls which was created in Unix-like systems is unlink. In FreeBSD this syscall is number 10 (source), and in Linux the number is dependent on the architecture, but for most of them it is also the tenth syscall (source). This indicates that this is one of the primary syscalls. The unlink syscall is very simple: we provide one single path to the file that we want to remove. The "removing file" process itself is very interesting, so let's spend a moment to understand it. First, by removing the file we are removing a link from the directory to it. In Unix-like systems we can have many links to a single file (hard links). When we remove all links to the file, the file system will mark the blocks used by the file as free (a different file system will behave differently, but let's not jump into a second digression). This is why the process is called unlinking and not "removing file". When we unlink the file, two or three things will happen:
We will remove an entry in the directory with the filename.
We will decrease the file reference count (in the inode).
If the link count goes to zero, the file will be removed from the disk (again, this doesn't mean that the blocks on the disk will be filled with zeros, though this may happen depending on the file system and configuration; in most cases it means that the file system will mark those blocks as free and use them to write new data later).
This mostly means that "removing a file" from a directory is an operation on the directory and not on the file (inode) itself. Another interesting subject is what happens if our system performs only the first or second step from the list. This depends on the file system, and this is also something we will leave for another time. The problem with the unlink and even unlinkat functions is that we don't have any guarantee of which file we really are unlinking. When you delete a file using its name, you have no guarantee that someone has not already deleted the file, or renamed it, and created a new file with the name you are about to delete. We have some stats about the file that we want to unlink. We performed some tests. At the same time, another process removed our file and recreated it. When we finally try to remove our file, it is no longer the same file. It's a classic race condition. Many programs will perform checks before trying to remove a file, to make sure it is the correct file, that you have the correct permissions, etc. However, this exposes the 'Time-of-Check / Time-of-Use' class of bugs. I check if the file I am about to remove is the one I created yesterday, it is, so I call unlink() on it. However, between when I checked the date on the file and when I call unlink, I, or some program I am running, might have updated the file. Or a malicious user might have put some other file at that name, so I would be the one who deleted it. In Unix-like operating systems we can get a handle for our file, called a file descriptor. File descriptors guarantee us that all the operations that we will be performing on it are done on the same file (inode).
Even if someone were to unlink a number of directory entries, the operating system will not free the structures behind the file descriptor, and we can detect that the file was removed by someone and recreated (or just unlinked). So, for example, we have alternative functions such as fstat, which allows us to get the file status of a given descriptor. We already know that a file may have many links on the disk which point to a single inode. What happens when we open the file? Simplifying: the kernel creates an in-memory representation of the inode (the inode itself is stored on the disk) called a vnode. This single representation is used by all processes to refer to the inode on the disk. If a process opens the same file (inode) using different names (for example through hard links), all those files will be linked to the single vnode. That means that the pathname is not stored in the kernel. This is basically the reason why we don't have a funlink function that, instead of the path, takes just the file descriptor of the file: if we performed such an fdunlink syscall, the kernel wouldn't know which directory entry you would like to remove. Another problem is more architectural: as we discussed earlier, unlinking is really an operation on the directory, not on the file (inode) itself, so using funlink(fd) might create some confusion, because we are not removing the inode corresponding to the file descriptor, we are performing an action on a directory entry which points to the file. After some discussion we decided that the only sensible option for FreeBSD would be to create a funlinkat() function. This syscall only performs an additional sanity check that the directory entry we are removing corresponds to the inode referred to by the file descriptor. int funlinkat(int dfd, const char *path, int fd, int flags); The API above will check that the path, opened relative to the dfd, points to the same vnode. Thanks to that we removed the race condition, because all those sanity checks are performed in kernel mode while the file system is locked, so there is no possibility to change it. The fd parameter may be set to the FD_NONE value, which means that the sanity check should not be performed and funlinkat will behave just like unlinkat. As you may notice, I often refer to the unlink syscall, but in the end the API looks like the unlinkat syscall. It is true that the unlink syscall is very old and kind of deprecated. That said, I referred to unlink because it's just simpler. These days unlink simply uses the same code as unlinkat. (A small C sketch of the resulting check-then-unlink pattern appears at the end of these show notes.)

Using an OpenBSD Router with AT&T U-Verse I upgraded to AT&T's U-verse Gigabit internet service in 2017 and it came with an Arris BGW-210 as the WiFi AP and router. The BGW-210 is not a terrible device, but I already had my own Airport Extreme APs wired throughout my house and an OpenBSD router configured with various things, so I had no use for this device. It's also a potentially insecure device that I can't upgrade or fully disable remote control over. Fully removing the BGW-210 is not possible as we'll see later, but it is possible to remove it from the routing path. This is how I did it with OpenBSD.

News Roundup How to use NetBSD on a Raspberry Pi Do you have an old Raspberry Pi lying around gathering dust, maybe after a recent Pi upgrade? Are you curious about BSD Unix?
If you answered "yes" to both of these questions, you'll be pleased to know that the first is the solution to the second, because you can run NetBSD, as far back as the very first release, on a Raspberry Pi. BSD is the Berkley Software Distribution of Unix. In fact, it's the only open source Unix with direct lineage back to the original source code written by Dennis Ritchie and Ken Thompson at Bell Labs. Other modern versions are either proprietary (such as AIX and Solaris) or clever re-implementations (such as Minix and GNU/Linux). If you're used to Linux, you'll feel mostly right at home with BSD, but there are plenty of new commands and conventions to discover. If you're still relatively new to open source, trying BSD is a good way to experience a traditional Unix. Admittedly, NetBSD isn't an operating system that's perfectly suited for the Pi. It's a minimal install compared to many Linux distributions designed specifically for the Pi, and not all components of recent Pi models are functional under NetBSD yet. However, it's arguably an ideal OS for the older Pi models, since it's lightweight and lovingly maintained. And if nothing else, it's a lot of fun for any die-hard Unix geek to experience another side of the POSIX world. ZFS Encryption is still under development (as of March 2019) One of the big upcoming features that a bunch of people are looking forward to in ZFS is natively encrypted filesystems. This is already in the main development tree of ZFS On Linux, will likely propagate to FreeBSD (since FreeBSD ZFS will be based on ZoL), and will make it to Illumos if the Illumos people want to pull it in. People are looking forward to native encryption so much, in fact, that some of them have started using it in ZFS On Linux already, using either the development tip or one of the 0.8.0 release candidate pre-releases (ZoL is up to 0.8.0-rc3 as of now). People either doing this or planning to do this show up on the ZoL mailing list every so often. CFT for FreeBSD + ZoL Tutorial On Rump Kernel Servers and Clients The rump anykernel architecture allows to run highly componentized kernel code configurations in userspace processes. Coupled with the rump sysproxy facility it is possible to run loosely distributed client-server "mini-operating systems". Since there is minimum configuration and the bootstrap time is measured in milliseconds, these environments are very cheap to set up, use, and tear down on-demand. This document acts as a tutorial on how to configure and use unmodified NetBSD kernel drivers as userspace services with utilities available from the NetBSD base system. As part of this, it presents various use cases. One uses the kernel cryptographic disk driver (cgd) to encrypt a partition. Another one demonstrates how to operate an FFS server for editing the contents of a file system even though your user account does not have privileges to use the host's mount() system call. Additionally, using a userspace TCP/IP server with an unmodified web browser is detailed. Installing Snort on OpenBSD 6.4 As you may recall from previous posts, I am running an OpenBSD server on an APU2 air-cooled 3 Intel NIC box as my router/firewall for my secure home network. Given that all of my Internet traffic flows through this box, I thought it would be a cool idea to run an Intrusion Detection System (IDS) on it. Snort is the big hog of the open source world so I took a peek in the packages directory on one of the mirrors and lo and behold we have the latest & greatest version of Snort available! 
Thanks devs!!! I did some quick Googling and didn’t find much “modern” howto help out there so, after some trial and error, I have it up and running. I thought I’d give back in a small way and share a quickie howto for other Googlers out there who are looking for guidance. Here’s hoping that my title is good enough “SEO” to get you here! Beastie Bits os108 AT&T Archives: The UNIX Operating System httpd(8): Adapt to industry wide current best security practices Quotes From A Book That Bashes Unix OpenBSD QA wiki Feedback/Questions Malcolm - Laptop Experience : Dell XPS 13 DJ - Feedback Alex - GhostBSD and Wifi : FIXED Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

BSD Now
Episode 273: A Thoughtful Episode | BSD Now 273

BSD Now

Play Episode Listen Later Nov 23, 2018 74:32


Thoughts on NetBSD 8.0, Monitoring love for a Gigabit OpenBSD firewall, cat’s source history, X.org root permission bug, thoughts on OpenBSD as a desktop, and NomadBSD review. ##Headlines Some thoughts on NetBSD 8.0 NetBSD is a highly portable operating system which can be run on dozens of different hardware architectures. The operating system’s clean and minimal design allows it to be run in all sorts of environments, ranging from embedded devices, to servers, to workstations. While the base operating system is minimal, NetBSD users have access to a large repository of binary packages and a ports tree which I will touch upon later. I last tried NetBSD 7.0 about three years ago and decided it was time to test drive the operating system again. In the past three years NetBSD has introduced a few new features, many of them security enhancements. For example, NetBSD now supports write exclusive-or execute (W^X) protection and address space layout randomization (ASLR) to protect programs against common attacks. NetBSD 8.0 also includes USB3 support and the ability to work with ZFS storage volumes. Early impressions Since I had set up NetBSD with a Full install and enabled xdm during the setup process, the operating system booted to a graphical login screen. From here we can sign into our account. The login screen does not provide options to shut down or restart the computer. Logging into our account brings up the twm window manager and provides a virtual terminal, courtesy of xterm. There is a panel that provides a method for logging out of the window manager. The twm environment is sparse, fast and devoid of distractions. Software management NetBSD ships with a fairly standard collection of command line tools and manual pages, but otherwise it is a fairly minimal platform. If we want to run network services, have access to a web browser, or use a word processor we are going to need to install more software. There are two main approaches to installing new packages. The first, and easier, approach is to use the pkgin package manager. The pkgin utility works much the same way APT or DNF work in the Linux world, or as pkg works on FreeBSD. We can search for software by name, install or remove items. I found pkgin worked well, though its output can be terse. My only complaint with pkgin is that it does not handle “close enough” package names. For example, if I tried to run “pkgin install vlc” or “pkgin install firefox” I would quickly be told these items did not exist. But a more forgiving package manager will realize items like vlc2 or firefox45 are available and offer to install those. The pkgin tool installs new programs in the /usr/pkg/bin directory. Depending on your configuration and shell, this location may not be in your user’s path, and it will be helpful to adjust your PATH variable accordingly. The other common approach to acquiring new software is to use the pkgsrc framework. I have talked about using pkgsrc before and I will skip the details. Basically, we can download a collection of recipes for building popular open source software and run a command to download and install these items from their source code. Using pkgsrc basically gives us the same software as using pkgin would, but with some added flexibility on the options we use. Once new software has been installed, it may need to be enabled and activated, particularly if it uses (or is) a background service. New items can be enabled in the /etc/rc.conf file and started or stopped using the service command.
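As a rough sketch of that workflow (nginx is only an example package, and the rc.d script path is the usual pkgsrc location rather than something taken from the review):

```
# search for and install a package by name
pkgin search nginx
pkgin install nginx

# pkgin puts binaries under /usr/pkg, which may not be on the default PATH
export PATH="$PATH:/usr/pkg/bin:/usr/pkg/sbin"

# pkgsrc packages ship their rc.d scripts as examples; copy the one you need,
# enable it in /etc/rc.conf, then control it with the service command
cp /usr/pkg/share/examples/rc.d/nginx /etc/rc.d/
echo 'nginx=YES' >> /etc/rc.conf
service nginx start
```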
This works about the same as the service command on FreeBSD and most non-systemd Linux distributions. Hardware I found that, when logged into the twm environment, NetBSD used about 130MB of RAM. This included kernel memory and all active memory. A fresh, Full install used up 1.5GB of disk space. I generally found NetBSD ran well in both VirtualBox and on my desktop computer. The system was quick and stable. I did have trouble getting a higher screen resolution in both environments. NetBSD does not offer VirtualBox add-on modules. There are NetBSD patches for VirtualBox out there, but there is some manual work involved in getting them working. When running on my desktop computer I think the resolution issue was one of finding and dealing with the correct video driver. Screen resolution aside, NetBSD performed well and detected all my hardware. Personal projects Since NetBSD provides users with a small, core operating system without many utilities if we want to use NetBSD for something we need to have a project in mind. I had four mini projects in mind I wanted to try this week: install a desktop environment, enable file sharing for computers on the local network, test multimedia (video, audio and YouTube capabilities), and set up a ZFS volume for storage. I began with the desktop. Specifically, I followed the same tutorial I used three years ago to try to set up the Xfce desktop. While Xfce and its supporting services installed, I was unable to get a working desktop out of the experience. I could get the Xfce window manager working, but not the entire session. This tutorial worked beautifully with NetBSD 7.0, but not with version 8.0. Undeterred, I switched gears and installed Fluxbox instead. This gave me a slightly more powerful graphical environment than what I had before with twm while maintaining performance. Fluxbox ran without any problems, though its application menu was automatically populated with many programs which were not actually installed. Next, I tried installing a few multimedia applications to play audio and video files. Here I ran into a couple of interesting problems. I found the music players I installed would play audio files, but the audio was quite slow. It always sounded like a cassette tape dragging. When I tried to play a video, the entire graphical session would crash, taking me back to the login screen. When I installed Firefox, I found I could play YouTube videos, and the video played smoothly, but again the audio was unusually slow. I set up two methods of sharing files on the local network: OpenSSH and FTP. NetBSD basically gives us OpenSSH for free at install time and I added an FTP server through the pkgin package manager which worked beautifully with its default configuration. I experimented with ZFS support a little, just enough to confirm I could create and access ZFS volumes. ZFS seems to work on NetBSD just as well, and with the same basic features, as it does on FreeBSD and mainstream Linux distributions. I think this is a good feature for the portable operating system to have since it means we can stick NetBSD on nearly any networked computer and use it as a NAS. Conclusions NetBSD, like its close cousins (FreeBSD and OpenBSD) does not do a lot of hand holding or automation. It offers a foundation that will run on most CPUs and we can choose to build on that foundation. I mention this because, on its own, NetBSD does not do much. If we want to get something out of it, we need to be willing to build on its foundation - we need a project. 
This is important to keep in mind as I think going into NetBSD and thinking, “Oh I’ll just explore around and expand on this as I go,” will likely lead to disappointment. I recommend figuring out what you want to do before installing NetBSD and making sure the required tools are available in the operating system’s repositories. Some of the projects I embarked on this week (using ZFS and setting up file sharing) worked well. Others, like getting multimedia support and a full-featured desktop, did not. Given more time, I’m sure I could find a suitable desktop to install (along with the required documentation to get it and its services running), or customize one based on one of the available window managers. However, any full featured desktop is going to require some manual work. Media support was not great. The right players and codecs were there, but I was not able to get audio to play smoothly. My main complaint with NetBSD relates to my struggle to get some features working to my satisfaction: the documentation is scattered. There are four different sections of the project’s website for documentation (FAQs, The Guide, manual pages and the wiki). Whatever we are looking for is likely to be in one of those, but which one? Or, just as likely, the tutorial we want is not there, but is on a forum or blog somewhere. I found that the documentation provided was often thin, more of a quick reference to remind people how something works rather than a full explanation. As an example, I found a couple of documents relating to setting up a firewall. One dealt with networking NetBSD on a LAN, another explored IPv6 support, but neither gave an overview on syntax or a basic guide to blocking all but one or two ports. It seemed like that information should already be known, or picked up elsewhere. Newcomers are likely to be a bit confused by software management guides for the same reason. Some pages refer to using a tool called pkg_add, others use pkgsrc and its make utility, others mention pkgin. Ultimately, these tools each give approximately the same result, but work differently and yet are mentioned almost interchangeably. I have used NetBSD before a few times and could stumble through these guides, but new users are likely to come away confused. One quirk of NetBSD, which may be a security feature or an inconvenience, depending on one’s point of view, is super user programs are not included in regular users’ paths. This means we need to change our path if we want to be able to run programs typically used by root. For example, shutdown and mount are not in regular users’ paths by default. This made checking some things tricky for me. Ultimately though, NetBSD is not famous for its convenience or features so much as its flexibility. The operating system will run on virtually any processor and should work almost identically across multiple platforms. That gives NetBSD users a good deal of consistency across a range of hardware and the chance to experiment with a member of the Unix family on hardware that might not be compatible with Linux or the other BSDs. ###Showing a Gigabit OpenBSD Firewall Some Monitoring Love I have a pretty long history of running my home servers or firewalls on “exotic” hardware. At first, it was Sun Microsystem hardware, then it moved to the excellent Soekris line, with some cool single board computers thrown in the mix. Recently I’ve been running OpenBSD Octeon on the Ubiquiti Edge Router Lite, an amazing little piece of kit at an amazing price point. Upgrade Time! 
This setup has served me for some time and I’ve been extremely happy with it. But, in the #firstworldproblems category, I recently upgraded the household to the amazing Gigabit fibre offering from Sonic. A great problem to have, but also too much of a problem for the little Edge Router Lite (ERL). The way the OpenBSD PF firewall works, it’s only able to process packets on a single core. Not a problem for the dual-core 500 MHz ERL when you’re pushing under ~200 Mbps, but more of a problem when you’re trying to push 1000 Mbps. I needed something that was faster on a per-core basis but still satisfied my usual firewall requirements. Loosely: small form factor, fan-less, multiple Intel Ethernet ports (good driver support), low power consumption, not your regular off-the-shelf kit, and relatively inexpensive. After evaluating a LOT of different options I settled on the Protectli Vault FW2B. With the specs required for the firewall (2 GB RAM and 8 GB drive) it comes in at a mere $239 USD! Installation of OpenBSD 6.4 was pretty straightforward; the only problem I had was that Etcher did not want to recognize the ‘.fs’ extension on the install image as a bootable image. I quickly fixed this with good old Unix dd(1) on the Mac. Everything else was incredibly smooth. After loading the same rulesets on my new install, the results were fantastic! Monitoring Now that the machine was up and running (and fast!), I wanted to know what it was doing. Over the years, I’ve always relied on the venerable pfstat software to give me an overview of my traffic, blocked packets, etc. It looks like this: As you can see it’s based on RRDtool, which was simply incredible in its time. Having worked on monitoring almost continuously for almost the past decade, I wanted to see if we could re-implement the same functionality using more modern tools, as RRDtool and pfstat definitely have their limitations. This might be an opportunity to learn some new things as well. I came across pf-graphite which seemed to be a great start! He had everything I needed and I added a few more stats from the detailed interface statistics and the ability for the code to exit for running from cron(8), which is a bit more OpenBSD style. I added code for sending to some SaaS metrics platforms but ultimately stuck with straight Graphite. One important thing to note was to use the Graphite pickle port (2004) instead of the default plaintext port for submission. Also you will need to set a loginterface in your ‘pf.conf’.
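To make those two notes concrete, here is a minimal sketch; the interface name, the collection script path, and the one-minute schedule are assumptions for illustration, not details from the original post:

```
# /etc/pf.conf: have pf keep per-interface counters for the interface you want to graph
set loginterface em0

# crontab(1) entry: run the collection script periodically; it gathers pf stats,
# pushes them to Graphite's pickle receiver (port 2004), and exits
# (/usr/local/bin/pf_stats_to_graphite is a hypothetical wrapper around pf-graphite)
* * * * * /usr/local/bin/pf_stats_to_graphite
```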
A bit of tweaking with Graphite and Grafana, and I had a pretty darn good recreation of my original PF stats dashboard! ###The Source History of Cat I once had a debate with members of my extended family about whether a computer science degree is a degree worth pursuing. I was in college at the time and trying to decide whether I should major in computer science. My aunt and a cousin of mine believed that I shouldn’t. They conceded that knowing how to program is of course a useful and lucrative thing, but they argued that the field of computer science advances so quickly that everything I learned would almost immediately be outdated. Better to pick up programming on the side and instead major in a field like economics or physics where the basic principles would be applicable throughout my lifetime. I knew that my aunt and cousin were wrong and decided to major in computer science. (Sorry, aunt and cousin!) It is easy to see why the average person might believe that a field like computer science, or a profession like software engineering, completely reinvents itself every few years. We had personal computers, then the web, then phones, then machine learning… technology is always changing, so surely all the underlying principles and techniques change too. Of course, the amazing thing is how little actually changes. Most people, I’m sure, would be stunned to know just how old some of the important software on their computer really is. I’m not talking about flashy application software, admittedly—my copy of Firefox, the program I probably use the most on my computer, is not even two weeks old. But, if you pull up the manual page for something like grep, you will see that it has not been updated since 2010 (at least on MacOS). And the original version of grep was written in 1974, which in the computing world was back when dinosaurs roamed Silicon Valley. People (and programs) still depend on grep every day. My aunt and cousin thought of computer technology as a series of increasingly elaborate sand castles supplanting one another after each high tide clears the beach. The reality, at least in many areas, is that we steadily accumulate programs that have solved problems. We might have to occasionally modify these programs to avoid software rot, but otherwise they can be left alone. grep is a simple program that solves a still-relevant problem, so it survives. Most application programming is done at a very high level, atop a pyramid of much older code solving much older problems. The ideas and concepts of 30 or 40 years ago, far from being obsolete today, have in many cases been embodied in software that you can still find installed on your laptop. I thought it would be interesting to take a look at one such old program and see how much it had changed since it was first written. cat is maybe the simplest of all the Unix utilities, so I’m going to use it as my example. Ken Thompson wrote the original implementation of cat in 1969. If I were to tell somebody that I have a program on my computer from 1969, would that be accurate? How much has cat really evolved over the decades? How old is the software on our computers? Thanks to repositories like this one, we can see exactly how cat has evolved since 1969. I’m going to focus on implementations of cat that are ancestors of the implementation I have on my Macbook. 
You will see, as we trace cat from the first versions of Unix down to the cat in MacOS today, that the program has been rewritten more times than you might expect—but it ultimately works more or less the same way it did fifty years ago. Research Unix Ken Thompson and Dennis Ritchie began writing Unix on a PDP 7. This was in 1969, before C, so all of the early Unix software was written in PDP 7 assembly. The exact flavor of assembly they used was unique to Unix, since Ken Thompson wrote his own assembler that added some features on top of the assembler provided by DEC, the PDP 7’s manufacturer. Thompson’s changes are all documented in the original Unix Programmer’s Manual under the entry for as, the assembler. The first implementation of cat is thus in PDP 7 assembly. I’ve added comments that try to explain what each instruction is doing, but the program is still difficult to follow unless you understand some of the extensions Thompson made while writing his assembler. There are two important ones. First, the ; character can be used to separate multiple statements on the same line. It appears that this was used most often to put system call arguments on the same line as the sys instruction. Second, Thompson added support for “temporary labels” using the digits 0 through 9. These are labels that can be reused throughout a program, thus being, according to the Unix Programmer’s Manual, “less taxing both on the imagination of the programmer and on the symbol space of the assembler.” From any given instruction, you can refer to the next or most recent temporary label n using nf and nb respectively. For example, if you have some code in a block labeled 1:, you can jump back to that block from further down by using the instruction jmp 1b. (But you cannot jump forward to that block from above without using jmp 1f instead.) The most interesting thing about this first version of cat is that it contains two names we should recognize. There is a block of instructions labeled getc and a block of instructions labeled putc, demonstrating that these names are older than the C standard library. The first version of cat actually contained implementations of both functions. The implementations buffered input so that reads and writes were not done a character at a time. The first version of cat did not last long. Ken Thompson and Dennis Ritchie were able to persuade Bell Labs to buy them a PDP 11 so that they could continue to expand and improve Unix. The PDP 11 had a different instruction set, so cat had to be rewritten. I’ve marked up this second version of cat with comments as well. It uses new assembler mnemonics for the new instruction set and takes advantage of the PDP 11’s various addressing modes. (If you are confused by the parentheses and dollar signs in the source code, those are used to indicate different addressing modes.) But it also leverages the ; character and temporary labels just like the first version of cat, meaning that these features must have been retained when as was adapted for the PDP 11. The second version of cat is significantly simpler than the first. It is also more “Unix-y” in that it doesn’t just expect a list of filename arguments—it will, when given no arguments, read from stdin, which is what cat still does today. You can also give this version of cat an argument of - to indicate that it should read from stdin. In 1973, in preparation for the release of the Fourth Edition of Unix, much of Unix was rewritten in C. 
But cat does not seem to have been rewritten in C until a while after that. The first C implementation of cat only shows up in the Seventh Edition of Unix. This implementation is really fun to look through because it is so simple. Of all the implementations to follow, this one most resembles the idealized cat used as a pedagogic demonstration in K&R C. The heart of the program is the classic two-liner: while ((c = getc(fi)) != EOF) putchar(c); There is of course quite a bit more code than that, but the extra code is mostly there to ensure that you aren’t reading and writing to the same file. The other interesting thing to note is that this implementation of cat only recognized one flag, -u. The -u flag could be used to avoid buffering input and output, which cat would otherwise do in blocks of 512 bytes. BSD After the Seventh Edition, Unix spawned all sorts of derivatives and offshoots. MacOS is built on top of Darwin, which in turn is derived from the Berkeley Software Distribution (BSD), so BSD is the Unix offshoot we are most interested in. BSD was originally just a collection of useful programs and add-ons for Unix, but it eventually became a complete operating system. BSD seems to have relied on the original cat implementation up until the fourth BSD release, known as 4BSD, when support was added for a whole slew of new flags. The 4BSD implementation of cat is clearly derived from the original implementation, though it adds a new function to implement the behavior triggered by the new flags. The naming conventions already used in the file were adhered to—the fflg variable, used to mark whether input was being read from stdin or a file, was joined by nflg, bflg, vflg, sflg, eflg, and tflg, all there to record whether or not each new flag was supplied in the invocation of the program. These were the last command-line flags added to cat; the man page for cat today lists these flags and no others, at least on Mac OS. 4BSD was released in 1980, so this set of flags is 38 years old. cat would be entirely rewritten a final time for BSD Net/2, which was, among other things, an attempt to avoid licensing issues by replacing all AT&T Unix-derived code with new code. BSD Net/2 was released in 1991. This final rewrite of cat was done by Kevin Fall, who graduated from Berkeley in 1988 and spent the next year working as a staff member at the Computer Systems Research Group (CSRG). Fall told me that a list of Unix utilities still implemented using AT&T code was put up on a wall at CSRG and staff were told to pick the utilities they wanted to reimplement. Fall picked cat and mknod. The cat implementation bundled with MacOS today is built from a source file that still bears his name at the very top. His version of cat, even though it is a relatively trivial program, is today used by millions. Fall’s original implementation of cat is much longer than anything we have seen so far. Other than support for a -? help flag, it adds nothing in the way of new functionality. Conceptually, it is very similar to the 4BSD implementation. It is only longer because Fall separates the implementation into a “raw” mode and a “cooked” mode. The “raw” mode is cat classic; it prints a file character for character. The “cooked” mode is cat with all the 4BSD command-line options. The distinction makes sense but it also pads out the implementation so that it seems more complex at first glance than it actually is. There is also a fancy error handling function at the end of the file that further adds to its length. 
MacOS The very first release of Mac OS X thus includes an implementation of cat pulled from the NetBSD project. So the first Mac OS X implementation of cat is Kevin Fall’s cat. The only thing that had changed over the intervening decade was that Fall’s error-handling function err() was removed and the err() function made available by err.h was used in its place. err.h is a BSD extension to the C standard library. The NetBSD implementation of cat was later swapped out for FreeBSD’s implementation of cat. According to Wikipedia, Apple began using FreeBSD instead of NetBSD in Mac OS X 10.3 (Panther). But the Mac OS X implementation of cat, according to Apple’s own open source releases, was not replaced until Mac OS X 10.5 (Leopard) was released in 2007. The FreeBSD implementation that Apple swapped in for the Leopard release is the same implementation on Apple computers today. As of 2018, the implementation has not been updated or changed at all since 2007. So the Mac OS cat is old. As it happens, it is actually two years older than its 2007 appearance in MacOS X would suggest. This 2005 change, which is visible in FreeBSD’s Github mirror, was the last change made to FreeBSD’s cat before Apple pulled it into Mac OS X. So the Mac OS X cat implementation, which has not been kept in sync with FreeBSD’s cat implementation, is officially 13 years old. There’s a larger debate to be had about how much software can change before it really counts as the same software; in this case, the source file has not changed at all since 2005. The cat implementation used by Mac OS today is not that different from the implementation that Fall wrote for the 1991 BSD Net/2 release. The biggest difference is that a whole new function was added to provide Unix domain socket support. At some point, a FreeBSD developer also seems to have decided that Fall’s rawargs() function and cookargs() should be combined into a single function called scanfiles(). Otherwise, the heart of the program is still Fall’s code. I asked Fall how he felt about having written the cat implementation now used by millions of Apple users, either directly or indirectly through some program that relies on cat being present. Fall, who is now a consultant and a co-author of the most recent editions of TCP/IP Illustrated, says that he is surprised when people get such a thrill out of learning about his work on cat. Fall has had a long career in computing and has worked on many high-profile projects, but it seems that many people still get most excited about the six months of work he put into rewriting cat in 1989. The Hundred-Year-Old Program In the grand scheme of things, computers are not an old invention. We’re used to hundred-year-old photographs or even hundred-year-old camera footage. But computer programs are in a different category—they’re high-tech and new. At least, they are now. As the computing industry matures, will we someday find ourselves using programs that approach the hundred-year-old mark? Computer hardware will presumably change enough that we won’t be able to take an executable compiled today and run it on hardware a century from now. Perhaps advances in programming language design will also mean that nobody will understand C in the future and cat will have long since been rewritten in another language. (Though C has already been around for fifty years, and it doesn’t look like it is about to be replaced any time soon.) But barring all that, why not just keep using the cat we have forever? 
I think the history of cat shows that some ideas in computer science are in fact very durable. Indeed, with cat, both the idea and the program itself are old. It may not be accurate to say that the cat on my computer is from 1969. But I could make a case for saying that the cat on my computer is from 1989, when Fall wrote his implementation of cat. Lots of other software is just as ancient. So maybe we shouldn’t think of computer science and software development primarily as fields that disrupt the status quo and invent new things. Our computer systems are built out of historical artifacts. At some point, we may all spend more time trying to understand and maintain those historical artifacts than we spend writing new code. ##News Roundup Trivial Bug in X.Org Gives Root Permission on Linux and BSD Systems A vulnerability that is trivial to exploit allows privilege escalation to root level on Linux and BSD distributions using X.Org server, the open source implementation of the X Window System that offers the graphical environment. The flaw is now identified as CVE-2018-14665 (credited to security researcher Narendra Shinde). It has been present in xorg-server for two years, since version 1.19.0, and is exploitable by a limited user as long as the X server runs with elevated permissions. Privilege escalation and arbitrary file overwrite An advisory on Thursday describes the problem as an “incorrect command-line parameter validation” that also allows an attacker to overwrite arbitrary files. Privilege escalation can be accomplished via the -modulepath argument by setting an insecure path to modules loaded by the X.org server. Arbitrary file overwrite is possible through the -logfile argument, because of improper verification when parsing the option. Bug could have been avoided in OpenBSD 6.4 OpenBSD, the free and open-source operating system with a strong focus on security, uses xorg. On October 18, the project released version 6.4 of the OS, affected by CVE-2018-14665. This could have been avoided, though. Theo de Raadt, founder and leader of the OpenBSD project, says that the X maintainer had known about the problem since at least October 11. For some reason, the OpenBSD developers received the message one hour before the public announcement this Thursday, a week after their new OS release. “As yet we don’t have answers about why our X maintainer (on the X security team) and his team provided information to other projects (some who don’t even ship with this new X server) but chose to not give us a heads-up which could have saved all the new 6.4 users a lot of grief,” Raadt says. Had OpenBSD developers known about the bug before the release, they could have taken steps to mitigate the problem or delay the launch for a week or two. To remedy the problem, the OpenBSD project provides a source code patch, which requires compiling and rebuilding the X server. As a temporary solution, users can remove the setuid bit from the Xorg binary by running the following command: chmod u-s /usr/X11R6/bin/Xorg Trivial exploitation CVE-2018-14665 does not help compromise systems, but it is useful in the later stages of an attack. Leveraging it after gaining access to a vulnerable machine is fairly easy. Matthew Hickey, co-founder and head of the Hacker House security outfit, created and published an exploit, saying that it can be triggered from a remote SSH session. Three hours after the public announcement of the security gap, Daemon Security CEO Michael Shirk replied with one line that overwrote shadow files on the system.
Hickey did one better and fit the entire local privilege escalation exploit in one line. Apart from OpenBSD, other operating systems affected by the bug include Debian and Ubuntu, Fedora and its downstream distro Red Hat Enterprise Linux along with its community-supported counterpart CentOS. ###OpenBSD on the Desktop: some thoughts I’ve been using OpenBSD on my ThinkPad X230 for some weeks now, and the experience has been peculiar in some ways. The OS itself in my opinion is not ready for widespread desktop usage, and the development team is not trying to force it on anybody who wants a Windows or macOS alternative. You need to understand a little bit of how *NIX systems work, because you’ll use the CLI more than the UI. That’s not necessarily bad, and I’m sure I learned a trick or two that could translate easily to Linux or macOS. Their development process is purely based on developers that love to contribute and hack around, just because it’s fun. Even the mailing list is a cool place to hang out! Code correctness and security are a must; nothing gets committed if it doesn’t get reviewed thoroughly first - nowadays the first two properties should be enforced in every major operating system. I like the idea of a platform that continually evolves. pledge(2) and unveil(2) are the proof that with a little effort, you can secure existing software better than ever. I like the “sensible defaults” approach, having an OS ready to be used - UI included if you selected it during the setup process - is great. Just install a browser and you’re ready to go. Manual pages on OpenBSD are real manuals, not an extension of the “--help” output found in most CLI software. They help you understand the inner workings of the operating system, no internet connection needed. There are some trade-offs, too. Performance is not first-class, mostly because of all the security mitigations and checks done at runtime. I write Go code in neovim, and sometimes you can feel a slight slowdown when you’re compiling and editing multiple files at the same time, but usually I can’t notice any meaningful difference. Browsers are a different matter, though; you can definitely feel that something differs from the experience you can have on mainstream operating systems. But again, trade-offs. To use OpenBSD on the desktop you must be ready to sacrifice some of the goodies of mainstream OSes, but if you’re searching for a zen place to do your computing stuff, it’s the best you can get right now. ###Review: NomadBSD 1.1 One of the most recent additions to the DistroWatch database is NomadBSD. According to the NomadBSD website: “NomadBSD is a 64-bit live system for USB flash drives, based on FreeBSD. Together with automatic hardware detection and setup, it is configured to be used as a desktop system that works out of the box, but can also be used for data recovery.” The latest release of NomadBSD (or simply “Nomad”, as I will refer to the project in this review) is version 1.1. It is based on FreeBSD 11.2 and is offered in two builds, one for generic personal computers and one for Macbooks. The release announcement mentions version 1.1 offers improved video driver support for Intel and AMD cards. The operating system ships with Octopkg for graphical package management and the system should automatically detect, and work with, VirtualBox environments. Nomad 1.1 is available as a 2GB download, which we then decompress to produce a 4GB file which can be written to a USB thumb drive.
There is no optical media build of Nomad as it is designed to be run entirely from the USB drive, and write data persistently to the drive, rather than simply being installed from the USB media. Initial setup Booting from the USB drive brings up a series of text-based menus which ask us to configure key parts of the operating system. We are asked to select our time zone, keyboard layout, keyboard model, keyboard mapping and our preferred language. While we can select options from a list, the options tend to be short and cryptic. Rather than “English (US)”, for example, we might be given “enUS”. We are also asked to create a password for the root user account and another one for a regular user which is called “nomad”. We can then select which shell nomad will use. The default is zsh, but there are plenty of other options, including csh and bash. We have the option of encrypting our user’s home directory. I feel it is important to point out that these settings, and nomad’s home directory, are stored on the USB drive. The options and settings we select will not be saved to our local hard drive and our configuration choices will not affect other operating systems already installed on our computer. At the end, the configuration wizard asks if we want to run the BSDstats service. This option is not explained at all, but it contacts BSDstats to provide some basic statistics on BSD users. The system then takes a few minutes to apply its changes to the USB drive and automatically reboots the computer. While running the initial setup wizard, I had nearly identical experiences when running Nomad on a physical computer and running the operating system in a VirtualBox virtual machine. However, after the initial setup process was over, I had quite different experiences depending on the environment so I want to divide my experiences into two different sections. Physical desktop computer At first, Nomad failed to boot on my desktop computer. From the operating system’s boot loader, I enabled Safe Mode which allowed Nomad to boot. At that point, Nomad was able to start up, but would only display a text console. The desktop environment failed to start when running in Safe Mode. Networking was also disabled by default and I had to enable a network interface and DHCP address assignment to connect to the Internet. Instructions for enabling networking can be found in FreeBSD’s Handbook. Once we are on-line we can use the pkg command line package manager to install and update software. Had the desktop environment worked then the Octopkg graphical package manager would also be available to make browsing and installing software a point-n-click experience. Had I been able to run the desktop for prolonged amounts of time I could have made use of such pre-installed items as the Firefox web browser, the VLC media player, LibreOffice and Thunderbird. Nomad offers a fairly small collection of desktop applications, but what is there is mostly popular, capable software. When running the operating system I noted that, with one user logged in, Nomad only runs 15 processes with the default configuration. These processes require less than 100MB of RAM, and the whole system fits comfortably on a 4GB USB drive. Conclusions Ultimately using Nomad was not a practical option for me. The operating system did not work well with my hardware, or the virtual environment. In the virtual machine, Nomad crashed consistently after just a few minutes of uptime. On the desktop computer, I could not get a desktop environment to run. 
The command line tools worked well, and the system performed tasks very quickly, but a command line only environment is not well suited to my workflow. I like the idea of what NomadBSD is offering. There are not many live desktop flavours of FreeBSD, apart from GhostBSD. It was nice to see developers trying to make a FreeBSD-based, plug-and-go operating system that would offer a desktop and persistent storage. I suspect the system would work and perform its stated functions on different hardware, but in my case my experiment was necessarily short lived. ##Beastie Bits FreeBSD lockless algorithm - seq Happy Bob’s Libtls tutorial Locking OpenBSD when it’s sleeping iio - The OpenBSD Way Installing Hugo and Hosting Website on OpenBSD Server Fosdem 2019 reminder: BSD devroom CfP OpenBGPD, gotta go fast! - Claudio Jeker Project Trident RC3 available FreeBSD 10.4 EOL Play “Crazy Train” through your APU2 speaker ##Feedback/Questions Tobias - Satisfying my storage hunger and wallet pains Lasse - Question regarding FreeBSD backups https://twitter.com/dlangille https://dan.langille.org/ Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

BSD Now
Episode 259: Long Live Unix | BSD Now 259

BSD Now

Play Episode Listen Later Aug 16, 2018 107:36


The strange birth and long life of Unix, FreeBSD jail with a single public IP, EuroBSDcon 2018 talks and schedule, OpenBSD on G4 iBook, PAM template user, ZFS file server, and reflections on one year of OpenBSD use. Picking the contest winner Vincent Bostjan Andrew Klaus-Hendrik Will Toby Johnny David manfrom Niclas Gary Eddy Bruce Lizz Jim Random number generator ##Headlines ###The Strange Birth and Long Life of Unix They say that when one door closes on you, another opens. People generally offer this bit of wisdom just to lend some solace after a misfortune. But sometimes it’s actually true. It certainly was for Ken Thompson and the late Dennis Ritchie, two of the greats of 20th-century information technology, when they created the Unix operating system, now considered one of the most inspiring and influential pieces of software ever written. A door had slammed shut for Thompson and Ritchie in March of 1969, when their employer, the American Telephone & Telegraph Co., withdrew from a collaborative project with the Massachusetts Institute of Technology and General Electric to create an interactive time-sharing system called Multics, which stood for “Multiplexed Information and Computing Service.” Time-sharing, a technique that lets multiple people use a single computer simultaneously, had been invented only a decade earlier. Multics was to combine time-sharing with other technological advances of the era, allowing users to phone a computer from remote terminals and then read e-mail, edit documents, run calculations, and so forth. It was to be a great leap forward from the way computers were mostly being used, with people tediously preparing and submitting batch jobs on punch cards to be run one by one. Over five years, AT&T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company’s renowned Bell Telephone Laboratories—­including Thompson and Ritchie, Joseph F. Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&T’s corporate leaders decided to pull the plug. After AT&T’s departure from the Multics project, managers at Bell Labs, in Murray Hill, N.J., became reluctant to allow any further work on computer operating systems, leaving some researchers there very frustrated. Although Multics hadn’t met many of its objectives, it had, as Ritchie later recalled, provided them with a “convenient interactive computing service, a good environment in which to do programming, [and] a system around which a fellowship could form.” Suddenly, it was gone. With heavy hearts, the researchers returned to using their old batch system. At such an inauspicious moment, with management dead set against the idea, it surely would have seemed foolhardy to continue designing computer operating systems. But that’s exactly what Thompson, Ritchie, and many of their Bell Labs colleagues did. Now, some 40 years later, we should be thankful that these programmers ignored their bosses and continued their labor of love, which gave the world Unix, one of the greatest computer operating systems of all time. The rogue project began in earnest when Thompson, Ritchie, and a third Bell Labs colleague, Rudd Canaday, began to sketch out on paper the design for a file system. Thompson then wrote the basics of a new operating system for the lab’s GE-645 mainframe. 
But with the Multics project ended, so too was the need for the GE-645. Thompson realized that any further programming he did on it was likely to go nowhere, so he dropped the effort. Thompson had passed some of his time after the demise of Multics writing a computer game called Space Travel, which simulated all the major bodies in the solar system along with a spaceship that could fly around them. Written for the GE-645, Space Travel was clunky to play—and expensive: roughly US $75 a game for the CPU time. Hunting around, Thompson came across a dusty PDP-7, a minicomputer built by Digital Equipment Corp. that some of his Bell Labs colleagues had purchased earlier for a circuit-analysis project. Thompson rewrote Space Travel to run on it. And with that little programming exercise, a second door cracked ajar. It was to swing wide open during the summer of 1969 when Thompson’s wife, Bonnie, spent a month visiting his parents to show off their newborn son. Thompson took advantage of his temporary bachelor existence to write a good chunk of what would become the Unix operating system for the discarded PDP‑7. The name Unix stems from a joke one of Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix. Initially, Thompson used the GE-645 to compose and compile the software, which he then downloaded to the PDP‑7. But he soon weaned himself from the mainframe, and by the end of 1969 he was able to write operating-system code on the PDP-7 itself. That was a step in the right direction. But Thompson and the others helping him knew that the PDP‑7, which was already obsolete, would not be able to sustain their skunkworks for long. They also knew that the lab’s management wasn’t about to allow any more research on operating systems. So Thompson and Ritchie got crea­tive. They formulated a proposal to their bosses to buy one of DEC’s newer minicomputers, a PDP-11, but couched the request in especially palatable terms. They said they were aiming to create tools for editing and formatting text, what you might call a word-processing system today. The fact that they would also have to write an operating system for the new machine to support the editor and text formatter was almost a footnote. Management took the bait, and an order for a PDP-11 was placed in May 1970. The machine itself arrived soon after, although the disk drives for it took more than six months to appear. During the interim, Thompson, Ritchie, and others continued to develop Unix on the PDP-7. After the PDP-11’s disks were installed, the researchers moved their increasingly complex operating system over to the new machine. Next they brought over the roff text formatter written by Ossanna and derived from the runoff program, which had been used in an earlier time-sharing system. Unix was put to its first real-world test within Bell Labs when three typists from AT&T’s patents department began using it to write, edit, and format patent applications. It was a hit. The patent department adopted the system wholeheartedly, which gave the researchers enough credibility to convince management to purchase another machine—a newer and more powerful PDP-11 model—allowing their stealth work on Unix to continue. During its earliest days, Unix evolved constantly, so the idea of issuing named versions or releases seemed inappropriate. 
But the researchers did issue new editions of the programmer’s manual periodically, and the early Unix systems were named after each such edition. The first edition of the manual was completed in November 1971. So what did the first edition of Unix offer that made it so great? For one thing, the system provided a hierarchical file system, which allowed something we all now take for granted: Files could be placed in directories—or equivalently, folders—that in turn could be put within other directories. Each file could contain no more than 64 kilobytes, and its name could be no more than six characters long. These restrictions seem awkwardly limiting now, but at the time they appeared perfectly adequate. Although Unix was ostensibly created for word processing, the only editor available in 1971 was the line-oriented ed. Today, ed is still the only editor guaranteed to be present on all Unix systems. Apart from the text-processing and general system applications, the first edition of Unix included games such as blackjack, chess, and tic-tac-toe. For the system administrator, there were tools to dump and restore disk images to magnetic tape, to read and write paper tapes, and to create, check, mount, and unmount removable disk packs. Most important, the system offered an interactive environment that by this time allowed time-sharing, so several people could use a single machine at once. Various programming languages were available to them, including BASIC, Fortran, the scripting of Unix commands, assembly language, and B. The last of these, a descendant of BCPL (the Basic Combined Programming Language), ultimately evolved into the immensely popular C language, which Ritchie created while also working on Unix. The first edition of Unix let programmers call 34 different low-level routines built into the operating system. It’s a testament to the system’s enduring nature that nearly all of these system calls are still available—and still heavily used—on modern Unix and Linux systems four decades on. For its time, first-edition Unix provided a remarkably powerful environment for software development. Yet it contained just 4200 lines of code at its heart and occupied a measly 16 KB of main memory when it ran. Unix’s great influence can be traced in part to its elegant design, simplicity, portability, and serendipitous timing. But perhaps even more important was the devoted user community that soon grew up around it. And that came about only by an accident of its unique history. The story goes like this: For years Unix remained nothing more than a Bell Labs research project, but by 1973 its authors felt the system was mature enough for them to present a paper on its design and implementation at a symposium of the Association for Computing Machinery. That paper was published in 1974 in the Communications of the ACM. Its appearance brought a flurry of requests for copies of the software. This put AT&T in a bind. In 1956, AT&T had agreed to a U.S. government consent decree that prevented the company from selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country’s long-distance phone service. So Unix could not be sold as a product. Instead, AT&T released the Unix source code under license to anyone who asked, charging only a nominal fee. The critical wrinkle here was that the consent decree prevented AT&T from supporting Unix.
Indeed, for many years Bell Labs researchers proudly displayed their Unix policy at conferences with a slide that read, “No advertising, no support, no bug fixes, payment in advance.” With no other channels of support available to them, early Unix adopters banded together for mutual assistance, forming a loose network of user groups all over the world. They had the source code, which helped. And they didn’t view Unix as a standard software product, because nobody seemed to be looking after it. So these early Unix users themselves set about fixing bugs, writing new tools, and generally improving the system as they saw fit. The Usenix user group acted as a clearinghouse for the exchange of Unix software in the United States. People could send in magnetic tapes with new software or fixes to the system and get back tapes with the software and fixes that Usenix had received from others. In Australia, the University of New South Wales and the University of Sydney produced a more robust version of Unix, the Australian Unix Share Accounting Method, which could cope with larger numbers of concurrent users and offered better performance. By the mid-1970s, the environment of sharing that had sprung up around Unix resembled the open-source movement so prevalent today. Users far and wide were enthusiastically enhancing the system, and many of their improvements were being fed back to Bell Labs for incorporation in future releases. But as Unix became more popular, AT&T’s lawyers began looking harder at what various licensees were doing with their systems. One person who caught their eye was John Lions, a computer scientist then teaching at the University of New South Wales, in Australia. In 1977, he published what was probably the most famous computing book of the time, A Commentary on the Unix Operating System, which contained an annotated listing of the central source code for Unix. Unix’s licensing conditions allowed for the exchange of source code, and initially, Lions’s book was sold to licensees. But by 1979, AT&T’s lawyers had clamped down on the book’s distribution and use in academic classes. The anti­authoritarian Unix community reacted as you might expect, and samizdat copies of the book spread like wildfire. Many of us have nearly unreadable nth-­generation photocopies of the original book. End runs around AT&T’s lawyers indeed became the norm—even at Bell Labs. For example, between the release of the sixth edition of Unix in 1975 and the seventh edition in 1979, Thompson collected dozens of important bug fixes to the system, coming both from within and outside of Bell Labs. He wanted these to filter out to the existing Unix user base, but the company’s lawyers felt that this would constitute a form of support and balked at their release. Nevertheless, those bug fixes soon became widely distributed through unofficial channels. For instance, Lou Katz, the founding president of Usenix, received a phone call one day telling him that if he went down to a certain spot on Mountain Avenue (where Bell Labs was located) at 2 p.m., he would find something of interest. Sure enough, Katz found a magnetic tape with the bug fixes, which were rapidly in the hands of countless users. By the end of the 1970s, Unix, which had started a decade earlier as a reaction against the loss of a comfortable programming environment, was growing like a weed throughout academia and the IT industry. Unix would flower in the early 1980s before reaching the height of its popularity in the early 1990s. 
For many reasons, Unix has since given way to other commercial and noncommercial systems. But its legacy, that of an elegant, well-designed, comfortable environment for software development, lives on. In recognition of their accomplishment, Thompson and Ritchie were given the Japan Prize earlier this year, adding to a collection of honors that includes the United States’ National Medal of Technology and Innovation and the Association for Computing Machinery’s Turing Award. Many other, often very personal, tributes to Ritchie and his enormous influence on computing were widely shared after his death this past October. Unix is indeed one of the most influential operating systems ever invented. Its direct descendants now number in the hundreds. On one side of the family tree are various versions of Unix proper, which began to be commercialized in the 1980s after the Bell System monopoly was broken up, freeing AT&T from the stipulations of the 1956 consent decree. On the other side are various Unix-like operating systems derived from the version of Unix developed at the University of California, Berkeley, including the one Apple uses today on its computers, OS X. I say “Unix-like” because the developers of the Berkeley Software Distribution (BSD) Unix on which these systems were based worked hard to remove all the original AT&T code so that their software and its descendants would be freely distributable. The effectiveness of those efforts was, however, called into question when the AT&T subsidiary Unix System Laboratories filed suit against Berkeley Software Design and the Regents of the University of California in 1992 over intellectual property rights to this software. The university in turn filed a counterclaim against AT&T for breaches to the license it provided AT&T for the use of code developed at Berkeley. The ensuing legal quagmire slowed the development of free Unix-like clones, including 386BSD, which was designed for the Intel 386 chip, the CPU then found in many IBM PCs. Had this operating system been available at the time, Linus Torvalds says he probably wouldn’t have created Linux, an open-source Unix-like operating system he developed from scratch for PCs in the early 1990s. Linux has carried the Unix baton forward into the 21st century, powering a wide range of digital gadgets including wireless routers, televisions, desktop PCs, and Android smartphones. It even runs some supercomputers. Although AT&T quickly settled its legal disputes with Berkeley Software Design and the University of California, legal wrangling over intellectual property claims to various parts of Unix and Linux has continued over the years, often involving byzantine corporate relations. By 2004, no fewer than five major lawsuits had been filed. Just this past August, a software company called the TSG Group (formerly known as the SCO Group) lost a bid in court to claim ownership of Unix copyrights that Novell had acquired when it purchased the Unix System Laboratories from AT&T in 1993. As a programmer and Unix historian, I can’t help but find all this legal sparring a bit sad. From the very start, the authors and users of Unix worked as best they could to build and share, even if that meant defying authority. That outpouring of selflessness stands in sharp contrast to the greed that has driven subsequent legal battles over the ownership of Unix. The world of computer hardware and software moves forward startlingly fast. For IT professionals, the rapid pace of change is typically a wonderful thing.
But it makes us susceptible to the loss of our own history, including important lessons from the past. To address this issue in a small way, in 1995 I started a mailing list of old-time Unix ­aficionados. That effort morphed into the Unix Heritage Society. Our goal is not only to save the history of Unix but also to collect and curate these old systems and, where possible, bring them back to life. With help from many talented members of this society, I was able to restore much of the old Unix software to working order, including Ritchie’s first C compiler from 1972 and the first Unix system to be written in C, dating from 1973. One holy grail that eluded us for a long time was the first edition of Unix in any form, electronic or otherwise. Then, in 2006, Al Kossow from the Computer History Museum, in Mountain View, Calif., unearthed a printed study of Unix dated 1972, which not only covered the internal workings of Unix but also included a complete assembly listing of the kernel, the main component of this operating system. This was an amazing find—like discovering an old Ford Model T collecting dust in a corner of a barn. But we didn’t just want to admire the chrome work from afar. We wanted to see the thing run again. In 2008, Tim Newsham, an independent programmer in Hawaii, and I assembled a team of like-minded Unix enthusiasts and set out to bring this ancient system back from the dead. The work was technically arduous and often frustrating, but in the end, we had a copy of the first edition of Unix running on an emulated PDP-11/20. We sent out messages announcing our success to all those we thought would be interested. Thompson, always succinct, simply replied, “Amazing.” Indeed, his brainchild was amazing, and I’ve been happy to do what I can to make it, and the story behind it, better known. Digital Ocean http://do.co/bsdnow ###FreeBSD jails with a single public IP address Jails in FreeBSD provide a simple yet flexible way to set up a proper server layout. In the most setups the actual server only acts as the host system for the jails while the applications themselves run within those independent containers. Traditionally every jail has it’s own IP for the user to be able to address the individual services. But if you’re still using IPv4 this might get you in trouble as the most hosters don’t offer more than one single public IP address per server. Create the internal network In this case NAT (“Network Address Translation”) is a good way to expose services in different jails using the same IP address. First, let’s create an internal network (“NAT network”) at 192.168.0.0/24. You could generally use any private IPv4 address space as specified in RFC 1918. Here’s an overview: https://en.wikipedia.org/wiki/Privatenetwork. Using pf, FreeBSD’s firewall, we will map requests on different ports of the same public IP address to our individual jails as well as provide network access to the jails themselves. First let’s check which network devices are available. In my case there’s em0 which provides connectivity to the internet and lo0, the local loopback device. options=209b [...] inet 172.31.1.100 netmask 0xffffff00 broadcast 172.31.1.255 nd6 options=23 media: Ethernet autoselect (1000baseT ) status: active lo0: flags=8049 metric 0 mtu 16384 options=600003 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2 inet 127.0.0.1 netmask 0xff000000 nd6 options=21``` > For our internal network, we create a cloned loopback device called lo1. 
Therefore we need to customize the /etc/rc.conf file, adding the following two lines: cloned_interfaces="lo1" ipv4_addrs_lo1="192.168.0.1-9/29" > This defines a /29 network, offering IP addresses for a maximum of 6 jails: ipcalc 192.168.0.1/29 Address: 192.168.0.1 11000000.10101000.00000000.00000001 Netmask: 255.255.255.248 = 29 11111111.11111111.11111111.11111000 Wildcard: 0.0.0.7 00000000.00000000.00000000.00000111 => Network: 192.168.0.0/29 11000000.10101000.00000000.00000000 HostMin: 192.168.0.1 11000000.10101000.00000000.00000001 HostMax: 192.168.0.6 11000000.10101000.00000000.00000110 Broadcast: 192.168.0.7 11000000.10101000.00000000.00000111 Hosts/Net: 6 Class C, Private Internet > Then we need to restart the network. Please be aware that currently active SSH sessions might be dropped during the restart. It's a good moment to ensure you have KVM access to that server ;-) service netif restart > After reconnecting, our newly created loopback device is active: lo1: flags=8049 metric 0 mtu 16384 options=600003 inet 192.168.0.1 netmask 0xfffffff8 inet 192.168.0.2 netmask 0xffffffff inet 192.168.0.3 netmask 0xffffffff inet 192.168.0.4 netmask 0xffffffff inet 192.168.0.5 netmask 0xffffffff inet 192.168.0.6 netmask 0xffffffff inet 192.168.0.7 netmask 0xffffffff inet 192.168.0.8 netmask 0xffffffff inet 192.168.0.9 netmask 0xffffffff nd6 options=29 Setting up pf > pf is part of the FreeBSD base system, so we only have to configure and enable it. By this point you should already have an idea of which services you want to expose. If this is not the case, just fix that file later on. In my example configuration, I have a jail running a webserver and another jail running a mailserver: # Public IP address IP_PUB="1.2.3.4" # Packet normalization scrub in all # Allow outbound connections from within the jails nat on em0 from lo1:network to any -> (em0) # webserver jail at 192.168.0.2 rdr on em0 proto tcp from any to $IP_PUB port 443 -> 192.168.0.2 # just an example in case you want to redirect to another port within your jail rdr on em0 proto tcp from any to $IP_PUB port 80 -> 192.168.0.2 port 8080 # mailserver jail at 192.168.0.3 rdr on em0 proto tcp from any to $IP_PUB port 25 -> 192.168.0.3 rdr on em0 proto tcp from any to $IP_PUB port 587 -> 192.168.0.3 rdr on em0 proto tcp from any to $IP_PUB port 143 -> 192.168.0.3 rdr on em0 proto tcp from any to $IP_PUB port 993 -> 192.168.0.3 (a consolidated sketch of these host-side pieces appears at the end of these show notes) > Now just enable pf like this (which is the equivalent of adding pf_enable=YES to /etc/rc.conf): sysrc pf_enable="YES" > and start it: service pf start Install ezjail > Ezjail is a collection of scripts by erdgeist that allow you to easily manage your jails. pkg install ezjail > As an alternative, you could install ezjail from the ports tree. Now we need to set up the basejail which contains the shared base system for our jails. In fact, every jail that you create will use that basejail to symlink directories related to the base system like /bin and /sbin.
This can be accomplished by running ezjail-admin install > In the next step, we’ll copy the /etc/resolv.conf file from our host to the newjail, which is the template for newly created jails (the parts that are not provided by basejail), to ensure that domain resolution will work properly within our jails later on: cp /etc/resolv.conf /usr/jails/newjail/etc/ > Last but not least, we enable ezjail and start it: sysrc ezjail_enable="YES" service ezjail start Create a jail > Creating a jail is as easy as it could probably be: ezjail-admin create webserver 192.168.0.2 ezjail-admin start webserver > Now you can access your jail using: ezjail-admin console webserver > Each jail contains a vanilla FreeBSD installation. Deploy services > Now you can spin up as many jails as you want to set up your services like web, mail or file shares. You should take care not to enable sshd within your jails, because that would cause problems with the service’s IP bindings. But this is not a problem, just SSH to the host and enter your jail using ezjail-admin console. EuroBSDcon 2018 Talks & Schedule (https://2018.eurobsdcon.org/talks-schedule/) News Roundup OpenBSD on an iBook G4 (https://bobstechsite.com/openbsd-on-an-ibook-g4/) > I've mentioned on social media and on the BTS podcast a few times that I wanted to try installing OpenBSD onto an old "snow white" iBook G4 I acquired last summer to see if I could make it a useful machine again in the year 2018. This particular eBay purchase came with a 14" 1024x768 TFT screen, 1.07GHz PowerPC G4 processor, 1.5GB RAM, 100GB of HDD space and an ATI Radeon 9200 graphics card with 32 MB of SDRAM. The optical drive, ethernet port, battery & USB slots are also fully-functional. The only thing that doesn't work is the CMOS battery, but that's not unexpected for a device that was originally released in 2004. Initial experiments > This iBook originally arrived at my door running Apple Mac OSX Leopard and came with the original install disk, the iLife & iWork suites for 2008, various instruction manuals, a working power cable and a spare keyboard. As you'll see in the pictures I took for this post the characters on the buttons have started to wear away from 14 years of intensive use, but the replacement needs a very good clean before I decide to swap it in! > After spending some time exploring the last version of OSX to support the IBM PowerPC processor architecture I tried to see if the hardware was capable of modern computing with Linux. Something I knew ahead of trying this was that the WiFi adapter was unlikely to work because it's a highly proprietary component designed by Apple to work specifically with OSX and nothing else, but I figured I could probably use a wireless USB dongle later to get around this limitation. > Unfortunately I found that no recent versions of mainstream Linux distributions would boot off this machine. Debian has dropped support 32-bit PowerPC architectures and the PowerPC variants of Ubuntu 16.04 LTS (vanilla, MATE and Lubuntu) wouldn't even boot the installer! The only distribution I could reliably install on the hardware was Lubuntu 14.04 LTS. > Unfortunately I'm not the biggest fan of the LXDE desktop for regular work and a lot of ported applications were old and broken because it clearly wasn't being maintained by people that use the hardware anymore. Ubuntu 14.04 is also approaching the end of its support life in early 2019, so this limited solution also has a limited shelf-life. 
Over to BSD > I discussed this problem with a few people on Mastodon and it was pointed out to me that OSX is built on the Darwin kernel, which happens to be a variant of BSD. NetBSD and OpenBSD fans in particular convinced me that their communities still saw the value of supporting these old pieces of kit and that I should give BSD a try. > So yesterday evening I finally downloaded the "macppc" version of OpenBSD 6.3 with no idea what to expect. I hoped for the best but feared the worst because my last experience with this operating system was trying out PC-BSD in 2008 and discovering with disappointment that it didn't support any of the hardware on my Toshiba laptop. > When I initially booted OpenBSD I was a little surprised to find the login screen provided no visual feedback when I typed in my password, but I can understand the security reasons for doing that. The initial desktop environment that was loaded was very basic. All I could see was a console output window, a terminal and a desktop switcher in the X11 environment the system had loaded. > After a little Googling I found this blog post had some fantastic instructions to follow for the post-installation steps: https://sohcahtoa.org.uk/openbsd.html. I did have to adjust them slightly though because my iBook only has 1.5GB RAM and not every package that page suggests is available on macppc by default. You can see a full list here: https://ftp.openbsd.org/pub/OpenBSD/6.3/packages/powerpc/. Final thoughts > I was really impressed with the performance of OpenBSD's "macppc" port. It boots much faster than OSX Leopard on the same hardware and unlike Lubuntu 14.04 it doesn't randomly hang for no reason or crash if you launch something demanding like the GIMP. > I was pleased to see that the command line tools I'm used to using on Linux have been ported across too. OpenBSD also had no issues with me performing basic desktop tasks on XFCE like browsing the web with NetSurf, playing audio files with VLC and editing images with the GIMP. Limited gaming is also theoretically possible if you're willing to build them (or an emulator) from source with SDL support. > If I wanted to use this system for heavy duty work then I'd probably be inclined to run key applications like LibreOffice on a Raspberry Pi and then connect my iBook G4 to those using VNC or an SSH connection with X11 forwarding. BSD is UNIX after all, so using my ancient laptop as a dumb terminal should work reasonably well. > In summary I was impressed with OpenBSD and its ability to breathe new life into this old Apple Mac. I'm genuinely excited about the idea of trying BSD with other devices on my network such as an old Asus Eee PC 900 netbook and at least one of the many Raspberry Pi devices I use. Whether I go the whole hog and replace Fedora on my main production laptop though remains to be seen! The template user with PAM and login(1) (http://oshogbo.vexillium.org/blog/48) > When you build a new service (or an appliance) you need your users to be able to configure it from the command line. To accomplish this you can create system accounts for all registered users in your service and assign them a special login shell which provides such limited functionality. This can be painful if you have a dynamic user database. > Another challenge is authentication via remote services such as RADIUS. How can we implement services when we authenticate through it and log into it as a different user? 
Furthermore, imagine a scenario where RADIUS decides which account we have the right to access by sending an additional attribute. > To address these two problems we can use a "template" user. Any of the PAM modules can set the value of the PAM_USER item. The value of this item will be used to determine which account we want to log in to. Only the "template" user must exist in the local password database, but the credential check can be omitted by the module. > This functionality exists in the login(1) used by FreeBSD, HardenedBSD, DragonFlyBSD and illumos. The functionality doesn't exist in the login(1) used in NetBSD, and OpenBSD doesn't support PAM modules at all. It is also noteworthy that such functionality was in OpenSSH as well, but they decided to remove it and called it a security vulnerability (CVE-2015-6563). I can see how some people may have seen it that way, which is why I recommend reading this article from an OpenPAM author and a FreeBSD security officer at the time. > Knowing the background, let's take a look at an example.
```
PAM_EXTERN int
pam_sm_authenticate(pam_handle_t *pamh, int flags __unused,
    int argc __unused, const char *argv[] __unused)
{
	const char *user, *password;
	int err;

	err = pam_get_user(pamh, &user, NULL);
	if (err != PAM_SUCCESS)
		return (err);

	err = pam_get_authtok(pamh, PAM_AUTHTOK, &password, NULL);
	if (err == PAM_CONV_ERR)
		return (err);
	if (err != PAM_SUCCESS)
		return (PAM_AUTH_ERR);

	err = authenticate(user, password);
	if (err != PAM_SUCCESS) {
		return (err);
	}

	return (pam_set_item(pamh, PAM_USER, "template"));
}
```
In the listing above we have an example of a PAM module. pam_get_user(3) provides the username. pam_get_authtok(3) gives us the secret entered by the user. Both functions allow us to pass an optional prompt which should be shown to the user. The authenticate function is our own crafted function which authenticates the user. In our first scenario we wanted to keep all users in an external database. If authentication is successful we then switch to a template user which has a shell set up for a script allowing us to configure the machine. In our second scenario the authenticate function authenticates the user against RADIUS. The next step is to add our PAM module to the /etc/pam.d/system or to the /etc/pam.d/login configuration: auth sufficient pam_template.so no_warn allow_local Unfortunately the description of all these options goes beyond this article - if you would like to know more about them you can find them in the PAM manual. The last thing we need to do is to add our template user to the system, which you can do with the adduser(8) command or by simply modifying the /etc/master.passwd file and using the pwd_mkdb(8) program: $ tail -n 1 /etc/master.passwd template::1000:1000::0:0:User &:/:/usr/local/bin/template_sh $ sudo pwd_mkdb /etc/master.passwd As you can see, the template user can be locked and we can still use it in our PAM module (the * character after the login name). I would like to thank Dag-Erling Smørgrav for pointing this functionality out to me when I was looking for it some time ago. iXsystems iXsystems @ VMWorld ###ZFS file server What is the need? At work, we run a compute cluster that uses an Isilon cluster as primary NAS storage. Excluding snapshots, we have about 200TB of research data, some of it in compressed formats and some not. We needed an offsite backup file server that would constantly mirror our primary NAS and serve as a quick recovery source in case of a data loss in the primary NAS.
This offsite file server would be passive - will never face the wrath of the primary cluster workload. In addition to the role of a passive backup server, this solution would take on some passive report generation workloads as an ideal way of offloading some work from the primary NAS. The passive work is read-only. The backup server would keep snapshots in a best effort basis dating back to 10 years. However, this data on this backup server would be archived to tapes periodically. A simple guidance of priorities: Data integrity > Cost of solution > Storage capacity > Performance. Why not enterprise NAS? NetApp FAS or EMC Isilon or the like? We decided that enterprise grade NAS like NetAPP FAS or EMC Isilon are prohibitively expensive and an overkill for our needs. An open source & cheaper alternative to enterprise grade filesystem with the level of durability we expect turned up to be ZFS. We’re already spoilt from using snapshots by a clever Copy-on-Write Filesystem(WAFL) by NetApp. ZFS providing snapshots in almost identical way was a big influence in the choice. This is also why we did not consider just a CentOS box with the default XFS filesystem. FreeBSD vs Debian for ZFS This is a backup server, a long-term solution. Stability and reliability are key requirements. ZFS on Linux may be popular at this time, but there is a lot of churn around its development, which means there is a higher probability of bugs like this to occur. We’re not looking for cutting edge features here. Perhaps, Linux would be considered in the future. FreeBSD + ZFS We already utilize FreeBSD and OpenBSD for infrastructure services and we have nothing but praises for the stability that the BSDs have provided us. We’d gladly use FreeBSD and OpenBSD wherever possible. Okay, ZFS, but why not FreeNAS? IMHO, FreeNAS provides a integrated GUI management tool over FreeBSD for a novice user to setup and configure FreeBSD, ZFS, Jails and many other features. But, this user facing abstraction adds an extra layer of complexity to maintain that is just not worth it in simpler use cases like ours. For someone that appreciates the commandline interface, and understands FreeBSD enough to administer it, plain FreeBSD + ZFS is simpler and more robust than FreeNAS. Specifications Lenovo SR630 Rackserver 2 X Intel Xeon silver 4110 CPUs 768 GB of DDR4 ECC 2666 MHz RAM 4 port SAS card configured in passthrough mode(JBOD) Intel network card with 10 Gb SFP+ ports 128GB M.2 SSD for use as boot drive 2 X HGST 4U60 JBOD 120(2 X 60) X 10TB SAS disks ###Reflection on one-year usage of OpenBSD I have used OpenBSD for more than one year, and it is time to give a summary of the experience: (1) What do I get from OpenBSD? a) A good UNIX tutorial. When I am curious about some UNIXcommands’ implementation, I will refer to OpenBSD source code, and I actually gain something every time. E.g., refresh socket programming skills from nc; know how to process file efficiently from cat. b) A better test bed. Although my work focus on developing programs on Linux, I will try to compile and run applications on OpenBSD if it is possible. One reason is OpenBSD usually gives more helpful warnings. E.g., hint like this: ...... warning: sprintf() is often misused, please use snprintf() ...... Or you can refer this post which I wrote before. The other is sometimes program run well on Linux may crash on OpenBSD, and OpenBSD can help you find hidden bugs. c) Some handy tools. E.g. 
I find tcpbench is useful, so I ported it into Linux for my own usage (project is here). (2) What I give back to OpenBSD? a) Patches. Although most of them are trivial modifications, they are still my contributions. b) Write blog posts to share experience about using OpenBSD. c) Develop programs for OpenBSD/BSD: lscpu and free. d) Porting programs into OpenBSD: E.g., I find google/benchmark is a nifty tool, but lacks OpenBSD support, I submitted PR and it is accepted. So you can use google/benchmark on OpenBSD now. Generally speaking, the time invested on OpenBSD is rewarding. If you are still hesitating, why not give a shot? ##Beastie Bits BSD Users Stockholm Meetup BSDCan 2018 Playlist OPNsense 18.7 released Testing TrueOS (FreeBSD derivative) on real hardware ThinkPad T410 Kernel Hacker Wanted! Replace a pair of 8-bit writes to VGA memory with a single 16-bit write Reduce taskq and context-switch cost of zio pipe Proposed FreeBSD Memory Management change, expected to improve ZFS ARC interactions Tarsnap ##Feedback/Questions Anian_Z - Question Robert - Pool question Lain - Congratulations Thomas - L2arc Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
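For readers who want the single-IP jail walkthrough above in one place, here is a consolidated sketch of the host-side pieces. It is only a sketch: the interface (em0), the public address (1.2.3.4), the 192.168.0.0/29 loopback network, the service ports and the jail names are the article's example values and will differ on your system.
```
# /etc/rc.conf knobs from the walkthrough (sysrc edits /etc/rc.conf for you)
sysrc cloned_interfaces="lo1"
sysrc ipv4_addrs_lo1="192.168.0.1-9/29"
sysrc pf_enable="YES"
sysrc ezjail_enable="YES"

# /etc/pf.conf: outbound NAT for the jail network plus per-service redirects
cat > /etc/pf.conf <<'EOF'
IP_PUB="1.2.3.4"                              # public IP address of the host
scrub in all                                  # packet normalization
nat on em0 from lo1:network to any -> (em0)   # outbound connections from the jails
# webserver jail at 192.168.0.2
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 192.168.0.2
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 192.168.0.2 port 8080
# mailserver jail at 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 25 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 587 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 143 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 993 -> 192.168.0.3
EOF

service netif restart        # beware: this may drop active SSH sessions
service pf start

# ezjail: shared basejail, DNS inside the jails, then the first jail
pkg install -y ezjail
ezjail-admin install
cp /etc/resolv.conf /usr/jails/newjail/etc/
service ezjail start
ezjail-admin create webserver 192.168.0.2
ezjail-admin start webserver
```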

Fate Masters
Fate Masters Episódio 30 - Fate Masters Responde (Volume 1)

Fate Masters

Play Episode Listen Later Jun 30, 2017 57:22


In this Fate Masters episode, still running with a reduced crew, Mr. Mickey Fábio, joined by the returning Fate Horror Cicerone Luís Cavalheiro, answers the questions that you in the community posted on the Facebook post created by the Old Lich Rafael Meyer. Fortunately, there were so many questions that we couldn't get through all of them. So keep posting your questions! Remember: you can send any questions, criticism, suggestions and opinions to the Fate Masters Google+ community, to the Fate Facebook community (with the hashtag #fatemasters), and by email to fatemasterspodcast@gmail.com Link to the show in MP3 Participants: Fábio Emilio Costa Luís Cavalheiro Duration: 57min Podcast timeline: 00:00:10 - Opening and explanation of Fate Masters Responde 00:02:17 - What to do when a player keeps running away from their aspects? (Pedro Gustavo) 00:09:00 - What are the main ways (or the ones you suggest) to handle vehicles in Fate? (Filipe Dalmatti Lima) 00:12:55 - What mechanics do you know of for handling mass combat? (Filipe Dalmatti Lima) 00:17:49 - XP analogues that can be used to reward a player (Pedro Gustavo) 00:23:19 - What tips would you give for putting together a Mystery or Police Investigation adventure? (Rodrigo Marini) 00:30:50 - Which hacks, tools or adaptations bring realism to Fate? (Pedro Gustavo) 00:38:58 - Favorite system for running superhero adventures (in terms of mechanics for super powers), and what are the pros and cons of each? (Filipe Dalmatti Lima) 00:54:37 - Final remarks Related links: The Facebook post created by the Old Lich Rafael Meyer Destino dos Camundongos Aventureiros Fate em Quatro Cores The "Worlds Collide" idea from Fractal de Cenário Notes on Writing Weird Fiction Supernatural Horror in Literature Urban Shadows Slip Nitrate City Nest The Mentalist Linus Torvalds Richard Stallman Dennis Ritchie House Daring Comics Wearing the Cape Icons Andromeda Romance in the Air Gods and Monsters Mesas Predestinadas - Dystopian Universe Beta (Session 0) Mesas Predestinadas - Dystopian Universe Beta (Session 1) Mesas Predestinadas - Dystopian Universe Beta (Session 2) Mesas Predestinadas - Camundongos Aventureiros Demos Corporation Earthdawn Destino de Nárnia Link to the Fate Masters Google+ community Comment on this post on the Fate Masters website! Subscribe on iTunes Podcast soundtrack: Ambient Pills by Zeropage Ambient Pills Update by Zeropage

BSD Now
190: The Moore You Know

BSD Now

Play Episode Listen Later Apr 19, 2017 130:59


This week, we look forward with the latest OpenBSD release, look back with Dennis Ritchie's paper on the evolution of Unix Time Sharing, have an Interview with Kris This episode was brought to you by OpenBSD 6.1 RELEASED (http://undeadly.org/cgi?action=article&sid=20170411132956) Mailing list post (https://marc.info/?l=openbsd-announce&m=149191716921690&w=2') We are pleased to announce the official release of OpenBSD 6.1. This is our 42nd release. New/extended platforms: New arm64 platform, using clang(1) as the base system compiler. The loongson platform now supports systems with Loongson 3A CPU and RS780E chipset. The following platforms were retired: armish, sparc, zaurus New vmm(4)/ vmd(8) IEEE 802.11 wireless stack improvements Generic network stack improvements Installer improvements Routing daemons and other userland network improvements Security improvements dhclient(8)/ dhcpd(8)/ dhcrelay(8) improvements Assorted improvements OpenSMTPD 6.0.0 OpenSSH 7.4 LibreSSL 2.5.3 mandoc 1.14.1 *** Fuzz Testing OpenSSH (http://vegardno.blogspot.ca/2017/03/fuzzing-openssh-daemon-using-afl.html) Vegard Nossum writes a blog post explaining how to fuzz OpenSSH using AFL It starts by compiling AFL and SSH with LLVM to get extra instrumentation to make the fuzzing process better, and faster Sandboxing, PIE, and other features are disabled to increase debuggability, and to try to make breaking SSH easier Privsep is also disabled, because when AFL does make SSH crash, the child process crashing causes the parent process to exit normally, and AFL then doesn't realize that a crash has happened. A one-line patch disables the privsep feature for the purposes of testing A few other features are disabled to make testing easier (disabling replay attack protection allows the same inputs to be reused many times), and faster: the local arc4random_buf() is patched to return a buffer of zeros disabling CRC checks disabling MAC checks disabling encryption (allow the NULL cipher for everything) add a call to _AFLINIT(), to enable “deferred forkserver mode” disabling closefrom() “Skipping expensive DH/curve and key derivation operations” Then, you can finally get around to writing some test cases The steps are all described in detail In one day of testing, the author found a few NULL dereferences that have since been fixed. Maybe you can think of some other code paths through SSH that should be tested, or want to test another daemon *** Getting OpenBSD running on Raspberry Pi 3 (http://undeadly.org/cgi?action=article&sid=20170409123528) Ian Darwin writes in about his work deploying the arm64 platform and the Raspberry Pi 3 So I have this empty white birdhouse-like thing in the yard, open at the front. It was intended to house the wireless remote temperature sensor from a low-cost weather station, which had previously been mounted on a dark-colored wall of the house [...]. But when I put the sensor into the birdhouse, the signal is too weak for the weather station to receive it (the mounting post was put in place by a previous owner of our property, and is set deeply in concrete). So the next plan was to pop in a tiny OpenBSD computer with a uthum(4) temperature sensor and stream the temperature over WiFi. The Raspberry Pi computers are interesting in their own way: intending to bring low-cost computing to everybody, they take shortcuts and omit things that you'd expect on a laptop or desktop. 
They aren't too bright on their own: there's very little smarts in the board compared to the "BIOS" and later firmwares on conventional systems. Some of the "smarts" are only available as binary files. This was part of the reason that our favorite OS never came to the Pi Party for the original rpi, and didn't quite arrive for the rpi2. With the rpi3, though, there is enough availability that our devs were able to make it boot. Some limitations remain, though: if you want to build your own full release, you have to install the dedicated raspberrypi-firmware package from the ports tree. And, the boot disks have to have several extra files on them - this is set up on the install sets, but you should be careful not to mess with these extra files until you know what you're doing! But wait! Before you read on, please note that, as of April 1, 2017, this platform boots up but is not yet ready for prime time: there's no driver for SD/MMC but that's the only thing the hardware can level-0 boot from, so you need both the uSD card and a USB disk, at least while getting started; there is no support for the built-in WiFi (a Broadcom BCM43438 SDIO 802.11), so you have to use wired Ethernet or a USB WiFi dongle (for my project an old MSI that shows up as ural(4) seems to work fine); the HDMI driver isn't used by the kernel (if a monitor is plugged in uBoot will display its messages there), so you need to set up cu with a 3V serial cable, at least for initial setup. the ports tree isn't ready to cope with the base compiler being clang yet, so packages are "a thing of the future" But wait - there's more! The "USB disk" can be a USB thumb drive, though they're generally slower than a "real" disk. My first forays used a Kingston DTSE9, the hardy little steel-cased version of the popular DataTraveler line. I was able to do the install, and boot it, once (when I captured the dmesg output shown below). After that, it failed - the boot process hung with the ever-unpopular "scanning usb for storage devices..." message. I tried the whole thing again with a second DTSE9, and with a 32GB plastic-cased DataTraveler. Same results. After considerable wasted time, I found a post on RPI's own site which dates back to the early days of the PI 3, in which they admit that they took shortcuts in developing the firmware, and it just can't be made to work with the Kingston DataTraveler! Not having any of the "approved" devices, and not living around the corner from a computer store, I switched to a Sabrent USB dock with a 320GB Western Digital disk, and it's been rock solid. Too big and energy-hungry for the final project, but enough to show that the rpi3 can be solid with the right (solid-state) disk. And fast enough to build a few simple ports - though a lot will not build yet. I then found and installed OpenBSD onto a “PNY” brand thumb drive and found it solid - in fact I populated it by dd'ing from one of the DataTraveller drives, so they're not at fault. Check out the full article for detailed setup instructions *** Dennis M. Ritchie's Paper: The Evolution of the Unix Time Sharing System (http://www.read.seas.harvard.edu/~kohler/class/aosref/ritchie84evolution.pdf) From the abstract: This paper presents a brief history of the early development of the Unix operating system. It concentrates on the evolution of the file system, the process-control mechanism, and the idea of pipelined commands. Some attention is paid to social conditions during the development of the system. 
During the past few years, the Unix operating system has come into wide use, so wide that its very name has become a trademark of Bell Laboratories. Its important characteristics have become known to many people. It has suffered much rewriting and tinkering since the first publication describing it in 1974 [1], but few fundamental changes. However, Unix was born in 1969 not 1974, and the account of its development makes a little-known and perhaps instructive story. This paper presents a technical and social history of the evolution of the system. High level document structure: Origins The PDP-7 Unix file system Process control IO Redirection The advent of the PDP-11 The first PDP-11 system Pipes High-level languages Conclusion One of the comforting things about old memories is their tendency to take on a rosy glow. The programming environment provided by the early versions of Unix seems, when described here, to be extremely harsh and primitive. I am sure that if forced back to the PDP-7 I would find it intolerably limiting and lacking in conveniences. Nevertheless, it did not seem so at the time; the memory fixes on what was good and what lasted, and on the joy of helping to create the improvements that made life better. In ten years, I hope we can look back with the same mixed impression of progress combined with continuity. Interview - Kris Moore - kris@trueos.org (mailto:kris@trueos.org) | @pcbsdkris (https://twitter.com/pcbsdkris) Director of Engineering at iXSystems FreeNAS News Roundup Compressed zfs send / receive now in FreeBSD's vendor area (https://svnweb.freebsd.org/base?view=revision&revision=316894) Andriy Gapon committed a whole lot of ZFS updates to FreeBSD's vendor area This feature takes advantage of the new compressed ARC feature, which means blocks that are compressed on disk, remain compressed in ZFS' RAM cache, to use the compressed blocks when using ZFS replication. Previously, blocks were uncompressed, sent (usually over the network), then recompressed on the other side. This is rather wasteful, and can make the process slower, not just because of the CPU time wasted decompressing/recompressing the data, but because it means more data has to be sent over the network. This caused many users to end up doing: zfs send | xz -T0 | ssh unxz | zfs recv, or similar, to compress the data before sending it over the network. With this new feature, zfs send with the new -c flag, will transmit the already compressed blocks instead. This change also adds longopts versions of all of the zfs send flags, making them easier to understand when written in shell scripts. A lot of fixes, man page updates, etc. from upstream OpenZFS Thanks to everyone who worked on these fixes and features! We'll announce when these have been committed to head for testing *** Granting privileges using the FreeBSD MAC framework (https://mysteriouscode.io/blog/granting-privileges-using-mac-framework/) The MAC (Mandatory Access Control) framework allows finer grained permissions than the standard UNIX permissions that exist in the base system FreeBSD's kernel provides quite sophisticated privilege model that extends the traditional UNIX user-and-group one. Here I'll show how to leverage it to grant access to specific privileges to group of non-root users. mac(9) allows creating pluggable modules with policies that can extend existing base system security definitions. struct macpolicyops consist of many entry points that we can use to amend the behaviour. 
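The next item walks through exactly such a policy: a tiny mac(9) module that hands realtime-scheduling rights to one group. Purely as a usage sketch, and assuming names that are not spelled out in the post (the module file name, the group, and a security.mac.rtprio sysctl branch implied by the enable and gid oids the author describes), driving it from the shell might look like this:
```
# Hypothetical usage of the rtprio-granting policy module described below
pw groupadd realtime                      # group that should receive the privilege
kldload ./mac_rtprio.ko                   # the compiled policy module (name assumed)
sysctl security.mac.rtprio.enable=1       # the 'enable' oid: switch the policy on
sysctl security.mac.rtprio.gid=$(pw groupshow realtime | cut -d: -f3)   # the 'gid' oid
# a member of 'realtime' can now run e.g.:  rtprio 10 some_long_job
```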
This time, I wanted to grant the privilege to change realtime priority to a selected group. While the Linux kernel lets you specify a named group, FreeBSD doesn't have such an ability, hence I created this very simple policy. The privilege check can be extended using two user-supplied functions: priv_check and priv_grant. The first one can be used to further restrict existing privileges, i.e. you can disallow some specific priv to be used in jails, etc. The second one is used to explicitly grant extra privileges not available for the target in the base configuration. The core of the mac_rtprio module is dead simple. I defined a sysctl tree for two oids: enable (an on/off switch for the policy) and gid (the GID the target has to be a member of), then I specified our custom version of mpo_priv_grant called rtprio_priv_grant. The body of my granting function is even simpler. If the policy is disabled or the privilege that is being checked is not PRIV_SCHED_RTPRIO, we simply skip and return EPERM. If the user is a member of the designated group we return 0, which allows the action: the target can change its realtime privileges. Another useful thing the MAC framework can grant to non-root users is PortACL: the ability to bind to TCP/UDP ports less than 1024, which is usually restricted to root (see the sketch after these show notes). Some other uses for the MAC framework are discussed in The FreeBSD Handbook (https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/mac.html). However, there are lots more, and we would really like to see more tutorials and documentation on using MAC to make more secure servers while still allowing the few specific things that normally require root access. *** The Story of the PING Program (http://ftp.arl.army.mil/~mike/ping.html) This is from the homepage of Mike Muuss: Yes, it's true! I'm the author of ping for UNIX. Ping is a little thousand-line hack that I wrote in an evening which practically everyone seems to know about. :-) I named it after the sound that a sonar makes, inspired by the whole principle of echo-location. In college I'd done a lot of modeling of sonar and radar systems, so the "Cyberspace" analogy seemed very apt. It's exactly the same paradigm applied to a new problem domain: ping uses timed IP/ICMP ECHO_REQUEST and ECHO_REPLY packets to probe the "distance" to the target machine. My original impetus for writing PING for 4.2a BSD UNIX came from an offhand remark in July 1983 by Dr. Dave Mills while we were attending a DARPA meeting in Norway, in which he described some work that he had done on his "Fuzzball" LSI-11 systems to measure path latency using timed ICMP Echo packets. In December of 1983 I encountered some odd behavior of the IP network at BRL. Recalling Dr. Mills' comments, I quickly coded up the PING program, which revolved around opening an ICMP-style SOCK_RAW AF_INET Berkeley-style socket(). The code compiled just fine, but it didn't work -- there was no kernel support for raw ICMP sockets! Incensed, I coded up the kernel support and had everything working well before sunrise. Not surprisingly, Chuck Kennedy (aka "Kermit") had found and fixed the network hardware before I was able to launch my very first "ping" packet. But I've used it a few times since then. grin If I'd known then that it would be my most famous accomplishment in life, I might have worked on it another day or two and added some more options. The folks at Berkeley eagerly took back my kernel modifications and the PING source code, and it's been a standard part of Berkeley UNIX ever since.
Since it's free, it has been ported to many systems since then, including Microsoft Windows95 and WindowsNT. In 1993, ten years after I wrote PING, the USENIX association presented me with a handsome scroll, pronouncing me a Joint recipient of The USENIX Association 1993 Lifetime Achievement Award presented to the Computer Systems Research Group, University of California at Berkeley 1979-1993. ``Presented to honor profound intellectual achievement and unparalleled service to our Community. At the behest of CSRG principals we hereby recognize the following individuals and organizations as CSRG participants, contributors and supporters.'' Wow! The best ping story I've ever heard was told to me at a USENIX conference, where a network administrator with an intermittent Ethernet had linked the ping program to his vocoder program, in essence writing: ping goodhost | sed -e 's/.*/ping/' | vocoder He wired the vocoder's output into his office stereo and turned up the volume as loud as he could stand. The computer sat there shouting "Ping, ping, ping..." once a second, and he wandered through the building wiggling Ethernet connectors until the sound stopped. And that's how he found the intermittent failure. FreeBSD: /usr/local/lib/libpkg.so.3: Undefined symbol "utimensat" (http://glasz.org/sheeplog/2017/02/freebsd-usrlocalliblibpkgso3-undefined-symbol-utimensat.html) The internet will tell you that, of course, 10.2 is EOL, that packages are being built for 10.3 by now and to better upgrade to the latest version of FreeBSD. While all of this is true and running the latest versions is generally good advise, in most cases it is unfeasible to do an entire OS upgrade just to be able to install a package. Points out the ABI variable being used in /usr/local/etc/pkg/repos/FreeBSD.conf Now, if you have 10.2 installed and 10.3 is the current latest FreeBSD version, this url will point to packages built for 10.3 resulting in the problem that, when running pkg upgrade pkg it'll go ahead and install the latest version of pkg build for 10.3 onto your 10.2 system. Yikes! FreeBSD 10.3 and pkgng broke the ABI by introducing new symbols, like utimensat. The solution: Have a look at the actual repo url http://pkg.FreeBSD.org/FreeBSD:10:amd64… there's repo's for each release! Instead of going through the tedious process of upgrading FreeBSD you just need to Use a repo url that fits your FreeBSD release: Update the package cache: pkg update Downgrade pkgng (in case you accidentally upgraded it already): pkg delete -f pkg pkg install -y pkg Install your package There you go. Don't fret. But upgrade your OS soon ;) Beastie Bits CPU temperature collectd report on NetBSD (https://imil.net/blog/2017/01/22/collectd_NetBSD_temperature/) Booting FreeBSD 11 with NVMe and ZFS on AMD Ryzen (https://www.servethehome.com/booting-freebsd-11-nvme-zfs-amd-ryzen/) BeagleBone Black Tor relay (https://torbsd.github.io/blog.html#busy-bbb) FreeBSD - Disable in-tree GDB by default on x86, mips, and powerpc (https://reviews.freebsd.org/rS317094) CharmBUG April Meetup (https://www.meetup.com/CharmBUG/events/238218742/) The origins of XXX as FIXME (https://www.snellman.net/blog/archive/2017-04-17-xxx-fixme/) *** Feedback/Questions Felis - L2ARC (http://dpaste.com/2APJE4E#wrap) Gabe - FreeBSD Server Install (http://dpaste.com/0BRJJ73#wrap) FEMP Script (http://dpaste.com/05EYNJ4#wrap) Scott - FreeNAS & LAGG (http://dpaste.com/1CV323G#wrap) Marko - Backups (http://dpaste.com/3486VQZ#wrap) ***
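The MAC framework item above mentions PortACL only in passing, so here is a small sketch of that case using the stock mac_portacl(4) module: letting an unprivileged account bind to a reserved port. The uid (80, the www user) and the port are example values, and the rule syntax is the one documented for mac_portacl, not something taken from the episode itself.
```
# Sketch: allow uid 80 (www) to bind to TCP port 80 without root
kldload mac_portacl
sysctl security.mac.portacl.rules="uid:80:tcp:80"
# relax the legacy reserved-port check so the portacl rule is what decides
sysctl net.inet.ip.portrange.reservedlow=0
sysctl net.inet.ip.portrange.reservedhigh=0

# make it persistent across reboots
sysrc -f /boot/loader.conf mac_portacl_load="YES"
cat >> /etc/sysctl.conf <<'EOF'
security.mac.portacl.rules=uid:80:tcp:80
net.inet.ip.portrange.reservedlow=0
net.inet.ip.portrange.reservedhigh=0
EOF
```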

BSD Now
173: Carry on my Wayland son

BSD Now

Play Episode Listen Later Dec 21, 2016 74:38


This week on the show, we've got some great stories to bring you, a look at the odder side of UNIX history This episode was brought to you by Headlines syspatch in testing state (http://marc.info/?l=openbsd-tech&m=148058309126053&w=2) Antoine Jacoutot ajacoutot@ openbsd has posted a call for testing for OpenBSD's new syspatch tool “syspatch(8), a "binary" patch system for -release is now ready for early testing. This does not use binary diffing to update the system, but regular signed tarballs containing the updated files (ala installer).” “I would appreciate feedback on the tool. But please send it directly to me, there's no need to pollute the list. This is obviously WIP and the tool may or may not change in drastic ways.” “These test binary patches are not endorsed by the OpenBSD project and should not be trusted, I am only providing them to get early feedback on the tool. If all goes as planned, I am hoping that syspatch will make it into the 6.1 release; but for it to happen, I need to know how it breaks your systems :-)” Instructions (http://syspatch.openbsd.org/pub/OpenBSD/6.0/syspatch/amd64/README.txt) If you test it, report back and let us know how it went *** Weston working (https://lists.freebsd.org/pipermail/freebsd-current/2016-December/064198.html) Over the past few years we've had some user-interest in the state of Wayland / Weston on FreeBSD. In the past day or so, Johannes Lundberg has sent in a progress report to the FreeBSD mailing lists. Without further ADO: We had some progress with Wayland that we'd like to share. Wayland (v1.12.0) Working Weston (v1.12.0) Working (Porting WIP) Weston-clients (installed with wayland/weston port) Working XWayland (run X11 apps in Wayland compositor) Works (maximized window only) if started manually but not when launching X11 app from Weston. Most likely problem with Weston IPC. Sway (i3-compatible Wayland compositor) Working SDL20 (Wayland backend) games/stonesoup-sdl briefly tested. https://twitter.com/johalun/status/811334203358867456 GDM (with Wayland) Halted - depends on logind. GTK3 gtk3-demo runs fine on Weston (might have to set GDK_BACKEND=wayland first. GTK3 apps working (gedit, gnumeric, xfce4-terminal tested, xfce desktop (4.12) does not yet support GTK3)“ Johannes goes on to give instructions on how / where you can fetch their WiP and do your own testing. At the moment you'll need Matt Macy's newer Intel video work, as well as their ports tree which includes all the necessary software bits. Before anybody asks, yes we are watching this for TrueOS! *** Where the rubber meets the road (part two) (https://functionallyparanoid.com/2016/12/15/where-the-rubber-meets-the-road-part-two/) Continuing with our story from Brian Everly from a week ago, we have an update today on the process to dual-boot OpenBSD with Arch Linux. As we last left off, Arch was up and running on the laptop, but some quirks in the hardware meant OpenBSD would take a bit longer. With those issues resolved and the HD seen again, the next issue that reared its head was OpenBSD not seeing the partition tables on the disk. After much frustration, it was time to nuke and pave, starting with OpenBSD first this time. After a successful GPT partitioning and install of OpenBSD, he went back to installing Arch, and then the story got more interesting. “I installed Arch as I detailed in my last post; however, when I fired up gdisk I got a weird error message: “Warning! Disk size is smaller than the main header indicates! 
Loading secondary header from the last sector of the disk! You should use ‘v' to verify disk integrity, and perhaps options on the expert's menu to repair the disk.” Immediately after this, I saw a second warning: “Caution: Invalid backup GPT header, but valid main header; regenerating backup header from main header.” And, not to be outdone, there was a third: “Warning! Main and backup partition tables differ! Use the ‘c' and ‘e' options on the recovery & transformation menu to examine the two tables.” Finally (not kidding), there was a fourth: “Warning! One or more CRCs don't match. You should repair the disk!” Given all of that, I thought to myself, “This is probably why I couldn't see the disk properly when I partitioned it under Linux on the OpenBSD side. I'll let it repair things and I should be good to go.” I then followed the recommendation and repaired things, using the primary GPT table to recreate the backup one. I then installed Arch and figured I was good to go.“ After confirming through several additional re-installs that the behavior was reproducible, he then decided to go full on crazy,and partition with MBR. That in and of itself was a challenge, since as he mentions, not many people dual-boot OpenBSD with Linux on MBR, especially using luks and lvm! If you want to see the details on how that was done, check it out. The story ends in success though! And better yet: “Now that I have everything working, I'll restore my config and data to Arch, configure OpenBSD the way I like it and get moving. I'll take some time and drop a note on the tech@ mailing list for OpenBSD to see if they can figure out what the GPT problem was I was running into. Hopefully it will make that part of the code stronger to get an edge-case bug report like this.” Take note here, if you run into issues like this with any OS, be sure to document in detail what happened so developers can explore solutions to the issue. *** FreeBSD and ZFS as a time capsule for OS X (https://blog.feld.me/posts/2016/12/using-freebsd-as-a-time-capsule-for-osx/) Do you have any Apple users in your life? Perhaps you run FreeBSD for ZFS somewhere else in the house or office. Well today we have a blog post from Mark Felder which shows how you can use FreeBSD as a time-capsule for your OSX systems. The setup is quite simple, to get started you'll need packages for netatalk3 and avahi-app for service discovery. Next up will be your AFP configuration. He helpfully provides a nice example that you should be able to just cut-n-paste. Be sure to check the hosts allow lines and adjust to fit your network. Also of note will be the backup location and valid users to adjust. A little easier should be the avahi setup, which can be a straight copy-n-paste from the site, which will perform the service advertisements. The final piece is just enabling specific services in /etc/rc.conf and either starting them by hand, or rebooting. At this point your OSX systems should be able to discover the new time-capsule provider on the network and DTRT. 
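Since the post is summarized above without the actual configuration, here is a minimal sketch of what the two host-side pieces tend to look like. Treat the details as assumptions rather than the author's exact files: the share path, user name, network range and the mimic model string are placeholders, and the option names come from the netatalk3 port's afp.conf(5), so compare against the blog post before copying anything.
```
# packages named in the post
pkg install -y netatalk3 avahi-app

# /usr/local/etc/afp.conf -- minimal Time Machine share (all values are placeholders)
cat > /usr/local/etc/afp.conf <<'EOF'
[Global]
  ; advertise as a Time Capsule so Finder shows the right icon
  mimic model = TimeCapsule6,106
  hosts allow = 192.168.1.0/24

[TimeMachine]
  path = /tank/backups/timemachine
  time machine = yes
  valid users = mark
EOF

# the avahi service-advertisement file is a straight copy from the blog post (omitted here)

# enable everything at boot and start it (dbus is pulled in by avahi)
sysrc dbus_enable="YES" avahi_daemon_enable="YES" netatalk_enable="YES"
service dbus start
service avahi-daemon start
service netatalk start
```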
*** News Roundup netbenches - FreeBSD network forwarding performance benchmark results (https://github.com/ocochard/netbenches) Olivier Cochard-Labbé, original creator of FreeNAS, and leader of the BSD Router Project, has a github repo of network benchmarks There are many interesting results, and all of the scripts, documentation, and configuration files to run the tests yourself IPSec Performance on an Atom C2558, 12-head vs IPSec Performance Branch (https://github.com/ocochard/netbenches/tree/master/Atom_C2558_4Cores-Intel_i350/ipsec/results/fbsd12.projects-ipsec.equilibrium) Compared to: Xeon L5630 2.13GHz (https://github.com/ocochard/netbenches/tree/2f3bb1b3c51e454736f1fcc650c3328071834f8d/Xeon_L5630-4Cores-Intel_82599EB/ipsec/results/fbsd11.0) and IPSec with Authentication (https://github.com/ocochard/netbenches/tree/305235114ba8a3748ad9681c629333f87f82613a/Atom_C2558_4Cores-Intel_i350/ipsec.ah/results/fbsd12.projects-ipsec.equilibrium) I look forward to seeing tests on even more hardware, as people with access to different configurations try out these benchmarks *** A tcpdump Tutorial and Primer with Examples (https://danielmiessler.com/study/tcpdump/) Most users will be familiar with the basics of using tcpdump, but this tutorial/primer is likely to fill in a lot of blanks, and advance many users understanding of tcpdump “tcpdump is the premier network analysis tool for information security professionals. Having a solid grasp of this über-powerful application is mandatory for anyone desiring a thorough understanding of TCP/IP. Many prefer to use higher level analysis tools such as Wireshark, but I believe this to usually be a mistake.” tcpdump is an important tool for any system or network administrator, it is not just for security. It is often the best way to figure out why the network is not behaving as expected. “In a discipline so dependent on a true understanding of concepts vs. rote learning, it's important to stay fluent in the underlying mechanics of the TCP/IP suite. A thorough grasp of these protocols allows one to troubleshoot at a level far beyond the average analyst, but mastery of the protocols is only possible through continued exposure to them.” Not just that, but TCP/IP is a very interesting protocol, considering how little it has changed in its 40+ year history “First off, I like to add a few options to the tcpdump command itself, depending on what I'm looking at. The first of these is -n, which requests that names are not resolved, resulting in the IPs themselves always being displayed. The second is -X, which displays both hex and ascii content within the packet.” “It's also important to note that tcpdump only takes the first 96 bytes of data from a packet by default. If you would like to look at more, add the -s number option to the mix, where number is the number of bytes you want to capture. I recommend using 0 (zero) for a snaplength, which gets everything.” The page has a nice table of the most useful options It also has a great primer on doing basic filtering If you are relatively new to using tcpdump, I highly recommend you spend a few minutes reading through this article *** How Unix made it to the top (http://minnie.tuhs.org/pipermail/tuhs/2016-December/007519.html) Doug McIlroy gives us a nice background post on how “Unix made it to the top” It's fairly short / concise, so I felt it would be good to read in its entirety. 
“It has often been told how the Bell Labs law department became the first non-research department to use Unix, displacing a newly acquired stand-alone word-processing system that fell short of the department's hopes because it couldn't number the lines on patent applications, as USPTO required. When Joe Ossanna heard of this, he told them about roff and promised to give it line-numbering capability the next day. They tried it and were hooked. Patent secretaries became remote members of the fellowship of the Unix lab. In due time the law department got its own machine. Less well known is how Unix made it into the head office of AT&T. It seems that the CEO, Charlie Brown, did not like to be seen wearing glasses when he read speeches. Somehow his PR assistant learned of the CAT phototypesetter in the Unix lab and asked whether it might be possible to use it to produce scripts in large type. Of course it was. As connections to the top never hurt, the CEO's office was welcomed as another ouside user. The cost--occasionally having to develop film for the final copy of a speech--was not onerous. Having teethed on speeches, the head office realized that Unix could also be useful for things that didn't need phototypesetting. Other documents began to accumulate in their directory. By the time we became aware of it, the hoard came to include minutes of AT&T board meetings. It didn't seem like a very good idea for us to be keeping records from the inner sanctum of the corporation on a computer where most everybody had super-user privileges. A call to the PR guy convinced him of the wisdom of keeping such things on their own premises. And so the CEO's office bought a Unix system. Just as one hears of cars chosen for their cupholders, so were theseusers converted to Unix for trivial reasons: line numbers and vanity.“ Odd Comments and Strange Doings in Unix (http://orkinos.cmpe.boun.edu.tr/~kosar/odd.html) Everybody loves easter-eggs, and today we have some fun odd ones from the history throughout UNIX told by Dennis Ritchie. First up, was a fun one where the “mv” command could sometimes print the following “values of b may give rise to dom!” “Like most of the messages recorded in these compilations, this one was produced in some situation that we considered unlikely or as result of abuse; the details don't matter. I'm recording why the phrase was selected. The very first use of Unix in the "real business" of Bell Labs was to type and produce patent applications, and for a while in the early 1970s we had three typists busily typing away in the grotty lab on the sixth floor. One day someone came in and observed on the paper sticking out of one of the Teletypes, displayed in magnificent isolation, this ominous phrase: values of b may give rise to dom! It was of course obvious that the typist had interrupted a printout (generating the "!" from the ed editor) and moved up the paper, and that the context must have been something like "varying values of beta may give rise to domain wall movement" or some other fragment of a physically plausible patent application.But the phrase itself was just so striking! Utterly meaningless, but it looks like what... a warning? What is "dom?" At the same time, we were experimenting with text-to-voice software by Doug McIlroy and others, and of course the phrase was tried out with it. For whatever reason, its rendition of "give rise to dom!" accented the last word in a way that emphasized the phonetic similarity between "doom" and the first syllable of "dominance." 
It pronounced "beta" in the British style, "beeta." The entire occurrence became a small, shared treasure.The phrase had to be recorded somewhere, and it was, in the v6 source. Most likely it was Bob Morris who did the deed, but it could just as easily have been Ken. I hope that your browser reproduces the b as a Greek beta.“ Next up is one you might have heard before: /* You are not expected to understand this */> Every now and then on Usenet or elsewhere I run across a reference to a certain comment in the source code of the Sixth Edition Unix operating system. I've even been given two sweatshirts that quote it. Most probably just heard about it, but those who saw it in the flesh either had Sixth Edition Unix (ca. 1975) or read the annotated version of this system by John Lions (which was republished in 1996: ISBN 1-57298-013-7, Peer-to-Peer Communications).It's often quoted as a slur on the quantity or quality of the comments in the Bell Labs research releases of Unix. Not an unfair observation in general, I fear, but in this case unjustified. So we tried to explain what was going on. "You are not expected to understand this" was intended as a remark in the spirit of "This won't be on the exam," rather than as an impudent challenge. There's a few other interesting stories as well, if the odd/fun side of UNIX history at all interests you, I would recommend checking it out. Beastie Bits With patches in review the #FreeBSD base system builds 100% reproducibly (https://twitter.com/ed_maste/status/811289279611682816) BSDCan 2017 Call for Participation (https://www.freebsdfoundation.org/news-and-events/call-for-papers/bsdcan-2017/) ioCell 2.0 released (https://github.com/bartekrutkowski/iocell/releases) who even calls link_ntoa? (http://www.tedunangst.com/flak/post/who-even-calls-link-ntoa) Booting Androidx86 under bhyve (https://twitter.com/pr1ntf/status/809528845673996288) Feedback/Questions Chris - VNET (http://pastebin.com/016BfvU9) Brian - Package Base (http://pastebin.com/8JJeHuRT) Wim - TrueOS Desktop All-n-one (http://pastebin.com/VC0DPQUF) Daniel - Long Boots (http://pastebin.com/q7pFu7pR) Bryan - ZFS / FreeNAS (http://pastebin.com/xgUnbzr7) Bryan - FreeNAS Security (http://pastebin.com/qqCvVTLB) ***

Nación Lumpen
NL11: Las edades del programador

Nación Lumpen

Play Episode Listen Later Oct 23, 2016 83:57


This time we try to give a view of the programming world from the perspective of programmers ranging from their twenties to their fifties. Alex Guerrero, @alexguerrero_ Álvaro Castellanos, @alvarocaste Sebastián Ortega, @_sortega Cesar Jiménez, @eltator Alberto Gómez, @agocorona. Dedicated to Dennis Ritchie on the 5th anniversary of his passing. Read more: http://www.nacionlumpen.com/podcast/2016/10/23/NL11_edades.html

RGBA
24: My Phone Kinda Burns

RGBA

Play Episode Listen Later Oct 21, 2016 27:36


This week we discuss the death of the Note, an App Store soap opera and the second death of a great man. Follow-up Dear Dash Users - Kapeli Blog (https://blog.kapeli.com/dear-dash-users/) Dash and Apple: My Side of the Story - Kapeli Blog (https://blog.kapeli.com/dash-and-apple-my-side-of-the-story/) Dash developer speaks: Here's his full story – iMore (http://www.imore.com/dash-developer-speaks-heres-his-full-story) Show Notes Samsung temporarily halts production of Galaxy Note 7: official (http://english.yonhapnews.co.kr/business/2016/10/10/33/0502000000AEN20161010004100320F.html) Samsung Note 7 Burns in Women's Pocket in Tapei (http://www.appledaily.com.tw/realtimenews/article/local/20161008/964168) Explosion galaxy note7 in the burger king korea - YouTube (https://www.youtube.com/watch?v=PnCGyzUf5rw&) Samsung's Galaxy Note 7 Smartphone Banned From All U.S. Flights - Mac Rumors (http://www.macrumors.com/2016/10/14/samsung-galaxy-note-7-banned-from-flights/) Daring Fireball: 'Packed It With So Much Innovation' (http://daringfireball.net/linked/2016/10/12/packed-with-innovation) https://twitter.com/arter97/status/786002483424272384/ (https://twitter.com/arter97/status/786002483424272384/) Dennis Ritchie, Father of C and Co-Developer of Unix, Dies – WIRED (https://www.wired.com/2011/10/dennis-ritchie/) Apple's redesigned London store is kitted out with untethered iPhones - CNET (https://www.cnet.com/news/apple-unveils-redesigned-london-store/) Apple Offers a Temporary Workaround if the Home Button Fails on an iPhone 7 - Mac Rumors (http://www.macrumors.com/2016/10/15/apple-workaround-home-button-fails-iphone-7/) SIM-Free iPhone 7 and iPhone 7 Plus Now Available From Apple Online Store - Mac Rumors (http://www.macrumors.com/2016/10/13/apple-selling-sim-free-iphone-7/) The new BMW 5 Series Sedan. (https://www.press.bmwgroup.com/global/article/detail/T0264349EN/the-new-bmw-5-series-sedan) Amazon Echo (https://www.amazon.com/Amazon-Echo-Bluetooth-Speaker-with-WiFi-Alexa/dp/B00X4WHP5E/ref=as_li_ss_tl?ie=UTF8&qid=1477079848&sr=8-2&keywords=echo&linkCode=ll1&tag=hpstrpxl-us-20&linkId=2755f3997247917560fe82d266eba707) Amazon.com: Echo: Digital Music (https://www.amazon.com/b/ref=as_li_ss_tl?node=15451028011&linkCode=ll2&tag=hpstrpxl-us-20&linkId=14d41dd9c2e2c4fac98265055541285c) -- Awesome theme song by Jim Kulakowski (http://jimkulakowski.com/) Feedback, comments very welcomed! http://rgba.fm/contact (http://rgba.fm/contact/).

Kodsnack in English
Kodsnack 166 - On the periphery of the monolith

Kodsnack in English

Play Episode Listen Later Jul 26, 2016 32:32


Fredrik talks to James Turnbull of Kickstarter, Docker and several other companies. Topics range from switching between types of companies and solutions to writing books, documentation and contributing to software in ways other than code. Of course, we also discuss Docker, whether it’s succeeded in various ways and where it might be going. Who should be thinking about Docker? How to start thinking about it? Where do you start picking on your monolith to start bringing it into the container future? This episode was recorded during the developer conference Øredev 2015, where James gave a presentation on Orchestrating Docker. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write something longer. We read everything you send. If you like Kodsnack we would love a review in iTunes! Links James Turnbull Docker - the company and the software solution Puppet labs Public-benefit corporation Immutable infrastructure Docker swarm Docker compose Kubernetes Mesos Mesosphere Elasticsearch Memcached Redis Amazon cloudformation James' books William Gibson Dennis Ritchie Daniel Friedman Vagrant Jekyll Titles A lot of similar parallels The unit of the container A unit of compute I want my code to run somewhere where it makes me money A new way of thinking about architecture On the periphery of the monolith Useful information trapped in the heads of smart people My commits tend to be more documentation than code Aspects of being an engineer A higher level of tolerance and precision

Teahour
#81 - 和微软的爱恨情仇

Teahour

Play Episode Listen Later Nov 15, 2015 81:53


This episode is sponsored by 思客教学, which focuses on remote, apprenticeship-style IT education. Terry hosts this episode and interviews 过纯中 about his love-hate relationship with Microsoft and how he does "open source" development with Windows as his desktop. Visual Basic Silverlight WPF RIA Jon on Software EJB J2EE Development without EJB ADO.NET Ubuntu Django ASP.NET MVC UNIX is very simple, it just needs a genius to understand its simplicity. Rich Hickey Simplicity Matters Simple Made Easy Agile Web Development With Rails Sublime Mosh Quora Ruby社区应该去Rails化了 Cuba Express Aaron Patterson Journey active_model_serializers windows PR AppVeyor Lotus Trailblazer Rails Engine Concern SPA react-rails ECMAScript 6 Ember React Angular Vue Yehuda Bower webpack React Hot Loader Flux redux alt TypeScript Anders Hejlsberg CoffeeScript Haml Slim Been Hero EventMachine Basecamp 3 wechat gem state_machine state_machines-graphviz aasm Edsger React Native 轮子哥 Special Guest: 过纯中.

Minister's Toolbox
EP 06: The Journey of a Church Planter

Minister's Toolbox

Play Episode Listen Later Aug 24, 2015 24:27


Dennis Ritchie, pastor of Oasis Church in Cheshire, CT, shares his journey from a life out of control to church volunteer to worship leader to pastor. Dennis' testimony is an encouragement to anyone who thinks that the events of their life are arbitrary. To contact Dennis or attend his church if you are in Connecticut, go to their website: celebratethejourney.org

Sophos Podcasts
Sophos Security Chet Chat - Episode 75 - October 14, 2011

Sophos Podcasts

Play Episode Listen Later Oct 26, 2013 17:53


John Shier joined Chet this week as they discussed the death of UNIX and C co-creator Dennis Ritchie, the Virus Bulletin 2011 conference, Apple's release of iOS 5 and OS X 10.7.2, Microsoft Patch Tuesday, and the German R2D2 Trojan.

TechStuff
Spotlight on Dennis Ritchie

TechStuff

Play Episode Listen Later Jul 30, 2012 37:31


Who was Dennis Ritchie? Why did Ritchie create the C programming language? What is the story of Ritchie’s involvement with UNIX? In this episode, Jonathan and Chris delve into the life and work of Dennis Ritchie. Learn more about your ad-choices at https://news.iheart.com/podcast-advertisers

"El Explicador" 2012 07 24 Martes La Revolución Informática y Dennis Ritchie

"El Explicador"

Play Episode Listen Later Jul 25, 2012 61:25


Enrique Ganem. The history of electronic computers. The beginnings of the UNIX environment. Dennis Ritchie's role in the development of two fundamental ideas in computing: UNIX and the C language. The reach of UNIX and C in the modern world. All this and more. Contact: elexplicador@yahoo.com.mx, Facebook: Enrique Ganem Sitio Oficial and Twitter: @ENRIQUE_GANEM. Thank you!

The Paunch Stevenson Show
Ep 182 11/19/11

The Paunch Stevenson Show

Play Episode Listen Later Nov 19, 2011 77:30


Celebrating 6 Years of the Paunch Stevenson Show (Part 1)! In this episode: Sylvester Stallone's $5,000 Montegrappa Chaos pen, American cheese, chain letters, the Joke Society of America, how Greg and Rob met 25 years ago, middle school madness, celebrity deaths (Steve Jobs, co-founder of Apple, and Dennis Ritchie, creator of Unix), Steve Jobs' insanity, TNT's Pirates of Silicon Valley (1999) TV movie, Walter Isaacson's Steve Jobs biography, Pixar and the "violent Toy Story cut", Adobe Flash fails on mobile devices, other celebrity deaths (Andy Rooney, Al Davis), Steven Tyler falls again and loses a tooth, John Lennon's tooth sold for $31,000, cloning the Beatles, Chris Tucker goes bankrupt, Paunch luck (Rob misses the Ghostbusters theater re-release), losing power in the freak October snow storm, Greg meets Jerry O'Connell at the Live with Regis and Kelly show in NYC, Bill Cosby gets flashed by a topless woman in New York, and Greg meets John Lithgow at Barnes and Noble in Princeton, NJ. 77.5 minutes - http://www.paunchstevenson.com

Developology - by developers for developers, from all fields and knowledge levels

Dennis Ritchie dies, Oracle kills the BEAST, Android developer conference, Flash for mobile slashed. Memento Pattern. We argue–err–discuss the differences between our preferred IDEs, compare features, what should be included in the “perfect IDE.”

John Wants Answers
Dennis Ritchie and Occupy Wall Street

John Wants Answers

Play Episode Listen Later Nov 11, 2011 29:30


Topics are Dennis Ritchie and Occupy Wall Street. Guest Keith Stattenfield. Musical guest Casey Heney.

Outriders
Lives remembered and mathematical music

Outriders

Play Episode Listen Later Nov 3, 2011 24:49


This week Jamillah and Rhod talk about the lives of Dennis Ritchie and John McCarthy and together they find out how beautiful and musical mathematics can be.

Boys of Tech
Boys of Tech 139: Double trouble

Boys of Tech

Play Episode Listen Later Oct 23, 2011 38:28


The new Wiredoo search engine, Dennis Ritchie passes away, a throw-in-the-air 360-degree panoramic camera, government use of spyware, Bangladesh's low-cost Doel laptops, Myki cards hacked, the Sony PSN hacked again, Apple wins case preventing the Australian launch of the Samsung Galaxy Tab 10.1, leaked Google memo suggests Google+ may be struggling, ethical hacker surprised at legal threats, 10 trillion digits of pi smashes previous world record, New Zealand survey shows Internet performance is better at home than at work, New Zealand to attempt the world land speed record.

Tech45
Tech45 - 078 - Schuifdeuren vs. klapdeuren

Tech45

Play Episode Listen Later Oct 19, 2011 67:20


Nominations for the European Podcast Awards have opened again! You can vote for Tech45 via this link! You can vote every day. And us? We obviously appreciate that enormously! Host: Maarten Hendrikx, @maartenhendrikx on Twitter or via his website. Panel: Davy Buntinx, @dbuntinx on Twitter, or via his website. Cindy de Smet, @drsmetty on Twitter. Marco Frissen, @mfrissen on Twitter, or via his website. Stefaan Lesage, @stefaanlesage on Twitter or via the Devia website. Guest: Jo Hendriks, @knightwise on Twitter, or via his website. Topics: Davy bought an iPhone 4S and talks about his experiences. We also dig into iOS 5 for a bit. Marco talks about the Nikon AW100 camera he tested, and also the Kingston Wi-Drive. Google does some housecleaning. Dennis Ritchie, the father of C and Unix, is no longer with us. Tips: Stefaan: Maarten: Scribblenauts Remix. Davy: Stephen Hawking gives a lecture at KU Leuven! Cindy: Fast Moving Targets. Marco: Wolfram Alpha and its companion iOS apps. Feedback: The Tech45 team appreciates all feedback that is sent in. If you have remarks, reactions or suggestions, leave a comment below. Twitter works too, of course: @tech45cast. Audio reactions in .mp3 format are always welcome. Items for the next episode can be tweeted with the hashtag '#tech45'. And don't forget that you can join the conversation 'live' via live.tech45.eu on Tuesday, October 25 from 21:30. You can download this episode of the podcast via this link, listen to it directly via the player below, or simply subscribe for free via iTunes.

Ca va trancher
Ca va trancher 37

Ca va trancher

Play Episode Listen Later Oct 19, 2011 79:35


37th installment of Ça va trancher, with two guests this week: Emilie and Régis from the podcast XY Mag'. A big thank you to them for joining us in this madness! This week we talk as much about missing tails as about Nolwenn Leroy. But rest assured, we do linger a bit on the return of the return of the return of Back to the Future, Dennis Ritchie "Mr. C", Uncharted and Warcraft, to name just a few. All of it seasoned with the usual good humor, gutter-level jokes and the musical turkeys of the week. For the musical turkeys: http://youtu.be/9lPqJGLRpdA http://youtu.be/ZlVUXLBJg14 http://nemotaku.fr/journalismetotal/

Ca va trancher
Ca va trancher 37 (80min)

Ca va trancher

Play Episode Listen Later Oct 18, 2011 80:14


37th installment of Ça va trancher, with two guests this week: Emilie and Régis from the podcast XY Mag'. A big thank you to them for joining us in this madness! This week we talk as much about missing tails as about Nolwenn Leroy. But rest assured, we do linger a bit on the return of the return of the return of Back to the Future, Dennis Ritchie "Mr. C", Uncharted and Warcraft, to name just a few. All of it seasoned with the usual good humor, gutter-level jokes and the musical turkeys of the week. For the musical turkeys: http://youtu.be/9lPqJGLRpdA http://youtu.be/ZlVUXLBJg14 http://nemotaku.fr/journalismetotal/

NerdCast
NerdCast 281 - Superstições, mandingas e farofa

NerdCast

Play Episode Listen Later Oct 14, 2011 84:51


Lambda lambda lambda! Today Alottoni, Sra Jovem Nerd, Amigo Imaginário, Portuguesa and Azaghal tell the craziest tales of FOLK BELIEFS AND SUPERSTITIONS! In this podcast: learn what to do before boarding a plane, never stay in debt to Iemanjá, watch out for your grandmother's prayers, refuse your girlfriend's coffee and don't forget the gypsy's gift! Running time: 84 min NERDOFFICE S02E34 (World War II on Twitter and AVENGERS!) DISCUSSED IN THE EMAIL READING Conspiracy theory linking Steve Jobs, Apple, Assassin's Creed and the Templars iPad in The Incredibles Bill Gates acknowledging the value of the Macintosh Comparison between Steve Jobs and Silvio Santos Tumblr iWrong Apple Evolution Comic strips on the death of Steve Jobs, by André Farias (strip 1 | strip 2) The passing of Dennis Ritchie, creator of the C programming language Text by Diego Klautau in tribute to Jovem Nerd Illustrations: Steve Jobs, by Gustavo Higashi Thanks Steve, by Airton Portugal Strip on the value of Steve Jobs, by Sandro Hojo Poster of great men with great ideas, by Fabiano Hikaru Poster for the film "Fired in the Elevator", by Luciano Abrahão Azaghâl's thought on NerdOffice, by Adalberto Leonel "What if the Bluehand Protocol had failed... in space", by Lincoln de Souza Charlie Schulz's farewell strip Steve Jobs patents the number 280 EMAILS Send your criticism, praise, suggestions and kicks to nerdcast@jovemnerd.com.br

Mangocast
Mangocast - Rapidito Digital 002

Mangocast

Play Episode Listen Later Oct 13, 2011 19:16


The Retrobits Podcast
Show 104: The Unix Operating System

The Retrobits Podcast

Play Episode Listen Later Nov 30, 2007 32:13


cat retrobits.txt > /dev/listener
Welcome to Show #104!  This week's topic: The Unix Operating System! RetroGamer Magazine caters to the vintage gamer - have a look at their web site! A Computer Called LEO - a business thinking about automation - in the 1930s?  Here's an Amazon link for the book... Virtual II, an Apple II/II+/IIe emulator for Mac OS, has just released a new version! Haven't heard about the SCO/Linux controversy?  You can read about it at Wikipedia... Wikipedia also has a very comprehensive article on Unix itself! Check out the home page of one of Unix's creators - Dennis Ritchie... And here is the promised link on the Unix timeline! Be sure to send any comments, questions or feedback to retrobits@gmail.com. For online discussions on Retrobits Podcast topics, check out the Retrobits Podcast forum on the PETSCII Forums page! Our Theme Song is "Sweet" from the "Re-Think" album by Galigan. Thanks for listening! - Earl This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 License.

The Retrobits Podcast
Show 065: The C Programming Language, Part II

The Retrobits Podcast

Play Episode Listen Later Nov 13, 2006 38:15


Floating point numbers keep it real.
Welcome to Show 065!  This week's Topic: The C Programming Language, Part II! Topics and links discussed in the podcast... The World of Commodore 2006 is coming on Dec 2, 2006 in Toronto, Ontario, Canada!  Check out the info at the TPUG web site. If you want to try C/C++ (or C#, or VB.NET), have a look at the free Visual Studio 2005 Express Editions from Microsoft. Another great option for C/C++ on Windows and DOS is OpenWatcom! If C for the 6502 is your goal, then cc65 has just the compiler for you... Specifically for the Commodore 64 and 128, here's some information on a popular compiler, Power C! Brian Kernighan, co-author of The C Programming Language, has a website here.  You can read about him on Wikipedia here... Dennis Ritchie, creator of C and co-author of The C Programming Language, has a website here, and Wikipedia info here... The GE-600 (otherwise known as Honeywell 6000) with the funky 36 bit C variables?  Read about it at Wikipedia here... Finally, some history of the evolution of the C language can be found at faqs.org! Be sure to send any comments, questions or feedback to retrobits@gmail.com. For online discussions on Retrobits Podcast topics, check out the Retrobits Podcast forum on the PETSCII Forums page! Our Theme Song is "Sweet" from the "Re-Think" album by Galigan. Thanks for listening! - Earl

The Retrobits Podcast
Show 064: The C Programming Language

The Retrobits Podcast

Play Episode Listen Later Nov 6, 2006 31:09


Integers just don't get the point.
Welcome to Show 064!  This week's Topic: The C Programming Language! Topics and links discussed in the podcast... The October edition of the 1 MHz podcast is out.  Don't miss it! Wikipedia, as usual, a good place to start - this time, on the C Programming Language. Here's an article written by Dennis Ritchie on the history of C.  Nothing like getting your history straight from the source! Here's some info on the book: The C Programming Language - note the various human languages it's been translated into... BDS C for CP/M - for free!  What a deal. Turbo C 2.01 for DOS, also free, courtesy of the Borland Antique Software collection.  Thanks Borland! Be sure to send any comments, questions or feedback to retrobits@gmail.com. For online discussions on Retrobits Podcast topics, check out the Retrobits Podcast forum on the PETSCII Forums page! Our Theme Song is "Sweet" from the "Re-Think" album by Galigan. Thanks for listening! - Earl