Podcasts about DEC PDP

  • 12 podcasts
  • 23 episodes
  • 36m average duration
  • Infrequent episodes
  • Latest: Dec 28, 2024

POPULARITY (2017–2024 chart)


Best podcasts about DEC PDP

Latest podcast episodes about DEC PDP

TheOxfordAmbientCollective
Computer Echoes:0111(^) DEC PDP 8

TheOxfordAmbientCollective

Dec 28, 2024 · 5:38


Computer Echoes:0111(^) DEC PDP 8 by TheOxfordAmbientCollective

The Holmes Archive of Electronic Music

Episode 68: Numbers

Playlist:

  • John Cage, “49 Waltzes For The Five Boroughs” from The Waltz Project (17 Contemporary Waltzes For Piano) (1981 Nonesuch). Piano: Alan Feinberg, Robert Moran, Yvar Mikhashoff. Cage worked by using chance operations to make decisions about key aspects of his works. So, by the nature of his method, he worked strictly by the numbers. But the choices become multifaceted when you consider how he applied these random choices to the matrix of sound sources available for a given piece. “49 Waltzes for the Five Boroughs for performer(s) or listener(s) or record maker(s)” is a case in point. Think of the numbers. The work translates a graphic map containing 147 New York street addresses or locations arranged through chance operations into 49 groups of three (consisting of five players each). Cage used “hundreds of coin tosses and the I Ching” to arrive at a “tapestry” of sound, combining hundreds of traditional waltz fragments. First realized by Cage in 1977, the recorded version heard here uses three pianists playing the waltzes plus other ancillary sound-making devices plus pre-recorded environmental tapes made in various parts of New York. 5:15
  • Timothy Sullivan, “Numbers, Names” from Computer Music From Colgate Volume 1 (1980 Redwood Records). Computer composition by Timothy Sullivan; percussion: Frank Bennett. Created at the Colgate Computer Music Studio at the University Computer Center using a DEC PDP-10 with an on-line interactive system and a four-channel digital-to-analog converter designed and built by Joseph Zingeim. 12:28
  • Philip S. Gross, excerpts from The International Morse Code: A Teaching Record Using The Audio-Vis-Tac Method (1962 Folkways). Including instructions and drills from the tracks “Numbers And The Alphabet,” “Learning The Numbers,” and “Numbers.” 2:23
  • Kraftwerk, “Nummern (Numbers)” from Live - Paris '76 & Utrecht '81 (2019 Radio Looploop). An unofficial release of a live performance in Utrecht, 1981. 3:37
  • 107-34-8933 (Nik Raicevic), “Cannabis Sativa” from Numbers (1970 Narco). Self-released album prior to this record being issued by Buddha in the same year as the album Head. Recorded at Gold Star Studio in Hollywood, where the Moog Modular Synthesizer was played by “107-34-8933,” aka Nik Raicevic. From the liner notes: “What is the sound of tomorrow? The sound of notes or the sound of numbers?” 17:55
  • The Conet Project, “Recordings Of Shortwave Numbers Stations” (1997 Irdial Discs). The original 1997 release reports the following at the end of page 15 of the booklet: "A complete set of recordings of all known Morse stations will also be posted in the fourth quarter of 1997". I don't think that release ever appeared. The track included here is my edit of excerpted examples from the four-CD collection of numbers stations recordings from around the globe. 7:12
  • Thom Holmes, “Numbers” from Intervals (2017 Wave Magnet). A composition using recordings of numbers stations as the primary source, combined with audio processing and synthesizers. 5:57

Background music: numbers stations remix (Holmes) based on tracks found on “Recordings Of Shortwave Numbers Stations” by The Conet Project (1997 Irdial Discs).

Opening and closing sequences voiced by Anne Benkovitz. Additional opening, closing, and other incidental music by Thom Holmes. For additional notes, please see my blog, Noise and Notations.

The History of Computing
Chess Throughout The History Of Computers

The History of Computing

Sep 16, 2021 · 12:58


Chess is a game that came out of 7th century India, originally called chaturanga. It evolved over time, perfecting the rules - and spread to the Persians from there. It then followed the Moorish conquerors from Northern Africa to Spain and from there spread through Europe. It also spread from there up into Russia and across the Silk Road to China. It has had many rule changes over the centuries but few variations since computers learned to play the game. Thus, computers learning chess is a pivotal time in the history of the game. Part of chess is thinking through every possible move on the board and planning a strategy. Based on the move of each player, we can review the board, compare the moves to known strategies, and base our next move on either blocking the strategy of our opponent or carrying out a strategy of our own to get a king into checkmate. An important moment in the history of computers is when computers got to the point that they could beat a chess grandmaster. That story goes back to an inspiration from the 1760s, when Wolfgang von Kempelen built a machine called The Turk to impress Austrian Empress Maria Theresa. The Turk was a mechanical chess-playing robot with a Turkish head in Ottoman robes that moved pieces. The Turk was a maze of cogs and wheels and moved the pieces during play. It travelled through Europe, beating the great Napoleon Bonaparte, and then the young United States, also besting Benjamin Franklin. It had many owners and they all kept the secret of the Turk. Countless thinkers wrote theories about how it worked, including Edgar Allan Poe. But eventually it was consumed by fire and the last owner told the secret. There had been a person in the box moving the pieces the whole time. All those moving parts were an illusion. And still, in 1868 a knockoff of a knockoff called Ajeeb was built by a cabinet maker named Charles Hooper. Again, people like Theodore Roosevelt and Harry Houdini were bested, along with thousands of onlookers. Charles Gumpel built another in 1876 - this time going from a person hiding in a box to using a remote control. These machines inspired people to think about what was possible. And one of those people was Leonardo Torres y Quevedo, who built a board that also had electromagnets move pieces and light bulbs to let you know when the king was in check or mate. Like all good computer games it also had sound. He started the project in 1910 and by 1914 it could play a king and rook endgame, or a game where there are two kings and a rook and the party with the rook tries to get the other king into checkmate. At the time even a simplified set of instructions was revolutionary, and he showed his invention off in Paris, where other notable thinkers were gathered at a conference, including Norbert Wiener, who later described how minimax search could be used to play chess in his book Cybernetics. Quevedo had built an analytical machine based on Babbage's works in 1920, adding electromagnets for memory, and would continue building mechanical or analog calculating machines throughout his career. Mikhail Botvinnik was 9 at that point, and the Russian revolution wound down in 1923 when the Soviet Union was founded following the fall of the Romanovs. He would become the first Russian Grandmaster in 1950, in the early days of the Cold War. 
That was the same year Claude Shannon wrote his seminal work, “Programming a Computer for Playing Chess.” The next year Alan Turing actually did publish executable code to play on a Ferranti Mark I but sadly never got to see it completed before his death. The prize for actually playing a game would go to Paul Stein and Mark Wells in 1956, working on the MANIAC. Due to the capacity of computers at the time, the board was smaller, but the computer beat an actual human. But the Russians were really into chess in the years that followed the crowning of their first grandmaster. In fact it became a sign of the superior Communist politic. Botvinnik also happened to be interested in electronics, and went to school in Leningrad University's Mathematics Department. He wanted to teach computers to play a full game of chess. He focused on selective searches, which never got too far as the Soviet machines of the era weren't that powerful. Still, the BESM managed to ship a working computer that could play a full game in 1957. Meanwhile John McCarthy at MIT introduced the idea of an alpha-beta search algorithm to minimize the number of nodes to be traversed in a search (a small sketch of the idea follows this episode's notes), and he and Alan Kotok shipped A Chess Playing Program for the IBM 7090 Computer, which would be updated by Richard Greenblatt when moving from the IBM mainframes to a DEC PDP-6 in 1965, as a side project for his work on Project MAC while at MIT. Here we see two things happening. One, we are building better and better search algorithms to allow computers to think more moves ahead in smarter ways. The other thing happening was that computers were getting better. Faster certainly, but with more space to work with in memory, and with the move to a PDP, truly interactive rather than batch processed. Mac Hack VI, as Greenblatt's program would eventually be called, added transposition tables to store previously searched positions and their outcomes. He tuned the algorithms, what we would call machine learning today, and in 1967 it became the first computer program to defeat a person at the tournament level and get a chess rating. For his work, Greenblatt would become an honorary member of the US Chess Federation. By 1970 there were enough computers playing chess to have the North American Computer Chess Championships, and colleges around the world started holding competitions. By 1971 Ken Thompson of Bell Labs, in a sign of the times, wrote a computer chess game for Unix. And within just 5 years we got the first chess game for the personal computer, called Microchess. From there computers got incrementally better at playing chess. Computer games that played chess shipped to regular humans: dedicated physical games, little cheap electronics knockoffs. By the 80s regular old computers could evaluate thousands of moves. Ken Thompson kept at it, developing Belle from 1972, and it continued on to 1983. He and others added move generators, special circuits, dedicated memory for the transposition table, and refined the alpha-beta algorithm started by McCarthy, getting to the point where it could evaluate nearly 200,000 moves a second. He even got the computer to the rank of master, but the gains became much more incremental. And then came IBM to the party. Deep Blue began with researcher Feng-hsiung Hsu, as a project called ChipTest at Carnegie Mellon University. IBM Research asked Hsu and Thomas Anantharaman to complete a project they started to build a computer program that could take out a world champion. He started with Thompson's Belle. 
But with IBM's backing he had all the memory and CPU power he could ask for. Arthur Hoane and Murray Campbell joined, and Jerry Brody from IBM led the team to sprint towards taking their device, Deep Thought, to a match where reigning World Champion Garry Kasparov beat the machine in 1989. They went back to work and built Deep Blue, which beat Kasparov on their third attempt in 1997. Deep Blue was composed of 32 RS/6000s running 200 MHz chips, split across two racks, and running IBM AIX - with a whopping 11.38 gigaflops of speed. And chess can be pretty much unbeatable today on an M1 MacBook Air, which comes pretty darn close to running at a teraflop. Chess gives us an unobstructed view of the emergence of computing in an almost linear fashion. From the human-powered codification of the electromechanical foundations of the industry, to the emergence of computational thinking with Shannon and cybernetics, to MIT on IBM servers when Artificial Intelligence was young, to Project MAC with Greenblatt, to Bell Labs with a front seat view of Unix, to college competitions, to racks of IBM servers. It even has little misdirections, with pre-World War II research from Konrad Zuse, who wrote chess algorithms. And the mechanical Turk concept even lives on with Amazon's Mechanical Turk service, where we can hire people to do things that are still easier for humans than machines.
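The minimax search Wiener described and the alpha-beta pruning McCarthy introduced, both mentioned above, fit in a few lines of code. Here is a minimal sketch in Python; evaluate(), legal_moves(), and apply_move() are hypothetical helpers standing in for a real engine's move generation and scoring, not anything from the programs discussed in the episode.

    # Alpha-beta pruned minimax, a sketch of the search idea described above.
    # evaluate(), legal_moves(), and apply_move() are assumed helper functions.
    def alphabeta(position, depth, alpha, beta, maximizing):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)          # static score of the position
        if maximizing:
            best = float("-inf")
            for move in moves:
                score = alphabeta(apply_move(position, move),
                                  depth - 1, alpha, beta, False)
                best = max(best, score)
                alpha = max(alpha, best)
                if alpha >= beta:              # opponent would avoid this line
                    break                      # prune the remaining moves
            return best
        best = float("inf")
        for move in moves:
            score = alphabeta(apply_move(position, move),
                              depth - 1, alpha, beta, True)
            best = min(best, score)
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

A transposition table of the sort Greenblatt added would then be a cache keyed on the position, consulted before recursing so positions already searched are not evaluated again.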

The History of Computing
Origins of the Modern Patent And Copyright Systems

The History of Computing

Jun 7, 2021 · 17:03


Once upon a time, the right to copy text wasn't really necessary. If one had a book, one could copy the contents of the book by hiring scribes to labor away at the process, and books were expensive. Then came the printing press. Now, the printer of a work would put a book out and another printer could set their press up to reproduce the same text. More people learned to read and information flowed from the presses at the fastest pace in history. The printing press spread from Gutenberg's workshop in the 1440s throughout Germany and then to the rest of Europe, appearing in England when William Caxton built the first press there in 1476. It was a time of great change, causing England to retreat into protectionism, and Henry VIII tried to restrict what could be printed in the 1500s. But Parliament needed to legislate further. England was first to establish copyright when Parliament passed the Licensing of the Press Act in 1662, which regulated what could be printed. This was more to prevent printing scandalous materials and basically gave a monopoly to The Stationers' Company to register, print, copy, and publish books. They could enter another printer's shop and destroy their presses. That went on for a few decades until the act was allowed to lapse in 1694, but it began the 350-year journey of refining what copyright and censorship mean to a modern society. The next big step came in England when the Statute of Anne was passed in 1710. It was named for the last reigning Queen of the House of Stuart. While previously a publisher could appeal to have copies made by others suppressed because the publisher had created the work, this statute took a page out of the patent laws and granted a right of protection against copying a work for 14 years. Reading through the law and further amendments it is clear that lawmakers were thinking far more deeply about the balance between protecting the license holder of a work and how to get more books to more people. They'd clearly become less protectionist and more concerned about a literate society. There are examples in history of granting exclusive rights to an invention, from the Greeks to the Romans to Papal Bulls. These granted land titles, various rights, or a status to people. Edward the Confessor started the process of establishing the Close Rolls in England in the 1050s, where a central copy of all those grants was kept. But they could also be used to grant a monopoly, with the first that's been found being granted by Edward III to John Kempe of Flanders as a means of helping the cloth industry in England to flourish. Still, this wasn't exactly an exclusive right but instead a right to emigrate. And the letters were personal, and so letters patent evolved into royal grants, which Queen Elizabeth was providing in the late 1500s. That emerged out of the need for patent laws proven by the Venetians in the late 1400s, when they started granting exclusive rights by law to inventions for 10 years. King Henry II of France established a royal patent system in France, and over time the French Academy of Sciences was put in charge of patent right review. English law evolved, and perpetual patents granted by monarchs were stifling progress. Monarchs might grant patents to raise money and so allow a specific industry to turn into a monopoly to raise funds for the royal family. James I was forced to revoke the previous patents, but a system was needed. 
And so the patent system was more formalized, and patents for inventions got limited to 14 years when the Statute of Monopolies was passed in England in 1624. The evolution over the next few decades is when we started seeing drawings added to patent requests and sometimes even required. We saw forks in industries and so the addition of medical patents, and an explosion in various types of patents requested. They weren't just in England. The mid-1600s saw the British Colonies issuing their own patents. Patent law was evolving outside of England as well. The French system was becoming larger with more discoveries. By 1729 there were digests of patents being printed in Paris, and we still keep open listings of them so they're easily proven in court. The maturation of the Age of Enlightenment clashed with the financial protectionism of patent laws, and intellectual property as a concept emerged, borrowing from the patent institutions and bringing us right back to the Statute of Anne, which established the modern copyright system. That and the Statute of Monopolies are where the British Empire established the modern copyright and patent systems respectively, which we use globally today. Apparently they were worth keeping throughout the Age of Revolution, mostly probably because they'd long been removed from monarchal control and handed to various public institutions. The American Revolution came and went. The French Revolution came and went. The Latin American wars of independence, revolutions throughout the 1820s, the end of feudalism, Napoleon. But the wars settled down and a world order of sorts came during the late 1800s. One aspect of that world order was the Berne Convention, which was signed in 1886. This established the mutual recognition of copyrights among sovereign nations that signed onto the treaty, rather than having various nations enter into pacts between one another. Now, the right to copy works was automatically in force at creation, so authors no longer had to register their mark in Berne Convention countries. Following the Age of Revolutions, there was also an explosion of inventions around the world. Some ended up putting copyrighted materials onto reproducible forms. Early data storage. Previously we could copyright sheet music, but the introduction of the player piano led to the need to determine the copyrightability of piano rolls in White-Smith Music v. Apollo in 1908. Here we saw the US Supreme Court find that these were not copies as interpreted in the US Copyright Act because only a machine could read them, and they basically told Congress to change the law. So Congress did. The Copyright Act of 1909 then specified that even if only a machine can use information that's protected by copyright, the copyright protection remains. And so things sat for a hot minute as we learned first mechanical computing, which is patentable under the old rules, and then electronic computing, which was also patentable. Jacquard patented his punch cards in 1801. But by the time Babbage and Lovelace used them in his engines that patent had expired. And the first digital computer to get a patent was the Eckert-Mauchly ENIAC, which was filed in 1947, granted in 1964, and, because there was prior unpatented work, overturned in 1973. Dynamic RAM was patented in 1968. But these were for physical inventions. Software took a little longer to become a legitimate legal quandary. 
The time it took to reproduce punch cards and the lack of really mass-produced software didn't become an issue until after the advent of transistorized computers with Whirlwind, the DEC PDP, and the IBM S/360. Inventions didn't need a lot of protections when they were complicated and it took years to build one. I doubt the inventor of the Antikythera Device in Ancient Greece thought to protect their intellectual property, because they'd likely have been delighted if anyone else in the world had thought to or been capable of creating what they created. Over time, the capabilities of others rise and our intellectual property becomes more valuable because progress moves faster with each generation. Those Venetians saw how technology and automation were changing the world and allowed the protection of inventions to provide a financial incentive to invent. Licensing the commercialization of inventions then allows us to begin the slow process of putting ideas on a commercialization assembly line. Books didn't need copyright until they could be mass produced and were commercially viable. That came with mass production. A writer writes, or creates intellectual property, and a publisher prints and distributes. Thus we put the commercialization of literature and thoughts and ideas on an assembly line. And we began doing so far before the Industrial Revolution. Once there were more inventions, and some became capable of mass producing the registered intellectual property of others, we saw a clash between copyrights and patents. And so we got the Copyright Act of 1909. But with digital computers we suddenly had software emerging as an entire industry. IBM had customized software for customers for decades, but computer languages like FORTRAN and mass storage devices that could be moved between computers allowed software to be moved between computers, and sometimes entire segments of business logic moved between companies based on that software. By the 1960s, companies were marketing computer programs as a cottage industry. The first computer program was deposited at the US Copyright Office in 1961. It was a simple thing. A tape with a computer program that had been filed by North American Aviation. Imagine the examiners looking at it with their heads cocked to the side a bit. “What do we do with this?” They hadn't even figured it out when they got three more from General Dynamics and two more programs showed up from a student at Columbia Law. A punched tape held a bunch of punched cards. A magnetic tape just held more punched tape that went faster. This was pretty much what those piano rolls from the 1909 law had on them. Registration was added for all five in 1964. And thus software copyright was born. But of course it wasn't just a metallic roll that had impressions for when a player piano struck a hammer. If someone found a roll on the ground, they could put it into another piano and hit play. But the likelihood that they could reproduce the piano roll was low. The ability to reproduce punch cards had been there. But while it likely didn't take the same amount of time it took to reproduce a copy of Plato's Republic before the advent of the printing press, the occurrences weren't frequent enough to create a likely need for adjudication. That changed with high-speed punch devices and then the ability to copy magnetic tape. 
Contracts (which we might think of as EULAs today, in a way) provided a license for a company to use software, but new questions were starting to form around who was bound to the contract and how protection was extended based on a number of factors. Thus the LA, or License Agreement, part of EULA, rather than just a contract when buying a piece of software. And this brings us to the forming of the modern software legal system. That's almost a longer story than the written history we have of early intellectual property law, so we'll pick that up in the next episode of the podcast!

Causality

The successor to the Therac-6 and Therac-20 radiotherapy machines would integrate the powerful DEC PDP-11 minicomputer to control all of the Therac-25's functions, including the safety interlocks, for the first time. In two years the 11 machines in service would overdose six people across two countries, killing three of them before they figured out why. With John Chidgey.

This show is Podcasting 2.0 Enhanced.

Reports into the Incidents: An Investigation of the Therac-25 Accidents · Medical Devices: The Therac-25 · The Therac-25: 30 Years Later · A Usage-Model Based Approach to Test Therac-25 · Good Computing: A Virtue Approach To Computer Ethics: Chapter 6

Links of Potential Interest: Therac-25 · The PDP-11 Assembly Language · PDP-11 · Digital Equipment Corporation · Programmed Data Processor · Rad (unit) · Order of Magnitude · Myelitis · Collimator · Fatal Dose: Radiation Deaths linked to AECL Computer Errors · Reactor Accidents: The Human Fallout · An Overview on Radiotherapy: From Its History to Its Current Applications in Dermatology · In MedTech History: Mammography · The programmer behind the THERAC-25 Fiasco was never found · AMA: My professor investigated the Therac-25 incident · How history, principles and standards led to the safety PLC

Support Causality on Patreon.

Episode sponsors:

Premium Jane: Premium Jane are a US-based provider of organic CBD products that meet the highest standards of quality and purity. Visit premiumjane.com and use the Coupon Code PJ20OFF for 20% off. Hurry, it's only for a limited time!

Many Tricks: If you're looking for some Mac software that can do Many Tricks, remember to specifically visit the URL below for more information about their amazingly useful apps. Don't forget the return of Usher with the Usher 2 Pre-Sale! Visit manytricks.com/pragmatic and use the Coupon Code (listen to the episode to get the code) for 25% off the total price of your order. Hurry, it's only for a limited time!

Episode Gold Producers: 'r' and Chip Salzenberg.
Episode Silver Producers: Mitch Biegler, John Whitlow, Kevin Koch, Oliver Steele, Lesley Law Chan, Hafthor and Shane O'Neill.

The History of Computing
Polish Innovations In Computing

The History of Computing

Jan 27, 2020 · 12:13


Computing In Poland Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we're going to do something a little different. Based on a recent trip to Katowice and Krakow, and a great visit to the Museum of Computer and Information Technology in Katowice, we're going to look at the history of computing in Poland. Something they are proud of and should be proud of. And I'm going to mispronounce some words. Because they are averse to vowels. But not really, instead because I'm just not too bright. Apologies in advance. First, let's take a stroll through an overly brief history of Poland itself. Attila the Hun and other conquerors pushed Germanic tribes from Poland in the fourth century, which led to a migration of Slavs from the East into the area. After a long period of migration, Duke Mieszko established the Piast dynasty in 966, and they created the kingdom of Poland in 1025, which lasted until 1370 when Casimir the Great died without an heir. That was replaced by the Jagiellonian dynasty, which expanded until they eventually developed into the Polish-Lithuanian Commonwealth in 1569. Turns out they overextended themselves until the Russians, Prussians, and Austria invaded and finally took control in 1795, partitioning Poland. Just before that, Polish clockmaker Jewna Jakobson built a mechanical computing machine, a hundred years after Pascal, in 1770. And innovations in mechanical computing continued on with Abraham Izrael Stern and his son through the 1800s and Bruno's integraph, which could solve complex differential equations. And so the borders changed as Prussia gave way to Germany, until World War I, when the Second Polish Republic was established. And the Poles got good at cracking codes as they struggled to stay sovereign against Russian attacks. Just as they'd struggled to stay sovereign for well over a century. Then the Germans and Soviets formed a pact in 1939 and took the country again. During the war, Polish scientists not only assisted with work on the Enigma but also with the nuclear program in the US, the Manhattan Project. Stanislaw Ulam was recruited to the project and helped with ENIAC by developing the Monte Carlo method along with John von Neumann. The country remained partitioned until Germany fell in WWII, and the Soviets were able to effectively rule the Polish People's Republic until a social-democratic movement swept the country in 1989, resulting in the current government and Poland moving from the Eastern Bloc to NATO and eventually the EU, around the same time the wall fell in Berlin. Able to put the Cold War behind them, Polish cities are now bustling with technical innovation, and the country is now home to some of the best software developers I've ever met. Polish contributions to a more modern computer science began in 1924, when Jan Lukasiewicz developed Polish Notation, a way of writing mathematical expressions such that they are operator-first (a short sketch of evaluating that notation follows this episode's notes). They continued during World War II, when the Polish Cipher Bureau became the first to break the Enigma encryption, working at different levels from 1932 to 1939. They had been breaking codes since using them to thwart a Russian invasion in the 1920s and had a pretty mature operation at this point. 
But it was a slow, manual process, so Marian Rejewski, one of the cryptographers, developed a card catalog of permutations and used a mechanical computing device he had invented a few years earlier, called a cyclometer, to decipher the codes. The combination led to the bomba kryptologiczna, which was shown to the Allies 5 weeks before the war started and in turn led to the Ultra program and eventually Colossus, once Alan Turing got a hold of it conceptually after meeting Rejewski. After the war he became an accountant to avoid being forced into slave cryptographic work by the Russians. In 1948 the Group for Mathematical Apparatus of the Mathematical Institute in Warsaw was formed, establishing the academic field of computer research in Poland. Computing continued in Poland during the Soviet-controlled era. EMAL-1 was started in 1953 but was never finished. The XYZ computer came along in 1958. Jacek Karpiński built the first real vacuum tube mainframe in Poland, called the AAH, in 1957 to analyze weather patterns and improve forecasts. He then worked with a team to build the AKAT-1 to simulate lots of labor-intensive calculations like heat transfer mechanics. Karpiński founded the Laboratory for Artificial Intelligence of the Polish Academy of Sciences. He would win a UNESCO award and receive a 6-month scholarship to study in the US, which the Polish government used to spy on American progress in computing. He came home armed with some innovative ideas from the West and by 1964 built what he called the Perceptron, a computer that could be taught to identify shapes and even some objects. Nothing like that had existed in Poland or anywhere else controlled by communist regimes at the time. From '65 to '68 he built the KAR-65, even faster, to study CERN data. By then there was a rising mainframe and minicomputer industry outside of academia in Poland. Production of the Odra mainframe-era computers began in 1959 in Wroclaw, Poland, and his work was seen by them and by Elwro as a threat, so they banned him from publishing for a time. Elwro built a new factory in 1968, copying IBM standardization. In 1970, Karpiński realized he had to play ball with the government and got backing from officials in the government. He then designed the K-202 minicomputer in 1971. Minicomputers were on the rise globally, and he introduced the concept of paging to computer science, key in virtual memory. This time he recruited 113 programmers and hardware engineers, and by '73 they were using Intel 4004 chips to build faster computers than the DEC PDP-11. But the competitors shut him down. They only sold 30, and by 1978 he retired to Switzerland (that sounds better than fled) - but he returned to Poland following the end of communism in the country and the closing of the Elwro plant in 1989. By then the Personal Computing revolution was upon us. That had begun in Poland with the Meritum, a TRS-80 clone, back in 1983. More copying. But the Elwro 800 Junior shipped in 1986, and by 1990, when the communists split, the country could benefit from computers being mass produced and the removal of export restrictions that had been stifling innovation and keeping Poles from participating in the exploding economy around computers. Energized, the Poles quickly learned to write code and now graduate over 40,000 people in IT from universities, by some counts making Poland a top 5 tech country. And as an era of developers graduates, they are founding museums to honor those who built their industry. It has been my privilege to visit two of them at this point. 
The description of the one in Krakow reads: The Interactive Games and Computers Museum of the Past Era is a place where adults will return to their childhood and children will be drawn into lots of fun. We invite you to play on more than 20 computers / consoles / arcade machines and to watch our collection of 200 machines and toys from the '70s-'90s. The second is the Museum of Computer and Information Technology in Katowice, the most recent that I had the good fortune to visit. Both have systems found at other types of computer history museums, such as a Commodore PET, but showcase the locally developed systems, and looking at them on a timeline it's quickly apparent that while Poland had begun to fall behind by the 80s, that was more a reflection of why the strikes caused the Eastern Bloc to fall: Russian influence couldn't sustain it. Much as the Polish-Lithuanian Commonwealth couldn't support Polish control of Lithuania in the late 1700s. There were other accomplishments, such as the ZAM-2. And the first fully Polish machine, the BINEG. And rough set theory. And ultrasonic mercury memory.
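Since Łukasiewicz's Polish notation comes up above: writing the operator before its operands removes the need for parentheses entirely. A small illustrative evaluator in Python; the space-separated token format here is just an assumption for the example, not anything described in the episode.

    import operator

    # Evaluate an expression written in Polish (prefix) notation, e.g.
    # "* + 1 2 4"  means  (1 + 2) * 4
    OPS = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}

    def eval_prefix(tokens):
        token = next(tokens)
        if token in OPS:
            left = eval_prefix(tokens)     # operator first, then its operands
            right = eval_prefix(tokens)
            return OPS[token](left, right)
        return float(token)

    print(eval_prefix(iter("* + 1 2 4".split())))  # prints 12.0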

Retrocomputaria
Repórter Retro 055

Retrocomputaria

Dec 25, 2019 · 56:29


Welcome to edition 55 of Repórter Retro. Podcast links: Still on 50 years of UNIX, the LCM+L restores Unix v0 on a DEC PDP-7 (plus the bonus behind-the-scenes). xAD… xAD repairs an Atari 130XE with an S-Video mod… …and philosophizes about retr0bright while testing an Atari 800XE and an Atari XC12 cassette recorder. The x86 … Continue reading Repórter Retro 055 →

BSD Now
323: OSI Burrito Guy

BSD Now

Nov 7, 2019 · 49:22


The earliest Unix code, how to replace fail2ban with blacklistd, OpenBSD crossed 400k commits, how to install Bolt CMS on FreeBSD, optimized hammer2, appeasing the OSI 7-layer burrito guys, and more. Headlines The Earliest Unix Code: An Anniversary Source Code Release (https://computerhistory.org/blog/the-earliest-unix-code-an-anniversary-source-code-release/) What is it that runs the servers that hold our online world, be it the web or the cloud? What enables the mobile apps that are at the center of increasingly on-demand lives in the developed world and of mobile banking and messaging in the developing world? The answer is the operating system Unix and its many descendants: Linux, Android, BSD Unix, MacOS, iOS—the list goes on and on. Want to glimpse the Unix in your Mac? Open a Terminal window and enter “man roff” to view the Unix manual entry for an early text formatting program that lives within your operating system. 2019 marks the 50th anniversary of the start of Unix. In the summer of 1969, that same summer that saw humankind’s first steps on the surface of the Moon, computer scientists at the Bell Telephone Laboratories—most centrally Ken Thompson and Dennis Ritchie—began the construction of a new operating system, using a then-aging DEC PDP-7 computer at the labs. This man sent the first online message 50 years ago (https://www.cbc.ca/radio/thecurrent/the-current-for-oct-29-2019-1.5339212/this-man-sent-the-first-online-message-50-years-ago-he-s-since-seen-the-web-s-dark-side-emerge-1.5339244) As many of you have heard in the past, the first online message ever sent between two computers was "lo", just over 50 years ago, on Oct. 29, 1969. It was supposed to say "log," but the computer sending the message — based at UCLA — crashed before the letter "g" was typed. A computer at Stanford 560 kilometres away was supposed to fill in the remaining characters "in," as in "log in." The CBC Radio show, “The Current” has a half-hour interview with the man who sent that message, Leonard Kleinrock, distinguished professor of computer science at UCLA "The idea of the network was you could sit at one computer, log on through the network to a remote computer and use its services there," 50 years later, the internet has become so ubiquitous that it has almost been rendered invisible. There's hardly an aspect in our daily lives that hasn't been touched and transformed by it. Q: Take us back to that day 50 years ago. Did you have the sense that this was going to be something you'd be talking about a half a century later? A: Well, yes and no. Four months before that message was sent, there was a press release that came out of UCLA in which it quotes me as describing what my vision for this network would become. Basically what it said is that this network would be always on, always available. Anybody with any device could get on at anytime from any location, and it would be invisible. Well, what I missed ... was that this is going to become a social network. People talking to people. Not computers talking to computers, but [the] human element. Q: Can you briefly explain what you were working on in that lab? Why were you trying to get computers to actually talk to one another? A: As an MIT graduate student, years before, I recognized I was surrounded by computers and I realized there was no effective [or efficient] way for them to communicate. I did my dissertation, my research, on establishing a mathematical theory of how these networks would work. But there was no such network existing. 
AT&T said it won't work and, even if it does, we want nothing to do with it. So I had to wait around for years until the Advanced Research Projects Agency within the Department of Defence decided they needed a network to connect together the computer scientists they were supervising and supporting. Q: For all the promise of the internet, it has also developed some dark sides that I'm guessing pioneers like yourselves never anticipated. A: We did not. I knew everybody on the internet at that time, and they were all well-behaved and they all believed in an open, shared free network. So we did not put in any security controls. When the first spam email occurred, we began to see the dark side emerge as this network reached nefarious people sitting in basements with a high-speed connection, reaching out to millions of people instantaneously, at no cost in time or money, anonymously until all sorts of unpleasant events occurred, which we called the dark side. But in those early days, I considered the network to be going through its teenage years. Hacking to spam, annoying kinds of effects. I thought that one day this network would mature and grow up. Well, in fact, it took a turn for the worse when nation states, organized crime and extremists came in and began to abuse the network in severe ways. Q: Is there any part of you that regrets giving birth to this? A: Absolutely not. The greater good is much more important. News Roundup How to use blacklistd(8) with NPF as a fail2ban replacement (https://www.unitedbsd.com/d/63-how-to-use-blacklistd8-with-npf-as-a-fail2ban-replacement) blacklistd(8) provides an API that can be used by network daemons to communicate with a packet filter via a daemon to enforce opening and closing ports dynamically based on policy. The interface to the packet filter is in /libexec/blacklistd-helper (this is currently designed for npf) and the configuration file (inspired from inetd.conf) is in etc/blacklistd.conf Now, blacklistd(8) will require bpfjit(4) (Just-In-Time compiler for Berkeley Packet Filter) in order to properly work, in addition to, naturally, npf(7) as frontend and syslogd(8), as a backend to print diagnostic messages. Also remember npf shall rely on the npflog* virtual network interface to provide logging for tcpdump() to use. Unfortunately (dont' ask me why ??) in 8.1 all the required kernel components are still not compiled by default in the GENERIC kernel (though they are in HEAD), and are rather provided as modules. Enabling NPF and blacklistd services would normally result in them being automatically loaded as root, but predictably on securelevel=1 this is not going to happen. FreeBSD’s handbook chapter on blacklistd (https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/firewalls-blacklistd.html) OpenBSD crossed 400,000 commits (https://marc.info/?l=openbsd-tech&m=157059352620659&w=2) Sometime in the last week OpenBSD crossed 400,000 commits (*) upon all our repositories since starting at 1995/10/18 08:37:01 Canada/Mountain. That's a lot of commits by a lot of amazing people. (*) by one measure. Since the repository is so large and old, there are a variety of quirks including ChangeLog missing entries and branches not convertible to other repo forms, so measuring is hard. If you think you've got a great way of measuring, don't be so sure of yourself -- you may have overcounted or undercounted. 
Subject to the notes Theo made about under and over counting, FreeBSD should hit 1 million commits (base + ports + docs) some time in 2020 NetBSD + pkgsrc are approaching 600,000, but of course pkgsrc covers other operating systems too How to Install Bolt CMS with Nginx and Let's Encrypt on FreeBSD 12 (https://www.howtoforge.com/how-to-install-bolt-cms-nginx-ssl-on-freebsd-12/) Bolt is a sophisticated, lightweight and simple CMS built with PHP. It is released under the open-source MIT-license and source code is hosted as a public repository on Github. A bolt is a tool for Content Management, which strives to be as simple and straightforward as possible. It is quick to set up, easy to configure, uses elegant templates. Bolt is created using modern open-source libraries and is best suited to build sites in HTML5 with modern markup. In this tutorial, we will go through the Bolt CMS installation on FreeBSD 12 system by using Nginx as a web server, MySQL as a database server, and optionally you can secure the transport layer by using acme.sh client and Let's Encrypt certificate authority to add SSL support. Requirements The system requirements for Bolt are modest, and it should run on any fairly modern web server: PHP version 5.5.9 or higher with the following common PHP extensions: pdo, mysqlnd, pgsql, openssl, curl, gd, intl, json, mbstring, opcache, posix, xml, fileinfo, exif, zip. Access to SQLite (which comes bundled with PHP), or MySQL or PostgreSQL. Apache with mod_rewrite enabled (.htaccess files) or Nginx (virtual host configuration covered below). A minimum of 32MB of memory allocated to PHP. hammer2 - Optimize hammer2 support threads and dispatch (http://lists.dragonflybsd.org/pipermail/commits/2019-September/719632.html) Refactor the XOP groups in order to be able to queue strategy calls, whenever possible, to the same CPU as the issuer. This optimizes several cases and reduces unnecessary IPI traffic between cores. The next best thing to do would be to not queue certain XOPs to an H2 support thread at all, but I would like to keep the threads intact for later clustering work. The best scaling case for this is when one has a large number of user threads doing I/O. One instance of a single-threaded program on an otherwise idle machine might see a slightly reduction in performance but at the same time we completely avoid unnecessarily spamming all cores in the system on the behalf of a single program, so overhead is also significantly lower. This will tend to increase the number of H2 support threads since we need a certain degree of multiplication for domain separation. This should significantly increase I/O performance for multi-threaded workloads. You know, we might as well just run every network service over HTTPS/2 and build another six layers on top of that to appease the OSI 7-layer burrito guys (http://boston.conman.org/2019/10/17.1) I've seen the writing on the wall, and while for now you can configure Firefox not to use DoH, I'm not confident enough to think it will remain that way. To that end, I've finally set up my own DoH server for use at Chez Boca. It only involved setting up my own CA to generate the appropriate certificates, install my CA certificate into Firefox, configure Apache to run over HTTP/2 (THANK YOU SO VERY XXXXX­XX MUCH GOOGLE FOR SHOVING THIS HTTP/2 XXXXX­XXX DOWN OUR THROATS!—no, I'm not bitter) and write a 150 line script that just queries my own local DNS, because, you know, it's more XXXXX­XX secure or some XXXXX­XXX reason like that. Sigh. 
Beastie Bits An Oral History of Unix (https://www.princeton.edu/~hos/Mahoney/unixhistory) NUMA Siloing in the FreeBSD Network Stack [pdf] (https://people.freebsd.org/~gallatin/talks/euro2019.pdf) EuroBSDCon 2019 videos available (https://www.youtube.com/playlist?list=PLskKNopggjc6NssLc8GEGSiFYJLYdlTQx) Barbie knows best (https://twitter.com/eksffa/status/1188638425567682560) For the #OpenBSD #e2k19 attendees. I did a pre visit today. (https://twitter.com/bob_beck/status/1188226661684301824) Drawer Find (https://twitter.com/pasha_sh/status/1187877745499561985) Slides - Removing ROP Gadgets from OpenBSD - AsiaBSDCon 2019 (https://www.openbsd.org/papers/asiabsdcon2019-rop-slides.pdf) Feedback/Questions Bostjan - Open source doesn't mean secure (http://dpaste.com/1M5MVCX#wrap) Malcolm - Allan is Correct. (http://dpaste.com/2RFNR94) Michael - FreeNAS inside a Jail (http://dpaste.com/28YW3BB#wrap) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)

软件那些事儿
217. DEC's PDP-7 computer, the machine UNIX was created on

软件那些事儿

Oct 22, 2019 · 28:54


unix dec pdp
Advent of Computing
Episode 15 - Lost in the Colossal Cave

Advent of Computing

Oct 20, 2019 · 28:57


Colossal Cave Adventure is one of the most influential video games of all time. Originally written for the DEC PDP-10 mainframe in 1975, the game has not only spread to just about any computer out there, but it has inspired the entire adventure/RPG genre. In this episode we are going to look at how Adventure got its start, how it evolved into a full game, and how it came to be a launch title for the IBM PC. Advent of Computing now has merch! If you want to support the show or just show off, you can buy shirts and more here: http://tee.pub/lic/MKt4UiBp22g

The History of Computing
The Evolution Of The Microchip

The History of Computing

Sep 13, 2019 · 31:14


The Microchip Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on the history of the microchip, or microprocessor. This was a hard episode, because it was the culmination of so many technologies. You don't know where to stop telling the story - and you find yourself writing a chronological story in reverse chronological order. But few advancements have impacted humanity the way the introduction of the microprocessor has. Given that most technological advances are a convergence of otherwise disparate technologies, we'll start the story of the microchip with the obvious choice: the light bulb. Thomas Edison first demonstrated the carbon filament light bulb in 1879. William Joseph Hammer, an inventor working with Edison, then noted that if he added another electrode to a heated filament bulb, it would glow around the positive pole in the vacuum of the bulb and blacken the wire and the bulb around the negative pole. 25 years later, John Ambrose Fleming demonstrated that if that extra electrode is made more positive than the filament, the current flows through the vacuum, and that the current could only flow from the filament to the electrode and not the other direction. This converted AC signals to DC and represented a boolean gate. In 1904 Fleming was granted Great Britain's patent number 24850 for the vacuum tube, ushering in the era of electronics. Over the next few decades, researchers continued to work with these tubes. Eccles and Jordan invented the flip-flop circuit at London's City and Guilds Technical College in 1918, receiving a patent for what they called the Eccles-Jordan Trigger Circuit in 1920. Now, English mathematician George Boole back in the earlier part of the 1800s had developed Boolean algebra. Here he created a system where logical statements could be made in mathematical terms. Those could then be performed using math on the symbols. Only a 0 or a 1 could be used. It took a while, but John Vincent Atanasoff and grad student Clifford Berry harnessed the circuits in the Atanasoff-Berry computer in 1938 at Iowa State University and, using Boolean algebra, successfully solved linear equations but never finished the device due to World War II, when a number of other technological advancements happened, including the development of the ENIAC by John Mauchly and J Presper Eckert from the University of Pennsylvania, funded by the US Army Ordnance Corps, starting in 1943. By the time it was taken out of operation, the ENIAC had 20,000 of these tubes. Each digit in an algorithm required 36 tubes. Ten-digit numbers could be multiplied at 357 per second, showing the first true use of a computer. John von Neumann was the first to actually use the ENIAC when they used one million punch cards to run the computations that helped propel the development of the hydrogen bomb at Los Alamos National Laboratory. The creators would leave the University and found the Eckert-Mauchly Computer Corporation. Out of that later would come the Univac and the ancestor of today's Unisys Corporation. These early computers used vacuum tubes to replace gears that were in previous counting machines and represented the First Generation. But the tubes for the flip-flop circuits were expensive and had to be replaced way too often. The second generation of computers used transistors instead of vacuum tubes for logic circuits. 
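To make the Boolean-algebra point above concrete - this is just an illustration, not anything from the episode - here is how 0s and 1s and a single gate type compose into the other logic functions that tubes, and later transistors, physically implemented:

    # All of these gates operate only on 0s and 1s, Boole's two symbols.
    # NAND alone is enough to build the rest, which is why a single switching
    # element (a tube or a transistor) can be composed into any logic circuit.
    def NAND(a, b):
        return 0 if (a and b) else 1

    def NOT(a):
        return NAND(a, a)

    def AND(a, b):
        return NOT(NAND(a, b))

    def OR(a, b):
        return NAND(NOT(a), NOT(b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))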
The integrated circuit is basically a wire set into silicon or germanium that can be set to on or off based on the properties of the material. These replaced vacuum tubes in computers to provide the foundation of the boolean logic. You know, the zeros and ones that computers are famous for. As with most modern technologies, the integrated circuit owes its origin to a number of different technologies that came before it was able to be useful in computers. This includes the three primary components of the circuit: the transistor, resistor, and capacitor. The silicon that chips are so famous for was actually discovered by Swedish chemist Jöns Jacob Berzelius in 1824. He heated potassium chips in a silica container and washed away the residue and voilà - an element! The transistor is a semiconducting device that has three connections that amplify data. One is the source, which is connected to the negative terminal on a battery. The second is the drain, a positive terminal; when a voltage is applied to the gate (the third connection), the transistor allows electricity through. The transistor then acts as an on/off switch. The fact that they can be on or off is the foundation for Boolean logic in modern computing. The resistor controls the flow of electricity and is used to control the levels and terminate lines. An integrated circuit is also built using silicon, but you print the pattern into the circuit using lithography rather than painstakingly putting little wires where they need to go like radio operators did with the cat's whisker all those years ago. The idea of the transistor goes back to the mid-30s, when William Shockley took the idea of a cat's whisker, or fine wire touching a galena crystal. The radio operator moved the wire to different parts of the crystal to pick up different radio signals. Solid state physics was born when Shockley, who first studied at Caltech and then got his PhD in Physics, started working on a way to make these useable in everyday electronics. After a decade in the trenches, Bell gave him John Bardeen and Walter Brattain, who successfully finished the invention in 1947. Shockley went on to design a new and better transistor, known as a bipolar transistor, and helped move us from vacuum tubes, which were bulky and needed a lot of power, first to germanium, which they used initially, and then to silicon. Shockley got a Nobel Prize in physics for his work and was able to recruit a team of extremely talented young PhDs to help work on new semiconductor devices. He became increasingly frustrated with Bell and took a leave of absence. Shockley moved back to his hometown of Palo Alto, California and started a new company called the Shockley Semiconductor Laboratory. He had some ideas that were way before his time and wasn't exactly easy to work with. He pushed the chip industry forward, but in the process spawned a mass exodus of employees - he called them the “Traitorous 8” - who left in 1957 to create what would become Fairchild Semiconductor. The alumni of Shockley Labs ended up spawning 65 companies over the next 20 years that laid the foundation of the microchip industry to this day, including Intel. If he had been easier to work with, we might not have had the innovation that we've seen - in a way we owe it to Shockley's abrasiveness! All of these silicon chip makers being in a small area of California then led to that area getting the Silicon Valley moniker. 
At this point, people were starting to experiment with computers using transistors instead of vacuum tubes. The University of Manchester created the Transistor Computer in 1953. The first fully transistorized computer came in 1955 with the Harwell CADET, MIT started work on the TX-0 in 1956, and the THOR guidance computer for ICBMs came in 1957. But the IBM 608 was the first commercial all-transistor solid-state computer. The RCA 501, Philco Transac S-1000, and IBM 7070 took us through the age of transistors, which continued to get smaller and more compact. At this point, we were really just replacing tubes with transistors. But the integrated circuit would bring us into the third generation of computers. The integrated circuit is an electronic device that has all of the functional blocks put on the same piece of silicon. So the transistor, or multiple transistors, is printed into one block. Jack Kilby of Texas Instruments patented the first miniaturized electronic circuit in 1959, which used germanium and external wires and was really more of a hybrid integrated circuit. Later in 1959, Robert Noyce of Fairchild Semiconductor invented the first truly monolithic integrated circuit, which he received a patent for. Because they did so independently, both are considered creators of the integrated circuit. The third generation of computers was from 1964 to 1971, and saw the introduction of metal-oxide-semiconductor devices and printing circuits with photolithography. In 1965 Gordon Moore, also of Fairchild at the time, observed that the number of transistors, resistors, diodes, capacitors, and other components that could be shoved into a chip was doubling about every year, and published an article with this observation in Electronics Magazine, forecasting what's now known as Moore's Law. The integrated circuit gave us the DEC PDP and later the IBM S/360 series of computers, making computers smaller, and brought us into a world where we could write code in COBOL and FORTRAN. A microprocessor is one type of integrated circuit. They're also used in audio amplifiers, analog integrated circuits, clocks, interfaces, etc. But in the early 60s, the Minuteman missile program and the US Navy contracts were practically the only ones using these chips, at this point numbering in the hundreds, bringing us into the world of the MSI, or medium-scale integration, chip. Moore and Noyce left Fairchild and founded NM Electronics in 1968, later renaming the company to Intel, short for Integrated Electronics. Federico Faggin came over in 1970 to lead the MCS-4 family of chips. These, along with other chips that were economical to produce, started to result in chips finding their way into various consumer products. In fact, the MCS-4 chips, which split RAM, ROM, CPU, and I/O, were designed for the Nippon Calculating Machine Corporation, and Intel bought the rights back, announcing the chip in Electronic News with an article called “Announcing A New Era In Integrated Electronics.” Together, they built the Intel 4004, the first microprocessor that fit on a single chip. They buried the contacts in multiple layers and introduced 2-phase clocks. Silicon oxide was used to layer integrated circuits onto a single chip. Here, the microprocessor, or CPU, splits the arithmetic and logic unit, or ALU, the bus, the clock, the control unit, and registers up so each can do what they're good at, but live on the same chip. The 1st generation of the microprocessor was from 1971, when these 4-bit chips were mostly used in guidance systems. 
These changes boosted the speed by five times. The forming of Intel and the introduction of the 4004 chip can be seen as one of the primary events that propelled us into the evolution of the microprocessor and the fourth generation of computers, which lasted from 1972 to 2010. The Intel 4004 had 2,300 transistors. The Intel 4040 came in 1974, giving us 3,000 transistors. It was still a 4-bit data bus but jumped to 12-bit ROM. The architecture was also from Faggin but the design was carried out by Tom Innes. We were firmly in the era of LSI, or Large Scale Integration chips. These chips were also used in the Busicom calculator, and even in the first pinball game controlled by a microprocessor. But getting a true computer to fit on a chip, or a modern CPU, remained an elusive goal. Texas Instruments ran an ad in Electronics with a caption that the 8008 was a "CPU on a Chip" and attempted to patent the chip, but couldn't make it work. Faggin went to Intel and they did actually make it work, giving us the first 8-bit microprocessor, the 8008, which was fabricated and put on the market in 1972. Intel made the R&D money back in 5 months, and the chip was later redesigned as the 8080, which sparked the idea for Ed Roberts to build the Altair 8800. Motorola and Zilog brought competition in the 6800 and Z-80, the latter of which was used in the Tandy TRS-80, one of the first mass-produced computers. N-MOS transistors on chips allowed for new and faster paths, and MOS Technology soon joined the fray with the 6501 and 6502 chips in 1975. The 6502 ended up being the chip used in the Apple I, Apple II, NES, Atari 2600, BBC Micro, Commodore PET and Commodore VIC-20. The MOS 6510 variant was then used in the Commodore 64. The 8086 was released in 1978 with 29,000 transistors and marked the transition to Intel's x86 line of chips, setting what would become the standard in future chips. But the IBM PC wasn't the only place you could find chips. The Motorola 68000 was used in the Sun-1 from Sun Microsystems, the HP 9000, the DEC VAXstation, the Commodore Amiga, the Apple Lisa, the Sinclair QL, the Sega Genesis, and the Mac. The chips were also used in the first HP LaserJet and the Apple LaserWriter and used in a number of embedded systems for years to come. As we rounded the corner into the 80s it was clear that the computer revolution was upon us. A number of computer companies were looking to do more than what they could do with the existing Intel, MOS, and Motorola chips. And ARPA was pushing the boundaries yet again. Carver Mead of Caltech and Lynn Conway of Xerox PARC saw the density of transistors in chips starting to plateau. So with DARPA funding they went out looking for ways to push the world into the VLSI era, or Very Large Scale Integration. The VLSI project resulted in the concept of fabless design houses, such as Broadcom, 32-bit graphics, BSD Unix, and RISC processors, or Reduced Instruction Set Computer processors. Out of the RISC work done at UC Berkeley came a number of new options for chips as well. One of these adopters, Acorn Computers, evaluated a number of chips and decided to develop their own, using VLSI Technology (a company founded by more Fairchild Semiconductor alumni) to manufacture the chip in their foundry. Sophie Wilson (then Roger Wilson) worked on an instruction set for the RISC chip. Out of this came the Acorn RISC Machine, or ARM chip. Over 100 billion ARM processors have been produced, well over 10 for every human on the planet. You know that fancy new A13 that Apple announced? It uses a licensed ARM core. 
Another chip that came out of the RISC family was the Sun SPARC. Sun, short for Stanford University Network and co-founded by Andy Bechtolsheim, was close to the action and released the SPARC in 1986. I still have a SPARC 20 I use for this and that at home. Not that SPARC has gone anywhere. They're just made by Oracle now. The Intel 80386 chip was a 32-bit microprocessor released in 1985. The first chip had 275,000 transistors, taking plenty of pages from the lessons learned in the VLSI projects. Compaq built a machine on it, but really the IBM PC/AT made it an accepted standard, although this was the beginning of the end of IBM's hold on the burgeoning computer industry. And AMD, yet another company founded by Fairchild defectors, created the Am386 in 1991, ending Intel's nearly 5 year monopoly on the PC clone industry and ending an era where AMD was merely a second source for Intel parts; now it was competing with Intel directly. We can thank AMD's aggressive competition with Intel for helping to keep the CPU industry moving along with Moore's Law! At this point transistors were only 1.5 microns in size. Much, much smaller than a cat's whisker. The Intel 80486 came in 1989 and, again tracking against Moore's Law, we hit the first 1 million transistor chip. Remember how Compaq helped end IBM's hold on the PC market? When the Intel 486 came along they went with AMD. This chip was also important because we got L1 caches, meaning that chips didn't need to send instructions to other parts of the motherboard but could do caching internally. From then on, the L1 and later L2 caches would be listed on all chips. Later 486 variants would finally break 100MHz! Motorola released the 68040 in 1990, hitting 1.2 million transistors, giving Apple the chip that would define the Quadra and also that L1 cache. The DEC Alpha came along in 1992, also a RISC chip, but really kicking off the 64-bit era. While the most technically advanced chip of the day, it never took off, and after DEC was acquired by Compaq and Compaq by HP, the IP for the Alpha was sold to Intel in 2001, with the PC industry having just decided they could have all their money. But back to the 90s, 'cause life was better back when grunge was new. At this point, hobbyists knew what the CPU was but most normal people didn't. The concept that there was a whole Univac on one of these never occurred to most people. But then came the Pentium. Turns out that giving a chip a name and some marketing dollars not only made Intel a household name but solidified their hold on the chip market for decades to come. While the Intel Inside campaign started in 1991, after the Pentium was released in 1993 the case of most computers would have a sticker that said Intel Inside. Intel really one-upped everyone. The first Pentium, the P5 or 586 or 80501, had 3.1 million transistors on a 0.8 micron process. Computers kept getting smaller and cheaper and faster. Apple answered by moving to the PowerPC chip from IBM, which owed much of its design to RISC. Exactly 10 years after the famous 1984 Super Bowl commercial, Apple was using a CPU from IBM. Another advance came when IBM developed the POWER4 chip, released in 2001, and gave the world multi-core processors, or a CPU that had multiple CPU cores inside the CPU. Once parallel processing caught up to being able to have processes that consumed the resources on all those cores, we saw Intel's Pentium D and AMD's Athlon 64 X2 released in May 2005, bringing multi-core architecture to the consumer. 
This led to even more parallel processing, and an explosion in the number of cores helped us continue on with Moore's Law. There are custom chips that reach into the thousands of cores today, although most laptops have maybe 4 cores in them. Setting multi-core architectures aside for a moment, back to Y2K when Justin Timberlake was still a part of NSYNC. Then came the Pentium Pro, Pentium II, Celeron, Pentium III, Xeon, Pentium M, Xeon LV, Pentium 4. On the IBM/Apple side, we got the G3 with 6.3 million transistors, the G4 with 10.5 million transistors, and the G5 with 58 million transistors and 1,131 feet of copper interconnects, running at 3GHz in 2002 - so much copper that NSYNC broke up that year. The Pentium 4 that year ran at 2.4 GHz and sported 50 million transistors. This is about 1 transistor per dollar made off Star Trek: Nemesis in 2002. I guess Attack of the Clones was better because it grossed over 300 million that year. Remember how we broke the million transistor mark in 1989? In 2005, Intel started testing Montecito with certain customers. This Itanium 2 64-bit CPU, with 1.72 billion transistors, shattered the billion mark, hitting it two years earlier than projected. Apple CEO Steve Jobs announced Apple would be moving to the Intel processor that year. NeXTSTEP had been happy as a clam on Intel, SPARC, or HP's PA-RISC, so given the rapid advancements from Intel, this seemed like a safe bet and allowed Apple to tell directors in IT departments "see, we play nice now." And the innovations kept flowing for the next decade and a half. We packed more transistors in, more cache, cleaner clean rooms, faster bus speeds, with Intel owning the computer CPU market and ARM slowly growing out of Acorn Computers into the powerhouse that ARM cores are today when embedded in other chip designs. I'd say not much interesting has happened, but it's ALL interesting, except the numbers just sound stupid they're so big. And we had more advances along the way of course, but it started to feel like we were just miniaturizing more and more, allowing us to do much more advanced computing in general. The fifth generation of computing is all about technologies that we today consider advanced: Artificial Intelligence, parallel computing, very high level computer languages, the migration away from desktops to laptops and even smaller devices like smartphones. ULSI, or Ultra Large Scale Integration, chips not only tell us that chip designers really have no creativity outside of chip architecture, but also mean millions up to tens of billions of transistors on silicon. At the time of this recording, the AMD Epyc Rome is the single chip package with the most transistors, at 32 billion. Silicon is the seventh most abundant element in the universe and the second most abundant in the Earth's crust. Given that there are more chips than people by a huge percentage, we're lucky we don't have to worry about running out any time soon! We skipped RAM in this episode. But it kinda' deserves its own, since RAM is still following Moore's Law, while the CPU is kinda' lagging again. Maybe it's time for our friends at DARPA to get the kids from Berkeley working on VERY Ultra Large Scale chips, or VULSIs! Or they could sign on to sponsor this podcast! And now I'm going to go take a VERY Ultra Large Scale nap. Gentle listeners, I hope you can do that as well. Unless you're driving while listening to this. Don't nap while driving. But do have a lovely day. 
Thank you for listening to yet another episode of the History of Computing Podcast. We're so lucky to have you!

BSD Now
315: Recapping vBSDcon 2019

BSD Now

Play Episode Listen Later Sep 12, 2019 76:55


vBSDcon 2019 recap, Unix at 50, OpenBSD on fan-less Tuxedo InfinityBook, humungus - an hg server, how to configure a network dump in FreeBSD, and more. Headlines vBSDcon Recap Allan and Benedict attended vBSDcon 2019, which ended last week. It was held again at the Hyatt Regency Reston and the main conference was organized by Dan Langille of BSDCan fame. The two day conference was preceded by a one day FreeBSD hackathon, where FreeBSD developers had the chance to work on patches and PRs. In the evening, a reception was held to welcome attendees and give them a chance to chat and get to know each other over food and drinks. The first day of the conference was opened with a Keynote by Paul Vixie about DNS over HTTPS (DoH). He explained how we got to the current state and what challenges (technical and social) this entails. If you missed this talk and are dying to see it, it will also be presented at EuroBSDCon next week. John Baldwin followed up by giving an overview of the work on “In-Kernel TLS Framing and Encryption for FreeBSD” abstract (https://www.vbsdcon.com/schedule/2019-09-06.html#talk:132615) and the recent commit we covered in episode 313. Meanwhile, Brian Callahan was giving a separate session in another room about “Learning to (Open)BSD through its porting system: an attendee-driven educational session” where people had the chance to learn about how to create ports for the BSDs. David Fullard’s talk about “Transitioning from FreeNAS to FreeBSD” was his first talk at a BSD conference and described how he built his own home NAS setup trying to replicate FreeNAS’ functionality on FreeBSD, and why he transitioned from using an appliance to using vanilla FreeBSD. Shawn Webb followed with his overview talk about the “State of the Hardened Union”. Benedict’s talk about “Replacing an Oracle Server with FreeBSD, OpenZFS, and PostgreSQL” was well received as people are interested in how we liberated ourselves from the clutches of Oracle without compromising functionality. Entertaining and educational at the same time, Michael W. Lucas' talk about “Twenty Years in Jail: FreeBSD Jails, Then and Now” closed the first day. Lucas also had a table in the hallway with his various tech and non-tech books for sale. People formed small groups and went into town for dinner. Some returned later that night to do some work in the hacker lounge or talk amongst fellow BSD enthusiasts. Colin Percival was the keynote speaker for the second day and had an in-depth look at “23 years of software side channel attacks”. Allan reprised his “ELI5: ZFS Caching” talk explaining how the ZFS adaptive replacement cache (ARC) works and how it can be tuned for various workloads. “By the numbers: ZFS Performance Results from Six Operating Systems and Their Derivatives” by Michael Dexter followed with his approach to benchmarking OpenZFS on various platforms. Conor Beh was also a new speaker to vBSDcon. His talk was about “FreeBSD at Work: Building Network and Storage Infrastructure with pfSense and FreeNAS”. Two OpenBSD talks closed the talk session: Kurt Mosiejczuk with “Care and Feeding of OpenBSD Porters” and Aaron Poffenberger with “Road Warrior Disaster Recovery: Secure, Synchronized, and Backed-up”. A dinner and reception was enjoyed by the attendees and gave more time to discuss the talks given and other things until late at night. We want to thank the vBSDcon organizers and especially Dan Langille for running such a great conference. 
We are grateful to Verisign as the main sponsor and The FreeBSD Foundation for sponsoring the tote bags. Thanks to all the speakers and attendees! humungus - an hg server (https://humungus.tedunangst.com/r/humungus) Features View changes, files, changesets, etc. Some syntax highlighting. Read only. Serves multiple repositories. Allows cloning via the obvious URL. Supports go get. Serves files for downloads. Online documentation via mandoc. Terminal based admin interface. News Roundup OpenBSD on fan-less Tuxedo InfinityBook 14″ v2. (https://hazardous.org/archive/blog/openbsd/2019/09/02/OpenBSD-on-Infinitybook14) The InfinityBook 14” v2 is a fanless 14” notebook. It is an excellent choice for running OpenBSD - but order it with the supported wireless card (see below.). I’ve set it up in a dual-boot configuration so that I can switch between Linux and OpenBSD - mainly to spot differences in the drivers. TUXEDO allows a variety of configurations through their webshop. The dual boot setup with grub2 and EFI boot will be covered in a separate blogpost. My tests were done with OpenBSD-current - which is as of writing flagged as 6.6-beta. See Article for breakdown of CPU, Wireless, Video, Webcam, Audio, ACPI, Battery, Touchpad, and MicroSD Card Reader Unix at 50: How the OS that powered smartphones started from failure (https://arstechnica.com/gadgets/2019/08/unix-at-50-it-starts-with-a-mainframe-a-gator-and-three-dedicated-researchers/) Maybe its pervasiveness has long obscured its origins. But Unix, the operating system that in one derivative or another powers nearly all smartphones sold worldwide, was born 50 years ago from the failure of an ambitious project that involved titans like Bell Labs, GE, and MIT. Largely the brainchild of a few programmers at Bell Labs, the unlikely story of Unix begins with a meeting on the top floor of an otherwise unremarkable annex at the sprawling Bell Labs complex in Murray Hill, New Jersey. It was a bright, cold Monday, the last day of March 1969, and the computer sciences department was hosting distinguished guests: Bill Baker, a Bell Labs vice president, and Ed David, the director of research. Baker was about to pull the plug on Multics (a condensed form of MULTiplexed Information and Computing Service), a software project that the computer sciences department had been working on for four years. Multics was two years overdue, way over budget, and functional only in the loosest possible understanding of the term. Trying to put the best spin possible on what was clearly an abject failure, Baker gave a speech in which he claimed that Bell Labs had accomplished everything it was trying to accomplish in Multics and that they no longer needed to work on the project. As Berk Tague, a staffer present at the meeting, later told Princeton University, “Like Vietnam, he declared victory and got out of Multics.” Within the department, this announcement was hardly unexpected. The programmers were acutely aware of the various issues with both the scope of the project and the computer they had been asked to build it for. Still, it was something to work on, and as long as Bell Labs was working on Multics, they would also have a $7 million mainframe computer to play around with in their spare time. Dennis Ritchie, one of the programmers working on Multics, later said they all felt some stake in the success of the project, even though they knew the odds of that success were exceedingly remote. 
Cancellation of Multics meant the end of the only project that the programmers in the Computer science department had to work on—and it also meant the loss of the only computer in the Computer science department. After the GE 645 mainframe was taken apart and hauled off, the computer science department’s resources were reduced to little more than office supplies and a few terminals. Some of Allan’s favourite excerpts: In the early '60s, Bill Ninke, a researcher in acoustics, had demonstrated a rudimentary graphical user interface with a DEC PDP-7 minicomputer. Acoustics still had that computer, but they weren’t using it and had stuck it somewhere out of the way up on the sixth floor. And so Thompson, an indefatigable explorer of the labs’ nooks and crannies, finally found that PDP-7 shortly after Davis and Baker cancelled Multics. With the rest of the team’s help, Thompson bundled up the various pieces of the PDP-7—a machine about the size of a refrigerator, not counting the terminal—moved it into a closet assigned to the acoustics department, and got it up and running. One way or another, they convinced acoustics to provide space for the computer and also to pay for the not infrequent repairs to it out of that department’s budget. McIlroy’s programmers suddenly had a computer, kind of. So during the summer of 1969, Thompson, Ritchie, and Canaday hashed out the basics of a file manager that would run on the PDP-7. This was no simple task. Batch computing—running programs one after the other—rarely required that a computer be able to permanently store information, and many mainframes did not have any permanent storage device (whether a tape or a hard disk) attached to them. But the time-sharing environment that these programmers had fallen in love with required attached storage. And with multiple users connected to the same computer at the same time, the file manager had to be written well enough to keep one user’s files from being written over another user’s. When a file was read, the output from that file had to be sent to the user that was opening it. It was a challenge that McIlroy’s team was willing to accept. They had seen the future of computing and wanted to explore it. They knew that Multics was a dead-end, but they had discovered the possibilities opened up by shared development, shared access, and real-time computing. Twenty years later, Ritchie characterized it for Princeton as such: “What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form.” Eventually when they had the file management system more or less fleshed out conceptually, it came time to actually write the code. The trio—all of whom had terrible handwriting—decided to use the Labs’ dictating service. One of them called up a lab extension and dictated the entire code base into a tape recorder. And thus, some unidentified clerical worker or workers soon had the unenviable task of trying to convert that into a typewritten document. Of course, it was done imperfectly. Among various errors, “inode” came back as “eye node,” but the output was still viewed as a decided improvement over their assorted scribbles. In August 1969, Thompson’s wife and son went on a three-week vacation to see her family out in Berkeley, and Thompson decided to spend that time writing an assembler, a file editor, and a kernel to manage the PDP-7 processor. This would turn the group’s file manager into a full-fledged operating system. 
He generously allocated himself one week for each task. Thompson finished his tasks more or less on schedule. And by September, the computer science department at Bell Labs had an operating system running on a PDP-7—and it wasn’t Multics. By the summer of 1970, the team had attached a tape drive to the PDP-7, and their blossoming OS also had a growing selection of tools for programmers (several of which persist down to this day). But despite the successes, Thompson, Canaday, and Ritchie were still being rebuffed by labs management in their efforts to get a brand-new computer. It wasn’t until late 1971 that the computer science department got a truly modern computer. The Unix team had developed several tools designed to automatically format text files for printing over the past year or so. They had done so to simplify the production of documentation for their pet project, but their tools had escaped and were being used by several researchers elsewhere on the top floor. At the same time, the legal department was prepared to spend a fortune on a mainframe program called “AstroText.” Catching wind of this, the Unix crew realized that they could, with only a little effort, upgrade the tools they had written for their own use into something that the legal department could use to prepare patent applications. The computer science department pitched lab management on the purchase of a DEC PDP-11 for document production purposes, and Max Mathews offered to pay for the machine out of the acoustics department budget. Finally, management gave in and purchased a computer for the Unix team to play with. Eventually, word leaked out about this operating system, and businesses and institutions with PDP-11s began contacting Bell Labs about their new operating system. The Labs made it available for free—requesting only the cost of postage and media from anyone who wanted a copy. The rest has quite literally made tech history. See the link for the rest of the article How to configure a network dump in FreeBSD? (https://www.oshogbo.vexillium.org/blog/68/) A network dump can be very useful for collecting kernel crash dumps from embedded machines and machines with a larger amount of RAM than the available swap partition size. Besides net dumps, we can also try to compress the core dump. However, even a compressed dump may still not fit in the available swap. In such situations, a network dump is a convenient and reliable way to collect a kernel dump. So, first, let's talk a little bit about history. The first implementation of network dumps appeared around 2000 for FreeBSD 4.x as a kernel module. The code was implemented again in 2010 with the intention of being part of FreeBSD 9.0. However, that code never landed in FreeBSD. Finally, in 2018, with the commit r333283 by Mark Johnston, the netdump client code landed in FreeBSD. Subsequently, many other commits added support for different drivers (for example r333289). The first official release of FreeBSD to support netdump is FreeBSD 12.0. Now, let's get back to the main topic. How to configure the network dump? Two machines are needed. One machine collects the core dump - let's call it the server. The second one sends us the core dump - the client. 
See the link for the rest of the article Beastie Bits Sudo Mastery 2nd edition is not out (https://mwl.io/archives/4530) Empirical Notes on the Interaction Between Continuous Kernel Fuzzing and Development (http://users.utu.fi/kakrind/publications/19/vulnfuzz_camera.pdf) soso (https://github.com/ozkl/soso) GregKH - OpenBSD was right (https://youtu.be/gUqcMs0svNU?t=254) Game of Trees (https://gameoftrees.org/faq.html) Feedback/Questions BostJan - Another Question (http://dpaste.com/1ZPCCQY#wrap) Tom - PF (http://dpaste.com/3ZSCB8N#wrap) JohnnyK - Changing VT without keys (http://dpaste.com/3QZQ7Q5#wrap) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) Your browser does not support the HTML5 video tag.

The History of Computing
The History of Symantec

The History of Computing

Play Episode Listen Later Aug 11, 2019 12:09


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on the History of Symantec. This is really more part one of a part two series. Broadcom announced they were acquiring Symantec in August of 2019, the day before we recorded this episode. Who is this Symantec and what do they do - and why does Broadcom want to buy them for 10.7 billion dollars? For starters, by themselves Symantec is a Fortune 500 company with over $4 billion in annual revenues, so $10.7 billion is a steal for an enterprise software company. Except they're just selling the Enterprise software division and keeping Norton in the family. With just shy of 12,000 employees, Symantec has twisted and turned and bought and sold companies for a long time. But how did they become a Fortune 500 company? It all started with Eisenhower. ARPA, or the Advanced Research Projects Agency, would later add the word Defense to their name, become DARPA, and build a series of tubes called the interweb. While originally commissioned so Ike could counter Sputnik, ARPA continued working to fund projects in computers, and in the 1970s this kid out of the University of Texas named Gary Hendrix saw that they were funding natural language understanding projects. This went back to Turing, and DARPA wanted to give AI-complete problems a leap forward, trying to make computers as intelligent as people. This was obviously before Terminator told us that was a bad idea (pro-tip, it's a good idea). Our intrepid hero Gary saw that sweet, sweet grant money and got his PhD from the UT Austin Computational Linguistics Lab. He wrote some papers on robotics and went to the Stanford Research Institute, or SRI for short. Yes, that's the same SRI that invented the hosts.txt file and is responsible for keeping DNS for the first decade or so of the internet. So our pal Hendrix joins SRI and chases that grant money, leaving SRI in 1980 with about 15 other Stanford researchers to start a company they called Machine Intelligence Corporation. That went bust, and so he started Symantec Corporation in 1982 and got a grant from the National Science Foundation to build natural language processing software; it turns out syntax and semantics make for a pretty good mashup. So the new company Symantec built out a database and some advanced natural language code, but by 1984 the PC revolution was on and that code had been built for a DEC PDP so could not be run on the emerging PCs in the industry. Symantec was then acquired by C&E Software, short for the names of its founders, Dennis Coleman and Gordon Eubanks. The Symantec name stayed and Eubanks became the chairman of the board for the new company. C&E had been working on PC software called Q&A, which the new team finished, and then added natural language processing to make the tool easier to use. They called that "The Intelligent Assistant" and they now had a tool that would take them through the 80s. People swapped roles, and due to a sharp focus on sales they did well. During the early days of the PC, dealers - small computer stores that were popping up all over the country - were critical to selling hardware and software. Every Symantec employee would go on the road for six days a week, visiting 6 dealers a day. It was grueling but kept them growing and building. 
They became what we now call a “portfolio” company in 1985 when they introduced NoteIt, a natural language processing tool used to annotate docs in Lotus 1-2-3. Lotus was in the midst of eating the lunch of previous tools. They added another division and made SQZ, a Lotus 1-2-3 spreadsheet tool. This is important: they were a three-product company with divisions when, in 1987, they got even more aggressive and purchased Breakthrough Software, who made an early project management tool called TimeLine. And this is when they did something unique for a PC software company: they split each product into groups that leveraged a shared pool of resources. Each product had a GM that was responsible for the P&L. The GM ran the development, Quality Assurance, Tech Support, and Product Marketing - those teams reported directly to the GM, who reported to then-CEO Eubanks. But there was a shared sales, finance, and operations team. This laid the framework for massive growth, increased sales, and took Symantec to their IPO in 1989. Symantec purchased what was at the time the most popular CRM app, ACT!, in 1993. Meanwhile, Peter Norton had a great suite of tools for working with DOS. Things that, well, maybe should have been built into operating systems (and mostly now are). Norton could compress files, do file recovery, etc. The cash Symantec raised allowed them to acquire The Peter Norton Company in 1990, which would completely change the face of the company. This gave them development tools for PC and Mac, as Norton had been building those. This led to the introduction of Symantec Antivirus for the Macintosh, and they called the anti-virus for the PC Norton Antivirus because people already trusted that name. Within two years, with the added sales and marketing air cover that the Symantec sales machine provided, the Norton group was responsible for 82% of Symantec's total revenues. So much so that Symantec dropped building Q&A because Microsoft was winning in their market. I remember this moment pretty poignantly. Sure, there were other apps for the Mac like Virex, and other apps for Windows, like McAfee. But the Norton tools were the gold standard. At least until they later got bloated. The next decade was fast, from the outside looking in, except when Symantec acquired Veritas in 2004. This made sense as Symantec had become a solid player in the security space and, before the cloud, backup seemed somewhat related. I'd used Backup Exec for a long time and watched Veritas products go from awesome to, well, not as awesome. John Thompson was the CEO through that decade and Symantec grew rapidly - purchasing systems management solution Altiris in 2007 and getting a Data Loss Prevention solution that year in Vontu. Application Performance Management, or APM, wasn't very security focused, so that business unit was picked up by Vector Capital in 2008. They also picked up MessageLabs and AppStream in 2008. Enrique Salem replaced Thompson and Symantec bought VeriSign's CA business in 2010. If you remember from our encryption episode, that was already spun off of RSA. Certificates are security-focused. Email encryption tool PGP and GuardianEdge were also picked up in 2010, providing key management tools for all those, um, keys the CA was issuing. These tools were never integrated properly though. They also picked up Rulespace in 2010 to get what's now their content filtering solution. Symantec acquired LiveOffice in 2012 to get enterprise vault and instant messaging security - continuing to solidify the line of security products. 
They also acquired Odyssey Software for SCCM plugins to get better at managing embedded, mobile, and rugged devices. Then came Nukona to get a MAM product, also in 2012. During this time, Steve Bennett was hired as CEO and fired in 2014. Then came Michael Brown, although in the interim Veritas was demerged in 2014 and, as their products started getting better, they were sold to The Carlyle Group in 2016 for $8B. Then Greg Clark became CEO in 2016, when Symantec purchased Blue Coat. Greg Clark then orchestrated the LifeLock acquisition for $2.3B of that $8B. Thoma Bravo then bought Symantec's CA business to merge it with DigiCert in 2017. Then in 2019 Rick Hill became CEO. Does this seem like a lot of buying and selling? It is. But it also isn't. If you look at what Symantec has done, they have a lot of things they can sell customers for various needs in the information security space. At times, they've felt like a holding company. But ever since the Norton acquisition, they've had very specific moves that continue to solidify them as one of the top security vendors in the space. Their sales teams don't spend six days a week on the road and go to six customers a day, but they have a sales machine. And they've managed to leverage that to get inside what we call the buying tornado of many emergent technologies and then sell the company before the tornado ends. They still have Norton, of course. Even though practically every other product in the portfolio has come and gone over the years. What does all of this mean? The Broadcom acquisition of the enterprise security division maybe tells us that Symantec is about to leverage that $10+ billion to buy more software companies. And sell more companies after a little integration and incubation, then getting out of it before the ocean gets too red, the tech too stale, or before Microsoft sherlocks them. Because that's what they do. And they do it profitably every single time. We often think of how an acquiring company gets a new product - but next time you see a company buying another one, think about this: that company probably had multiple offers. What did the team at the company being acquired get out of this deal? And we'll work on that in the next episode, when we explore the history of Broadcom. Thank you for sticking with us through this episode of the History of Computing Podcast and have a great day!

The History of Computing

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is about the Xerox Alto. Close your eyes and… Wait, don't close your eyes if you're driving. Or on a bike. Or boating. Or… Nevermind, don't close your eyes. But do use your imagination, and think of what it would be like if you opened your phone… Also don't open your phone while driving. But imagine opening your phone and ordering a pizza using a black screen with green text and no pictures. If that were the case, you probably wouldn't use an app to order a pizza. Without a graphical interface, or GUI, games wouldn't have such wide appeal. Without a GUI you probably wouldn't use a computer nearly as much. You might be happier, but we'll leave that topic to another podcast. Let's jump in our time machine and head back to 1973. The Allman Brothers stopped drinking mushroom tea long enough to release Ramblin' Man, Elton John put out Crocodile Rock, both Carpenters were still alive, and Free Bird was released by Lynyrd Skynyrd. Nixon was the president of the United States, and suspended offensive actions in North Vietnam 5 days before being sworn into his second term as president. He wouldn't make it all four years of course because not long after, Watergate broke, and by the end of the year Nixon claimed "I'm not a crook". The first handheld cell call is made by Martin Cooper, the World Trade Center opens, Secretariat wins the Belmont Stakes, Skylab 3 is launched, OJ was a running back instead of running from the police, being gay was removed from the DSM, and the Endangered Species Act was passed in the US. But many a researcher at the Palo Alto Research Center, known as Xerox PARC, probably didn't notice much of this as they were hard at work doing something many people in Palo Alto talk about these days but rarely do: changing the world. In 1973, Xerox released the Alto, which had the first computer operating system designed from the ground up to support a GUI. It was inspired by the oN-Line System (or NLS for short), which had been designed by Douglas Engelbart of the Stanford Research Institute in the 60s on a DARPA grant. They'd spent a year developing it, and that was the day to shine for Doug Stewart, John Ellenby, Bob Nishimura, and Abbey Silverstone. The Alto ran the Alto Executive operating system, had a 2.5 megabyte hard drive, ran with four 74181 MSI chips at a 5.88 MHz clock speed, and came with between 96 and 512 kilobytes of memory. It came with a mouse, which had been designed by Engelbart for NLS. The Alto I ran a pilot of 30 and then an additional 90 were produced and sold before the Alto II was released. Over the course of 10 years, Xerox would sell 2000 more. Some of the programming concepts were borrowed from the Data General Nova, designed by Edson de Castro, a former DEC product manager responsible for the PDP-8. The Alto could run 16 cooperative, prioritized tasks. It was about the size of a mini refrigerator and had a CRT on a swivel. It also came with an Ethernet connection, a keyboard, a disk drive, and a three-button mouse - first a wheel mouse, later followed up with a ball mouse. The monitor was in portrait orientation rather than the landscape that became common on later computers. You wrote software in BCPL and Mesa. It used raster graphics, came with a document editor and the Laurel email app, and gave us an actual multi-player video game. Oh, and an early graphics editor. 
And the first versions of Smalltalk - a language we'll do an upcoming episode on - ran on the Alto. 50 of these were donated to universities around the world in 1978, including Stanford, MIT, and Carnegie Mellon, inspiring a whole generation of computer scientists. One ended up in the White House. But perhaps the most important person to be inspired was Steve Jobs, who saw one at Xerox PARC - the inspiration for the first Mac. The sales numbers weren't off the charts though. Byte magazine said: "It is unlikely that a person outside of the computer-science research community will ever be able to buy an Alto. They are not intended for commercial sale, but rather as development tools for Xerox, and so will not be mass-produced. What makes them worthy of mention is the fact that a large number of the personal computers of tomorrow will be designed with knowledge gained from the development of the Alto." The Alto was sold for $32,000 in 1979 money, or well over $100,000 today. So they were correct. $220,000,000 over 10 years is nothing. The Alto then begat the Xerox Star, which in 1981 killed the Alto and sold at half the price. But Xerox was once-bitten, twice shy. They'd introduced a machine to rival the DEC PDP-10 and didn't want to jump into this weird new PC business too far. If they had wanted to they might have released something somewhere between the Star and the Commodore VIC-20, which ran for about $300. Even after the success of the Apple II, which still paled in comparison to the business Xerox is most famous for: copiers. Imagine what they thought of the IBM PCs and Apple II, when they were a decade ahead of that? I've heard many say that with all of this technology being invented at Xerox, they could have owned the IT industry. Sure, Apple went from $774,000 in 1977 to $118 million in 1980, but then-Xerox CEO Peter McColough was more concerned about the loss of market share for copiers, which dipped from 65 to 46 percent at the time. Xerox revenues had gone from $1.6 billion to $8 billion in the 70s. And there were 100,000 people working in that group! And in the 90s Xerox stock would later skyrocket up to $250/share! They invented laser printing, WYSIWYGs, the GUI, Ethernet, Object Oriented Programming, ubiquitous computing with the PARCtab, networking over optical cables, data storage, and so so so much more. The interconnected world of today likely wouldn't be what it is without other people iterating on their contributions, but more specifically likely wouldn't be what it is if they had hoarded them. They made a modicum of money off most of these - and that money helped to fund further research, like hosting the first live streamed concert. Xerox still rakes in over $10 billion a year in revenue and, unlike many companies that went all-in on PCs or other innovations during the incredible 112 year run of Xerox, they're still doing pretty well. Commodore went bankrupt in 1994, 10 years after Dell was founded. Computing was changing so fast, who can blame Xerox? IBM was reinvented in the 80s because of the PC boom - but it also almost put them out of business. We'll certainly cover that in a future episode. I'm glad Xerox is still in business, still making solid products, and still researching all the things! So thank you to everyone at every level of Xerox, for all your organization has contributed over the years, including the Alto, which shaped how computers are used today. 
And thank YOU patient listeners, for tuning in to this episode of the History Of Computing Podcast. We hope you have a great day!

The History of Computing
The History of Computer Viruses

The History of Computing

Play Episode Listen Later Jul 26, 2019 17:00


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today's episode is not about Fear, Uncertainty, and Death. Instead it's about viruses. As with many innovations in technology, early technology had security vulnerabilities. In fact, we still have them!  Today there are a lot of types of malware. And most of it gets to devices over the Internet. But we had viruses long before the Internet; in fact we've had them about as long as we've had computers. The concept of the virus came from a paper published by a Hungarian scientist in 1949 called "Theory of Self-reproducing automata." The first virus, though, didn't come until 1971 with Creeper. It copied between DEC PDP-10s running TENEX over the ARPANET, the predecessor to the Internet. It didn't hurt anything; it just output a simple little message to the teletype that read "I'm the creeper: catch me if you can." The original was written by Bob Thomas but it was made self-replicating by Ray Tomlinson, thus basically making him the father of the worm. He also happened to make the first email program. You know that @ symbol in an email address? He put it there. Luckily he didn't make that self-replicating as well.  The first antivirus software was written to, um, catch Creeper. Also written by Ray Tomlinson in 1972 when his little haxie had gotten a bit out of control. This makes him the father of the worm, creator of the anti-virus industry, and the creator of phishing, I mean, um, email. My kinda' guy.  The first virus to rear its head in the wild came when a 15-year-old Mt. Lebanon high school kid named Rich Skrenta wrote Elk Cloner. Rich went on to work at Sun, AOL, create Newhoo (now called the Open Directory Project) and found Blekko, which became part of IBM Watson in 2015 (probably because of the syntax used in searching and indexes). But back to 1982. Because Blade Runner, E.T., and Tron were born that year. As was Elk Cloner, which that snotty little kid Rich wrote to mess with gamers. The virus would attach itself to a game running on version 3.3 of the Apple DOS operating system (the very idea of DOS on an Apple today is kinda' funny) and then activate on the 50th play of the game, displaying a poem about the virus on the screen. Let's look at the Whitman-esque prose:
Elk Cloner: The program with a personality
It will get on all your disks
It will infiltrate your chips
Yes, it's Cloner!
It will stick to you like glue
It will modify RAM too
Send in the Cloner!
This wasn't just a virus. It was a boot sector virus! I guess Apple's MASTER CREATE would then be the first anti-virus software. Maybe Rich sent one to Kurt Angle, Orrin Hatch, Daya, or Mark Cuban. All from Mt. Lebanon. Early viruses were mostly targeted at games and bulletin board services. Fred Cohen coined the term Computer Virus the next year, in 1983.  The first PC virus also came to DOS, but this time MS-DOS, in 1986. Ashar, later called Brain, was the brainchild of Basit and Amjad Farooq Alvi, who supposedly were only trying to protect their own medical software from piracy. Back then people didn't pay for a lot of the software they used. As organizations have gotten bigger and software has gotten cheaper, the pirate mentality seems to have subsided a bit. For nearly a decade there was a slow roll of viruses here and there, mainly spread by being promiscuous with how floppy disks were shared. 
A lot of the viruses were boot sector viruses and a lot of them weren't terribly harmful. After all, if they erased the computer they couldn't spread very far. Brain's message started with "Welcome to the Dungeon." The following year, the poor Alvi brothers realized that if they'd have said Welcome to the Jungle they'd be rich, but Axl Rose beat them to it. The brothers still run a company called Brain Telecommunication Limited in Pakistan. We'll talk about zombies later. There's an obvious connection here.  Brain was able to spread because people started sharing software over bulletin board systems. This was when trojan horses, or malware masked as a juicy piece of software or embedded into other software, started to become prolific. Rootkits, or toolkits that an attacker could use to orchestrate various events on the targeted computer, began to get a bit more sophisticated, doing things like phoning home for further instructions. By the late 80s and early 90s, more and more valuable data was being stored on computers, and lax security created an easy way to get access to that data. Viruses started to go from just being pranks by kids to being something more.  A few people saw the writing on the wall. Bernd Fix wrote a tool to remove a virus in 1987. Andreas Luning and Kai Figge released The Ultimate Virus Killer, an antivirus for the Atari ST. NOD antivirus was released, as well as Flushot Plus and Anti4us. But the one that is still a major force in the IT industry is McAfee VirusScan, from the company founded by a former NASA programmer named John McAfee. McAfee resigned in 1994. His personal life is… how do I put this… special. He currently claims to be on the run from the CIA. I'm not sure the CIA is aware of this.  Other people saw the writing on the wall as well, but went… a different direction. This was when the first file-based viruses started to show up. They infected ini files, .exe files, and .com files. Places like command.com were ripe targets because operating systems didn't sign things yet. Jerusalem and Vienna were released in 1987. Maybe because he listened to too much Bad Medicine from Bon Jovi, Robert Morris wrote the ARPANET worm in 1988, which reproduced until it filled up the memory of computers and shut down 6,000 devices. 1988 also saw Friday the 13th delete files and cause real damage. And Cascade came that year, the first known virus to be encrypted. The code and wittiness of the viruses were evolving.  In 1989 we got the AIDS Trojan. This altered autoexec.bat and counted how many times a computer would boot. At 90 boots, the virus would hide the DOS directories and encrypt the names of files on C:\, making the computer unusable unless the infected computer's owner sent $189 to a PO Box in Panama. This was the first known instance of ransomware. 1990 gave us the first polymorphic virus.  Symantec released Norton Antivirus in 1991, the same year the first polymorphic virus was found in the wild, called Tequila. Polymorphic viruses change as they spread, making them difficult to find with signature-based antivirus detection products. In 1992 we got Michelangelo, which John McAfee said would hit 5 million computers. At this point, there were 1,000 viruses. 1993 brought us Leandro and Freddy Krueger, 94 gave us OneHalf, and 1995 gave us Concept, the first known macro virus. 1994 gave us the first hoax with "Good Times" - I think of that email sometimes when I get messages about petitions online for things that will never happen.  But then came the Internet as we know it today. 
By the mid 90s, Microsoft had become a force to be reckoned with. This provided two opportunities. The first was the ability for someone writing a virus to have a large attack surface. All of the computers on the Internet were easy targets, especially before network address translation started to somewhat hide devices behind gateways and firewalls. The second was that a lot of those computers were running the same software. This meant that if you wrote a tool for Windows, you could get your tool on a lot of computers. One other thing was happening: Macros. Macros are automations that can run inside Microsoft Office, and in the early days they could be used to gain access to lower level functions. Macro viruses often infected the .dot or template used when creating new Word documents, and so all new Word documents would then be infected. As those documents were distributed over email, websites, or good old fashioned disks, they spread.  An ecosystem with a homogeneous distribution of the population that isn't inoculated against an antigen is a ripe hunting ground for a large-scale infection. And so the table was set. It's March, 1999. David Smith of Aberdeen Township was probably listening to Livin' La Vida Loca by Ricky Martin. Or Smash Mouth. Or Sugar Ray. Or watching the Genie In A Bottle video from Christina Aguilera. Because MTV still had some music videos. Actually, David probably went to see American Pie, The Blair Witch Project, Fight Club, or the Matrix, then came home and thought he needed more excitement in his life. So he started writing a little prank. This prank was called Melissa.  As we've discussed, there had been viruses before, but nothing like Melissa. The 100,000 computers that were infected and 1 billion dollars of damage created don't seem like anything by today's standards, but consider this: about 100,000,000 PCs were being sold per year at that point, so that's roughly one tenth of a percent of the units shipped. Melissa would email itself to the first 50 people in an Outlook database, a really witty approach for the time. Suddenly, it was everywhere; and it lasted for years. Because Office was being used on Windows and Mac, the Mac could be a carrier for the macro virus although the payload would do nothing. Most computer users by this time knew they "could" get a virus, but this was the first big outbreak and a wakeup call.  Think about this: if there are supposed to be 24 billion computing devices by 2020, then a similar infection rate next year would hit around 24 million devices. That would mean it hits every person in the Nordic countries. David was fined $5,000 and spent 20 months in jail. He now helps hunt down creators of malware.  Macro viruses continued to increase over the coming years and while there aren't too many still running rampant, you do still see them today. Happy also showed up in 1999 but it just made fireworks. Who doesn't like fireworks? At this point, the wittiness of the viruses, well, it was mostly in the name and not the vulnerability. ILOVEYOU from 2000 was a VBScript virus and Pikachu from that year tried to get kids to let it infect computers.  2001 gave us Code Red, which attacked IIS and caused an estimated $2 billion in damages. Other worms were Anna Kournikova, Sircam, Nimda and Klez. The pace of new viruses was growing, as was the number of devices infected. Melissa started to look like a drop in the bucket. And Norton and other antivirus vendors had to release special tools, just to remove a specific virus. 
Attack of the Clones was released in 2002 - not about the clones of Melissa that started wreaking havoc on businesses. Mylife was one of these. We also got Beast, a trojan that deployed a remote administration tool. I'm not sure if that's what evolved into SCCM yet.  In 2003 we got Simile, the first metamorphic virus, plus Blaster, Sobig, Swen, Graybird, Bolgimo, Agobot, and then Slammer, which was the fastest to spread at that time. That one hit a buffer overflow bug in Microsoft SQL Server and hit 75,000 devices in 10 minutes. 2004 gave us Bagle, which had its own email server, Sasser, and MyDoom, which dropped speeds for the whole internet by about 10 percent. MyDoom convinced users to open a nasty email attachment that said "Andy, I'm just doing my job, nothing personal." You have to wonder what that meant… The Witty worm wasn't super witty, but Netsky, Vundo, Bifrost, Santy, and Caribe were. 2005 gave us Commwarrior (sent through texts), Zotob, and Zlob, but the best was that a rootkit ended up making it onto CDs from Sony. 2006 brought us Starbucks, Nyxem, Leap, Brontok, and Stration. 2007 gave us Zeus and Storm. But then another biggie in 2008. Sure, Torpig, Mocmex, Koobface, Bohmini, and Rustock were a thing. But Conficker used a dictionary attack to get at admin passwords, creating a botnet that was millions of computers strong and spread over hundreds of countries. At this point a lot of these were used to perform distributed denial of service attacks or to just send massive, and I mean massive, amounts of spam.  Since then we've had Stuxnet and Duqu, Flame, Daspy, and ZeroAccess. But in 2013 we got CryptoLocker, which made us much more concerned about ransomware. At this point, entire cities can be taken down with targeted, very specific attacks. The money made from WannaCry in 2017 might or might not have helped develop North Korean missiles. And this is how these things have evolved. First they were kids, then criminal organizations saw an opening. I remember seeing those types trying to recruit young hax0rs at DefCon 12. Then governments got into it and we get into our modern era of "cyberwarfare." Today, people like Park Jin Hyok are responsible for targeted attacks causing billions of dollars worth of damage.  Mobile attacks were up 54% year over year, another reason vendors like Apple and Google keep evolving the security features of their operating systems. Criminals will steal an estimated 33 billion records in 2023. 60 million Americans have been impacted by identity theft. India, Japan, and Taiwan are big targets as well. The average cost of a breach at a company in the United States is now estimated at nearly 8 million dollars, making this about financial warfare. But it's not all doom and gloom. Wars in cyberspace between nation states, most of us don't really care about that. What we care about is keeping malware off our computers so the computers don't run like crap and so unsavory characters don't steal our crap. Luckily, that part has gotten easier than ever. 

TheOutliersInn's podcast
Episode 39 - The Man Behind the Curtain

TheOutliersInn's podcast

Play Episode Listen Later Jun 18, 2019 41:38


Topic: This episode finds us short of guests, but the show must go on!  So, in an act of desperation and in search of fresh meat, we look inward and bring Chas – our podcast technician extraordinaire – from behind the curtain and into the spotlight.  Little did we know we were going to enter “Dr Whoopee’s Wayback Machine” and go back in time to discuss the earliest in personal computer technology.  Names like “Sinclair” and “Apple-II” – when a 20 MEGAbyte hard drive and 640 KILObytes of memory was the most anyone could ever possibly need – and a 40 MEGAbyte hard drive was living like a Saudi Prince.  Then there were the classic arcade games like Asteroids and Defender that are largely lost in time – except the movie “Pixels” brought them back from being the technology equivalent of cave drawings.  So pull up a stool, open a can of Stroh’s or Utica Club and enjoy the show! Hosts: Joseph Paris, Founder of the OpEx Society & The XONITEK Group of Companies; Benjamin Taylor, Managing Partner of RedQuadrant. Guests: Chas. About Chas: Chas is a technology enthusiast (nut) and has been since around the year 1978 or so when he was introduced to a DEC PDP 11/70.  The rabbit hole got progressively deeper from that point on.  He's dabbled with many of the classic computers of the 70's, 80's and early 90's and still has his original Apple //e and Apple IIgs computers.  Modern day machines running emulation software allow him to dabble with machines that weren't available or accessible when they were new, as well as playing classic arcade games from the days of his somewhat misspent youth.  He's still terrible at them but loves playing them nonetheless.

ANTIC The Atari 8-bit Podcast
ANTIC Interview 333 - Cynde Moya, Collections Manager at Living Computers: Museum + Labs

ANTIC The Atari 8-bit Podcast

Play Episode Listen Later Apr 18, 2018 46:57


Cynde Moya, Collections Manager at Living Computers: Museum + Labs   Cynde Moya is Collections Manager at Living Computers: Museum + Labs. Located in Seattle, Washington, Living Computers is a computer museum that provides hands-on experiences using computers ranging from micros to mainframes. (Last time I was there, there was a Xerox Alto, an Apple I, and yes, an Atari 400 with a number of game carts, plus big iron like a Control Data 6500 and DEC PDP-10 - all those machines and more usable by visitors.)   As Collections Manager, Cynde takes care of the museum's collection, and catalogs it.   This interview took place on April 9, 2018.   “It's definitely not all glory when you're cleaning dead rats out of an old computer."   Cynde on Twitter   Living Computers Museum + Labs

BSD Now
188: And then the murders began

BSD Now

Play Episode Listen Later Apr 5, 2017 83:39


Today on BSD Now: the latest DragonFly BSD release, RAID-Z performance, another OpenSSL vulnerability, and more; all this week on BSD Now. This episode was brought to you by
Headlines
DragonFly BSD 4.8 is released (https://www.dragonflybsd.org/release48/)
Improved kernel performance: This release further localizes cache lines and reduces/removes cache ping-ponging on globals. For bulk builds on many-core or multi-socket systems, we have around a 5% improvement, and certain subsystems such as namecache lookups and exec()s see massive focused improvements. See the corresponding mailing list post for details.
Support for eMMC booting, and mobile and high-performance PCIe SSDs: This kernel release includes support for eMMC storage as the boot device. We also sport a brand new SMP-friendly, high-performance NVMe SSD driver (PCIe SSD storage). Initial device test results are available.
EFI support: The installer can now create an EFI or legacy installation. Numerous adjustments have been made to userland utilities and the kernel to support EFI as a mainstream boot environment. The /boot filesystem may now be placed either in its own GPT slice, or in a DragonFly disklabel inside a GPT slice. DragonFly, by default, creates a GPT slice for all of DragonFly and places a DragonFly disklabel inside it with all the standard DFly partitions, such that the disk names are roughly the same as they would be in a legacy system.
Improved graphics support: The i915 driver has been updated to match the version found with the Linux 4.6 kernel. Broadwell and Skylake processor users will see improvements.
Other user-affecting changes: The kernel is now built using -O2. VKernels now use COW, so multiple vkernels can share one disk image. powerd() is now sensitive to time and temperature changes. Non-boot-filesystem kernel modules can be loaded in rc.conf instead of loader.conf.
***
#8005 poor performance of 1MB writes on certain RAID-Z configurations (https://github.com/openzfs/openzfs/pull/321)
Matt Ahrens posts a new patch for OpenZFS. Background: RAID-Z requires that space be allocated in multiples of P+1 sectors, because this is the minimum size block that can have the required amount of parity. Thus blocks on RAIDZ1 must be allocated in a multiple of 2 sectors; on RAIDZ2, a multiple of 3; and on RAIDZ3, a multiple of 4. A sector is a unit of 2^ashift bytes, typically 512B or 4KB. To satisfy this constraint, the allocation size is rounded up to the proper multiple, resulting in up to 3 "pad sectors" at the end of some blocks. The contents of these pad sectors are not used, so we do not need to read or write them. However, some storage hardware performs much worse (around 1/2 as fast) on mostly-contiguous writes when there are small gaps of non-overwritten data between the writes. Therefore, ZFS creates "optional" zio's when writing RAID-Z blocks that include pad sectors. If writing a pad sector will fill the gap between two (required) writes, we will issue the optional zio, thus doubling performance. The gap-filling performance improvement was introduced in July 2009. Writing the optional zio is done by the I/O aggregation code in vdev_queue.c. The problem is that it is also subject to the limit on the size of aggregate writes, zfs_vdev_aggregation_limit, which is 128KB by default. For a given block, if the amount of data plus padding written to a leaf device exceeds zfs_vdev_aggregation_limit, the optional zio will not be written, resulting in a ~2x performance degradation. 
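To make the padding rule concrete, here is a minimal, hypothetical C sketch of the idea described above - not the actual OpenZFS code. The function name raidz_asize, the single-leaf accounting, and the example numbers are all assumptions chosen for illustration; real RAID-Z stripes data across several leaf vdevs.

/* Sketch: round a RAID-Z allocation up to a multiple of (nparity + 1)
 * sectors, then show how a 128KB aggregation limit can leave the
 * "optional" pad write out.  Illustrative only. */
#include <stdio.h>
#include <stdint.h>

/* Round an allocation up to a multiple of (nparity + 1) sectors. */
static uint64_t
raidz_asize(uint64_t sectors, int nparity)
{
    uint64_t mult = nparity + 1;
    return ((sectors + mult - 1) / mult) * mult;
}

int
main(void)
{
    int ashift = 12;                              /* 4KB sectors        */
    uint64_t sector = 1ULL << ashift;
    uint64_t data_sectors = 1024 * 1024 / sector; /* a 1MB write        */
    int nparity = 2;                              /* RAIDZ2: multiple of 3 */

    uint64_t alloc = raidz_asize(data_sectors, nparity);
    printf("data: %llu sectors, allocated: %llu sectors, pad: %llu\n",
        (unsigned long long)data_sectors,
        (unsigned long long)alloc,
        (unsigned long long)(alloc - data_sectors));

    /* The pad write is "optional": it is only issued if the aggregated
     * write stays under the aggregation limit.  (Simplification: we
     * pretend everything goes to one leaf device.) */
    uint64_t agg_limit = 128 * 1024;              /* zfs_vdev_aggregation_limit */
    uint64_t bytes_to_leaf = alloc * sector;
    if (bytes_to_leaf > agg_limit)
        printf("pad sectors skipped -> gap -> ~2x slower on some drives\n");
    else
        printf("pad sectors aggregated with the data write\n");
    return 0;
}

Built with any C compiler, this prints 2 pad sectors for the 1MB RAIDZ2 example and notes that the pad falls outside a 128KB aggregate, which is the situation the patch addresses.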
The solution is to aggregate optional zio's regardless of the aggregation size limit. As you can see from the graphs, this can make a large difference in performance. I encourage you to read the entire commit message; it is well written and very detailed.
***
Can you spot the OpenSSL vulnerability? (https://guidovranken.wordpress.com/2017/01/28/can-you-spot-the-vulnerability/)
This code was introduced in OpenSSL 1.1.0d, which was released a couple of days ago. This is in the server SSL code, ssl/statem/statem_srvr.c, ssl_bytes_to_cipher_list(), and can easily be reached remotely. Can you spot the vulnerability? So there is a loop, and within that loop we have an 'if' statement that tests a number of conditions. If any of those conditions fail, OPENSSL_free(raw) is called. But raw isn't the address that was allocated; raw is incremented every loop. Hence, there is a remote invalid free vulnerability. But not quite. None of those checks in the 'if' statement can actually fail; earlier on in the function, there is a check that verifies that the packet contains at least 1 byte, so PACKET_get_1 cannot fail. Furthermore, earlier in the function it is verified that the packet length is a multiple of 3, hence PACKET_copy_bytes and PACKET_forward cannot fail. So, does the code do what the original author thought, or expected it to do? But what about the next person that modifies that code, maybe changing or removing one of the earlier checks, allowing one of those if conditions to fail and execute the bad code? Nonetheless OpenSSL has acknowledged that the OPENSSL_free line needs a rewrite: Pull Request #2312 (https://github.com/openssl/openssl/pull/2312)
PS: I'm not posting this to ridicule the OpenSSL project or their programming skills. I just like reading code and finding corner cases that impact security, which is an effort that ultimately works in everybody's best interest, and I like to share what I find. Programming is a very difficult enterprise and everybody makes mistakes. Thanks to Guido Vranken for the sharp eye and the blog post.
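To show the bug class in isolation, here is a minimal, self-contained C sketch of the same pattern - not the OpenSSL code itself, where (as noted above) the checks cannot currently fail. The function parse_cipher_list and its sanity check are hypothetical; the point is that the loop advances a working pointer, so an error path that frees the advanced pointer instead of the original allocation performs an invalid free.

/* Sketch of the "free an advanced pointer" bug class -- illustrative
 * only, not the actual OpenSSL code. */
#include <stdlib.h>
#include <string.h>

static int
parse_cipher_list(const unsigned char *buf, size_t len)
{
    /* Each entry is 3 bytes, as in an SSLv2-style cipher list. */
    if (len == 0 || len % 3 != 0)
        return -1;

    unsigned char *raw = malloc(len);
    if (raw == NULL)
        return -1;

    unsigned char *p = raw;                /* working pointer we advance   */
    for (size_t i = 0; i < len; i += 3) {
        memcpy(p, buf + i, 3);             /* copy one 3-byte entry        */
        if (p[0] == 0xff) {                /* hypothetical sanity check    */
            /* BUG: after the first iteration p != raw, so this frees an
             * address malloc() never returned -> undefined behaviour.
             * It should be free(raw). */
            free(p);
            return -1;
        }
        p += 3;                            /* advance to the next entry    */
    }

    free(raw);                             /* correct: free the original   */
    return 0;
}

int
main(void)
{
    /* Two well-formed entries; the sanity check never trips here. */
    const unsigned char wire[] = { 0x00, 0x00, 0x04, 0x00, 0x00, 0x05 };
    return parse_cipher_list(wire, sizeof(wire)) == 0 ? 0 : 1;
}

A conventional fix is to keep the original allocation pointer untouched and only ever pass that to free(), regardless of how the working pointer moves.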
***
Research Debt (http://distill.pub/2017/research-debt/)
I found this article interesting as it relates to not just research, but a lot of technical areas in general. Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next. People expect the climb to be hard. It reflects the tremendous progress and cumulative effort that's gone into the research. The climb is seen as an intellectual pilgrimage, the labor a rite of passage. But the climb could be massively easier. It's entirely possible to build paths and staircases into these mountains. The climb isn't something to be proud of. The climb isn't progress: the climb is a mountain of debt. Programmers talk about technical debt: there are ways to write software that are faster in the short run but problematic in the long run.
Poor Exposition – Often, there is no good explanation of important ideas and one has to struggle to understand them. This problem is so pervasive that we take it for granted and don't appreciate how much better things could be.
Undigested Ideas – Most ideas start off rough and hard to understand. They become radically easier as we polish them, developing the right analogies, language, and ways of thinking.
Bad abstractions and notation – Abstractions and notation are the user interface of research, shaping how we think and communicate. Unfortunately, we often get stuck with the first formalisms to develop even when they're bad. For example, an object with extra electrons is negative, and pi is wrong.
Noise – Being a researcher is like standing in the middle of a construction site. Countless papers scream for your attention and there's no easy way to filter or summarize them. We think noise is the main way experts experience research debt.
There's a tradeoff between the energy put into explaining an idea, and the energy needed to understand it. On one extreme, the explainer can painstakingly craft a beautiful explanation, leading their audience to understanding without even realizing it could have been difficult. On the other extreme, the explainer can do the absolute minimum and abandon their audience to struggle. This energy is called interpretive labor. Research distillation is the opposite of research debt. It can be incredibly satisfying, combining deep scientific understanding, empathy, and design to do justice to our research and lay bare beautiful insights. Distillation is also hard. It's tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery.
+ The distillation can oftentimes require an entirely different set of skills than the original creation of the idea. Almost all of the BSD projects have some great ideas or subsystems that just need distillation into easy-to-understand and easy-to-use platforms or tools. Like the theoretician, the experimentalist or the research engineer, the research distiller is an integral role for a healthy research community. Right now, almost no one is filling it. Anyway, if that bit piqued your interest, go read the full article and the suggested further reading.
***
News Roundup
And then the murders began. (https://blather.michaelwlucas.com/archives/2902)
A whole bunch of people have pointed me at articles like this one (http://thehookmag.com/2017/03/adding-murders-began-second-sentence-book-makes-instantly-better-125462/), which claim that you can improve almost any book by making the second sentence “And then the murders began.” It's entirely possible they're correct. But let's check, with a sampling of books. As different books come in different tenses and have different voices, I've made some minor changes.
“Welcome to Cisco Routers for the Desperate! And then the murders begin.” — Cisco Routers for the Desperate, 2nd ed
“Over the last ten years, OpenSSH has become the standard tool for remote management of Unix-like systems and many network devices. And then the murders began.” — SSH Mastery
“The Z File System, or ZFS, is a complicated beast, but it is also the most powerful tool in a sysadmin's Batman-esque utility belt. And then the murders begin.” — FreeBSD Mastery: Advanced ZFS
“Blood shall rain from the sky, and great shall be the lamentation of the Linux fans. And then, the murders will begin.” — Absolute FreeBSD, 3rd Ed
Netdata now supports FreeBSD (https://github.com/firehol/netdata)
netdata is a system for distributed real-time performance and health monitoring. 
It provides unparalleled insights, in real time, into everything happening on the system it runs on (including applications such as web and database servers), using modern interactive web dashboards. From the release notes: apps.plugin ported for FreeBSD. Check out their demo sites (https://github.com/firehol/netdata/wiki)
***
DistroWatch Weekly reviews RaspBSD (https://distrowatch.com/weekly.php?issue=20170220#raspbsd)
RaspBSD is a FreeBSD-based project which strives to create a custom build of FreeBSD for single-board and hobbyist computers. RaspBSD takes a recent snapshot of FreeBSD and adds on additional components, such as the LXDE desktop and a few graphical applications. The RaspBSD project currently has live images for Raspberry Pi devices, the Banana Pi, Pine64 and BeagleBone Black & Green computers. The default RaspBSD system is quite minimal, running a mere 16 processes when I was logged in. In the background the operating system runs cron, OpenSSH, syslog and the powerd power management service. Other than the user's shell and terminals, nothing else is running. This means RaspBSD uses little memory, requiring just 16MB of active memory and 31MB of wired or kernel memory. I made note of a few practical differences between running RaspBSD on the Pi versus my usual Raspbian operating system. One minor difference is that RaspBSD turns off the Pi's external power light after booting. Raspbian leaves the light on. This means it looks like the Pi is off when it is running RaspBSD, but it also saves a little electricity. Conclusions: Apart from these little differences, running RaspBSD on the Pi was a very similar experience to running Raspbian, and my time with the operating system was pleasantly trouble-free. Long-term, I think applying source updates to the base system might be tedious, and SD disk operations were slow. However, the Pi usually is not utilized for its speed, but rather its low cost and low energy usage. For people who are looking for a small home server or very minimal desktop box, RaspBSD running on the Pi should be suitable.
***
Research UNIX V8, V9 and V10 made public by Alcatel-Lucent (https://media-bell-labs-com.s3.amazonaws.com/pages/20170327_1602/statement%20regarding%20Unix%203-7-17.pdf)
Alcatel-Lucent USA Inc. (“ALU-USA”), on behalf of itself and Nokia Bell Laboratories, agrees, to the extent of its ability to do so, that it will not assert its copyright rights with respect to any non-commercial copying, distribution, performance, display or creation of derivative works of Research Unix® Editions 8, 9, and 10. Research Unix is a term used to refer to versions of the Unix operating system for DEC PDP-7, PDP-11, VAX and Interdata 7/32 and 8/32 computers, developed in the Bell Labs Computing Science Research Center. The version breakdown can be viewed on its Wikipedia page (https://en.wikipedia.org/wiki/Research_Unix). It only took 30+ years, but now they're public. You can grab them from here (http://www.tuhs.org/Archive/Distributions/Research/). If you're wondering what happened with Research Unix: after Version 10, Unix development at Bell Labs was stopped in favor of a successor system, Plan 9 (http://plan9.bell-labs.com/plan9/), which itself was succeeded by Inferno (http://www.vitanuova.com/inferno/). 
***
Beastie Bits
The BSD Family Tree (https://github.com/freebsd/freebsd/blob/master/share/misc/bsd-family-tree)
Unix Permissions Calculator (http://permissions-calculator.org/)
NAS4Free release 11.0.0.4 now available (https://sourceforge.net/projects/nas4free/files/NAS4Free-11.0.0.4/11.0.0.4.4141/)
Another BSD Mag released for free downloads (https://bsdmag.org/download/simple-quorum-drive-freebsd-ctl-ha-beast-storage-system/)
OPNsense 17.1.4 released (https://forum.opnsense.org/index.php?topic=4898.msg19359)
***
Feedback/Questions
gozes asks via Twitter about how to get involved in FreeBSD (https://twitter.com/gozes/status/846779901738991620)
***

Internet History Podcast
127. The History of the iPhone, On Its 10th Anniversary

Internet History Podcast

Play Episode Listen Later Jan 6, 2017 63:30


"So… Three things: A widescreen iPod with touch controls. A revolutionary mobile phone. And a breakthrough internet communications device. An iPod… a phone… and an internet communicator… An iPod, a phone… are you getting it? These are not three separate devices. This is one device! And we are calling it iPhone.”- Steve Jobs, January 9, 2007Those words have become so famous in the history of technology that I imagine a large percentage of listeners have them memorized. Ten years ago this Monday, January 9, Steve Jobs stood on stage and announced the iPhone to the world. It was the crowning achievement in the career of the greatest technologist of our time, the moment that the modern era of computing began.On the ten year anniversary of the birth of the iPhone, this is the story of that moment and the history of that device which can take a rightful place alongside the original Macintosh, the first IBM PC, the Apple I, the Altair 8800, the DEC PDP-8, the IBM System/360 and the ENIAC as one of most important machines to have brought computing into everyday life. See acast.com/privacy for privacy and opt-out information.

The Tinycast

Further Reading: “Medical Devices: The Therac-25,” a fantastic technical overview in an appendix from Dr. Nancy Leveson's book Safeware. ComputingCases.org ethics class material on Therac-25. Therac-25 and the DEC PDP-11 on Wikipedia. Full Transcript: This is Matt Croydon and you are listening to The Tinycast. I write software for a living. I write open …