We've talked about the history of microchips, transistors, and other chip makers. Today we're going to talk about Intel in a little more detail. Intel is short for Integrated Electronics. They were founded in 1968 by Robert Noyce and Gordon Moore. Noyce was an Iowa kid who went off to MIT to get a PhD in physics in 1953. He then joined Shockley Semiconductor Laboratory to work with William Shockley, who'd co-invented the transistor as a solid-state alternative to the vacuum tubes used in computers and amplifiers. Shockley became erratic after he won the Nobel Prize and 8 of the researchers left, now known as the “traitorous eight.” Between them they would go on to found over 60 companies, including Intel - but first they created a new company called Fairchild Semiconductor, where Noyce invented the monolithic integrated circuit in 1959: a single chip that contains multiple transistors. After 10 years at Fairchild, Noyce joined up with coworker and fellow traitor Gordon Moore. Moore had gotten his PhD in chemistry from Caltech and, while at Fairchild, had observed that the number of transistors, resistors, diodes, or capacitors in an integrated circuit was doubling every year. He coined Moore's Law: that it would continue to do so. The two wanted to make semiconductor memory cheaper and more practical, and they needed money to continue their research. Arthur Rock, who had helped them find a home at Fairchild when they left Shockley, helped them raise $2.5 million in backing in a couple of days. On the first day of the company, Andy Grove joined them from Fairchild. He'd fled Hungary after the 1956 revolution and gotten a PhD in chemical engineering at the University of California, Berkeley. Then came Leslie Vadász, another Hungarian emigrant. Funding and money coming in from sales allowed them to hire some of the best in the business - people like Ted Hoff, Federico Faggin, and Stan Mazor. That first year they released 64-bit static random-access memory in the 3101 chip, doubling what was on the market, as well as the 3301 read-only memory chip and the 1101. Then came DRAM, or dynamic random-access memory, in the 1103 in 1970, which became the best-selling chip within the first couple of years. Armed with a lineup of chips and an explosion of companies that wanted to buy them, they went public within 2 years of being founded. 1971 saw Dov Frohman develop erasable programmable read-only memory, or EPROM, while working on a different problem: chips that could be erased with ultraviolet light and reprogrammed electrically. In 1971 they also created the Intel 4004 chip, a project begun in 1969 when a calculator manufacturer out of Japan asked them to develop 12 different chips. Instead they made one that could do all of the tasks of the 12, outperforming the ENIAC from 1946, and so the era of the microprocessor was born. And instead of taking up a basement at a university lab, it took up an eighth of an inch by a sixth of an inch to hold a whopping 2,300 transistors. The chip didn't contribute a ton to the bottom line of the company, but they'd built the first true microprocessor, which would eventually be what they were known for. In the meantime they were making DRAM chips. But then came the 8008 in 1972, ushering in the 8-bit CPU. Their memory chips were being used by other companies developing their own processors, but Intel knew how to build processors too, and the Computer Terminal Corporation was looking to develop what was a trend for a hot minute: programmable terminals.
And given the doubling of speeds, those gave way to microcomputers within just a few years. The Intel 8080 was a 2 MHz chip that became the basis of the Altair 8800, SOL-20, and IMSAI 8080. By then Motorola, Zilog, and MOS Technology were hot on their heels, releasing the 6800, Z80, and 6502 processors. And Gary Kildall wrote CP/M, one of the first operating systems, initially for the 8080 before porting it to other chips. Sales had been good and Intel had been growing. By 1979 they saw the future was in chips and opened a new office in Haifa, Israel, where they designed the 8088, which clocked in at 4.77 MHz. IBM chose this chip for the original IBM Personal Computer. IBM was going to use an 8-bit chip, but the team at Microsoft talked them into going with the 16-bit 8088, and thus created the foundation of what would become the Wintel, or x86, architecture, which would dominate the personal computer market for the next 40 years. One reason IBM trusted Intel is that they had proven to be innovators. They had effectively invented the integrated circuit, then the microprocessor, then coined Moore's Law, and by 1980 had built a 15,000-person company capable of shipping product in large quantities. They were intentional about culture, looking for openness and distributed decision making, and trading off bureaucracy for figuring out cool stuff. That IBM decision to use that Intel chip is one of the most impactful in the entire history of personal computers. With Microsoft DOS and then Windows able to run on the architecture, nearly every laptop and desktop would run on that original 8088/86 architecture. Based on the standards, Intel and Microsoft would both market that their products ran not only on those IBM PCs but also on any PC using the same architecture, and so IBM's hold on the computing world would slowly wither. On the back of all these chips, revenue shot past $1 billion for the first time in 1983. IBM bought 12 percent of the company in 1982, giving them the Big Blue seal of approval, something important even today. And the hits kept on coming, with the 286 through 486 chips arriving during the 1980s. Intel brought the 80286 to market and it was used in the IBM PC AT in 1984. This new chip brought new ways to manage addresses - it was the first Intel chip that could do memory management and the first where we saw protected mode, so we could get virtual memory and multi-tasking. All of this was made possible with over a hundred thousand transistors. At the time the original Mac used a Motorola 68000, but its sales were sluggish while IBM's flourished, and slowly we saw the rise of companies cloning the IBM architecture, like Compaq - still using those Intel chips. Jerry Sanders had also left Fairchild, to found AMD, and ended up cloning the instructions in the 80286 after entering into a technology exchange agreement with Intel. This led to AMD making the chips at volume and selling them on the open market, and AMD would go on to fast-follow Intel for decades. The 80386 would come to be known simply as the Intel 386, with over 275,000 transistors. It was launched in 1985, but we didn't see a lot of companies use it until the early 1990s. The 486 came in 1989. Now we were up to a million transistors, as well as a math coprocessor. It was 50 times faster than the 4004 that had come out less than 20 years earlier.
I don't want to take anything away from the phenomenal run of research and development at Intel during this time, but the chips and cores and amazing developments almost seemed to be on autopilot. The 80s also saw them invest half a billion dollars in reinvigorating their manufacturing plants. With quality manufacturing allowing for a new era of printing chips, the 90s were just as good to Intel. I like to think of this as the Pentium decade, with the first Pentium arriving in 1993. 32-bit, here we come. Revenues jumped 50 percent that year, closing in on $9 billion. Intel had been running an advertising campaign around Intel Inside, which represented a shift in branding from the IBM PC to the Intel chip inside it. The Pentium Pro came in 1995 and we'd crossed 5 million transistors in each chip. And the brand equity was rising fast. More importantly, so was revenue: 1996 saw revenues pass $20 billion. The personal computer was showing up in homes and on desks across the world and most had Intel Inside - in fact we'd gone from Intel Inside to Pentium Inside. 1997 brought us the Pentium II with over 7 million transistors, the Xeon came in 1998 for servers, and 1999 brought the Pentium III. By 2000 they introduced their first gigahertz processor, and they announced the next generation after Pentium: Itanium, finally moving the world to the 64-bit processor. As processor speed gains slowed, they were able to bring multi-core processors and massive parallelism out of the hallowed halls of research and to the desktop computer in 2005. 2006 saw Intel chips go from just Windows PCs to the Mac as well. And we got 45-nanometer logic technology in 2006; using hafnium-based high-k material for transistor gates represented a shift from the silicon-gated transistors of the 60s and allowed them to move to hundreds of millions of transistors packed into a single chip. i3, i5, i7, and on. The chips now have over a couple hundred million transistors per core, with 8 cores on a chip potentially putting us over 1.7 or 1.8 billion transistors per chip. Microsoft, IBM, Apple, and so many others went through huge growth and sales jumps, then retreated while dealing with how to run a company of the size they had suddenly become. This led each to invest heavily in R&D to effectively end a lost decade - like when IBM built the S/360 or Apple developed the iMac and then the iPod. Intel's strategy had been research and development: build amazing products and they sold. Bigger, faster, better. The focus had been on power. But mobile devices were starting to take the market by storm. And the ARM chip was more popular on those because, with a reduced set of instructions, it could use less power and be a bit more versatile. Intel coined Moore's Law. They know that if they don't find ways to pack more and more transistors into smaller and smaller spaces then someone else will. And while they haven't been huge in the RISC-based System on a Chip space, they do continue to release new products and look for the right product-market fit - just like they did when they went from DRAM and SRAM to producing the types of chips that made them into a powerhouse. And on the back of a steadily rising revenue stream that's now over $77 billion, they seem poised to be able to weather any storm, not only on the back of R&D but also some of the best manufacturing in the industry. Chips today are so powerful and small that they contain the whole computer from the era of those Pentiums, just as that 4004 chip contained a whole ENIAC. This gives us a nearly limitless canvas to design software.
Machine learning on a SoC expands the reach of what that software can process. Technology is moving so fast in part because of the amazing work done at places like Intel, AMD, and ARM. Maybe that positronic brain that Asimov promised us isn't as far off as it seems. But then, I thought that in the 90s as well so I guess we'll see.
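As a rough sanity check on the transistor counts in this story, here's a minimal sketch in Python - assuming the commonly cited two-year doubling period rather than Moore's original yearly observation, and using the approximate counts quoted above:

```python
# A back-of-the-envelope check of Moore's Law against the chips in this
# episode, assuming a two-year doubling period from the 4004's 2,300
# transistors in 1971. Counts are the approximate figures quoted above.
milestones = {
    1971: 2_300,      # Intel 4004
    1985: 275_000,    # 386
    1989: 1_000_000,  # 486
    1995: 5_000_000,  # Pentium Pro
}

def predicted(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Transistor count predicted by doubling every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year, shipped in milestones.items():
    print(f"{year}: predicted ~{predicted(year):,.0f}, shipped ~{shipped:,}")
```

The two-year curve lands within roughly a factor of two of each shipped count, which is about as close as a one-parameter law deserves to get.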
A BIOS (Basic Input/Output System) is a piece of firmware on a PC that sits between the hardware and the operating system. It takes care of some essential functions like hardware startup tests, power management, boot device order, and control of microprocessor support chips. The original firmware on IBM PCs and PC compatibles was called the "BIOS", but most PCs manufactured in the last decade use a newer standard known as UEFI for their firmware. However, the term BIOS is still used generically to refer to UEFI-compatible firmware, so in this episode we discuss PC firmware more generally than any specific BIOS. We discuss what a BIOS does and why a user may want to enter its setup mode to configure it.
Show Notes:
Episode 2: What is an Operating System?
Episode 65: What is a Device Driver?
Follow us on Twitter @KopecExplains.
Theme “Place on Fire” Copyright 2019 Creo, CC BY 4.0
Find out more at http://kopec.live
Commercial and industrial buildings waste around $200 billion worth of energy every year, and that is just in the US. Mark Chung is the Co-founder of Verdigris, a breakthrough AI-based sensor technology that helps companies monitor and reduce their energy usage. In this episode, Mark joins our host Donna Loughlin to discuss his mission to cut down on energy waste for the good of future generations, talking about how he identified an issue that went unnoticed for years and how he has since built a game-changing company that is truly making a difference. Before any world-changing innovation, there was a moment, an event, a realization that sparked the idea before it happened. This is a podcast about that moment — about that idea. Before IT Happened takes you on a journey with the innovators who imagined — and are still imagining — our future. Join host Donna Loughlin as her guests tell their stories of how they brought their visions to life.
JUMP STRAIGHT INTO:
(02:58) - Growing up in Texas as a first-generation Asian American - “I recall when I was really young having one of the first IBM PCs that was available for consumer purchase. My dad got one and he taught me how to program tic-tac-toe on it. He was kind of a programmer. I was around computers and technology ever since I was a little kid.”
(08:35) - Mark's Before IT Happened moment - “I couldn't be just a bystander watching climate change erode when I have the ability to develop technology that could change the trajectory.”
(14:00) - The desire to tackle the $200 billion energy waste problem for future generations - “We sort of came to this realization that all of these devices were actually speaking a language that AI could understand, we just needed to figure out what it was saying.”
(22:39) - A problem no one had tried to solve before - “It was just surprising that even in the highest performing building with an unlimited budget to try and solve this challenge, there was no tech there. Nobody has tried to solve the problem before!”
(26:33) - The benefits of energy savings for private enterprises and the government - “In the last two years those kinds of policies, the compliance and the company emphasis on it, has really elevated the conversation of energy management.”
(34:05) - Mark's North Star - “Everything in the world is interconnected and we have a responsibility as part of that interconnection to measure what we're doing, and how does it impact everything else. You can't just take a bunch of coal out of the ground, burn it up and make it someone else's problem.”
EPISODE RESOURCES:
Connect with Mark Chung on https://twitter.com/mychung (Twitter) and https://www.linkedin.com/in/markchung (LinkedIn)
Learn more about https://verdigris.co/ (Verdigris)
Listen to Before IT Happened's: https://www.beforeithappened.com/podcast-episodes/in-the-pursuit-of-cleaner-farming-making-great-wine-and-chasing-monarch-butterflies-with-carlo-mondavi-episode-12 (In the Pursuit of Cleaner Farming, Making Great Wine and Chasing Monarch Butterflies with Carlo Mondavi)
Thank you for listening! Follow https://www.beforeithappened.com/ (Before IT Happened) on https://www.instagram.com/beforeithappenedshow/ (Instagram) and https://twitter.com/TheBIHShow (Twitter), and don't forget to subscribe, rate and share the show wherever you listen to podcasts! Before IT Happened is produced by Donna Loughlin and https://www.studiopodsf.com/ (StudioPod Media) with additional editing and sound design by https://nodalab.com/ (nodalab).
The Executive Producer is Katie Sunku Wood and all episodes are written by Susanna Camp.
The first stage of building up a business is to break things down. Michael Dell started a computer company in his dorm room by cracking open some early IBM PCs and figuring out what he could do better, faster, and cheaper. Then he did the same thing to the entire model of computer sales. Learn from Dell how to revolutionize an industry — using deconstruction to gain insight your competitors lack, and then building something bigger and better.
Read a transcript of this episode: https://mastersofscale.com
Subscribe to the Masters of Scale weekly newsletter: http://eepurl.com/dlirtX
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Lee Felsenstein went to the University of California, Berkeley in the 1960s. He worked at the tape manufacturer Ampex - the same company Oracle's founders would later come out of - before going back to Berkeley to finish his degree. He was one of the original members of the Homebrew Computer Club, and, like so many inspired by the Altair and its S-100 bus, he designed his own machine: the Sol-20, released in 1976 and arguably the first microcomputer that came with a built-in keyboard and could be hooked up to a television. The Apple II was introduced the following year. Adam Osborne was another of the Homebrew Computer Club regulars who wrote An Introduction to Microcomputers and sold his publishing company to McGraw-Hill in 1979. Flush with cash, he enlisted Felsenstein to help create another computer, which became the Osborne 1 - the first commercial portable computer, although given that it weighed almost 25 pounds, it's more appropriate to call it a luggable computer. Before Felsenstein built computers, though, he worked with a few others on a community computing project they called Community Memory. Judith Milhon was an activist in the 1960s Civil Rights movement who helped organize marches and rallies and went to jail for civil disobedience. She moved to Ohio, where she met Efrem Lipkin, and as with many in what we might think of as the counterculture now, they moved to San Francisco in 1968. St. Jude, as she came to be called, learned to program in 1967 and ended up at the Berkeley Computer Company after the work on the Berkeley timesharing projects was commercialized. There, she met Pam Hardt of Project One. Project One was a technological community built around an alternative high school founded by Ralph Scott. They brought together a number of non-profits to train people in various skills, and as one might expect in the San Francisco area counterculture, they had a mix of artists, craftspeople, filmmakers, and people with deep roots in technology - so much so that it became a bit of a technological commune. They had a warehouse and did day care, engineering, film processing, and documentaries, and many participated in anti-Vietnam War protests. They had all this space, and Hardt called around to find a computer. She got an SDS-940 mainframe donated by TransAmerica in 1971; Xerox had gotten out of the computing business and TransAmerica's needs were better suited to other computers at the time. They had this idea to create a bulletin board system for the community, and created a project at Project One they called Resource One. Plenty thought computers were evil at the time, given their rapid advancements during the Cold War era, and yet many also thought there was incredible promise to democratize everything. Peter Deutsch then donated time and an operating system he'd written a few years before. She then published a request for help in the People's Computer Company newsletter and got a lot of people who just made their own things - maybe an early precursor to micro-services, where various people tinkered with data and programs. They were able to do so because of the people who could turn that SDS into a timesharing system. St. Jude's partner Lipkin took on the software part of the project. Chris Macie wrote a program that digitized information on social services offered in the area, which was maintained by Mary Janowitz, Sherry Reson, and Mya Shone. That was eventually taken over by the United Way until the 1990s. Felsenstein helped with the hardware.
They used a teletype terminal at first, and later a video terminal and keyboard built into a wooden cabinet, so real humans could access the system. The project then evolved into what was referred to as Community Memory. Community Memory became the first public computerized bulletin board system when it was established in 1973 in Berkeley, California. The first Community Memory terminal was located at Leopold's Records in Berkeley. This was the first opportunity for people who were not studying scientific subjects to be able to use computers. It allowed the team to expand the timesharing system into the community, and it became a free online community-based resource used to share knowledge, organize, and grow. It became very popular, but was eventually shut down by the founders because they faced hurdles replicating the equipment and languages being used and were unable to expand the project. The initial stage of Community Memory, from 1973 to 1975, was an experiment to see how people would react to using computers to share information. Operating from 1973 to 1992, it went from minicomputers to microcomputers as those became more prevalent. Before Resource One and Community Memory, computers weren't necessarily used for people. They were used for business, scientific research, and military purposes. After Community Memory, Felsenstein and others in the area and around the world helped make computers personal. Community Memory was one aspect of that process, but there were others that unfolded in the UK, France, Germany, and even the Soviet Union - although those were typically impacted by embargoes and a lack of the central government's buy-in for computing in general. After the initial work was done, many of the core instigators went in their own directions. For example, Felsenstein went on to create the Sol and pursue his other projects in personal computing. Many had families or moved out of the area after the Vietnam War ended in 1975. The economy still wasn't great, but the technical skills made them more employable. Some of the developers and a new era of contributors regrouped and created a new non-profit in 1977. They started from scratch and developed their own software, database, and communication packages. The printing terminal was very noisy, so they encased it in a cardboard box with a transparent plastic top so they could see what was being printed out. This program ran from 1984 to 1989. After more research, a new terminal was released in 1989 in Berkeley. By then it had evolved into a pre-web social network. The modified keyboard had brief instructions mounted on it, which showed the steps to send a message, how to attach keywords to messages, and how to search those keywords to find messages from others. Ultimately, the design underwent three generations, ending in a network of text-based browsers running on basic IBM PCs accessing a Unix server. It was never connected to the Internet, and closed in 1992. By then, it was large, underpowered, and uneconomical to run in an era where servers and graphical interfaces were available. A booming economy also ironically meant a shortage of funding: the job market had exploded for programmers in the decade that led up to the dot-com bubble, and with inconsistent marketing and outreach, Community Memory shut down in 1992. Many of the people involved with Resource One and Community Memory went on to have careers in computing. St. Jude helped found the cypherpunks and created Mondo 2000, a magazine dedicated to that space where computers meet culture.
She also worked with Efrem Lipkin on CoDesign, and he served as CTO for a number of dot-coms in the late 1990s. Chris Neustrup became a programmer for Agilent. The whole operation had been funded by various grants and donations, and while there haven't been any studies on the economic impact - it's hard to attribute inspiration rather than direct influence - the payoff was nonetheless considerable.
Imagine a game that begins with a printout that reads: You are standing at the end of a road before a small brick building. Around you is a forest. A small stream flows out of the building and down a gully. In the distance there is a tall gleaming white tower. Now imagine typing some information into a teletype and then reading the next printout. And then another. A trail of paper lists your every move. This is interactive gaming in the 1970s. Later versions had a monitor, so a screen could just show a cursor, and the player needed to know what to type. Type N and hit enter and the player travels north. “Search” doesn't work but “look” does. “Take water” works, as does “Drink water,” but it takes hours to find dwarves and dragons and figure out how to battle or escape. This is one of the earliest games we played, and it was marvelous. The game was called Colossal Cave Adventure and it was one of the first conversational adventure games. Many came after it in the 70s and 80s, in an era before good graphics were feasible. But the imagination was strong. The Oregon Trail was written before it, in 1971, and Trek73 came in 1973, both written for HP minicomputers. Dungeon was written in 1975 for a PDP-10; its author, Don Daglow, went on to work on games like Utopia and Neverwinter Nights. Another game called Dungeon showed up in 1975 as well, on the PLATO network at the University of Illinois Urbana-Champaign. As the computer monitor spread, so spread games. William Crowther got his degree in physics at MIT and then went to work at Bolt Beranek and Newman during the early days of the ARPANET. He was on the IMP team, the people who developed the Interface Message Processor - the first nodes of the packet-switching ARPANET, the ancestor of the Internet. The hours were long, but when he wasn't working, he and his wife Pat explored caves. She was a programmer as well. Or he played the new Dungeons & Dragons game that was popular with other programmers. The two got divorced in 1975, and like many suddenly single fathers he searched for something for his daughters to do when they were at the house. Crowther combined exploring caves, Dungeons & Dragons, and FORTRAN to get Colossal Cave Adventure, often just called Adventure. And since he worked on the ARPANET, the game found its way out onto the growing computer network. Crowther moved to Palo Alto and went to work for Xerox PARC in 1976 before going back to BBN and eventually retiring from Cisco. Crowther loosely based the game mechanics on the ELIZA natural language processing work done by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the 1960s. That had been a project to show how computers could appear to understand text provided to them, most notably used in tests to have a computer provide therapy sessions. And writing software for the kids, or gaming, can be therapeutic as well. As can replaying happier times. Crowther had explored Mammoth Cave National Park in Kentucky in the early 1970s. The game follows along his notes about the caves, letting the player explore the area using natural language while the computer looked for commands in what was entered. The original FORTRAN code for the PDP-10 he had at his disposal at BBN took about 700 lines. When he was done he went off on vacation, and the game spread. Programmers in that era just shared code; source needed to be recompiled for different computers, so they had to. Another programmer was Don Woods, who also used a PDP-10.
He had gone to Princeton in the 1970s and was working at the Stanford AI Lab, or SAIL, at the time. He came across the game and asked Crowther if it would be OK to add a few features, and did. His version got distributed through DECUS, or the Digital Equipment Computer Users Society; a lot of people went there for software at the time. The game was up to 3,000 lines of code when it left Woods. The adventurer could now enter the mysterious cave in search of the hidden treasures. The concept of the computer as a narrator began with Colossal Cave Adventure and is now widely used, although we now have vast rendered scenery and can point and click where we want to go, so we don't need to type commands as often. The interpreter looked for commands like “move,” “interact” with other characters, “get” items for the inventory, and so on. Woods went further and added more words and the ability to interpret punctuation as well. He also added over a thousand lines of text used to identify and describe the 40 locations. Woods continued to update the game until the mid-1990s. James Gillogly of RAND ported the code to C so it would run on the newer Unix architecture in 1977, and it's still part of many a BSD distribution. Microsoft published a version of Adventure in 1979 that was distributed for the Apple II and TRS-80, and followed that up in 1981 with a version for Microsoft DOS, or MS-DOS. Adventure was now a commercial product. Kevin Black wrote a version for IBM PCs. Peter Gerrard ported it to the Amiga and implemented the version for the Tandy 1000. Bob Supnik rose to Vice President at Digital Equipment - not because he ported the game, but it didn't hurt. And throughout the 1980s, the game spread to other devices as well. The Original Adventure was a version that came out of Aventuras AD in Spain; they gave it one of the biggest updates of all. Colossal Cave Adventure was never forgotten, even as newer games replaced it. Zork came along in 1977 and Adventureland in 1979. Ken and Roberta Williams played the game in 1979. Ken had bounced around the computer industry for a while and had a teletype terminal at home when he came across Colossal Cave Adventure in 1979. The two became transfixed and opened their own company to make the game they released the next year, called Mystery House. And the text adventure genre moved to a new level when they sold 15,000 copies and it became the first hit. Rogue and others followed, increasingly interactive, until fully immersive graphical games replaced the adventure genre in general. That process began when Warren Robinett of Atari created the 1980 game Adventure. Robinett saw Colossal Cave Adventure when he visited the Stanford Artificial Intelligence Laboratory in 1977. He had been inspired into a life of programming by a professor he had in college, Ken Thompson, who taught while on sabbatical from Bell Labs - where Thompson, with Dennis Ritchie and one of the most amazing teams of programmers ever assembled, gave the world Unix and the C programming language. Robinett's Adventure went on to sell over a million copies, and the genre of fantasy action-adventure games moved from text to video.
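To make the interpreter idea concrete, here's a minimal sketch of the kind of two-word verb-noun loop these games ran - in Python rather than Crowther's FORTRAN, with a map and vocabulary invented for illustration rather than taken from the real game's data:

```python
# A toy verb-noun interpreter in the spirit of Colossal Cave Adventure.
# The rooms and vocabulary here are illustrative, not Crowther's actual data.
ROOMS = {
    "road": {
        "desc": "You are standing at the end of a road before a small brick building.",
        "north": "building",
    },
    "building": {
        "desc": "You are inside a small brick building, by a spring.",
        "south": "road",
    },
}

def play():
    location, inventory = "road", []
    print(ROOMS[location]["desc"])
    while True:
        words = input("> ").lower().split()
        if not words:
            continue
        verb = words[0]
        noun = words[1] if len(words) > 1 else None
        # Single-letter directions expand to full words, like typing N for north.
        direction = {"n": "north", "s": "south"}.get(verb, verb)
        if direction in ROOMS[location]:
            location = ROOMS[location][direction]
            print(ROOMS[location]["desc"])
        elif verb == "look":
            print(ROOMS[location]["desc"])
        elif verb in ("take", "get") and noun:
            inventory.append(noun)
            print(noun.upper() + " taken.")
        elif verb == "quit":
            break
        else:
            # Anything outside the vocabulary fails, which is why
            # "search" got you nowhere while "look" worked.
            print("I don't understand that.")

if __name__ == "__main__":
    play()
```

The real game's two-word parser worked on the same principle, just with hundreds of vocabulary words and the cave map Crowther transcribed from his notes.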
It's Spook Month on Advent of Computing! Every October we cover the more spooky, scary, and frustrating side of computers. To kick off this year we are looking at viruses again, this time with a special eye to the first infections for IBM PCs and compatible systems. Besides the technical changes, this drops us into an interesting transitional period. Up to this point viruses had been something of an in-joke amongst hackers and computer nerds, but with the creation of viruses like Brain and VirDem we see them start to enter public awareness.
Selected Sources:
https://dl.acm.org/doi/pdf/10.1145/358198.358210 - Reflections on Trusting Trust
http://web.archive.org/web/20060427081139/http://www.brain.net.pk/aboutus.htm - Brain Computing on the Brain virus
https://archive.org/details/computervirusesh0000burg - Computer Viruses: A High-Tech Disease
This podcast covers New Girl Season 2, Episode 4, Neighbors, which originally aired on October 9, 2012 and was written by Berkley Johnson and directed by Steve Pink.
Here's a quick recap of the episode: the loft gets some new younger neighbors, and while Jess and Schmidt try to be friends with them, they worship Jess and hate Schmidt. Meanwhile, Nick feels compelled to “break” Schmidt and make him feel like he's old, which he does through pranking him. Winston is realizing what more he wants out of life and makes a career change.
We discuss Pop Culture References such as:
Central to this episode, Jess creates “catch phrases” that come from TV shows like:
“Did I do that?” by Steve Urkel from Family Matters
“How rude” by Stephanie Tanner from Full House
Additional Pop Culture References:
Old Spice + “the guy on the horse” - Nick is wearing Old Spice deodorant and Schmidt was unsure of the smell. Nick tries to explain that Old Spice as a brand is coming back in style, especially because of the “guy on the horse”, which is a reference to a series of Old Spice commercials.
Mr. Belvedere - Jess was watching Mr. Belvedere at the beginning of the episode. Mr. Belvedere is an American sitcom that aired between 1985 and 1990, based on the 1947 novel Belvedere. The series follows posh butler Lynn Belvedere as he struggles to adapt to the Owens household.
Characters from 80s Sitcoms - Jess mentions she can do character voices from any 80s sitcom. Specifically she mentions:
Alf - Alf is an American TV sitcom that aired between 1986 and 1990. The title character is ALF (an acronym for “alien life form”), who crash-lands in the garage of the suburban middle-class Tanner family.
Cousin Larry Appleton [Perfect Strangers] - Larry Appleton is a character on the series Perfect Strangers, which aired from 1986 to 1993. The series chronicles the coexistence of midwestern American Larry Appleton and his distant cousin from eastern Mediterranean Europe, Balki Bartokomous. “Get out of the city, Cousin Larry Appleton” was a phrase used by the character Balki in the show.
Frasier Crane [Frasier] - Dr. Frasier Crane is the title character of the series Frasier, which aired from 1993 until 2004. Frasier was created as a spin-off of the TV show Cheers, continuing the story of psychiatrist Frasier Crane as he returned to his hometown of Seattle and started building a new life as a radio advice show host while reconnecting with his father and brother and making new friends.
Dolla-dolla bills, y'all - Schmidt mentions he'll be at the neighbors' party with bells on, and then makes a bad reference to the phrase “Dolla Dolla Bills, Y'all” by instead saying “Dolla Dolla Bells, Y'all”. This phrase is made known from the lyrics of the 1993 song C.R.E.A.M. by Wu-Tang Clan, and it has gone on to become a larger part of internet culture.
[Jewish] Peter Pan - Schmidt mentions he'll never stop growing and that he's like a Jewish Peter Pan (which he also phrases as “Petya Pan”, “Petter Pan”, and “Pesach Pan”). Created by Scottish novelist and playwright J. M. Barrie, Peter Pan is a fictional character: a free-spirited and mischievous young boy who can fly and never grows up. This Screen Rant article also compares the entire show New Girl to Peter Pan (note: there are spoilers in this article).
Gran Torino (as in referring to Clint Eastwood being old) - When Nick is trying to talk to Schmidt in the middle of his spiral about the neighbors thinking he's old, Schmidt calls Nick Gran Torino. Gran Torino is an American drama film directed and produced by Clint Eastwood, who also starred in the film and plays an older man.
Top Gun / Anthony Edwards, the “Goose” man - Schmidt walks into the neighbors' apartment, sees they are watching the movie Top Gun, and calls out the character “Goose”, played by the actor Anthony Edwards. Top Gun is a drama movie that follows United States Naval Aviator LT Pete “Maverick” Mitchell and his Radar Intercept Officer (RIO) LTJG Nick “Goose” Bradshaw as they are stationed aboard the USS Enterprise.
Hilary Swank - Schmidt mentions that the actor Anthony Edwards is like the Hilary Swank of bald men. Hilary Swank is an American actress and film producer known for her roles in Million Dollar Baby and Boys Don't Cry - both of which she has won awards for - as well as P.S. I Love You and The Homesman.
[F]rank Sinatra - Winston calls himself “Prank” Sinatra after the famed musician Frank Sinatra, as if to say he could smoothly pull off pranks. Francis “Frank” Albert Sinatra was an American singer and actor who was one of the most popular and influential musical artists of the 20th century. He is one of the best-selling music artists of all time, having sold more than 150 million records worldwide.
Parkour - Schmidt proclaims “Parkour!” when he is quickly moving around and jumping on items in the apartment. Parkour is a training discipline where people aim to get from one point to another in a complex environment, without assisting equipment and in the fastest and most efficient way possible.
We also cover Schmidt's happiness that the neighbors don't hate him because of his age but rather his personality as our “Schmidtism”. In the “In the 2020s” section, our “not” was how Schmidt dismissed Nick as a professional because he wasn't wearing a suit, and our “yes” was how the loft roommates were good friends, including Schmidt encouraging Jess to follow her passion of teaching and the loft encouraging Winston with his job. We also give a brief look into the careers of Charlie Saxton (Chaz), Morgan Krantz (Fife), Jinny Chung (Sutton), and Jasmine Di Angelo (Brorie), the guest stars of this episode. Also in this episode were the following guest stars who we do not discuss in the podcast: Stone Eisenmann (Young Nick) and Jordan Fuller (Young Winston).
On this episode, we discussed actuarial life expectancy and shared our results from this quiz, which we took for fun. We also discuss Schmidt and Nick's ages being inconsistent through the show so far, as well as top pranks within movies listed on this article and youtube link. Lastly, we also mention how Zooey Deschanel has made a guest appearance on the show Frasier, whose title character Jess impersonates at the beginning of the episode.
While not discussed in the podcast, we noted other references in this episode:
When Schmidt is sharing that he's younger and more successful than the rest of the loft, he considers himself Snow Leopard and the rest of the loft DOS.
Snow Leopard - This is the 7th major release of the Apple operating system for their Macintosh computers. It was released in August of 2009, and the next major release didn't come until July 2011.
DOS - This was a disk operating system that was initially used in IBM PCs, but the acronym was used for over a dozen other operating systems in the time frame of the 1960s - 1990s.
TGIF - In this episode, Jess mentions that there's a TGIF marathon on. The TGIF lineup specifically stood for “Thank Goodness It's Funny” and was a block of family-friendly comedies from 1989 - 2000, 2003 - 2005, and 2018 - 2019.
Sitar - When the neighbors are trying to invite Jess over, they mention that they have a sitar. The sitar is an Indian string instrument used in Hindustani classical music. It was invented in medieval India, popularized by the composer Ravi Shankar, and used in the 1960s by bands like The Beatles and The Rolling Stones.
Pigskin - When Winston is making his big speech, he compares his job to the “pigskin” in the game of life. Pigskin is slang for a football, which was originally inflated with the bladders of animals like pigs and later covered by leather, leading to the term “pigskin”. Footballs made from inflated pig bladders were replaced in the 1860s when vulcanized rubber was invented.
After Schmidt realizes the neighbors don't like him, he tries to act cooler by doing burpees:
Olympics - When Schmidt is doing burpees, the neighbors sarcastically ask when the next Olympics is. The Olympic Games are an international sports competition featuring summer and winter sports where thousands of athletes from around the world compete and represent their countries. They are held every 4 years, with either a Summer or Winter Olympics every 2 years.
“Set a PR” - When Schmidt is doing burpees, he exclaims that he's going to “set a PR”. A “PR” is a personal record and is used to evaluate one's own best performance as a goal to beat in future workout sessions.
This episode got a 7/10 rating from Kritika and a 7.5/10 from Kelly, and we both had the same favorite character again: Nick! Thanks for listening and stay tuned for Episode 5!
Music: “Hotshot” by scottholmesmusic.com
Follow us on Twitter, Instagram or email us at whosthatgirlpod@gmail.com!
Website: https://smallscreenchatter.com/
This episode of the Business Karaoke Podcast is a first - it is our first episode in Japanese! The Business Karaoke Podcast exists to modernize the conversations around doing business in and with Japan. In order to be authentic to that promise, we need to explore today's pressing topics of innovation, people, and technology from both sides and in both languages. To premiere our Japanese dialogue, we were joined by Junji Matsuguma. Junji is a cross-products systems architect at IBM, utilizing customer workshops about IT infrastructure optimization, hybrid cloud, IT economics study, and Design Thinking for customers' digital transformation. I had the pleasure of getting to know Junji through our mutual interest in evangelizing Design Thinking among Japanese clients. Below is an outline of our conversation so you can quickly navigate and find what is most relevant for you.
00:53 An introduction to Junji | 自己紹介。
02:55 Working at home during COVID-19 | 新型コロナウイルスの時期で在宅で仕事をすること。
08:30 Tips for video conferencing | ビデオカンファレンスのティップス。
13:56 Changes in the IT industry | ITインダストリーの変化。
18:00 Role of Design Thinking | デザインシンキングの役割。
20:15 How are traditional Japanese companies responding to Design Thinking? | 日本の伝統的な会社はデザインシンキングを受けれるでしょうか?
23:41 Junji's personal 'wow' points of Design Thinking | 個人的にデザインシンキングのいいポイント。
27:55 Benefits of Design Thinking with customers | お客さんとデザインシンキングの利益。
38:00 The world after COVID-19 | 新型コロナウイルス後の世界。
46:50 Hopes for the future | 将来の希望。
As usual, a big thank you to YOU for listening, and an even bigger thank you to Junji for his time.
Read more on Junji Matsuguma below, or connect with him on LinkedIn. Junji joined IBM in 1988 and worked as a hardware engineer on product development, from gate array modules to IBM PCs and ThinkPads. Junji then changed roles to pre-sales technical specialist for engineering workstations and blade servers. He now works as a cross-products systems architect, utilizing customer workshops about IT infrastructure optimization, hybrid cloud, IT economics study, and Design Thinking for customers' digital transformation. His hobby is handcraft: he has been learning “relieur”, or bookbinding in the traditional European way, since 2005 and is now in the advanced course.
Support the show (https://www.buymeacoffee.com/1QnboZC)
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we're going to look at an often forgotten period in the history of computers: the world before DOS. I've been putting off telling the story of CP/M. But it's time. Picture this: It's 1974. It's the end of the Watergate scandal. The oil crisis. The energy crisis. Stephen King's first book Carrie is released. The Dolphins demolish my Minnesota Vikings 24-7 in the Super Bowl. Patty Hearst is kidnapped. The Oakland A's win the World Series. Muhammad Ali pops George Foreman in the grill to win the heavyweight title. Charles de Gaulle Airport opens in Paris. The Terracotta Army is discovered in China. And in one of the most telling shifts that we were moving from the 60s into the mid-70s, the Volkswagen Golf replaces the Beetle. I mean, the hippies shifted to Paul Anka, Paper Lace, and John Denver. The world was settling down. And the world was getting ready for something to happen. A lot of people might not have known it yet, but the Intel 8080 series of chips was about to change the world. Gary Kildall could see it. He'd bought the first commercial microprocessor, the Intel 4004, when it came out in 1971. He'd been enamored and consulted with Intel. He finished his doctorate in computer science and went to the Naval Postgraduate School in Monterey to teach, and developed Kildall's Method to optimize compilers. But then he met the 8080 chip. The Intel Intellec-8 was an early computer that he wanted to get an operating system running on. He'd written PL/M, or the Programming Language for Microcomputers, and he would write the CP/M operating system, short for Control Program/Monitor, loosely based on TOPS-10, the OS that ran on his DECsystem-10 mainframe. He would license PL/M through Intel, but operating systems weren't really a thing just yet. By 1977, personal computers were on the rise and he would take it to market through a company he called Digital Research, Inc. His wife Dorothy ran the company. And sales rose nicely: 250,000 licenses in 3 years. This was the first time consumers could interact with computer hardware in a standardized fashion across multiple systems. They would port the code to the Z80 processors, and people would run CP/M on Apple IIs, Altairs, IMSAIs, Kaypros, Epsons, Osbornes, Commodores, and even the Trash-80, or TRS-80. The world was hectic and not that standard, but there were really 3 main chips, so the software actually ran on 3,000 models during an explosion in personal computer hobbyists. CP/M quickly rose and became the top operating system on the market. We would get WordStar, dBase, VisiCalc, MultiPlan, SuperCalc, Delphi, and Turbo Pascal for the office. And for fun, we'd get Colossal Cave Adventure, Gorillas, and Zork. It bootstrapped from floppy disks. They made $5 million bucks in 1981. Almost like cocaine money at the time. Gary got a private airplane. And John Opel from IBM called. Bill Gates told him to. IBM wanted to buy the rights to CP/M. Digital Research and IBM couldn't come to terms. And this is where it gets tricky. IBM was going to make CP/M the standard operating system for the IBM PC. Microsoft jumped on the opportunity and found a tool called 86-DOS from a company called Seattle Computer Products. The cool thing there is that it used the CP/M API, so it would be easy to have compatible software.
Paul Allen worked with them to license the software, then compiled it for the IBM PC. This was the first MS-DOS, and it became the standard, branded as PC DOS for IBM. Later, Kildall agreed to sell CP/M for $240 on the IBM PCs. The problem was that PC DOS came in at $40. If you knew nothing about operating systems, which would you buy? And so even though it had compatibility with the CP/M API, PC DOS really became the standard. So much so that Digital Research would clone the Microsoft DOS and release their own DR DOS. Kildall would later describe Bill Gates using the following quote: "He is divisive. He is manipulative. He is a user. He has taken much from me and the industry.” While Kildall considered DOS theft, he was told not to sue because the laws simply weren't yet clear. At first though, it didn't seem to hurt. Digital Research continued to grow. By 1983 computers were booming, and Digital Research would hit $45 million in sales. They had gone from just Gary to 530 employees by then. Gangbusters. Although they did notice that they'd missed the mark on the 8088 chips from Intel, and even with massive rises in sales had lost market share to Unix System V and all the variants that would come from that. CP/M would add DOS emulation. But sales began to slip. The IBM 5150 and subsequent machines just took over the market. And CP/M, once a dominant player, would be left behind. Gary would move more into research and development, but by 1985 had resigned as the CEO of Digital Research, in a year where they laid off 200 employees. He helped start a show called Computer Chronicles in 1983. It has been something I've been watching a lot recently while researching these episodes, and it's awesome! He was a kind and wicked smart man, even to people who had screwed him over. As many would after them, Digital Research went into long-term legal drama, involving the US Department of Justice. But none of that saved them. And it wouldn't save any of the other companies that went there either. Digital Research would sell to Novell for $80 million in 1991, and various parts of the intellectual property would live on in compilers, interpreters, and DR DOS - for example, as Caldera OpenDOS. But CP/M itself would be done. Kildall would die after an injury in a bar in Monterey, California in 1994 - one of the pioneers of the personal computer market. From CP/M to the disk buffering and data structures that helped make the CD possible, he was all over the place in personal computers. And CP/M was the gold standard of operating systems for a few years. One of the reasons I put this episode off is because I didn't know how I would end it. Like, what's the story here? I think it's mostly that I've heard it said that he could have been Bill Gates. I think that's a drastic oversimplification. CP/M could have been the operating system on the PC. But a lot of other things could have happened as well. He was wealthy, just not Bill Gates-level wealthy. And rather than go into a downward spiral over what we don't have, maybe we should all be happy with what we have. And much of his technology survived for decades to come. So he left behind a family and a legacy. In uncertain times, focus on the good and do well with it. And thank you for being you. And for tuning in to this episode of the History of Computing Podcast.
Frequency modulation synthesis (or #FM synthesis) is a form of sound synthesis whereby the frequency of a waveform is changed by modulating it with a modulator: the frequency of an oscillator is altered "in accordance with the amplitude of a modulating signal" (Dodge & Jerse 1997, p. 115). FM synthesis can create both harmonic and inharmonic sounds. To synthesize harmonic sounds, the modulating signal must have a harmonic relationship to the original carrier signal. As the amount of frequency modulation increases, the sound grows progressively more complex. Through the use of modulators with frequencies that are non-integer multiples of the carrier signal (i.e. inharmonic), inharmonic bell-like and percussive spectra can be created. FM synthesis using analog oscillators may result in pitch instability. However, FM synthesis can also be implemented digitally, which is more stable and became standard practice. Digital FM synthesis (implemented as phase modulation) was the basis of several musical instruments beginning as early as 1974. Yamaha built the first prototype digital synthesizer in 1974, based on FM synthesis, before commercially releasing the Yamaha GS-1 in 1980. The Synclavier I, manufactured by New England Digital Corporation beginning in 1978, included a digital FM synthesizer, using an FM synthesis algorithm licensed from Yamaha. Yamaha's groundbreaking DX7 synthesizer, released in 1983, brought FM to the forefront of synthesis in the mid-1980s. FM synthesis was also the usual setting for games and software until the mid-nineties. Through sound cards like the AdLib and Sound Blaster, IBM PCs popularized Yamaha chips like the OPL2 and OPL3. The OPNB was used as the main sound generator board in SNK's Neo Geo arcade (MVS) and home console (AES) systems. The related OPN2 was used in the Fujitsu FM Towns Marty and Sega Genesis as one of their sound generator chips. Similarly, the Sharp X68000 and MSX (Yamaha computer unit) also used an FM-based sound chip, the OPM. An analog (or analogue) synthesizer is a synthesizer that uses analog circuits and analog signals to generate sound electronically. The earliest analog synthesizers in the 1920s and 1930s, such as the Trautonium, were built with a variety of vacuum-tube (thermionic valve) and electro-mechanical technologies. After the 1960s, analog synthesizers were built using operational amplifier (op-amp) integrated circuits, and used potentiometers (pots, or variable resistors) to adjust the sound parameters. Analog synthesizers also use low-pass filters and high-pass filters to modify the sound. While 1960s-era analog synthesizers such as the Moog used a number of independent electronic modules connected by patch cables, later analog synthesizers such as the Minimoog integrated them into single units, eliminating patch cords in favour of integrated signal routing systems. #synthesizer
---
This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
---
Send in a voice message: https://anchor.fm/vegansteven/message
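Since the FM description above states the idea only in words, here's a minimal sketch of two-operator FM in Python (assuming NumPy; the carrier and modulator frequencies and the modulation index are illustrative values, not any particular synth's patch):

```python
import numpy as np

# Two-operator FM, implemented as phase modulation the way the digital
# synths described above do it: the carrier's phase is pushed around by
# a sine wave at the modulator frequency, scaled by the modulation index.
sr = 44100                 # sample rate in Hz
t = np.arange(sr) / sr     # one second of sample times
fc, index = 440.0, 3.0     # carrier frequency and modulation index

# Harmonic: modulator at an integer ratio of the carrier (here 2:1).
fm = 220.0
harmonic = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# Inharmonic: a non-integer ratio yields the bell-like, percussive spectra.
fm_bell = 187.3
bell = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm_bell * t))
```

Raising `index` is the "amount of frequency modulation" in the description above: at zero you get a pure sine, and as it grows the sidebands multiply and the tone gets progressively more complex.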
Today we're going to look at an operating system from the 80s and 90s called OS/2. OS/2 was a bright shining light for a bit. IBM had a task force that wanted to build a personal computer. They'd been watching the hobbyists for some time and felt they could take off-the-shelf parts and build a PC. So they did. But they needed an operating system. They reached out to Microsoft in 1980, who'd been successful with the Altair and so seemed a safe choice. By then, IBM had the IBM Entry Systems Division based out of their Boca Raton, Florida offices. The open architecture allowed them to ship fast. And it afforded them the chance to ship a computer with, check this out, options for an operating system. Wild idea, right? The options initially provided were CP/M and PC DOS, which was MS-DOS ported to the IBM open architecture. CP/M sold for $240 and PC DOS sold for $40. PC DOS had come from Microsoft's acquisition of 86-DOS from Seattle Computer Products. The PC shipped in 1981, lightning fast for an IBM product. At the time Apple, Atari, and Commodore were in control of the personal computer market. IBM had dominated the mainframe market for decades, and once the personal computer market reached $100 million in sales, it was time to go get some of that. And so the IBM PC would come to be an astounding success and make it not uncommon to see PCs on people's desks at work or even at home. And since most people didn't know the difference, PC DOS would ship on most. By 1985 it was clear that Microsoft had entered and subsequently dominated the PC market. And it was clear that, due to the open architecture, other vendors were starting to compete. And after 5 years of working together on PC DOS and 3 versions later, Microsoft and IBM signed a Joint Development Agreement and got to work on the next operating system - one they thought would change everything and set IBM PCs up to dominate the market for decades to come. Over that time, they'd noticed some gaps in DOS. One of the most substantial was that after projects and files got too big, they became unwieldy. They wanted an object-oriented operating system. Another was protected mode. The 286 chips from Intel had had protected mode dating back to 1982, and IBM engineers felt they needed to harness that in order to get multi-tasking safely and harness virtual memory to provide better support for all these crazy new windowing things they'd learned about with their GUI overlay to DOS called TopView. So after the Joint Development Agreement was signed, IBM let Ed Iacobucci lead the charge on their side, and Microsoft had learned a lot from their attempts at a windowing operating system. The two organizations borrowed ideas from all the literature and Unix and of course the Mac. And really built a much better operating system than anything available at the time. Microsoft had been releasing Windows the whole time. Windows 1 came in 1985 and Windows 2 came in 1987, the same year OS/2 1.0 was released. In fact, one of the most dominant PC models to ever ship, the PS/2 computer, would ship that year as well. The initial release didn't have a GUI; that wouldn't come until version 1.1 nearly a year later in 1988. SNA shipped to interface with IBM mainframes in that release as well. And TCP/IP and Ethernet would come in version 1.2 in 1989. During this time, Microsoft steadily introduced new options in Windows and claimed both publicly and privately in meetings with IBM that OS/2 was the OS of the future and Windows would some day go away.
They would release an extended edition that included a built-in database. Thanks to protected mode, developers didn't have to call the BIOS any more and could just use the provided APIs. You could switch the foreground application using Control-Escape; in Windows, that would become Alt-Tab. 1.2 brought the HPFS file system, bringing longer file names, a journaled file system to protect against data loss during crashes, and extended attributes, similar to how those worked on the Mac. But many of the features would ship in a version of Windows released just a few months before. Like that GUI: Windows 2.1 shipped just a few months before OS/2 1.1 brought its Presentation Manager. Microsoft had an independent sales team. Every manufacturer that bundled Windows meant there were more drivers for Windows, so a wider variety of hardware could be used. Microsoft realized that DOS was old and that building on top of DOS was some day going to be a big, big problem. They started something similar to what we'd call a fork today of OS/2. And in 1988 they lured Dave Cutler from Digital, who had been the architect of the VMS operating system. That moment began the march towards a new operating system called NT, which borrowed much of the best from VMS, Microsoft Windows, and OS/2 - and had little baggage. Microsoft was supposed to make version 3 of OS/2, but NT OS/2 3.0 would become just Windows NT when Microsoft stopped developing on OS/2. It took 12 years, because, um, they had a lot of customers after the wild success of first Windows 3 and then Windows 95, but eventually Cutler's NT would replace all other operating systems in the family with the release of Windows 2000. By 1990, when Microsoft released Windows 3, they sold millions of copies. Due to great OEM agreements they were on a lot of the computers people bought. The Joint Development Agreement would finally end. IBM had had enough of what they felt was getting snowed by Microsoft. It took a couple of years for Microsoft to recover. In 1992, the war was on. Microsoft released Windows 3.1 and it was clear that they were moving ideas and people between the OS/2 and Windows teams. I mean, the operating systems actually looked a lot alike. TCP/IP finally shipped in Windows in 1992, 3 years after the companies had co-developed the feature for OS/2. But both would go 32-bit in 1992. OS/2 version 2.0 would also ship, bringing a lot of features. And both took off the blinders, thinking about what the future would hold. Microsoft put Windows 95 and NT on parallel development tracks, and IBM launched multiple projects to find a replacement operating system. They tried an internal project, Workstation OS, which fizzled. For Workplace OS, IBM did the unthinkable: they entered into an alliance with Apple, taking on a number of Apple developers who formed what would be known as the Pink team. The Pinks moved into separate quarters and formed a new company called Taligent, with Apple and IBM backing. Taligent planned to bring a new operating system to market in the mid-1990s. They would laser-focus on PowerPC chips, thus abandoning what was fast becoming the WinTel world. They did show Workplace OS at Comdex one year, but by then Bill Gates was all too happy to swing by the booth, knowing he'd won the battle. But they never shipped. By the mid-90s, Taligent would be rolled into IBM and focus on Java projects. Raw research that came out of the project is pretty pervasive today though.
That was an example of a forward-looking project, though - and OS/2 continued to be developed, with OS/2 Warp (or 3) getting released in 1994. It included IBM Works, which came with a word processor that wasn't Microsoft Word, a spreadsheet that wasn't Microsoft Excel, and a database that wasn't Microsoft Access. Works wouldn't last past 1996. After all, Microsoft had Charles Simonyi by then. He'd built the first WYSIWYG word processor at Xerox PARC and was light years ahead of the Warp options. And the Office suite in general was gaining adoption fast. Warp was faster than previous releases, had way more options, and even browser support for early Internet adopters. But by then Windows 95 had taken the market by storm and OS/2 would see a rapidly declining customer base. After spending nearly a billion dollars a year on OS development, IBM began downsizing once the battle with Microsoft was lost. Over 1,300 people were let go. And as the headcount dropped, defects in the code grew and adoption dropped even faster. OS/2 development would end in 2001. By then it was clear that IBM had lost the exploding PC market and that Windows was the dominant operating system in use. IBM's control of the PC had slowly eroded and, while they eked out a little more profit from PCs, they would ultimately sell the division that built and marketed computers to Lenovo in 2005. Lenovo would then enjoy the number one spot in the market for a long time. The blue ocean had resulted in lower margins though, and IBM had taken a different, more services-oriented direction. OS/2 would live on, though IBM discontinued support in 2006. It should probably have gone fully open source in 2005. It had already been renamed and rebranded as eComStation, first by an IBM Business Partner called Serenity Systems. It would go open source(ish), and OpenOffice.org would be included in version two in 2010. Betas of 2.2 have been floating around since 2013, but as with many other open source compilations of projects, it seems to have mostly fizzled out. Ed Iacobucci would go on to found or co-found other companies, including Citrix, which flourishes to this day. So what really happened here? It would be easy, but an over-simplification, to say that Microsoft just kinda' took the operating system. IBM had a vision of an operating system that, similar to the Mac OS, would work with a given set of hardware. Microsoft, being an independent software developer with no hardware, would obviously have a different vision, wanting an operating system that could work with any hardware - you know, the original open architecture that allowed early IBM PCs to flourish. IBM had a big business, suit-and-tie corporate culture. Microsoft did not. IBM employed a lot of computer scientists. Microsoft employed a lot of hackers. IBM had a large bureaucracy; Microsoft could build an operating system like NT mostly by hiring a single brilliant person and rapidly building an elite team around them. IBM was a matrixed organization. I've been told you aren't an enterprise unless you're fully matrixed. Microsoft didn't care about all that. They just wanted the marketshare. When Microsoft abandoned OS/2, IBM could have taken the entire PC market from them. But I think Microsoft knew that the IBM bureaucracy couldn't react quickly enough at an extremely pivotal time. Things were moving so fast. And some of the first real buying tornados just had to be reacted to at lightning speed.
These days we have literature on such things, and those going through them can bring in advisors or board members to help, like the roles Marc Andreessen plays with Airbnb and others. But this was uncharted territory, and due to some good, shrewd, and maybe sometimes downright bastardly decisions, Microsoft ended up leap-frogging everyone by moving fast, sometimes incurring technical debt that would take years to pay down, and grabbing the market at just the right time. I've heard this story oversimplified in one word: subterfuge. But that's not entirely fair. When he was hired in 1993, Louis Gerstner pivoted IBM from a hardware and software giant into a leaner services organization. One that still thrives today. A lot of PC companies came and went. And the PC business infused IBM with the capital that allowed the company to shoot from $29 billion in revenues to $168 billion just 9 years later. From the top down, IBM was ready to leave red oceans and focus on markets with fewer competitors. Microsoft was hiring the talent, picking up many of the top engineers from the advent of interactive computing. And they learned from the failures of the Xeroxes and Digital Equipments and IBMs of the world and decided to do things a little differently. When I think of a few Microsoft engineers who just wanted to build a better DOS sitting in front of a 60-page refinement of how a feature should look, I think maybe I'd have a hard time trying to play that game as well. I'm all for relentless prioritization. And user testing features and being deliberate about what you build. But when you see a limited window, I'm OK with acting as well. That's the real lesson here. When the day needs seizing, good leaders will find a way to blow up the establishment and release the team to go out and build something special. And so yah, Microsoft took the operating system market once dominated by CP/M and, with IBM's help, established themselves as the dominant player. And then took it from IBM. But maybe they did what they had to do… Just like IBM did what they had to do, which was move on to more fertile hunting grounds for their best-in-the-world sales teams. So tomorrow, think of bureaucracies you've created or had created to constrain you. And think of where they are making the world better versus where they are just giving some controlling jackrabbit a feeling of power. And then go change the world. Because that is what you were put on this planet to do. Thank you so much for listening in to this episode of the History of Computing Podcast. We are so lucky to have you.
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate the future! Today we're going to look at one of the more underwhelming operating systems released: Windows 1.0. Douglas Engelbart released the NLS, or oN-Line System, in 1968. It was expensive to build, practically impossible to replicate, and was only made possible by NASA and ARPA grants. But it introduced the computer science research community to what would become modern video monitors, windowing systems, hypertext, and the mouse. Modern iterations of these are still with us today, as is a much more matured desktop metaphor. Some of his research team ended up at Xerox PARC, and the Xerox Alto was released in 1973, building on many of the concepts and continuing to improve upon them. They sold about 2,000 Altos at around $32,000 each. As the components came down in price, Xerox tried to go a bit more mass market with the Xerox Star in 1981. They sold about 25,000, at about half the price. The windowing graphics got better, the number of users was growing, the number of developers was growing, and new options for components were showing up all over the place. Given that Xerox was a printing company, the desktop metaphor continued to evolve. Apple released the Lisa in 1983. They sold 10,000, at about $10,000 each. Again, the windowing system and desktop metaphor continued on, and Apple quickly released the iconic Mac shortly thereafter, introducing much better windowing and a fully matured desktop metaphor, becoming the first computer considered mass market to ship with a graphical user interface. It was revolutionary, and they sold 280,000 in the first year. The proliferation of computers in our daily lives and their impact on the economy was ready for the j-curve. And while IBM had shown up to compete in the PC market, they had just been leapfrogged by Apple. Jobs would be forced out of Apple the following year, though. By 1985, Microsoft had been making software for a long time. They had started out with BASIC for the Altair and had diversified, bringing BASIC to the Mac and releasing a DOS that could run on a number of platforms. And like many of those early software companies, it could have ended there. In a masterful stroke of business, Bill Gates ended up with their software on the IBM PCs that Apple had just basically made antiques - and they'd made plenty of cash off of doing so. But then Gates saw Visi On at COMDEX, and it's no surprise that the Microsoft version of a graphical user interface would look a bit like Visi On, a bit like what Microsoft had seen from Xerox PARC on a visit in 1983, and of course, with elements brought in from the excellent work the original Mac team had done. And of course, not to take anything away from early Microsoft developers, they added many of their own innovations as well. Ultimately though, it was a 16-bit shell that allowed for multi-tasking and sat on top of Microsoft DOS. Something that would continue until the NT lineage of operating systems fully supplanted the original Windows line, which ended with Millennium Edition. Windows 1.0 was definitely a first try. IBM TopView had shipped that year as well. I've always considered it more of a windowing system, but it allowed multitasking and was object-oriented. It really looked more like a DOS menu system. But the Graphics Environment Manager, or GEM, had direct connections to Xerox PARC through Lee Lorenzen.
It's hard to imagine, but at the time CP/M had been the dominant operating system, and so GEM could sit on top of it or MS-DOS and was mostly found on Atari computers. That first public release was actually 1.01, and 1.02 would come 6 months later, adding internationalization, with 1.03 continuing that trend. 1.04 would come in 1987, adding support for VGA graphics and a PS/2 mouse. Windows 1 came with many of the same programs other vendors supplied, including a calculator, a clipboard viewer, a calendar, a pad for writing that still exists called Notepad, a painting tool, and a game that went by its original name of Reversi, but which we now call Othello. One important concept is that Windows was object-oriented. As with any large software project, it wouldn't have been able to last as long as it did if it hadn't been. One simplistic explanation for this paradigm is that it had an API, and there was a front-end that talked to the kernel through those APIs. Microsoft hadn't been first to the party, and when they got to the party they certainly weren't the prettiest. But because the Mac OS wasn't just a front-end that made calls to the back-end, Apple would be slow to add multi-tasking support, which came in System 5, in 1987. And they would be slow to adopt new technology thereafter, having to bring Steve Jobs back to Apple because they had no operating system of the future, after failed projects to build one. Windows 1.0 had executable files (or exe files) that could only be run in the windowing system. It had virtual memory. It had device drivers, so developers could write and compile binary programs that could communicate with the OS APIs, including with device drivers. One big difference - Bill Atkinson and Andy Hertzfeld spent a lot of time on frame buffers and moving pixels so they could have overlapping windows. The way Windows handled how a window appeared was stored in .ini (pronounced like "any") files - there's a sketch of that file format after this paragraph - and that kind of thing couldn't be done in a window manager without clipping, or leaving artifacts behind. And so it was that, by the time I was in college, I was taught by a professor that Microsoft had stolen the GUI concept from Apple. But it was an evolution. Sure, Apple took it to the masses, but before that, Xerox had borrowed parts from NLS, and NLS had borrowed pointing devices from Whirlwind. And between Xerox and Microsoft, there had been IBM and GEM. Each evolved and added their own innovations. In fact, many of the actual developers hopped from company to company, spreading ideas and philosophies as they went. But Windows had shipped. And when Jobs called Bill Gates down to Cupertino, shouting that Gates had ripped off Apple, Gates responded with one of my favorite quotes in the history of computing: "I think it's more like we both had this rich neighbor named Xerox and I broke into his house to steal the TV set and found out that you had already stolen it." The thing I've always thought was missing from that Bill Gates quote is that Xerox had a rich neighbor they stole the TV from first, called ARPA. And the US Government was cool with it - one of the main drivers of decades of crazy levels of prosperity filling their coffers with tax revenues. And so, the next version of Windows, Windows 2.0, would come in 1987. But Windows 1.0 would be supported by Microsoft for 16 years. No other operating system has been officially supported for so long. And by 1988 it was clear that Microsoft was going to win this fight.
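Before the story moves on to the lawsuits, the promised quick aside on those .ini files. They were (and are) plain text: key=value lines grouped under [section] headers, with semicolon comments. Here is a minimal sketch in C of pulling one value out of such a file; the file name and key are made up for illustration, not the actual entries Windows 1.0 used.

```
#include <stdio.h>
#include <string.h>

/* Minimal sketch: scan an .ini-style file for "key=value" lines and print
 * the value for one requested key. [section] headers and ;comments are
 * skipped. File name and key here are hypothetical. */
int main(void) {
    FILE *f = fopen("win.ini", "r");
    if (!f) { perror("win.ini"); return 1; }

    char line[256];
    const char *wanted = "wallpaper";
    while (fgets(line, sizeof line, f)) {
        if (line[0] == '[' || line[0] == ';') continue;  /* header or comment */
        char *eq = strchr(line, '=');
        if (!eq) continue;
        *eq = '\0';                       /* split "key=value" at the '=' */
        if (strcmp(line, wanted) == 0) {
            printf("%s=%s", wanted, eq + 1);
            break;
        }
    }
    fclose(f);
    return 0;
}
```

Windows itself exposed this kind of lookup through its profile APIs rather than making every app parse the files by hand, but the format really was this simple, which is part of why it survived for decades.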
Apple filed a lawsuit claiming that Microsoft had borrowed a bit too much of their GUI. Apple had licensed some of the GUI elements to Microsoft, and Apple identified over 200 things, some big, like title bars, that they argued made up a copyrightable work. That desktop metaphor that Susan Kare and others on the original Mac team had painstakingly developed. Well, it turns out that they live on in every OS, because Judge Vaughn Walker threw out the lawsuit, a decision later upheld by the Ninth Circuit. And Microsoft would end up releasing Windows 3 in 1990, shipping on practically every PC built since. And so I'll leave this story here. But we'll do a dedicated episode for Windows 3, because it was that important. Thank you to all of the innovators who brought these tools to market and ultimately made our lives better. Each left their mark with increasingly small and useful enhancements to the original. We owe them so much, no matter the platform we prefer. And thank you, listeners, for tuning in for this episode of the History of Computing Podcast. We are so lucky to have you.
Mavis Beacon. Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to give thanks to a wonderful lady. A saint. The woman that taught me to type: Mavis Beacon. Over the years I often wondered what Mavis was like. She took me from a kid that was filled with wonder about these weird computers we had floating around school to someone that could type over a hundred words a minute. She always smiled. I never saw her frown once. I thought she must be a teacher somewhere. She must be a kind lady whose only goal in the world was to teach young people how to type. And indeed, she's taught over a million people to type in her days as a teacher. In fact she'd been teaching for years by the time I first encountered her. Mavis Beacon Teaches Typing was initially written for MS-DOS in 1987 and released by The Software Toolworks. Norm Worthington and Mike Duffy joined Walt Bilofsky, who started the company out of Sherman Oaks, California in 1980, and it also made Chessmaster in 1986. They started with software for HDOS, the Heathkit operating system, and the Osborne 1. They worked on Small C and Grogramma, releasing a conversation simulation tool from Joseph Weizenbaum in 1981. They wrote Mavis Beacon Teaches Typing in 1987 for IBM PCs. It took "three guys, three computers, three beds, in four months". It was an instant success. They went public in 1988 and were acquired by Pearson in 1994 for around half a billion dollars, becoming Mindscape. By 1998 she'd taught over 6,000,000 kids to type. Today, Encore Software produces the software and Software MacKiev distributes a version for the Mac. The software integrates with iTunes, supports competitive typing games, and still tracks words per minute. But who was Mavis? What inspired her to teach generations of children to type? Why hasn't she aged? Mavis was named after Mavis Staples, but she was a beacon to anyone looking to learn to type, thus Mavis Beacon. Mavis was initially portrayed by Haitian-born Renée L'Espérance, who was discovered working behind the perfume counter at Saks Fifth Avenue in Beverly Hills by talk-show host Les Crane in 1985. He then brought her in to be the model. Featuring an African-American woman on the box regrettably caused some marketing problems, but didn't impact the success of the release. So until the next episode, think about this: Mavis Beacon, real or not, taught me and probably another 10 million kids to type. She opened the door for us to do more with computers. I could never write code or books or even these episodes at the rate I do if it hadn't been for her. So I owe her my sincerest gratitude. And Norm Worthington, for having the idea in the first place. And I owe you my gratitude, for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is about the Xerox Alto. Close your eyes and… Wait, don't close your eyes if you're driving. Or on a bike. Or boating. Or… Nevermind, don't close your eyes. But do use your imagination, and think of what it would be like if you opened your phone… Also don't open your phone while driving. But imagine opening your phone and ordering a pizza using a black screen with green text and no pictures. If that were the case, you probably wouldn't use an app to order a pizza. Without a graphical interface, or GUI, games wouldn't have such wide appeal. Without a GUI you probably wouldn't use a computer nearly as much. You might be happier, but we'll leave that topic to another podcast. Let's jump in our time machine and head back to 1973. The Allman Brothers stopped drinking mushroom tea long enough to release Ramblin' Man, Elton John put out Crocodile Rock, both Carpenters were still alive, and Free Bird was released by Lynyrd Skynyrd. Nixon was the president of the United States and suspended offensive actions in North Vietnam 5 days before being sworn in for his second term as president. He wouldn't make it all four years of course, because not long after, Watergate broke, and by the end of the year Nixon claimed "I'm not a crook". The first handheld cell call was made by Martin Cooper, the World Trade Center opened, Secretariat won the Belmont Stakes, Skylab 3 was launched, OJ was a running back instead of running from the police, being gay was removed from the DSM, and the Endangered Species Act was passed in the US. But many a researcher at the Palo Alto Research Center, known as Xerox PARC, probably didn't notice much of this, as they were hard at work doing something many people in Palo Alto talk about these days but rarely do: changing the world. In 1973, Xerox released the Alto, which had the first computer operating system designed from the ground up to support a GUI. It was inspired by the oN-Line System (or NLS for short), which had been designed by Douglas Engelbart of the Stanford Research Institute in the 60s on an ARPA grant. They'd spent a year developing it, and it was the day to shine for Doug Stewart, John Ellenby, Bob Nishimura, and Abbey Silverstone. The Alto ran the Alto Executive operating system, had a 2.5 megabyte hard drive, ran on four 74181 MSI chips at a 5.88 MHz clock speed, and came with between 96 and 512 kilobytes of memory. It came with a mouse, which had been designed by Engelbart for NLS. The Alto I had a pilot run of 30, and then an additional 90 were produced and sold before the Alto II was released. Over the course of 10 years, Xerox would sell 2,000 more. Some of the programming concepts were borrowed from the Data General Nova, designed by Edson de Castro, a former DEC product manager responsible for the PDP-8. The Alto could run 16 cooperative, prioritized tasks. It was about the size of a mini refrigerator and had a CRT on a swivel. It also came with an Ethernet connection, a keyboard, a three-button mouse, and a disk drive; the mouse first used wheels, later followed up with a ball. That monitor was in portrait rather than the landscape orientation common on later computers. You wrote software in BCPL and Mesa. It used raster graphics, came with a document editor and the Laurel email app, and gave us an actual multi-player video game. Oh, and an early graphics editor.
And the first versions of Smalltalk - a language we'll do an upcoming episode on - ran on the Alto. 50 of these were donated to universities around the world in 1978, including Stanford, MIT, and Carnegie Mellon, inspiring a whole generation of computer scientists. One ended up in the White House. But perhaps the most important person to be inspired was Steve Jobs; what he saw at Xerox PARC became the inspiration for the first Mac. The sales numbers weren't off the charts though. Byte magazine said: "It is unlikely that a person outside of the computer-science research community will ever be able to buy an Alto. They are not intended for commercial sale, but rather as development tools for Xerox, and so will not be mass-produced. What makes them worthy of mention is the fact that a large number of the personal computers of tomorrow will be designed with knowledge gained from the development of the Alto." The Alto sold for $32,000 in 1979 money, or well over $100,000 today. So they were correct. $220,000,000 over 10 years is nothing. The Alto then begat the Xerox Star, which in 1981 killed the Alto and sold at half the price. But Xerox was once bitten, twice shy. They'd introduced a machine to rival the DEC PDP-10 and didn't want to jump into this weird new PC business too far. If they had wanted to, they might have released something somewhere between the Star and the Commodore VIC-20, which ran for about $300. Even after the success of the Apple II, which still paled in comparison to the business Xerox is most famous for: copiers. Imagine what they thought of the IBM PC and Apple II, when they were a decade ahead of all that. I've heard many say that with all of this technology being invented at Xerox, they could have owned the IT industry. Sure, Apple went from $774,000 in revenue in 1977 to $118 million in 1980, but then-CEO Peter McColough was more concerned about the loss of market share for copiers, which dipped from 65 to 46 percent at the time. Xerox revenues had gone from $1.6 billion to $8 billion in the 70s. And there were 100,000 people working in that group! And in the 90s Xerox stock would skyrocket up to $250 a share! They invented laser printing, WYSIWYG editing, the GUI, Ethernet, object-oriented programming, ubiquitous computing with the PARCtab, networking over optical cables, data storage, and so, so, so much more. The interconnected world of today likely wouldn't be what it is without other people iterating on their contributions, but more specifically likely wouldn't be what it is if they had hoarded them. They made a modicum of money off most of these - and that money helped fund further research, like hosting the first live streamed concert. Xerox still rakes in over $10 billion a year in revenue, and unlike many companies that went all-in on PCs or other innovations during the incredible 112-year run of Xerox, they're still doing pretty well. Commodore went bankrupt in 1994, 10 years after Dell was founded. Computing was changing so fast, who can blame Xerox? IBM was reinvented in the 80s because of the PC boom - but it also almost put them out of business. We'll certainly cover that in a future episode. I'm glad Xerox is still in business, still making solid products, and still researching all the things! So thank you to everyone at every level of Xerox, for all your organization has contributed over the years, including the Alto, which shaped how computers are used today.
And thank YOU, patient listeners, for tuning in to this episode of the History of Computing Podcast. We hope you have a great day!
An airhacks.fm conversation with Andrew Guibert (@andrew_guibert) about: old IBM PCs and old-school Legos, starting programming in elementary school to write video games, the market for enterprise software is better than the market for video games, World of Warcraft is good for practicing teamwork, ice hockey, snowboarding and baseball, getting a job at IBM by pitching Nintendo Wii hacking, why Java EE is exciting for young developers, OpenLiberty is a dream team at IBM, providing Java EE support for WebSphere Liberty and WebSphere "traditional" customers, Java EE 8 was good, and MicroProfile is a good platform for innovation, quick MicroProfile iterations, sprinkling MicroProfile goodness into existing applications, MicroProfile helps glue things together, OpenLiberty strictly follows the Java EE standards, how OpenLiberty knows what Java EE 8 is, OpenLiberty is built on an OSGi runtime, features are modules with dependencies, OpenLiberty comprises public and internal features, Java EE 8 is a convenience feature which pulls in other modules / features, OpenLiberty also supports user features, OpenLiberty works with EclipseLink as well as Hibernate, OpenLiberty comes with generic JPA support with transaction integration, Erin Schnabel fixing OpenLiberty configuration with vi in a few seconds at the JavaOne IBM booth, Erin Schnabel is a 10x-er, IBM MQ / MQS could be the best possible developer experience as a JMS provider, Liberty Bikes - a Java EE 8 / MicroProfile Tron-like game, scaling websockets with session affinity, tiny ThinWARs, there is a MicroProfile discussion about JWT token production, controlling OpenLiberty from Minecraft, testing JDBC connections, bulkheads with porcupine, all concurrency in OpenLiberty runs on a single, self-tuning ThreadPool. Andy on twitter: @andrew_guibert and github.
First off, we can't talk Microsoft without acknowledging what was truly their first acquisition: MS-DOS. In 1980, after not being able to reach a license agreement with a competitor, IBM tasked Microsoft with developing or licensing an operating system for their upcoming IBM 5150 Personal Computer. Microsoft had already been hired to write the BASIC programming language for the PC, but now they were being asked to provide the OS to go with it. No sweat, said Microsoft… DOS was developed as QDOS (the Quick-and-Dirty Operating System) and originally launched as 86-DOS by the company Seattle Computer Products. 86-DOS had been written by Tim Paterson, the owner and operator of Seattle Computer Products, in just six weeks' time. So rather than re-invent the wheel, Microsoft paid Tim Paterson just $75,000 in the summer of 1981 for version 1.10 of 86-DOS. Upon receiving the code in July, Microsoft simply renamed it to MS-DOS 1.10 and handed it off to IBM in August of 1981; IBM decided to license it for distribution in November of the same year. From 1981 until 1993, IBM utilized a branded version of MS-DOS on their systems, known as PC DOS… and while IBM offered alternative operating systems for its PCs throughout the years, Microsoft's licensed version of MS-DOS was sold on more than 9 out of every 10 IBM PCs sold, thus cementing Microsoft's place in history as a software behemoth. Microsoft had been around for five years before being awarded the operating system contract by IBM, and while they may have survived or even thrived without the PC-DOS deal, there is no denying that their successful sales pitch to IBM changed the trajectory of the company, and the world, forever.

Historic Acquisitions

1987, Microsoft acquired Forethought, which had a little-known presentation program that would later be known as Microsoft PowerPoint
1997, Microsoft acquired Hotmail.com as an integral part of their push for MSN.com leading up to the release of Windows XP
2000, Microsoft acquired the Visio Corporation, whose diagramming program was rebranded as Microsoft Visio
2002, Microsoft purchased Navision for their ERP (enterprise resource planning) technologies, which kicked off a new division of the company, Microsoft Business Solutions, which has led us to what we now call Microsoft Dynamics
2007-2008, in escalated efforts to keep pace with Google's massive ad revenue, Microsoft acquired aQuantive and its subsidiaries, which included Avenue A/Razorfish, followed quickly by Fast Search & Transfer
2011, Microsoft acquired Skype and created a new division of the company to house the chat and VOIP solution, a move that was surprising to some due to the technology behind Skype (Delphi)
2012, acquired Yammer, an enterprise social networking service used for private communication within organizations
2013, acquired Nokia's devices and services business in a last-ditch effort by then-CEO Steve Ballmer to save Microsoft in the competitive mobile space
2014, acquired Minecraft and its parent company Mojang for $2.5 billion, shocking the world; a move that has already paid for itself ten-fold
2016, acquired business social networking platform LinkedIn

Recent Acquisitions

There is no denying that there has been a fundamental shift by Microsoft in their recent acquisitions under star CEO Satya Nadella. It seems that the company has shifted focus to brands and technologies that fit with their existing vision and bolster their current product and services lineup, rather than acquiring outright new technologies.
LinkedIn has gone on to expand its reach by acquiring:

Connectifier, a machine-learning technology for business lead generation, in 2016
Elearning juggernaut Lynda.com, acquired back in 2015, the basis for "LinkedIn Learning"
Glint, an employee improvement and engagement platform

Flipgrid, an education, collaboration and video platform, was acquired in mid-2018 in an effort to add further value to Microsoft's education stack of Office 365, Minecraft, the Kodu programming language, and partnerships with the likes of Code.org and Kano. This also ties into their 2016 acquisition of TeacherGaming, a provider of interactive educational software.

GitHub was purchased in late 2018, and raised eyebrows across the world as open source defenders were shocked that their mecca had been hijacked by the evil empire. Microsoft surprised the world when they began offering some premium services for free, including the coveted private Git repos. GitHub also shows that Microsoft's recent open source plays, like becoming a Premium Sponsor of the Open Source Initiative, open sourcing many of their products, moving their repos to GitHub, and the shift to open source in Azure, are no marketing moves; they are in it for the long haul.

"We are all-in on Open Source, and that is what really brings us together with GitHub. And we're going to operate it as an open platform for any language, any framework, any platform … providing developers with a SaaS Service." - Satya Nadella, CEO

AI - Recently Microsoft has acquired conversational AI company XOXCO, visual AI service Lobe, industrial AI platform Bonsai, conversational AI technology from Semantic Machines, and AI from Maluuba.

Game Studios - In recent years, Microsoft has continued to acquire prestigious and talented game studios to show their long-term commitment to the Xbox platform and gaming in general: in 2018, Obsidian Entertainment, inXile Entertainment, Playground Games, Compulsion Games, Undead Labs, and Ninja Theory. They have also acquired in recent years Beam (now Mixer) video game streaming, Simplygon 3D graphics optimization, AltspaceVR virtual reality, and PlayFab gaming backend services.

"I think one of the deepest values at Microsoft Research and at other labs, is creativity, creation, coming up with new ideas that have never been thought of before. Combining two sort-of well-known ideas into a whole new innovative combination that leads to a whole new concept…" - Dr. Eric Horvitz, Technical Fellow

Conclusion

Microsoft sees the world heading toward a developer-centric organism, where developers no longer exist in the bubble of technology, but live and work and breathe across all aspects of our society, including education. They know that as technologies continue to evolve, we humans and the companies we work in will continue to struggle to keep pace with the rapid change, and in an effort both to lead the market and to prepare us all, they are focusing on acquisitions that bolster their already strong platforms. Rather than re-inventing the wheel, they just add newer and better wheels to their already fine-tuned machine. Microsoft Research, and product development in general, has always included taking two decent ideas and merging them together to make a great idea… and that is where I think Microsoft puts the focus of its recent acquisition strategy: fill gaps and holes in their services, to bolster their services and platforms overall. Going back, it all is really about "DEVELOPERS! DEVELOPERS! DEVELOPERS!" … and I, for one, think that we all owe Big Steve an apology.
Until next time, enjoy this parting track from YouTube user Bad Squirrel...
The strange birth and long life of Unix, FreeBSD jail with a single public IP, EuroBSDcon 2018 talks and schedule, OpenBSD on G4 iBook, PAM template user, ZFS file server, and reflections on one year of OpenBSD use.

Picking the contest winner (Vincent, Bostjan, Andrew, Klaus-Hendrik, Will, Toby, Johnny, David, manfrom, Niclas, Gary, Eddy, Bruce, Lizz, Jim) with a random number generator.

## Headlines

### The Strange Birth and Long Life of Unix

They say that when one door closes on you, another opens. People generally offer this bit of wisdom just to lend some solace after a misfortune. But sometimes it's actually true. It certainly was for Ken Thompson and the late Dennis Ritchie, two of the greats of 20th-century information technology, when they created the Unix operating system, now considered one of the most inspiring and influential pieces of software ever written.

A door had slammed shut for Thompson and Ritchie in March of 1969, when their employer, the American Telephone & Telegraph Co., withdrew from a collaborative project with the Massachusetts Institute of Technology and General Electric to create an interactive time-sharing system called Multics, which stood for "Multiplexed Information and Computing Service." Time-sharing, a technique that lets multiple people use a single computer simultaneously, had been invented only a decade earlier. Multics was to combine time-sharing with other technological advances of the era, allowing users to phone a computer from remote terminals and then read e-mail, edit documents, run calculations, and so forth. It was to be a great leap forward from the way computers were mostly being used, with people tediously preparing and submitting batch jobs on punch cards to be run one by one.

Over five years, AT&T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company's renowned Bell Telephone Laboratories, including Thompson and Ritchie, Joseph F. Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&T's corporate leaders decided to pull the plug.

After AT&T's departure from the Multics project, managers at Bell Labs, in Murray Hill, N.J., became reluctant to allow any further work on computer operating systems, leaving some researchers there very frustrated. Although Multics hadn't met many of its objectives, it had, as Ritchie later recalled, provided them with a "convenient interactive computing service, a good environment in which to do programming, [and] a system around which a fellowship could form." Suddenly, it was gone. With heavy hearts, the researchers returned to using their old batch system. At such an inauspicious moment, with management dead set against the idea, it surely would have seemed foolhardy to continue designing computer operating systems. But that's exactly what Thompson, Ritchie, and many of their Bell Labs colleagues did. Now, some 40 years later, we should be thankful that these programmers ignored their bosses and continued their labor of love, which gave the world Unix, one of the greatest computer operating systems of all time.

The rogue project began in earnest when Thompson, Ritchie, and a third Bell Labs colleague, Rudd Canaday, began to sketch out on paper the design for a file system. Thompson then wrote the basics of a new operating system for the lab's GE-645 mainframe.
But with the Multics project ended, so too was the need for the GE-645. Thompson realized that any further programming he did on it was likely to go nowhere, so he dropped the effort. Thompson had passed some of his time after the demise of Multics writing a computer game called Space Travel, which simulated all the major bodies in the solar system along with a spaceship that could fly around them. Written for the GE-645, Space Travel was clunky to play—and expensive: roughly US $75 a game for the CPU time. Hunting around, Thompson came across a dusty PDP-7, a minicomputer built by Digital Equipment Corp. that some of his Bell Labs colleagues had purchased earlier for a circuit-analysis project. Thompson rewrote Space Travel to run on it. And with that little programming exercise, a second door cracked ajar. It was to swing wide open during the summer of 1969 when Thompson’s wife, Bonnie, spent a month visiting his parents to show off their newborn son. Thompson took advantage of his temporary bachelor existence to write a good chunk of what would become the Unix operating system for the discarded PDP‑7. The name Unix stems from a joke one of Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix. Initially, Thompson used the GE-645 to compose and compile the software, which he then downloaded to the PDP‑7. But he soon weaned himself from the mainframe, and by the end of 1969 he was able to write operating-system code on the PDP-7 itself. That was a step in the right direction. But Thompson and the others helping him knew that the PDP‑7, which was already obsolete, would not be able to sustain their skunkworks for long. They also knew that the lab’s management wasn’t about to allow any more research on operating systems. So Thompson and Ritchie got creative. They formulated a proposal to their bosses to buy one of DEC’s newer minicomputers, a PDP-11, but couched the request in especially palatable terms. They said they were aiming to create tools for editing and formatting text, what you might call a word-processing system today. The fact that they would also have to write an operating system for the new machine to support the editor and text formatter was almost a footnote. Management took the bait, and an order for a PDP-11 was placed in May 1970. The machine itself arrived soon after, although the disk drives for it took more than six months to appear. During the interim, Thompson, Ritchie, and others continued to develop Unix on the PDP-7. After the PDP-11’s disks were installed, the researchers moved their increasingly complex operating system over to the new machine. Next they brought over the roff text formatter written by Ossanna and derived from the runoff program, which had been used in an earlier time-sharing system. Unix was put to its first real-world test within Bell Labs when three typists from AT&T’s patents department began using it to write, edit, and format patent applications. It was a hit. The patent department adopted the system wholeheartedly, which gave the researchers enough credibility to convince management to purchase another machine—a newer and more powerful PDP-11 model—allowing their stealth work on Unix to continue. During its earliest days, Unix evolved constantly, so the idea of issuing named versions or releases seemed inappropriate. 
But the researchers did issue new editions of the programmer's manual periodically, and the early Unix systems were named after each such edition. The first edition of the manual was completed in November 1971.

So what did the first edition of Unix offer that made it so great? For one thing, the system provided a hierarchical file system, which allowed something we all now take for granted: Files could be placed in directories, or equivalently, folders, that in turn could be put within other directories. Each file could contain no more than 64 kilobytes, and its name could be no more than six characters long. These restrictions seem awkwardly limiting now, but at the time they appeared perfectly adequate. Although Unix was ostensibly created for word processing, the only editor available in 1971 was the line-oriented ed. Today, ed is still the only editor guaranteed to be present on all Unix systems. Apart from the text-processing and general system applications, the first edition of Unix included games such as blackjack, chess, and tic-tac-toe. For the system administrator, there were tools to dump and restore disk images to magnetic tape, to read and write paper tapes, and to create, check, mount, and unmount removable disk packs. Most important, the system offered an interactive environment that by this time allowed time-sharing, so several people could use a single machine at once. Various programming languages were available to them, including BASIC, Fortran, the scripting of Unix commands, assembly language, and B. The last of these, a descendant of BCPL (the Basic Combined Programming Language), ultimately evolved into the immensely popular C language, which Ritchie created while also working on Unix.

The first edition of Unix let programmers call 34 different low-level routines built into the operating system. It's a testament to the system's enduring nature that nearly all of these system calls are still available, and still heavily used, on modern Unix and Linux systems four decades on (there's a short code sketch of this a little further on). For its time, first-edition Unix provided a remarkably powerful environment for software development. Yet it contained just 4200 lines of code at its heart and occupied a measly 16 KB of main memory when it ran.

Unix's great influence can be traced in part to its elegant design, simplicity, portability, and serendipitous timing. But perhaps even more important was the devoted user community that soon grew up around it. And that came about only by an accident of its unique history. The story goes like this: For years Unix remained nothing more than a Bell Labs research project, but by 1973 its authors felt the system was mature enough for them to present a paper on its design and implementation at a symposium of the Association for Computing Machinery. That paper was published in 1974 in the Communications of the ACM. Its appearance brought a flurry of requests for copies of the software. This put AT&T in a bind. In 1956, AT&T had agreed to a U.S. government consent decree that prevented the company from selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country's long-distance phone service. So Unix could not be sold as a product. Instead, AT&T released the Unix source code under license to anyone who asked, charging only a nominal fee. The critical wrinkle here was that the consent decree prevented AT&T from supporting Unix.
Indeed, for many years Bell Labs researchers proudly displayed their Unix policy at conferences with a slide that read, “No advertising, no support, no bug fixes, payment in advance.” With no other channels of support available to them, early Unix adopters banded together for mutual assistance, forming a loose network of user groups all over the world. They had the source code, which helped. And they didn’t view Unix as a standard software product, because nobody seemed to be looking after it. So these early Unix users themselves set about fixing bugs, writing new tools, and generally improving the system as they saw fit. The Usenix user group acted as a clearinghouse for the exchange of Unix software in the United States. People could send in magnetic tapes with new software or fixes to the system and get back tapes with the software and fixes that Usenix had received from others. In Australia, the University of New South Wales and the University of Sydney produced a more robust version of Unix, the Australian Unix Share Accounting Method, which could cope with larger numbers of concurrent users and offered better performance. By the mid-1970s, the environment of sharing that had sprung up around Unix resembled the open-source movement so prevalent today. Users far and wide were enthusiastically enhancing the system, and many of their improvements were being fed back to Bell Labs for incorporation in future releases. But as Unix became more popular, AT&T’s lawyers began looking harder at what various licensees were doing with their systems. One person who caught their eye was John Lions, a computer scientist then teaching at the University of New South Wales, in Australia. In 1977, he published what was probably the most famous computing book of the time, A Commentary on the Unix Operating System, which contained an annotated listing of the central source code for Unix. Unix’s licensing conditions allowed for the exchange of source code, and initially, Lions’s book was sold to licensees. But by 1979, AT&T’s lawyers had clamped down on the book’s distribution and use in academic classes. The antiauthoritarian Unix community reacted as you might expect, and samizdat copies of the book spread like wildfire. Many of us have nearly unreadable nth-generation photocopies of the original book. End runs around AT&T’s lawyers indeed became the norm—even at Bell Labs. For example, between the release of the sixth edition of Unix in 1975 and the seventh edition in 1979, Thompson collected dozens of important bug fixes to the system, coming both from within and outside of Bell Labs. He wanted these to filter out to the existing Unix user base, but the company’s lawyers felt that this would constitute a form of support and balked at their release. Nevertheless, those bug fixes soon became widely distributed through unofficial channels. For instance, Lou Katz, the founding president of Usenix, received a phone call one day telling him that if he went down to a certain spot on Mountain Avenue (where Bell Labs was located) at 2 p.m., he would find something of interest. Sure enough, Katz found a magnetic tape with the bug fixes, which were rapidly in the hands of countless users. By the end of the 1970s, Unix, which had started a decade earlier as a reaction against the loss of a comfortable programming environment, was growing like a weed throughout academia and the IT industry. Unix would flower in the early 1980s before reaching the height of its popularity in the early 1990s. 
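To make the earlier point about the durability of those first 34 system calls concrete, here is a small C sketch of a file copier that assumes nothing beyond calls first-edition Unix already had (open, creat, read, write, close). It still compiles and runs unchanged on a modern Linux or BSD box; the file names are arbitrary examples.

```
#include <fcntl.h>
#include <unistd.h>

/* Copies one file to another using only system calls that already existed
 * in first-edition Unix (1971): open, creat, read, write, close.
 * The file names in.txt and out.txt are illustrative. */
int main(void) {
    int in = open("in.txt", O_RDONLY);
    if (in < 0) return 1;
    int out = creat("out.txt", 0644);   /* creat() survives from 1971, too */
    if (out < 0) return 1;

    char buf[512];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, n);

    close(in);
    close(out);
    return 0;
}
```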
For many reasons, Unix has since given way to other commercial and noncommercial systems. But its legacy, that of an elegant, well-designed, comfortable environment for software development, lives on. In recognition of their accomplishment, Thompson and Ritchie were given the Japan Prize earlier this year, adding to a collection of honors that includes the United States' National Medal of Technology and Innovation and the Association for Computing Machinery's Turing Award. Many other, often very personal, tributes to Ritchie and his enormous influence on computing were widely shared after his death this past October.

Unix is indeed one of the most influential operating systems ever invented. Its direct descendants now number in the hundreds. On one side of the family tree are various versions of Unix proper, which began to be commercialized in the 1980s after the Bell System monopoly was broken up, freeing AT&T from the stipulations of the 1956 consent decree. On the other side are various Unix-like operating systems derived from the version of Unix developed at the University of California, Berkeley, including the one Apple uses today on its computers, OS X. I say "Unix-like" because the developers of the Berkeley Software Distribution (BSD) Unix on which these systems were based worked hard to remove all the original AT&T code so that their software and its descendants would be freely distributable.

The effectiveness of those efforts was, however, called into question when the AT&T subsidiary Unix System Laboratories filed suit against Berkeley Software Design and the Regents of the University of California in 1992 over intellectual property rights to this software. The university in turn filed a counterclaim against AT&T for breaches to the license it provided AT&T for the use of code developed at Berkeley. The ensuing legal quagmire slowed the development of free Unix-like clones, including 386BSD, which was designed for the Intel 386 chip, the CPU then found in many IBM PCs. Had this operating system been available at the time, Linus Torvalds says he probably wouldn't have created Linux, an open-source Unix-like operating system he developed from scratch for PCs in the early 1990s. Linux has carried the Unix baton forward into the 21st century, powering a wide range of digital gadgets including wireless routers, televisions, desktop PCs, and Android smartphones. It even runs some supercomputers.

Although AT&T quickly settled its legal disputes with Berkeley Software Design and the University of California, legal wrangling over intellectual property claims to various parts of Unix and Linux has continued over the years, often involving byzantine corporate relations. By 2004, no fewer than five major lawsuits had been filed. Just this past August, a software company called the TSG Group (formerly known as the SCO Group) lost a bid in court to claim ownership of Unix copyrights that Novell had acquired when it purchased the Unix System Laboratories from AT&T in 1993.

As a programmer and Unix historian, I can't help but find all this legal sparring a bit sad. From the very start, the authors and users of Unix worked as best they could to build and share, even if that meant defying authority. That outpouring of selflessness stands in sharp contrast to the greed that has driven subsequent legal battles over the ownership of Unix. The world of computer hardware and software moves forward startlingly fast. For IT professionals, the rapid pace of change is typically a wonderful thing.
But it makes us susceptible to the loss of our own history, including important lessons from the past. To address this issue in a small way, in 1995 I started a mailing list of old-time Unix aficionados. That effort morphed into the Unix Heritage Society. Our goal is not only to save the history of Unix but also to collect and curate these old systems and, where possible, bring them back to life. With help from many talented members of this society, I was able to restore much of the old Unix software to working order, including Ritchie's first C compiler from 1972 and the first Unix system to be written in C, dating from 1973.

One holy grail that eluded us for a long time was the first edition of Unix in any form, electronic or otherwise. Then, in 2006, Al Kossow from the Computer History Museum, in Mountain View, Calif., unearthed a printed study of Unix dated 1972, which not only covered the internal workings of Unix but also included a complete assembly listing of the kernel, the main component of this operating system. This was an amazing find, like discovering an old Ford Model T collecting dust in a corner of a barn. But we didn't just want to admire the chrome work from afar. We wanted to see the thing run again. In 2008, Tim Newsham, an independent programmer in Hawaii, and I assembled a team of like-minded Unix enthusiasts and set out to bring this ancient system back from the dead. The work was technically arduous and often frustrating, but in the end, we had a copy of the first edition of Unix running on an emulated PDP-11/20. We sent out messages announcing our success to all those we thought would be interested. Thompson, always succinct, simply replied, "Amazing." Indeed, his brainchild was amazing, and I've been happy to do what I can to make it, and the story behind it, better known.

### FreeBSD jails with a single public IP address

Jails in FreeBSD provide a simple yet flexible way to set up a proper server layout. In most setups the actual server only acts as the host system for the jails, while the applications themselves run within those independent containers. Traditionally every jail has its own IP so the user is able to address the individual services. But if you're still using IPv4 this might get you in trouble, as most hosters don't offer more than one single public IP address per server.

Create the internal network

> In this case NAT ("Network Address Translation") is a good way to expose services in different jails using the same IP address. First, let's create an internal network ("NAT network") at 192.168.0.0/24. You could generally use any private IPv4 address space as specified in RFC 1918. Here's an overview: https://en.wikipedia.org/wiki/Private_network. Using pf, FreeBSD's firewall, we will map requests on different ports of the same public IP address to our individual jails, as well as provide network access to the jails themselves. First let's check which network devices are available. In my case there's em0, which provides connectivity to the internet, and lo0, the local loopback device:

```
em0: [...]
        options=209b [...]
        inet 172.31.1.100 netmask 0xffffff00 broadcast 172.31.1.255
        nd6 options=23
        media: Ethernet autoselect (1000baseT)
        status: active
lo0: flags=8049 metric 0 mtu 16384
        options=600003
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21
```

> For our internal network, we create a cloned loopback device called lo1.
> To make this persistent, we customize the /etc/rc.conf file, adding the following two lines:

```
cloned_interfaces="lo1"
ipv4_addrs_lo1="192.168.0.1-9/29"
```

> This defines a /29 network, offering IP addresses for a maximum of 6 jails:

```
ipcalc 192.168.0.1/29
Address:   192.168.0.1          11000000.10101000.00000000.00000 001
Netmask:   255.255.255.248 = 29 11111111.11111111.11111111.11111 000
Wildcard:  0.0.0.7              00000000.00000000.00000000.00000 111
=>
Network:   192.168.0.0/29       11000000.10101000.00000000.00000 000
HostMin:   192.168.0.1          11000000.10101000.00000000.00000 001
HostMax:   192.168.0.6          11000000.10101000.00000000.00000 110
Broadcast: 192.168.0.7          11000000.10101000.00000000.00000 111
Hosts/Net: 6                    Class C, Private Internet
```

> Then we need to restart the network. Be aware that currently active SSH sessions might be dropped during the restart, so it's a good moment to ensure you have KVM access to that server ;-)

```
service netif restart
```

> After reconnecting, our newly created loopback device is active:

```
lo1: flags=8049 metric 0 mtu 16384
        options=600003
        inet 192.168.0.1 netmask 0xfffffff8
        inet 192.168.0.2 netmask 0xffffffff
        inet 192.168.0.3 netmask 0xffffffff
        inet 192.168.0.4 netmask 0xffffffff
        inet 192.168.0.5 netmask 0xffffffff
        inet 192.168.0.6 netmask 0xffffffff
        inet 192.168.0.7 netmask 0xffffffff
        inet 192.168.0.8 netmask 0xffffffff
        inet 192.168.0.9 netmask 0xffffffff
        nd6 options=29
```

Setting up pf

> pf is part of the FreeBSD base system, so we only have to configure and enable it. At this point you should already have an idea of which services you want to expose; if not, you can adjust /etc/pf.conf later on. In my example configuration, I have a jail running a webserver and another jail running a mailserver:

```
# Public IP address
IP_PUB="1.2.3.4"

# Packet normalization
scrub in all

# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)

# webserver jail at 192.168.0.2
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 192.168.0.2
# just an example in case you want to redirect to another port within your jail
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 192.168.0.2 port 8080

# mailserver jail at 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 25 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 587 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 143 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 993 -> 192.168.0.3
```

> Now just enable pf (which is the equivalent of adding pf_enable=YES to /etc/rc.conf):

```
sysrc pf_enable="YES"
```

> and start it:

```
service pf start
```
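Before moving on, it may be worth checking that pf actually accepted the ruleset. A minimal sanity check, assuming the rules live in /etc/pf.conf (my suggestion, not part of the original write-up):

```
# Parse the ruleset without loading it (syntax check only)
pfctl -nf /etc/pf.conf
# Show the NAT and redirection rules currently loaded
pfctl -s nat
```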
Install ezjail

> Ezjail is a collection of scripts by erdgeist that allow you to easily manage your jails.

```
pkg install ezjail
```

> As an alternative, you could install ezjail from the ports tree. Now we need to set up the basejail, which contains the shared base system for our jails. In fact, every jail that you create will use that basejail, with directories related to the base system like /bin and /sbin symlinked into it. This can be accomplished by running:

```
ezjail-admin install
```

> In the next step, we'll copy the /etc/resolv.conf file from our host to the newjail, which is the template for newly created jails (the parts that are not provided by basejail), to ensure that domain name resolution will work properly within our jails later on:

```
cp /etc/resolv.conf /usr/jails/newjail/etc/
```

> Last but not least, we enable ezjail and start it:

```
sysrc ezjail_enable="YES"
service ezjail start
```

Create a jail

> Creating a jail is as easy as it could probably be:

```
ezjail-admin create webserver 192.168.0.2
ezjail-admin start webserver
```

> Now you can access your jail using:

```
ezjail-admin console webserver
```

> Each jail contains a vanilla FreeBSD installation.

Deploy services

> Now you can spin up as many jails as you want to set up your services like web, mail, or file shares. You should take care not to enable sshd within your jails, because that would cause problems with the service's IP bindings. But this is not a problem: just SSH to the host and enter your jail using ezjail-admin console.
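As a usage example tying this back to the pf rules above, the mailserver jail targeted by the port 25/587/143/993 redirections would be created the same way (the jail name here is my own illustration):

```
# Create and start the mailserver jail on the second internal address
ezjail-admin create mailserver 192.168.0.3
ezjail-admin start mailserver
```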
EuroBSDcon 2018 Talks & Schedule (https://2018.eurobsdcon.org/talks-schedule/)

News Roundup

OpenBSD on an iBook G4 (https://bobstechsite.com/openbsd-on-an-ibook-g4/)

> I've mentioned on social media and on the BTS podcast a few times that I wanted to try installing OpenBSD onto an old "snow white" iBook G4 I acquired last summer, to see if I could make it a useful machine again in the year 2018. This particular eBay purchase came with a 14" 1024x768 TFT screen, 1.07 GHz PowerPC G4 processor, 1.5 GB RAM, 100 GB of HDD space and an ATI Radeon 9200 graphics card with 32 MB of SDRAM. The optical drive, ethernet port, battery & USB slots are also fully functional. The only thing that doesn't work is the CMOS battery, but that's not unexpected for a device that was originally released in 2004.

Initial experiments

> This iBook originally arrived at my door running Apple Mac OS X Leopard and came with the original install disk, the iLife & iWork suites for 2008, various instruction manuals, a working power cable and a spare keyboard. As you'll see in the pictures I took for this post, the characters on the buttons have started to wear away from 14 years of intensive use, but the replacement needs a very good clean before I decide to swap it in!

> After spending some time exploring the last version of OS X to support the PowerPC processor architecture, I tried to see if the hardware was capable of modern computing with Linux. Something I knew ahead of trying this was that the WiFi adapter was unlikely to work, because it's a highly proprietary component designed by Apple to work specifically with OS X and nothing else, but I figured I could probably use a wireless USB dongle later to get around this limitation.

> Unfortunately I found that no recent versions of mainstream Linux distributions would boot on this machine. Debian has dropped support for 32-bit PowerPC architectures, and the PowerPC variants of Ubuntu 16.04 LTS (vanilla, MATE and Lubuntu) wouldn't even boot the installer! The only distribution I could reliably install on the hardware was Lubuntu 14.04 LTS.

> Unfortunately I'm not the biggest fan of the LXDE desktop for regular work, and a lot of ported applications were old and broken because the platform clearly wasn't being maintained by people who use the hardware anymore. Ubuntu 14.04 is also approaching the end of its support life in early 2019, so this limited solution also has a limited shelf life.

Over to BSD

> I discussed this problem with a few people on Mastodon, and it was pointed out to me that OS X is built on the Darwin kernel, which happens to be a variant of BSD. NetBSD and OpenBSD fans in particular convinced me that their communities still saw the value of supporting these old pieces of kit and that I should give BSD a try.

> So yesterday evening I finally downloaded the "macppc" version of OpenBSD 6.3 with no idea what to expect. I hoped for the best but feared the worst, because my last experience with this operating system was trying out PC-BSD in 2008 and discovering with disappointment that it didn't support any of the hardware on my Toshiba laptop.

> When I initially booted OpenBSD, I was a little surprised to find that the login screen provided no visual feedback when I typed in my password, but I can understand the security reasons for doing that. The initial desktop environment was very basic: all I could see was a console output window, a terminal and a desktop switcher in the X11 environment the system had loaded.

> After a little Googling I found a blog post with some fantastic instructions to follow for the post-installation steps: https://sohcahtoa.org.uk/openbsd.html. I did have to adjust them slightly, though, because my iBook only has 1.5 GB RAM and not every package that page suggests is available on macppc by default. You can see a full list here: https://ftp.openbsd.org/pub/OpenBSD/6.3/packages/powerpc/.

Final thoughts

> I was really impressed with the performance of OpenBSD's "macppc" port. It boots much faster than OS X Leopard on the same hardware, and unlike Lubuntu 14.04 it doesn't randomly hang for no reason or crash if you launch something demanding like the GIMP.

> I was pleased to see that the command-line tools I'm used to using on Linux have been ported across too. OpenBSD also had no issues with me performing basic desktop tasks on XFCE, like browsing the web with NetSurf, playing audio files with VLC and editing images with the GIMP. Limited gaming is also theoretically possible if you're willing to build games (or an emulator) from source with SDL support.

> If I wanted to use this system for heavy-duty work, then I'd probably be inclined to run key applications like LibreOffice on a Raspberry Pi and then connect my iBook G4 to those using VNC or an SSH connection with X11 forwarding. BSD is UNIX after all, so using my ancient laptop as a dumb terminal should work reasonably well.

> In summary, I was impressed with OpenBSD and its ability to breathe new life into this old Apple Mac. I'm genuinely excited about the idea of trying BSD with other devices on my network, such as an old Asus Eee PC 900 netbook and at least one of the many Raspberry Pi devices I use. Whether I go the whole hog and replace Fedora on my main production laptop, though, remains to be seen!
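For anyone following along, the applications mentioned above are installed with OpenBSD's pkg_add; a minimal sketch (the package names are my assumption, and availability on macppc may vary):

```
# Install the desktop and applications referenced in the article
pkg_add xfce netsurf vlc gimp
```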
The template user with PAM and login(1) (http://oshogbo.vexillium.org/blog/48)

> When you build a new service (or an appliance) you need your users to be able to configure it from the command line. To accomplish this you can create system accounts for all registered users in your service and assign them a special login shell which provides such limited functionality. This can be painful if you have a dynamic user database.

> Another challenge is authentication via remote services such as RADIUS. How can we implement services where we authenticate through a remote service but log in as a different user? Furthermore, imagine a scenario where RADIUS decides which account we have the right to access by sending an additional attribute.

> To address these two problems we can use a "template" user. Any of the PAM modules can set the value of the PAM_USER item, and the value of this item will be used to determine which account we want to log in as. Only the "template" user must exist in the local password database; the credential check can be omitted by the module.

> This functionality exists in the login(1) used by FreeBSD, HardenedBSD, DragonFlyBSD and illumos. It doesn't exist in the login(1) used by NetBSD, and OpenBSD doesn't support PAM modules at all. It's also noteworthy that such functionality existed in OpenSSH, but the developers decided to remove it and call it a security vulnerability (CVE-2015-6563). I can see how some people may have seen it that way, which is why I recommend reading this article from an OpenPAM author and FreeBSD security officer at the time.

> Knowing the background, let's take a look at an example:

```
PAM_EXTERN int
pam_sm_authenticate(pam_handle_t *pamh, int flags __unused,
    int argc __unused, const char *argv[] __unused)
{
	const char *user, *password;
	int err;

	err = pam_get_user(pamh, &user, NULL);
	if (err != PAM_SUCCESS)
		return (err);

	err = pam_get_authtok(pamh, PAM_AUTHTOK, &password, NULL);
	if (err == PAM_CONV_ERR)
		return (err);
	if (err != PAM_SUCCESS)
		return (PAM_AUTH_ERR);

	err = authenticate(user, password);
	if (err != PAM_SUCCESS)
		return (err);

	return (pam_set_item(pamh, PAM_USER, "template"));
}
```

> In the listing above we have an example of a PAM module. pam_get_user(3) provides the username, and pam_get_authtok(3) gives us the secret entered by the user. Both functions accept an optional prompt to be shown to the user. The authenticate function is our own routine that authenticates the user. In our first scenario we wanted to keep all users in an external database; if authentication succeeds, we switch to a template user whose shell is set to a script allowing us to configure the machine. In our second scenario, the authenticate function authenticates the user against RADIUS.

> The next step is to add our PAM module to the /etc/pam.d/system or /etc/pam.d/login configuration:

```
auth sufficient pam_template.so no_warn allow_local
```

> Unfortunately, describing all these options goes beyond this article; if you would like to know more, you can find them in the PAM manual. The last thing we need to do is add our template user to the system, either with the adduser(8) command or by modifying the /etc/master.passwd file and running the pwd_mkdb(8) program:

```
$ tail -n 1 /etc/master.passwd
template:*:1000:1000::0:0:User &:/:/usr/local/bin/template_sh
$ sudo pwd_mkdb /etc/master.passwd
```

> As you can see, the template user can be locked (the * character in the password field after the login name) and we can still use it in our PAM module. I would like to thank Dag-Erling Smørgrav for pointing this functionality out to me when I was looking for it some time ago.
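The article doesn't cover building the module, but as a rough sketch, a PAM module like this is compiled as a shared object and installed where PAM can find it; the file names and install path below are my assumptions:

```
# Build the module as a shared object against libpam
cc -fPIC -shared -o pam_template.so pam_template.c -lpam
# FreeBSD keeps its PAM modules in /usr/lib
cp pam_template.so /usr/lib/
```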
iXsystems @ VMWorld

###ZFS file server

What is the need?

> At work, we run a compute cluster that uses an Isilon cluster as primary NAS storage. Excluding snapshots, we have about 200 TB of research data, some of it in compressed formats and some not. We needed an offsite backup file server that would constantly mirror our primary NAS and serve as a quick recovery source in case of data loss on the primary NAS. This offsite file server will be passive; it will never face the wrath of the primary cluster workload. In addition to the role of a passive backup server, it will take on some passive report-generation workloads, an ideal way of offloading work from the primary NAS. The passive work is read-only. The backup server will keep snapshots on a best-effort basis dating back 10 years, and the data on it will be archived to tapes periodically. A simple ordering of priorities: data integrity > cost of solution > storage capacity > performance.

Why not enterprise NAS, like NetApp FAS or EMC Isilon?

> We decided that enterprise-grade NAS like NetApp FAS or EMC Isilon is prohibitively expensive and overkill for our needs. An open source and cheaper alternative with the level of durability we expect turned out to be ZFS. We're already spoiled by snapshots from NetApp's clever copy-on-write filesystem (WAFL), and ZFS providing snapshots in an almost identical way was a big influence on the choice. This is also why we did not consider just a CentOS box with the default XFS filesystem.

FreeBSD vs Debian for ZFS

> This is a backup server, a long-term solution, so stability and reliability are key requirements. ZFS on Linux may be popular at this time, but there is a lot of churn around its development, which means a higher probability of bugs. We're not looking for cutting-edge features here. Perhaps Linux will be considered in the future.

FreeBSD + ZFS

> We already utilize FreeBSD and OpenBSD for infrastructure services and have nothing but praise for the stability the BSDs have provided us. We'd gladly use FreeBSD and OpenBSD wherever possible.

Okay, ZFS, but why not FreeNAS?

> IMHO, FreeNAS provides an integrated GUI management tool on top of FreeBSD that lets a novice user set up and configure FreeBSD, ZFS, jails and many other features. But this user-facing abstraction adds an extra layer of complexity to maintain that is just not worth it in simpler use cases like ours. For someone who appreciates the command-line interface and understands FreeBSD well enough to administer it, plain FreeBSD + ZFS is simpler and more robust than FreeNAS.

Specifications

- Lenovo SR630 rack server
- 2 x Intel Xeon Silver 4110 CPUs
- 768 GB of DDR4 ECC 2666 MHz RAM
- 4-port SAS card configured in passthrough mode (JBOD)
- Intel network card with 10 Gb SFP+ ports
- 128 GB M.2 SSD for use as the boot drive
- 2 x HGST 4U60 JBODs
- 120 (2 x 60) x 10 TB SAS disks
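The write-up doesn't include the pool layout, but to make the hardware concrete, a pool across those 10 TB SAS disks might be created along these lines; the vdev width, device names, and the raidz2 choice are all my assumptions:

```
# Example only: two 12-disk raidz2 vdevs plus lz4 compression
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23
zfs set compression=lz4 tank
```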
###Reflection on one-year usage of OpenBSD

> I have used OpenBSD for more than one year, and it is time to give a summary of the experience:

> (1) What do I get from OpenBSD?

> a) A good UNIX tutorial. When I am curious about some UNIX commands' implementation, I refer to the OpenBSD source code, and I gain something every time. E.g., I refreshed my socket programming skills from nc and learned how to process files efficiently from cat.

> b) A better test bed. Although my work focuses on developing programs on Linux, I try to compile and run applications on OpenBSD when possible. One reason is that OpenBSD usually gives more helpful warnings, e.g., hints like this:

```
warning: sprintf() is often misused, please use snprintf()
```

> (You can also refer to a post I wrote about this before.) The other reason is that a program which runs well on Linux may crash on OpenBSD, so OpenBSD can help you find hidden bugs.

> c) Some handy tools. E.g., I find tcpbench useful, so I ported it to Linux for my own usage (the project is here).

> (2) What do I give back to OpenBSD?

> a) Patches. Although most of them are trivial modifications, they are still my contributions.

> b) Blog posts sharing my experience of using OpenBSD.

> c) Programs developed for OpenBSD/BSD: lscpu and free.

> d) Programs ported to OpenBSD. E.g., I find google/benchmark a nifty tool, but it lacked OpenBSD support, so I submitted a PR and it was accepted. You can use google/benchmark on OpenBSD now.

> Generally speaking, the time invested in OpenBSD is rewarding. If you are still hesitating, why not give it a shot?

##Beastie Bits

- BSD Users Stockholm Meetup
- BSDCan 2018 Playlist
- OPNsense 18.7 released
- Testing TrueOS (FreeBSD derivative) on real hardware: ThinkPad T410
- Kernel Hacker Wanted! Replace a pair of 8-bit writes to VGA memory with a single 16-bit write
- Reduce taskq and context-switch cost of zio pipe
- Proposed FreeBSD Memory Management change, expected to improve ZFS ARC interactions

Tarsnap

##Feedback/Questions

- Anian_Z - Question
- Robert - Pool question
- Lain - Congratulations
- Thomas - L2arc

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
The Games Episode: My First Arcade, Fun With Machines, Home Computers, Wumpus, Kat and Mouse, Commodores and Apples and IBM PCs, Connectivity, Joy, A Road Rash Moment, My Favorite Type of Game, Games Forever.
Jmichaele Keller joins us and shares that he considered himself a global citizen from his first trip out of the States as a kid. He appreciated architecture, traveling to Rome and as far as Asia to explore it. He found his calling, though, in computers, which he started working with before Windows existed. He got a job as a room service waiter and fell in love with hospitality. He got one of the first IBM PCs that Marriott had, which had 64K of RAM and a 10 megabyte hard drive. He made his way into finance and wowed executives by budgeting and forecasting using that now-archaic machine. From there he went on to software and then real estate. Jmichaele ultimately found his way into the cannabis industry through the very important work of lab testing.