Geek Author presents an educational series covering computer organization, computer architecture, and embedded system design. Each episode is a concise step taking the listener on a journey from the foundations of digital logic up to computer architecture and finally to system-level input and output.…
Digital data has many benefits, but what happens if it's in error? Moreover, how can we tell if a bit has been flipped? Our discussion begins with parity.
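As a quick illustration of the idea (a minimal C sketch, not code from the episode; the byte value 0x5A is just an arbitrary example), an even-parity bit can be computed by counting the ones in the data:

    #include <stdio.h>
    #include <stdint.h>

    /* Compute an even-parity bit for one byte: the parity bit is chosen so
       that the total number of ones (data bits plus parity bit) is even.   */
    static uint8_t even_parity(uint8_t byte)
    {
        uint8_t ones = 0;
        for (int i = 0; i < 8; i++)
            ones += (byte >> i) & 1;   /* count the set bits */
        return ones & 1;               /* 1 if the count of ones is odd */
    }

    int main(void)
    {
        uint8_t data = 0x5A;           /* 0101 1010 has four ones */
        printf("parity bit for 0x%02X is %u\n", data, even_parity(data));
        return 0;
    }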
Having learned how to program bitwise operations, it is now time to flex our bit bashing muscles by investigating some creative ways to perform common programming functions.
Inverting or flipping the bits of an integer is the third and last method of "bit bashing" we will discuss. There are two ways to invert bits: either flip all of them at once or use a mask to identify which bits to flip and which to leave alone.
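For a concrete picture of both approaches, here is a minimal C sketch (the value 0xB5 and the low-nibble mask 0x0F are arbitrary examples, not taken from the episode):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t value = 0xB5;                          /* 1011 0101 */

        /* Flip every bit at once with the bitwise-NOT operator. */
        uint8_t all_flipped  = (uint8_t)~value;        /* 0100 1010 = 0x4A */

        /* Flip only the bits identified by a mask using bitwise-XOR:
           ones in the mask flip, zeros leave the bit alone.         */
        uint8_t mask         = 0x0F;                   /* flip the low nibble */
        uint8_t some_flipped = value ^ mask;           /* 1011 1010 = 0xBA */

        printf("%02X %02X %02X\n", value, all_flipped, some_flipped);
        return 0;
    }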
The ability to set bits may not seem important at first, but many algorithms in computing depend on just that. Join us as we control bits and build integers from scratch using the bitwise-OR.
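A minimal C sketch of the idea (the bit position and the "high nibble" field are arbitrary examples chosen for illustration):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t flags = 0x00;

        /* Force bit 2 to one without disturbing the other bits. */
        flags |= (1u << 2);                     /* 0000 0100 */

        /* Build an integer from scratch by OR-ing fields into place,
           e.g. packing a 4-bit value into the high nibble.          */
        uint8_t high_nibble = 0x9;
        flags |= (uint8_t)(high_nibble << 4);   /* 1001 0100 = 0x94 */

        printf("flags = 0x%02X\n", flags);
        return 0;
    }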
Discussing how to use bitwise operations to manipulate the bits of an integer would be academic if we couldn't perform the operations in our code. The good news is that we can!
Clearing bits within an integer is important if we want to isolate bits or set them to zero before we insert a new value. The bitwise-AND does this for us.
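Here is a small C sketch of both uses, clearing a single bit and isolating a field (the status value and bit positions are arbitrary examples):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t status = 0xD7;                            /* 1101 0111 */

        /* Clear bit 4 by AND-ing with a mask that has a zero only there. */
        uint8_t cleared = status & (uint8_t)~(1u << 4);   /* 1100 0111 = 0xC7 */

        /* Isolate the low nibble: AND with ones where we want to keep bits. */
        uint8_t low = status & 0x0F;                      /* 0000 0111 = 0x07 */

        printf("0x%02X 0x%02X\n", cleared, low);
        return 0;
    }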
All areas of computing, from data compression to web design, from networking to digital image storage, from system administration to high-performance computing, benefit from bit manipulation.
A demultiplexer takes a single data stream and routes it to a selected output channel, a bit like one of those old A-B printer switches we used to physically select which printer we were sending data to. In this episode, we show how to design one.
A multiplexer, sometimes referred to as a data selector, allows us to select which digital stream to route to an output. Designing this circuit is a lot easier than it sounds.
What does it take to switch on a device? In some cases, like getting a soda from a vending machine, a number of conditions must be just right. That's where binary decoders come in.
Sometimes, it's nice to take a look at old tech to learn a new tool. The 7-segment display has been in our lives for years - mostly in alarm clocks. Join us as we use a Karnaugh map to design a driver for one.
This short episode shows how a complicated truth table can be clarified by using "don't cares" to represent input values.
Like a wild card in a game of poker, an unspecified truth table entry called a "don't care" can make our sum-of-products expressions so much nicer.
Many digital designs begin with a truth table. In this episode, we do just that, and then create the simplified sum-of-products expression by way of the Karnaugh map.
Let’s expand the capabilities of Karnaugh maps to combine more than just two rows of the truth table into a single product.
To make the move to a four-variable Karnaugh map, we are going to double the number of columns found in the three-variable map. And what happens when we halve the three-variable map? We get a two-variable Karnaugh map!
Here we introduce a graphical tool that, when used correctly, will produce a fully simplified sum-of-products expression, all without wading through any algebraic simplification of Boolean expressions.
Now that we've studied the sum-of-products form of Boolean expressions, it's time to take a look at the product-of-sums. This form uses logical ORs to generate zeros, which are passed to the output through an AND gate.
The NAND gate outputs a logic zero only when all its inputs equal logic one. Let's explore how this universal gate can be used to implement any Boolean expression.
Who knew how easy it would be to derive a Boolean expression from a truth table? By following a few simple steps, sum-of-products expressions are quickly converted to and from truth tables. In addition, the SOP expression is a heck of a performer.
Because many students have trouble when trying to simplify Boolean expressions, we're going to dedicate another episode to examples of simplification. We're also going to show how sometimes, there's more than one way to crack an egg.
In this episode, we take a break from proving identities of Boolean algebra and start applying them. Why? Well, so we can build our Boolean logic circuits with fewer gates. That means they'll be cheaper, smaller, and faster. That's why.
In this episode, we add one more tool to our Boolean algebra toolbox: DeMorgan's Theorem. We then use it, along with some of our other tools, to modify an expression down to its simplest form.
We are familiar with algebraic laws such as "multiply anything by zero and we get zero." In this episode, we see how a Boolean expression containing a constant, a duplicated signal, or a signal combined with its inverse will simplify...always.
In this episode, we bring together our knowledge of logic operations, truth tables, and Boolean expressions to prove some basic properties of Boolean algebra.
Truth tables and circuit diagrams fall short in many ways, including their ability to help us evaluate and manipulate combinational logic. By using algebraic methods to represent logic expressions, we can apply properties and identities to improve performance.
The simplest combinational logic circuits are made by inverting the output of a fundamental logic gate. Despite this simplicity, these gates are vital. In fact, we can realize any truth table using a circuit made only from AND gates with inverted outputs.
Individual logic gates are not very practical. Their power comes when you combine them to create combinational logic. This episode takes a look at combinational logic by working through an example in order to generate its truth table.
In this episode, we introduce one of the most important tools in the description of logic operations: the truth table. Not only do truth tables allow us to describe a logic operation, they provide a means for us to prove logical equivalence.
Logic gates are the fundamental building blocks of digital circuits. In this episode, we take a look at the four most basic gates: AND, OR, exclusive-OR, and the inverter, and show how an XOR gate can be used to compare two digital values.
By examining Run Length Limited (RLL) coding, we discover a way to compress the ones and zeros of our binary data by using differential coding. We also chat a bit about magnetic storage media.
In this episode, we continue our discussion of line codes by examining five schemes used with polar and bipolar signaling: NRZ-L, NRZ-I, RZ-AMI, Manchester, and differential Manchester. We also discuss differential coding and its benefits.
When sending digital data from one device to another, both devices must agree on how to represent ones and zeros. This episode presents how signal levels affect the delivery of data and how line codes are used to represent the ones and the zeros.
ASCII was developed when every computer was an island and over 35 years before the first emoji appeared. In this episode, we will take a look at how Unicode and UTF-8 expanded ASCII for ubiquitous use while maintaining backwards compatibility.
In 1963, the American Standards Association released a standard defining a 7-bit method to represent letters, punctuation, and control characters. This episode examines ASCII so that we can begin to see how computers represent language.
Regardless of the numeric base, scientific notation breaks numbers into three parts: sign, mantissa, and exponent. In this episode, we discuss how the computer stores those three parts to memory, and why IEEE 754 puts them together the way it does.
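To see those three parts in memory, here is a small C sketch that pulls the sign, biased exponent, and fraction fields out of a single-precision value (the value -6.25 is an arbitrary example, and the sketch assumes the common case where float is an IEEE 754 single):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float value = -6.25f;
        uint32_t bits;
        memcpy(&bits, &value, sizeof bits);       /* reinterpret the float's bit pattern */

        uint32_t sign     = bits >> 31;           /* 1 bit                  */
        uint32_t exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127  */
        uint32_t fraction = bits & 0x7FFFFF;      /* 23 bits of mantissa    */

        printf("sign=%u  exponent=%u (unbiased %d)  fraction=0x%06X\n",
               (unsigned)sign, (unsigned)exponent,
               (int)exponent - 127, (unsigned)fraction);
        return 0;
    }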
Up to this point, we've limited our discussion to binary integers. In this episode, we pull back the curtain to reveal the powers of two to the right of the binary point in order to begin representing fractions.
It turns out that twos complement is just one of many ways to use binary to represent negative numbers. In this episode, we examine the use of offset or biased notation to represent signed integers.
In this episode, we continue our discussion of twos complement binary representation by covering overflow and how shifting left and right can be used to perform multiplication and division by powers of two.
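A minimal C sketch of shifting as multiplication and division by powers of two (the values are arbitrary examples; note that right-shifting a negative value is implementation-defined in C, though it is typically an arithmetic shift):

    #include <stdio.h>

    int main(void)
    {
        int value = 12;

        /* Shifting left by n multiplies by 2^n ...                    */
        printf("%d << 2 = %d\n", value, value << 2);    /* 12 * 4 = 48 */

        /* ... and shifting right by n divides by 2^n (truncating).    */
        printf("%d >> 1 = %d\n", value, value >> 1);    /* 12 / 2 = 6  */

        /* With signed values, watch for overflow on the left shift and
           implementation-defined behavior on the right shift.          */
        int negative = -12;
        printf("%d >> 1 = %d\n", negative, negative >> 1);
        return 0;
    }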
In this episode, we switch from base ten to binary as we introduce twos complement representation and show how computers store and manipulate signed integers.
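As a tiny illustration (not code from the episode), twos complement negation is "invert all the bits and add one":

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* In twos complement, negating a value means inverting every bit
           and adding one.                                                */
        int8_t positive = 13;                       /* 0000 1101        */
        int8_t negative = (int8_t)(~positive + 1);  /* 1111 0011 = -13  */

        printf("%d is stored as 0x%02X\n", negative, (uint8_t)negative);
        return 0;
    }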
In 1645, Blaise Pascal presented his Pascaline to the public. Using only addition and the method of tens complement, the device could add, subtract, multiply, and divide. We discuss tens complement as an introduction to signed representations in binary.
It may sound trivial, but in this episode we're going to learn to add and subtract...in binary. This will serve as a basis for learning about negative binary representations and the circuitry needed to perform additions in hardware.
We continue our discussion of Gray code by presenting algorithms used to convert between the weighted numeral system of unsigned binary and the Gray code ordered sequence. We also show how to implement these algorithms in our code.
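One common way to implement the two conversions in C looks like this (a sketch of the standard shift-and-XOR algorithms, not necessarily the exact code presented in the episode):

    #include <stdio.h>
    #include <stdint.h>

    /* Unsigned binary to Gray code: each Gray bit is the XOR of adjacent
       binary bits, which one shift-and-XOR accomplishes.                 */
    static uint32_t binary_to_gray(uint32_t b)
    {
        return b ^ (b >> 1);
    }

    /* Gray code back to unsigned binary: fold the XOR back down through
       every bit position.                                                */
    static uint32_t gray_to_binary(uint32_t g)
    {
        for (uint32_t shift = 1; shift < 32; shift <<= 1)
            g ^= g >> shift;
        return g;
    }

    int main(void)
    {
        for (uint32_t i = 0; i < 8; i++)
            printf("%u -> %u -> %u\n", i, binary_to_gray(i),
                   gray_to_binary(binary_to_gray(i)));
        return 0;
    }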
Counting is pretty basic, right? Zero, one, two, three, four, and so on. This episode of Geek Author presents a situation where we might want to rearrange the sequence of integers in order to provide better reliability in our digital circuits.
Dividing up the range of analog values into discrete binary values during the analog to digital conversion process forces us to incur a rounding error. See what that error looks and sounds like in this episode of Geek Author.
Converting an analog signal to digital involves more than just digitizing some measurements. Consequences result from sampling an analog signal, and care has to be taken to capture all the desired frequencies and avoid creating new ones.
Does capturing analog measurements with a computer sound like so much hocus pocus? In this episode, we will take a stab at lessening some of that mystique by showing how the Arduino platform can be used to perform this conversion.
Computers don't cope well with the infinite, but that's pretty much what the real world is about: limitless accuracy with as near to limitless boundaries as can be imagined. So how do we fit the infinite inside the computer? That's what this episode is about: converting analog measurements to binary with suitable accuracy. And we will do all of this with an eye to using these techniques later in our applications.
Ask a computer to store a decimal whole number in binary and it will do it without any fuss. A decimal fraction, however, that's another thing. In this episode, we will present a method called Packed Binary Coded Decimal, or BCD, that is used to represent decimal values in binary by storing each digit in its own nibble.
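A minimal C sketch of packing and unpacking a two-digit value, one decimal digit per nibble (the function names and the test value 37 are made up for illustration):

    #include <stdio.h>
    #include <stdint.h>

    /* Pack a two-digit decimal value (0-99) into one byte: tens digit in
       the high nibble, ones digit in the low nibble.                     */
    static uint8_t to_packed_bcd(uint8_t decimal)
    {
        return (uint8_t)(((decimal / 10) << 4) | (decimal % 10));
    }

    static uint8_t from_packed_bcd(uint8_t bcd)
    {
        return (uint8_t)(((bcd >> 4) * 10) + (bcd & 0x0F));
    }

    int main(void)
    {
        uint8_t packed = to_packed_bcd(37);
        printf("37 packs to 0x%02X and unpacks to %u\n",
               packed, from_packed_bcd(packed));
        return 0;
    }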
Binary can be challenging. The values tend to have a lot of digits, long sequences of ones or zeros can be difficult to distinguish, and the relative magnitudes of multiple binary values can be difficult to resolve. In this episode, we discuss a couple of the popular methods to quickly represent binary in a more human readable form.
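For example, each group of four bits maps to one hexadecimal digit, so a long binary pattern reads off a digit at a time; a tiny C illustration with an arbitrary 12-bit value:

    #include <stdio.h>

    int main(void)
    {
        /* The 12-bit pattern 1011 0111 1110 is hard to read as binary,
           but each group of four bits maps to one hexadecimal digit:
           1011 -> B, 0111 -> 7, 1110 -> E.                             */
        unsigned int value = 0xB7E;

        printf("decimal %u = hex 0x%X = octal 0%o\n", value, value, value);
        return 0;
    }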
This episode continues the work of the previous one by examining the methods used to convert between decimal and binary and vice versa. We also take a look at the effects of shifting the bits of a binary number both left and right and how those operations can be used to simulate multiplication and division. Oh, and since we will be discussing a lot of numbers, it couldn’t hurt to have a piece of paper and a pencil close by.
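One of those conversion methods, repeated division by two, looks like this as a minimal C sketch (the helper name print_binary and the test value 37 are made up for illustration):

    #include <stdio.h>

    /* Convert an unsigned decimal value to its binary digits by repeatedly
       dividing by two: each remainder is the next bit, least significant
       bit first.                                                           */
    static void print_binary(unsigned int value)
    {
        char bits[33];
        int  count = 0;

        do {
            bits[count++] = (char)('0' + (value % 2));  /* remainder is the bit */
            value /= 2;
        } while (value != 0 && count < 32);

        while (count > 0)              /* remainders come out in reverse order */
            putchar(bits[--count]);
        putchar('\n');
    }

    int main(void)
    {
        print_binary(37);              /* expect 100101 */
        return 0;
    }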