The computer... an invention that changed the entire world from top to bottom. Perhaps one of the most important discoveries in history. But of course, this revolution didn't happen all at once. The journey from a simple calculating tool all the way to artificial intelligence is really the story of humanity's effort to surpass itself. In Hiçbir Şey Tesadüf Değil ("Nothing Is a Coincidence") we focus on the background of this technological revolution. In the first installment of this two-part mini-series, we examine this life-changing technology from its earliest, most primitive days.
Who would be your ultimate dinner guest—Winston Churchill or Kris Jenner? Today we debate our dream dinner guests, from historical legends to modern-day marketing masterminds like Mr. Beast. Spoiler: No son of mine should be saying Mr. Beast over George Boole. Google him.
George Boole was a British mathematician, educator, logician, and philosopher born in Lincoln, England, on November 2, 1815. He is considered one of the founders of computer science for his invention of Boolean algebra, a system of mathematical logic that uses symbols to represent logical operations such as conjunction, disjunction, and negation. Boole began his formal education at a primary school in Lincoln, where he excelled at mathematics. At the age of 16 he began studying advanced mathematics and logic on his own. In 1847 he published his first book on the subject, The Mathematical Analysis of Logic, in which he developed his ideas on mathematical logic. In 1849 Boole was appointed professor of mathematics at Queen's College, Cork, Ireland, where he continued to develop his theory of mathematical logic. In 1854 he published his most important work, An Investigation of the Laws of Thought, in which he set out Boolean algebra in detail. Boolean algebra has had a significant impact on the development of computer science; it is used today to design electronic circuits, data processors, and software algorithms. Boole died in Ballintemple, Ireland, on December 8, 1864, at the age of 49. Among George Boole's most important achievements: the invention of Boolean algebra, a system of mathematical logic that uses symbols to represent logical operations; the development of the laws of mathematical logic that govern the operation of electronic circuits and data processors; and significant contributions to the field of probability. Boole was a mathematician and logician of great importance, and his contributions to mathematical logic and probability have had a significant impact on the development of computer science and modern technology. Recommended books: https://infogonzalez.com/libros
On November 2, 1815, George Boole, the inventor of Boolean algebra, was born in England.
Welcome to our podcast, where we explore the fascinating world of Indian logic. Join us as we delve into an article authored by an unknown writer, which takes us on a journey through the evolution of Indian logic, its key figures, and schools of thought. We explore the anviksiki of Medhatithi Gautama, the Sanskrit grammar rules of Pāṇini, the Vaisheshika school's analysis of atomism, the Nyaya school of Hindu philosophy, the tetralemma of Nagarjuna, the Arthashastra, Jain logic, the teachings of Kundakunda, Acharya Mahapragya, the Navya-Nyāya or Neo-Logical darśana, the Tattvacintāmani, and the contributions of George Boole and Augustus De Morgan. Source: https://en.wikipedia.org/wiki/Indian_logic
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A brief history of computers, published by Adam Zerner on July 19, 2023 on LessWrong.

Recently I've been learning about the history of computers. I find it to be incredibly interesting. I'd like to write a post about it to summarize and comment on what I've learned. I'm a little hesitant though. I'm no expert on this stuff. I'm largely learning about it all for the first time. So then, take all of this with a grain of salt. It's more of a conversation starter than a finished product. If you want something authoritative, I'd recommend the Stanford Encyclopedia of Philosophy.

Logic. Let's start with logic. Computers are largely based on boolean logic. Y'know, 1s and 0s. AND, OR, NOT. George Boole did a bunch of important work here in the mid-1800s, but let's try backing up even further. Was there anything important that came before Boolean logic? Yeah, there was. It goes all the way back to Aristotle in ~350 BCE. Aristotle did a bunch of groundbreaking work in the field of logic. Yet after "breaking the ground", there weren't any significant developments until the mid-1800s. Wow! That's a long time. An unusually long time. In other fields like mathematics, the natural sciences, literature, and engineering there were significant advances. I wonder why things in the field of logic were so quiet.

Anyway, let's talk about what exactly Aristotle did. In short, he looked at arguments in the abstract. It's one thing to say: Filo is a dog; therefore, Filo has feet. It's another thing to say: R is a P; therefore, R has Q. The former is concrete. It's talking about dogs, feet, and Filo. The latter is abstract. It's talking about P's, Q's, and R's. Do you see the difference? Before Aristotle, people never thought about this stuff in terms of P's and Q's. They just thought about dogs and feet. Thinking about P's and Q's totally opened things up. Pretty cool. Abstraction is powerful. I think this is very much worth noting as an important milestone in the history of computers.

Ok. So once Aristotle opened the floodgates with categorical logic, over time, people kinda piggybacked off of it and extended his work. For example, the Stoics did a bunch of work with propositional logic. Propositional logic is different from categorical logic. Categorical logic is about what categories things belong to. For example, earlier we basically said that dogs belong to the category of "things with feet" and that Filo belongs to the category of "dogs". With those two statements, we deduced that Filo must also belong to the category of "things with feet". It makes a lot of sense when you think about it visually (the original post includes a diagram here). On the other hand, propositional logic is about statements being true or false. For example, from "I don't have an umbrella" we can deduce things like: "I don't have an umbrella, or it is raining" is true. Propositional logic is about truth and uses operators like AND, OR, NOT, IF-THEN, and IF-AND-ONLY-IF. Categorical logic is about categories and uses operators like ALL, NO, and SOME.

After propositional logic, subsequent work was done. For example, predicate logic kinda piggybacked off of propositional logic. But in general, nothing too crazy was going on. Let's jump ahead to the mid-1800s and George Boole. Boole introduced stuff like this (taking p to be true and q to be false): (p and q) is false; (p or q) is true; (not (p and q)) is true. But wait a minute. I'm confused.
Didn't we get that sort of thing from propositional logic all the way back in 300 BCE from the Stoics? In researching this question I'm seeing things saying that it did in fact already exist, it's just that Boole made it more "systematic and formalized". I don't understand though. In what way did he make it more systematic and formalized? Oh well. Suffice it to say that boolean logic was a thing that we knew about. Let's move on.

Jacquard loom. I was going to star...
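To make Boole's examples concrete, here is a minimal Python sketch (my illustration, not part of the post) that evaluates the three expressions above and then shows one thing the "systematic" treatment buys you: checking a law of the algebra over every truth assignment. De Morgan's law is my choice of example, not the post's.

```python
from itertools import product

# The three expressions above, taking p to be true and q to be false:
p, q = True, False
print(p and q)        # False: a conjunction needs both operands true
print(p or q)         # True: a disjunction needs at least one operand true
print(not (p and q))  # True: negation flips the conjunction

# Boole's algebraic treatment lets us verify a law over all assignments.
# Here, De Morgan's law: not (p and q) == (not p) or (not q).
for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))
print("De Morgan's law holds for every truth assignment")
```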
With this lesson we begin getting to know the greatest mathematical logicians who left their mark on the history of this subject. The first of these figures is the Englishman George Boole, who brought the dimension of calculation into logic.
The amount published in scientific journals has exploded over the past few hundred years. This helps in putting together a history of how various sciences evolved. And sometimes it helps us revisit areas for improvement, or predict what's on the horizon. The rise of computers often begins with stories of Babbage. As we've covered, a lot came before him, and those of the era were often looking to automate the calculation of increasingly complex mathematical tables.

Charles Babbage was a true Victorian-era polymath. A lot was happening as the world awoke to a more scientific era and scientific publications grew in number and size. Born in London, Babbage loved math from an early age and went away to Trinity College, Cambridge in 1810. There he helped form the Analytical Society with John Herschel, a pioneer of early photography, a chemist, and the inventor of the blueprint, and George Peacock, who established the British school of symbolic algebra, which, when picked up by George Boole, would go on to form part of Boolean algebra, ushering in the idea that everything can be reduced to a zero or a one. Babbage graduated from Cambridge, went on to become a Fellow of the Royal Society, and helped found the Royal Astronomical Society. He published works with Herschel on electrodynamics that were later used by Michael Faraday, and even dabbled in actuarial tables, possibly to create a data-driven insurance company. His father passed away in 1827, leaving him a sizable estate. And after applying multiple times he finally became a professor at Cambridge in 1828.

He and the others from the Analytical Society were tinkering with things like generalized polynomials and what we think of today as formal power series, all of which can be incredibly tedious and time-consuming, because the work is iterative. Pascal and Leibniz had pushed math forward and had worked on the engineering to automate various tasks, applying some of their science. This gave us Pascal's calculator, and Leibniz's work on information theory and his calculus ratiocinator, along with his stepped reckoner, built around what is now called the Leibniz wheel, with which he was able to perform all four basic arithmetic operations.

Meanwhile, Babbage continued to bounce around between society, politics, science, and mathematics, even writing a book on manufacturing in which he looked at rational design and profit sharing. He also looked at how tasks were handled and made observations about the skill level of each task and the human capital involved in carrying them out. Marx even picked up where Babbage left off and looked further into profitability as a motivator. He also invented the pilot for trains and was involved with lots of learned people of the day.

Yet Babbage is best known for being the old, crusty gramps of the computer. Or more specifically the difference engine, which is different from a differential analyzer. A difference engine was a mechanical calculator that could tabulate polynomial functions. A differential analyzer, on the other hand, solves differential equations using wheels and disks. Babbage expanded on the ideas of Pascal and Leibniz and added to mechanical computing, making the difference engine the inspiration of many a steampunk work of fiction. Babbage started work on the difference engine in 1819. Multiple engineers built different components for the engine, and it was powered by a crank that spun a series of wheels, not unlike various clockworks available at the time.
The project was paid for by the British Government, who hoped it could save time calculating complex tables. Imagine doing all the work in spreadsheets manually. Each cell could take a fair amount of time and any mistake could be disastrous. But it was just a little before its time. The plans have since been built and they work, but while Babbage did produce a prototype capable of raising numbers to the third power and performing some quadratic equations, the project was abandoned in 1833. We'll talk about precision in a future episode.

Again, the math involved in solving differential equations at the time was considerable, and the time-intensive nature of the work was holding back progress. So Babbage wasn't the only one working on such ideas. Gaspard-Gustave de Coriolis, known for the Coriolis effect, was studying the collisions of spheres and became a professor of mechanics in Paris. To aid in his work, he designed the first mechanical device to integrate differential equations in 1836.

After Babbage scrapped his first engine, he moved on to the analytical engine, adding conditional branching, loops, and memory, and further complicating the machine. The engine borrowed the punchcard tech from the Jacquard loom and applied that same logic, along with the work of Leibniz, to math. The inputs would be formulas, much as Turing later described when concocting some of what we now call Artificial Intelligence. Essentially all problems could be solved given a formula, and the output device would be a printer. The analytical engine had 1,000 numbers worth of memory and a logic processor, or arithmetic unit, that he called a mill, which we'd call a CPU today. He even planned a programming language, which we might think of as assembly today. All of this brings us to the fact that, while never built, it would have been Turing-complete, in that the simulation of those formulas was a Turing machine.

Ada Lovelace contributed an algorithm for computing Bernoulli numbers, giving us a glimpse into what an open source collaboration might some day look like. And she was in many ways the first programmer. The daughter of Lord Byron and Annabella Milbanke, a math whiz, she became fascinated with the engine and ended up becoming an expert at creating sets of instructions to punch on cards, thus the first programmer of the analytical engine and far before her time. In fact, there would be no programmer for 100 years with her depth of understanding. Not to make you feel inadequate, but she was 27 in 1843. Luigi Menabrea published an account of the engine in French, the paper Lovelace translated and annotated. And yet Babbage died in 1871 without a working model.

During those years, Per Georg Scheutz built a number of difference engines based on Babbage's published works, also funded by the government, which evolved to become the first calculator that could print. Martin Wiberg picked up from there and was able to move to 20-digit processing. George Grant at Harvard developed calculating machines and published his designs by 1876, starting a number of companies to fabricate gears along the way. James Thomson built a differential analyzer in 1876 to predict tides, and his work on fluid dynamics and other technology became the connection between these machines and the military. Thomson's work would be added to work done by Arthur Pollen, and we got our first automated fire-control systems. Percy Ludgate and Leonardo Torres y Quevedo wrote about Babbage's work in the early years of the 1900s, and other branches of math needed other types of mechanical computing.
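The difference engine's trick, the method of finite differences, is worth seeing on its own: for a degree-n polynomial the nth differences are constant, so once an initial column of differences is seeded, every further table entry comes from additions alone, exactly what gears and wheels can do. Here is a minimal Python sketch of the idea (my illustration; the function name and the sample polynomial are hypothetical, not from the podcast):

```python
def difference_engine(coeffs, start, count):
    """Tabulate a polynomial using only additions after the initial setup,
    the way a mechanical difference engine does.

    coeffs: [a0, a1, a2, ...] representing a0 + a1*x + a2*x^2 + ...
    """
    degree = len(coeffs) - 1

    def f(x):
        return sum(a * x**k for k, a in enumerate(coeffs))

    # Seed the machine: f at the first degree+1 points, then the leading
    # entry of each column of successive differences. The degree-th
    # difference of a degree-n polynomial is constant.
    row = [f(start + i) for i in range(degree + 1)]
    column = []
    while row:
        column.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]

    table = []
    for _ in range(count):
        table.append(column[0])
        # One turn of the crank: add each difference into the order above it.
        for i in range(len(column) - 1):
            column[i] += column[i + 1]
    return table

# x^2 + x + 41 for x = 0..7, computed with additions only:
print(difference_engine([41, 1, 1], 0, 8))  # [41, 43, 47, 53, 61, 71, 83, 97]
```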
Burroughs built a difference engine in 1912 and another in 1929. The differential analyzer was picked up by a number of scientists in those early years, but Vannevar Bush was perhaps one of the most important. He, with Harold Locke Hazen, built one at MIT and published an article on it in 1931. Here's where everything changes. The information was out there in academic journals. Bush published another article in 1936 connecting his work to Babbage's. Bush's designs got used by a number of universities and picked up by the Ballistic Research Lab in the US. One of those installations was in the same basement ENIAC would be built in.

Bush did more than inspire other mathematicians. Sometimes he paid them. His research assistant was Claude Shannon, who introduced the General Purpose Analog Computer in 1941 and went on to found the whole field of information theory, down to bits and bytes. Shannon's computer was important as it came shortly after Alan Turing's work on Turing machines, and so it has been seen as a means to get to this concept of general, programmable computing, basically revisiting the Babbage concept of a thinking, or analytical, machine. And Howard Aiken went a step further than mechanical computing and into electromechanical computing with the Mark I, where he referenced Babbage's work as well. Then we got the Atanasoff-Berry Computer in 1942.

By then, our friend Bush had gone on to chair the National Defense Research Committee, where he would serve under Roosevelt and Truman and help develop radar and the Manhattan Project as an administrator, coordinating over 5,000 research scientists. Some helped with ENIAC, which was completed in 1945, thus beginning the era of programmable, digital, general purpose computers. Seeing how computers helped break Enigma machine encryption, solve the equations, blow up targets better, and solve problems that held science back was one thing - but unleashing such massive and instantaneous violence as the nuclear bomb caused Bush to write an article for The Atlantic called As We May Think, which inspired generations of computer scientists. Here he laid out the concept of a Memex, a general purpose computer that every knowledge worker could have. And thus began the era of computing.

What we wanted to look at in this episode is how Babbage wasn't an anomaly. Just as Konrad Zuse wasn't. People published works, added to the works they read about, cited works, pulled in concepts from other fields, and we have unbroken chains in our understanding of how science evolves. Some, like Konrad Zuse, might have been operating outside of this peer-reviewing process - but he eventually got around to publishing as well.
From the ancient world and moon landings to Dr Who and Sherlock Holmes: special guest Bobby Seagull joins Professors Sue Black OBE and Gordon Love as they talk about their passion for Boolean algebra - taking a look at the impact the mathematician, philosopher and logician had on the dawn of the information age. Also in this episode, Durham's Head of Computer Science explains the science behind how Boole developed the idea of logic into a mathematical context - and helped to change the world! You can email your suggestions for moments for Sue and Gordon to look at using 100moments@durham.ac.uk For those interested in studying Computer Science at Durham, visit https://www.durham.ac.uk/departments/academic/computer-science/ to find out how you can apply. If you enjoyed this episode please do three lovely things for us - like, subscribe and tell a friend! 100 Moments that Rocked Computer Science is a Why did the Chicken? production for Durham University.
We discuss George Boole, a nineteenth century English mathematician and logician and the namesake of Boolean algebra. Plus, we go on tangents about the history of logic and the Boole/Everest family.
This time we chat about the binary system: George Boole, Boolean algebra, the definition and notation of binary numbers, conversion from decimal to binary, conversion from binary to decimal, and their applications in the real world. As we do every week, we announce the winners of the challenges on our Facebook page @LoosPiq2.
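As a companion to that discussion, here is a minimal Python sketch of the two conversions (my illustration, not from the episode): repeated division by 2 collects the bits of a decimal number, and positional weights rebuild the decimal value from a binary string.

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative integer to a binary string by repeatedly
    dividing by 2; each remainder is the next bit, low-order first."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits))

def binary_to_decimal(s: str) -> int:
    """Convert a binary string to an integer by accumulating positional
    weights: each step doubles the running total and adds the next digit."""
    total = 0
    for digit in s:
        total = total * 2 + int(digit)
    return total

print(decimal_to_binary(19))       # "10011"  (16 + 2 + 1)
print(binary_to_decimal("10011"))  # 19
```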
The Roman Empire grew. Philosophy and the practical applications derived from great thinkers were no longer just to impress peers or mystify the commoners into passivity, but to help humans do more. The focus on practical applications was clear. This isn't to say there weren't great Romans. We got Seneca, Pliny the Elder, Plutarch, Tacitus, Lucretius, Plotinus, Marcus Aurelius, one of my favorites, Hypatia, and as Christianity spread we got the Christian philosophers in Rome such as Saint Augustine. The Romans reached into new lands and those lands reached back, with attacks coming from the Goths, Germanic tribes, and Vandals, finally resulting in the sack of Rome. They had been weakened by an overreliance on slaves, overspending on the military to fuel constant expansion, government corruption due to a lack of control given the sheer size of the empire, and the need to outsource the military because Roman citizens were needed to run the empire. Rome would split in 285 and by the fifth century fell.

Again, as empires fall new ones emerge. As the Classical Period ended in each area with the decline of the Roman Empire, we were plunged into the Middle Ages, which I was taught were the Dark Ages in school. But they weren't dark. Byzantium, the Eastern Roman Empire, survived. The Franks founded Francia in northern Gaul. The Celtic Britons emerged. The Visigoths set up shop in Northern Spain. The Lombards in Northern Italy. The Slavs spread through Central and Eastern Europe, and the Latin language splintered into the Romance languages. And that spread involved Christianity, whose doctrine often clashed with the ancient philosophies. And great thinkers weren't valued. Or so it seemed when I was taught about the Dark Ages. But words matter.

The Prophet Muhammad was born in this period and Islamic doctrine spread rapidly throughout the Middle East. He united the tribes of Medina and established a constitution there in the early seventh century. After years of war with Mecca, he later seized the city. He then went on to conquer the Arabian Peninsula, pushing up into the lands of the Byzantines and Persians. With the tribes of Arabia united, Muslims would conquer the last remains of Byzantine Egypt, Syria, and Mesopotamia, and take large areas of Persia. This rapid expansion, as it had with the Greeks and Romans, led to new trade routes, and new ideas finding their way to the emerging Islamic empire. In the beginning they destroyed pagan idols, but over time they adapted Greek and Roman technology and thinking into their culture. They brought maps, medicine, calculations, and agricultural implements. They learned papermaking from the Chinese and built paper mills, allowing for an explosion in books.

Muslim scholars gathered in Baghdad, often referred to as New Babylon given that it's only 60 miles away. They began translating some of the most important works from Greek and Latin, and Islamic teachings encouraged the pursuit of knowledge at the time. Many a great work from the Greeks and Romans is preserved because of those translations. And as with each empire before them, the Islamic philosophers and engineers built on the learning of the past. They used astrolabes in navigation, used chemistry in ceramics and dyes, and researched acids and alkalis. They brought knowledge from Pythagoras and the Babylonians and studied lines and spaces and geometry and trigonometry, integrating them into art and architecture. Because Islamic law forbade dissections, they used the Greek texts to study medicine.
The technology and ideas of their predecessors helped them retain control throughout the Islamic Golden Age. The various Islamic empires spread east into China, down the African coast, into Russia, into parts of Greece, and even north into Spain, where they ruled for 800 years. Some grew to control over 10 million square miles. They built fantastic clockworks, documented by al-Jazari in the waning days of the golden age. And the writings included references to influences in Greece and Rome, including the Book of Optics by Ibn al-Haytham in the eleventh century, which is heavily influenced by Ptolemy's book, Optics.

But over time, empires weaken. Throughout the Middle Ages, monarchs began to be deposed by rising merchant classes, or oligarchs - something the framers of the US Constitution sought to block with the way the government is structured. You can see this in the way the House of Lords had such power in England even after the move to a constitutional monarchy. And after the fall of the Soviet Union, Russia has moved more and more towards rule by oligarchs, first under Yeltsin and then under Putin. Because you see, we continue to re-learn the lessons learned by the Greeks. But differently. Kinda' like bell bottoms are different each time they come back into fashion.

The names of European empires began to resemble what we know today: Wales, England, Scotland, Italy, Croatia, Serbia, Sweden, Denmark, Portugal, Germany, and France were becoming dominant forces again. The Catholic Church was again on the rise as Rome practiced a new form of conquering the world. Two main religions were coming more and more into conflict for souls: Christianity and Islam. And so began the Crusades of the High Middle Ages. Crusaders brought home trophies. Many were books and scientific instruments. And then came the Great Famine, followed quickly by the Black Death, which spread along with trade and science and knowledge along the Silk Road. Climate change and disease might sound familiar today. France and England went to war for a hundred years.

Disruption in the global order again allows for new empires. Genghis Khan built a horde of Mongols that over the next few generations spread through China, Korea, India, Georgia and the Caucasus, Russia, Central Asia and Persia, Hungary, Lithuania, Bulgaria, Vietnam, Baghdad, Syria, Poland, and even Thrace throughout the 13th and 14th centuries. Many great works were lost in the wars, although the Mongols often allowed their subjects to continue life as before, with a hefty tax of course. They would grow to control 24 million square kilometers before the empire became unmanageable.

This disruption caused various peoples to move, and one was a Turkic tribe fleeing Central Asia that settled in Anatolia under Osman I in the 13th century. The Ottoman empire he founded would be Islamic and grow to include much of the former Islamic realm as it expanded out of Turkey, including Greece and Northern Africa. Over time they would also invade and rule Greece and push almost all the way north to Kiev, and south through the lands of the former Mesopotamian empires. While they didn't conquer the Arabian peninsula, ruled by other Islamic empires, they did conquer all the way to Basra in the south and took Damascus, Medina, Mecca, and Jerusalem. Still, given the density of population in some cities, they couldn't grow past the same amount of space controlled in the days of Alexander. But again, knowledge was transferred to and from Egypt, Greece, and the former Mesopotamian lands.
And with each turnover to a new empire, more of the great works were taken from these cradles of civilization but kept alive to evolve further. And one way science and math and philosophy and the understanding of the universe evolved was to influence the coming Renaissance, which began in the late 13th century and spread, along with Greek scholars fleeing the Ottoman Turks after the fall of Constantinople, throughout the Italian city-states and into England, France, Germany, Poland, Russia, and Spain. Hellenism was on the move again. The works of Aristotle, Ptolemy, Plato, and others heavily influenced the next wave of mathematicians, astronomers, philosophers, and scientists. Copernicus studied Aristotle.

Leonardo da Vinci gave us the Mona Lisa, the Last Supper, the Vitruvian Man, Salvator Mundi, and Virgin of the Rocks. His works are amongst the most recognizable paintings of the Renaissance. But he was also a great inventor, sketching and perhaps building automata, parachutes, helicopters, and tanks, and along the way putting optics, anatomy, hydrodynamics, and engineering concepts in his notebooks. And his influences certainly included the Greeks and Romans, including the Roman physician Galen. Given that his notebooks weren't published, they offer a snapshot in time rather than a heavy impact on the evolution of science - although his influence is often seen as a contribution to the scientific revolution. Da Vinci, like many of his peers in the Renaissance, learned the great works of the Greeks and Romans. And they learned the teachings in the Bible. But they didn't just take the word of either; they studied nature directly.

The next couple of generations of intellectuals included Galileo. Galileo, as with Socrates and countless other thinkers, bucked the prevailing political and religious climate of the time by writing down what he saw with his own eyeballs. He picked up where Copernicus left off and discovered the four largest moons of Jupiter. And while astronomers continued to espouse that the Sun revolved around the Earth, Galileo continued to prove otherwise and to map out the movement of the heavenly bodies.

Clockwork had been used since Greek times, as proven by the Antikythera device and mentions of Archytas's dove. Mo Zi and Lu Ban built flying birds. As the Greeks and then the Romans fell, automata, as with philosophy and ideas, moved to the Islamic world. The ability to build a gear with a number of teeth to perform a function had been building over time. As had ingenious ways to place rods and axles and attach differential gearing. Yi Xing, a Buddhist monk in the Tang Dynasty, would develop an escapement along with Liang Lingzan in the eighth century, and the practice spread through China and then onward from there. But now clockwork would get pendulums and springs, and Robert Hooke's work on escapements around 1700 would make clocks accurate.

And that brings us to the scientific revolution, when most of the stories in the history of computing really start to take shape. Thanks to great thinkers, philosophers, scientists, artists, engineers, and yes, merchants who could fund innovation and spread progress through formal and informal ties - the age of science is when too much began happening too rapidly to really be able to speak about it meaningfully.
The great mathematics and engineering led to industrialization and further branches of knowledge and specializations - eventually including Boolean algebra. Armed with thousands of years of slow and steady growth in mechanics and theory and optics and precision, we would get early mechanical computing, beginning the much quicker migration out of the Industrial Age and into the Information Age. These explosions in technology allowed the British Empire to grow to control 34 million square kilometers of territory and the Russian Empire to grow to control 17 million before each overextended.

Since writing was developed, humanity has experienced a generation-to-generation passing of the torch of science, mathematics, and philosophy. From before the Bronze Age, ideas were sometimes independently conceived and sometimes spread through trade, from the Chinese, Indian, Mesopotamian, and Egyptian civilizations (and others) through traders like the Phoenicians to the Greeks and Persians - then from the Greeks to the Romans and the Islamic empires during the dark ages, then back to Europe during the Renaissance. And some of that went both ways. Ultimately, who first introduced each innovation and who influenced whom cannot be pinpointed in a lot of cases. Greeks were often given more credit than they deserved, because I think most of us have really fond memories of toga parties in college. But there were generations of people studying all the things and thinking through each field when their other Maslowian needs were met - and those evolving thoughts and philosophies were often attributed to one person rather than all the parties involved in the findings.

After World War II there was a Cold War - and one of the ways that manifested itself was a race to recruit the best scientists from the losing factions of that war, namely Nazi scientists. Some died while trying to be taken to a better new life, as Archimedes had died when the Romans tried to make him an asset. For better or worse, world powers know they need the scientists if they're gonna' science - and that you gotta' science to stay in power. When the masses start to doubt science, they're probably gonna' burn the Library of Alexandria, poison Socrates, or exile Galileo for proving that the planets revolve around suns and have their own moons that revolve around them, rather than the stars all revolving around the Earth. There wasn't necessarily a dark age - but given what the Greek and Roman and Chinese thinkers knew, and the substantial slowdown in those in-between periods of great learning, the Renaissance and Enlightenment could have actually come much sooner. Think about that next time you hear people denying science.

To research this section, I read and took copious notes from the following, and apologize that each passage is not credited specifically - it would just look like a regular expression if I tried: The Evolution of Technology by George Basalla,
Civilizations by Felipe Fernández-Armesto, A Short History of Technology: From the Earliest Times to A.D. 1900 by T.K. Derry and Trevor I. Williams, Communication in History: Technology, Culture, Society by David Crowley and Paul Heyer, Leonardo da Vinci by Walter Isaacson, Timelines in Science by the Smithsonian, Wheels, Clocks, and Rockets: A History of Technology by Donald Cardwell, and a few PhD dissertations and post-doctoral studies from journals. Then I got to the point where I wanted the information from as close to the sources as I could get, so I went through Dialogues Concerning Two New Sciences by Galileo Galilei, Meditations by Marcus Aurelius, Pneumatics by Philo of Byzantium, The Laws of Thought by George Boole, Natural History by Pliny the Elder, Cassius Dio's Roman History, Annals by Tacitus, Orations by Cicero, Ethics, Rhetoric, Metaphysics, and Politics by Aristotle, and Plato's Symposium and The Trial & Execution of Socrates. For a running list of all books used in this podcast see the GitHub page at https://github.com/krypted/TheHistoryOfComputingPodcast/blob/master/Books.md
Science in antiquity was at times devised to be useful and at other times to prove to the people that the gods looked favorably on the ruling class. Greek philosophers tell us a lot about how the ancient world developed. Or at least, they tell us a Western history of antiquity. Humanity began working with bronze some 7,000 years ago, and the Bronze Age came in force in the centuries leading up to 3000 BCE. By then there were city-states and empires. The Mesopotamians brought us the wheel around 3500 BCE and the chariot by 3200 BCE. Writing formed in Sumer, in Mesopotamia, around 3000 BCE. Urbanization required larger cities and walls to keep out invaders. King Gilgamesh built huge walls. They used a base-60 system to track time, giving us the 60 seconds and 60 minutes on the way to an hour. That sexagesimal system also gave us the 360 degrees in a circle. They plowed fields and sailed. And sailing led to maps, which they had by 2300 BCE. And they gave us the epic, with the Epic of Gilgamesh, which could be as old as 2100 BCE. At this point, the Egyptian empire had grown to 150,000 square kilometers and the Sumerians controlled around 20,000 square kilometers. Throughout, they grew a great trading empire. They traded with China, India, and Egypt, with some routes dating back to the fourth millennium BCE. And commerce and trade mean the spread of not only goods but also ideas and knowledge.

The earliest known writing of complete sentences in Egypt came a few hundred years after it did in Mesopotamia, as the Early Dynastic period ended and the Old Kingdom, or the Age of the Pyramids, began. Perhaps it arrived over a trade route. The ancient Egyptians used numerals, multiplication, fractions, geometry, architecture, algebra, and even quadratic equations. They even had a documented base-10 numbering system on a tomb from 3200 BCE. We also have the Moscow Mathematical Papyrus, which includes geometry problems; the Egyptian Mathematical Leather Roll, which covers how to add fractions; the Berlin Papyrus, with geometry; the Lahun Papyri, with arithmetical progressions to calculate the volume of granaries; the Akhmim tablets; the Reisner Papyrus; and the Rhind Mathematical Papyrus, which covers algebra and geometry. And there's the Cairo Calendar, an ancient Egyptian papyrus from around 1200 BCE with detailed astronomical observations - important because the Nile's flooding brought critical crops to Egypt.

The Mesopotamians traded with China as well. As the Shang dynasty of the 16th to 11th centuries BCE gave way to the Zhou Dynasty, which ran from the 11th to 3rd centuries BCE, and the Bronze Age gave way to the Iron Age, science was spreading throughout the world. The I Ching is one of the oldest Chinese works showing math, dating back to the Zhou Dynasty, possibly as old as 1000 BCE. This was also when the Hundred Schools of Thought began, which Confucius inherited around the 5th century BCE. Along the way the Chinese gave us the sundial, the abacus, and the crossbow. And again, the Bronze Age signaled trade empires that were spreading ideas and texts from the Near East to Asia to Europe and Africa and back again. For a couple thousand years the transfer of spices, textiles, and precious metals fueled the Bronze Age empires. Along the way the Minoan civilization in modern Greece had been slowly rising out of the Cycladic culture. Minoan artifacts have been found in Canaanite palaces, and as they grew they colonized and traded.
They began a decline around 1500 BCE, likely due to a combination of raiders and volcanic eruptions. The crash of the Minoan civilization gave way to the Mycenaean civilization of early Greece. Competition for resources and land in these growing empires helped to trigger wars over those resources. Around 1250 BCE, Thebes burned and attacks against city-states increased, sometimes by emerging empires of previously disassociated tribes (as would happen later with the Vikings) and sometimes by other city-states. This triggered the collapse of Mycenaean Greece, the splintering of the Hittites, the fall of Troy, the absorption of the Sumerian culture into Babylon, and attacks that weakened the Egyptian New Kingdom. Weakened and disintegrating empires leave room for new players. The Iranian tribes emerged to form the Median empire in today's Iran. The Assyrians and Scythians rose to power and the world moved into the Iron Age. And the Greeks fell into the Greek Dark Ages until they slowly clawed their way out in the 8th century BCE. Around this time Babylonian astronomers, in the capital of Mesopotamia, were making astronomical diaries, some of which are now stored in the British Museum.

Greek and Mesopotamian societies weren't the only ones flourishing. The Indus Valley Civilization had blossomed from 2500 to 1800 BCE, only to go into a dark age of its own. It boasted 5 million people across 1,500 cities, with some of the larger cities reaching 40,000 people - about the same size as Mesopotamian cities. About two thirds of its sites are in modern-day India and a third in modern Pakistan, an empire that stretched across 120,000 square kilometers. As the Babylonian control of the Mesopotamian city-states broke up, the Assyrians began their own campaigns and conquered Persia, parts of Ancient Greece, down to Ethiopia, Israel, and Babylon. As their empire grew, they followed into the Indus Valley, which Mesopotamians had been trading with for centuries.

What we think of as modern Pakistan and India is where Medhatithi Gautama founded the anviksiki school of logic in the 6th century BCE. And so the modern sciences of philosophy and logic were born. As mentioned, we'd had math in the Bronze Age. The Egyptians couldn't have built pyramids and mapped the stars without it. Hammurabi and Nebuchadnezzar couldn't have built the Mesopotamian cities and walls and laws without it. But something new was coming as the Bronze Age began to give way to the Iron Age. The Indians brought us an early origin of logic, which would morph into an almost Boolean logic as Pāṇini codified Sanskrit grammar, linguistics, and syntax - almost like a nearly 4,000-verse manual on programming languages. Pāṇini even mentions Greeks in his writings, because the two cultures apparently had contact going back to the sixth century BCE, when Greek philosophy was about to get started. The Neo-Assyrian empire grew to 1.4 million square kilometers of control, and the Achaemenid empire grew to control nearly 5 million square miles.

The Phoenicians arose out of the crash of the Late Bronze Age, becoming important traders between the former Mesopotamian city-states and the Egyptians. As their people settled lands and Greek city-states colonized new territory, one descendant became the Greek philosopher Thales, who documented the use of lodestones going back to 600 BCE, when they were able to use magnetite, which gets its name from the Magnesia region of Thessaly, Greece.
He is known as the first philosopher, and by the time of Socrates had even become one of the Seven Sages, who, according to Socrates, were "Thales of Miletus, and Pittacus of Mytilene, and Bias of Priene, and our own Solon, and Cleobulus of Lindus, and Myson of Chenae, and the seventh of them was said to be Chilon of Sparta." Many of the fifth- and sixth-century Greek philosophers were actually born in colonies on the western coast of what is now Turkey. Thales's theorem is said to have originated in India or Babylon, but as we see a lot in the times that followed, it is credited to Thales. Given the trading empires they were all a part of, though, they certainly could have brought these ideas back from previous generations of unnamed thinkers. I like to think of him as one of the synthesizers that Daniel Pink refers to so often in his book A Whole New Mind. Thales studied in Babylon and Egypt, bringing back thoughts and ideas and perhaps intermingling them with those coming in from other areas as the Greeks settled colonies in other lands. Given how critical astrology was to agricultural societies, this meant bringing astronomy, math to help with the architecture of the Pharaohs, new ways to use calendars, likely adopted through the Sumerians, and coinage, through trade with the Lydians and then the Persians when they conquered the Lydians, Babylon, and the Medes.

So Thales taught Anaximander, who taught Pythagoras of Samos, born a few decades later in 570 BCE. He studied in Egypt as well. Most of us would know the Pythagorean theorem, which he's credited for, although there is evidence from Egypt that predates him. Whether new to the emerging Greek world or new to the world writ large, his contributions went far beyond that, though. They included a new student-oriented way of life, numerology, the idea that the world is round, applying math to music and music to lifestyle, and an entire school of philosophers that emerged from his teachings to spread Pythagoreanism. And the generations of philosophers that followed devised both important philosophical contributions and practical applications of new ideas in engineering.

The ensuing schools of philosophy that rose out of those early Greeks spread. By 508 BCE, the Greeks gave us democracy. And oligarchy, defined as a government where a small group of people have control over a country. Many of these words, in fact, come from Greek forms. As does the month of May, names for symbols and theories in much of the math we use, and many a constellation. That tradition began with the sages but grew, being spread by trade, by need, and by religious houses seeking to use engineering as a form of subjugation.

Philosophy wasn't exclusive to the Greeks or Indians, or to Assyria and then Persia through conquering the lands and establishing trade. Buddha came out of modern India in the 5th to 4th century BCE, around the same time Confucianism was born from Confucius in China, and Mohism from Mo Di. Again, trade and the spread of ideas. However, there's no indication that they knew of each other, or that Confucius could have competed with the other hundred schools of thought alive and thriving in China, nor that Buddhism would begin spreading out of the region for a while. But some cultures were spreading rapidly. The spread of Greek philosophy reached a zenith in Athens. Thales's pupil Anaximander also taught Anaximenes, the third philosopher of the Milesian school, which is often included with the Ionians.
The thing I love about those three, beginning with Thales, is that they were able to evolve the school of thought without rejecting the philosophies before them, because ultimately they knew they were simply devising theories as yet to be proven. Another Ionian was Anaxagoras, who had served in the Persian army, which ultimately conquered Ionia in 547 BCE. A Greek citizen living in what was then Persia, Anaxagoras moved to Athens in 480 BCE, teaching Archelaus and, either directly or indirectly through him, Socrates. This provides a link, albeit not a direct one, from the philosophy and science of the Phoenicians, Babylonians, and Egyptians, through Thales and others, to Socrates.

Socrates was born in 470 BCE and mentions several influences, including Anaxagoras. Socrates spawned a level of intellectualism that would go on to have as large an impact on what we now call Western philosophy as anyone in the world ever has. And given that we have no writings from him, we have to take the word of his students to know his works. He gave us the Socratic method and his own spin on satire, which ultimately got him executed for effectively being critical of the ruling elite in Athens and for calling democracy into question, corrupting young Athenian students in the process. You see, in his lifetime the Athenians lost the Peloponnesian War to Sparta - and as societies often do when they hit a speed bump, they started to listen to those who call intellectuals or scientists into question. That would be Socrates for questioning democracy, and many an Athenian for using Socrates as a scapegoat. One student of Socrates, Critias, would go on to lead a group called the Thirty Tyrants, who would terrorize Athenians and take over the government for a while. They established an oligarchy and appointed their own ruling class. As with many coups against democracy over the millennia, they were ultimately found corrupt and removed from power. But the end of that democratic experiment in Greece was coming.

Socrates also taught other great philosophers, including Xenophon, Antisthenes, Aristippus, and Alcibiades. But the greatest of his pupils was Plato. Plato was as much a scientist as a philosopher. He had the works of Pythagoras and studied under the Libyan-born Theodorus of Cyrene. He codified a theory of Ideas, or Forms, using as examples the Pythagorean theorem and geometry. He wrote a lot of the dialogues with Socrates, codified ethics, and wrote of a working, protective, and governing class, looking to produce philosopher kings. He wrote about the dialectic, using questions, reasoning, and intuition. He wrote of art and poetry and epistemology. His impact was vast. He would teach mathematics to Eudoxus, who in turn taught Euclid. But one of his greatest contributions to the evolution of philosophy, science, and technology was in teaching Aristotle.

Aristotle was born in 384 BCE and founded a school of philosophy called the Lyceum. He wrote about rhetoric, music, poetry, and theater, as one would expect given the connection to Socrates, but also expanded far past Plato, getting into physics, biology, and metaphysics. And he had a direct impact on the world at the time with his writings on economics and politics. He inherited a confluence of great achievements, describing motion, defining the five elements, writing about a camera obscura, and researching optics. He wrote about astronomy and geology, observing both theory and fact, such as ways to predict volcanic eruptions.
He made observations that would later be proven (or sometimes disproven), such as with modern genomics. He began a classification of living things. His work On the Soul is one of the earliest looks at psychology. His study of ethics wasn't as theoretical as Socrates' but practical, teaching virtue and how that leads to wisdom and becoming a greater thinker. He wrote of economics: of taxes, managing cities, and property. And this is where he's speaking almost directly to one of his most impressive students, Alexander the Great.

Philip II of Macedon hired Aristotle to tutor Alexander starting in 343 BCE. When Alexander inherited his throne in 336 BCE, he was armed with arguably the best education in the world combined with one of the best-trained armies in history. This allowed him to begin his campaigns against Darius in 334 BCE, the first of ten years' worth of campaigns that finally gave him control in 323 BCE. In that time, he conquered Egypt, which had been under Persian rule on and off, and founded Alexandria. And so what the Egyptians had given to Greece had come home. Alexander died in 323 BCE. He followed the path set out by philosophers before him. Like Thales, he visited Babylon and Egypt. But he went a step further and conquered them. This gave the Greeks more ancient texts to learn from, but also more people who could become philosophers and more people with time to think through problems. By the time he was done, the Greeks controlled nearly 5 million square miles of territory. This would be the largest empire until after the Romans. But Alexander never truly ruled. He conquered. Some of his generals and other Greek aristocrats, now referred to as the Diadochi, split up the young, new empire.

You see, while teaching Alexander, Aristotle had taught two other future kings: Ptolemy I Soter and Cassander. Cassander would rule Macedonia, and Ptolemy ruled Egypt from Alexandria, where, with other Greek philosophers, he founded the Library of Alexandria. Ptolemy and his son amassed hundreds of thousands of scrolls in the Library from 331 BCE on. The Library was part of a great campus, the Musaeum, where they also supported great minds, starting with Ptolemy I's patronage of Euclid, the father of geometry, and later including Archimedes, the father of engineering, Hipparchus, the founder of trigonometry, Hero, and Herophilus, who helped codify the scientific method, among countless other great Hellenistic thinkers.

The Roman Empire had begun in the 6th century BCE. By the third century BCE the Romans were expanding out of the Italian peninsula. This was the end of Greek expansion, and as Rome conquered the Greek colonies it signified the waning of Greek philosophy - philosophy that had helped build Rome, both through a period of colonization and then by spreading democracy to the young republic, with the kings, or rex, being elected by the senate, and by 509 BCE the rise of the consuls.

After studying at the Library of Alexandria, Archimedes returned home to start his great works, full of ideas, having been exposed to so many works. He did rudimentary calculus, proved geometrical theories, approximated pi, explained levers, and founded statics and hydrostatics. And his work extended into the practical. He built machines, pulleys, the infamous Archimedes screw pump, and supposedly even a deadly heat ray of lenses that could burn ships in seconds. He was sadly killed by Roman soldiers when Syracuse was taken.
But, and this is indicative of how Romans pulled in Greek know-how, the Roman general Marcus Claudius Marcellus was angry that he'd lost an asset who could have benefited his war campaigns. In fact, Cicero, writing in the first century BCE, mentioned that Archimedes built mechanical devices that could show the motions of the planetary bodies. He claimed Thales had designed the first of these, and that Marcellus had taken one as his only personal loot from Syracuse and donated it to the Temple of Virtue in Rome. The math, astronomy, and physics that go into building a machine like that were the culmination of hundreds, if not thousands, of years of building knowledge of the cosmos, machinery, mathematics, and philosophy. Machines like that would have been the first known computers. Machines like the second- or first-century BCE Antikythera mechanism, discovered in 1902 in a shipwreck in Greece. Initially thought to be a one-off, the device more likely represents the culmination of generations of great thinkers and doers. Generations that came to look to the Library of Alexandria as almost a Mecca. Until they didn't.

The splintering of the lands Alexander conquered, the cost of the campaigns, the attacks from other empires, and the rise of the Roman Empire ended the age of Greek Enlightenment. As is often the case when there is political turmoil and those seeking power hate being challenged by intellectuals, as had happened with Socrates and the philosophers in Athens, Ptolemy VIII caused the Library of Alexandria to enter a slow decline that began with the expulsion of intellectuals from Alexandria in 145 BCE. The decline continued until the library burned, first in a small fire accidentally set by Caesar in 48 BCE and then for good in the 270s CE.

But before the great library was gone for good, it would produce even more great engineers. Heron of Alexandria is one of the greatest. He created vending machines that would dispense holy water when you dropped in a coin. He made small mechanical archers, models of dancers, and even a statue of a horse that could supposedly drink water. He gave us early steam engines two thousand years before the industrial revolution and ran experiments in optics. He gave us Heron's formula and an entire book on mechanics, codifying the known works on automation at the time. In fact, he designed a programmable cart using strings wrapped around an axle, powered by falling weights. Claudius Ptolemy came to the empire from its holdings in Egypt, living in the second century CE. He wrote about harmonics, math, and astronomy, computed the distance from the sun to the earth, and also computed positions of the planets and eclipses, summarizing them into simpler tables. He revolutionized map making and the study of the properties of light.

By then, the Romans had emerged as the first true world power, and so the Classical Age reached its height. To research this section, I read and took copious notes from the following, and apologize that each passage is not credited specifically - it would just look like a regular expression if I tried: The Evolution of Technology by George Basalla,
Civilizations by Felipe Fernández-Armesto, A Short History of Technology: From The Earliest Times to AD 1900 by TK Derry and Trevor I Williams, Communication in History: Technology, Culture, Society by David Crowley and Paul Heyer, Leonardo da Vinci by Walter Isaacson, Timelines in Science by the Smithsonian, Wheels, Clocks, and Rockets: A History of Technology by Donald Cardwell, a few PhD dissertations and post-doctoral studies from journals, and then I got to the point where I wanted the information from as close to the sources as I could get, so I went through Dialogues Concerning Two New Sciences by Galileo Galilei, Meditations by Marcus Aurelius, Pneumatics by Philo of Byzantium, The Laws of Thought by George Boole, Natural History by Pliny the Elder, Cassius Dio's Roman History, Annals by Tacitus, Orations by Cicero, Ethics, Rhetoric, Metaphysics, and Politics by Aristotle, and Plato's Symposium and The Trial & Execution of Socrates.
Annie and David are back with 20 new questions from video games to art! Can you answer the following questions: What 1978 arcade game created by Tomohiro Nishikado was one of the first fixed shooter games and helped popularize the concept of achieving a high score? Boolean algebra, which was introduced by George Boole in 1854, is credited with laying the foundation of what age? Which painter's household is the setting for Tracy Chevalier's novel "Girl with a Pearl Earring?" In the 1994 Football World Cup held in the United States, who was the first person to miss a penalty? Apparently there ain't no goal that's wide enough. Which fashion designer is credited with the creation of American sportswear? This 2002 film, starring Anthony Hopkins and Edward Norton, is a prequel to “Silence of the Lambs.” This was the one with the naked Ralph Fiennes in it. Who was the longest serving Prime Minister in the history of Japan? Music: Hot Swing, Fast Talkin, Bass Walker, Dances and Dames, Ambush by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 http://creativecommons.org/licenses/by/3.0/ Don't forget to follow us on social media for more trivia at home: Patreon - patreon.com/quizbang - Please consider supporting us on Patreon. Check out our fun extras for patrons and help us keep this podcast going. We appreciate any level of support! Website - quizbangpod.com Check out our website, it will have all the links for social media that you need and while you're there, why not go to the contact us page and submit a question! Facebook - @quizbangpodcast - we post episode links and silly lego pictures to go with our trivia questions. Enjoy the silly picture and give your best guess, we will respond to your answer the next day to give everyone a chance to guess. Instagram - Quiz Quiz Bang Bang (quizquizbangbang), we post silly lego pictures to go with our trivia questions. Enjoy the silly picture and give your best guess, we will respond to your answer the next day to give everyone a chance to guess. Twitter - @quizbangpod We want to start a fun community for our fellow trivia lovers. If you hear/think of a fun or challenging trivia question, post it to our twitter feed and we will repost it so everyone can take a stab at it. Come for the trivia - stay for the trivia. Ko-Fi - ko-fi.com/quizbangpod - Keep that sweet caffeine running through our body with a Ko-Fi, power us through a late night of fact checking and editing!
The name Claude Shannon has come up 8 times so far in this podcast, more than any other single person. We covered George Boole and the concept that a Boolean is a 0 or a 1, and that using Boolean algebra you can abstract simple circuits into practically any higher-level concept. And Boolean algebra had been used by a number of mathematicians to perform some complex tasks, including by Lewis Carroll in Through The Looking Glass to make words into math. And binary had effectively been used in Morse code to enable communications over the telegraph. But it was Claude Shannon who laid the foundation for a theory that took both the concept of communicating over the telegraph and the application of Boolean algebra and made a higher level of communication possible. And it all starts with bits, which we can thank Shannon for. Shannon grew up in Gaylord, Michigan. His mother was a high school principal and his grandfather had been an inventor. He built a telegraph as a child, using a barbed wire fence. But barbed wire isn't the greatest conductor of electricity and so… noise. And thus information theory began to ruminate in his mind. He went off to the University of Michigan and got a bachelor's in electrical engineering and another in math. A perfect combination for laying the foundation of the future. And he got a job as a research assistant to Vannevar Bush, who wrote the seminal paper As We May Think. At that time, Bush was working at MIT on The Thinking Machine, or Differential Analyzer. This was before World War II, and they had no idea, but their work was about to reshape everything. At the time, what we think of as computers today were electro-mechanical. They had gears that were used for the more complicated tasks, and switches, used for simpler tasks. Shannon devoted his master's thesis to applying Boolean algebra, thus getting rid of the wheels, which moved slowly, and allowing the computer to go much faster. He broke down Boole's Laws of Thought into a form that could be applied to relay and switching circuitry. That paper, A Symbolic Analysis of Relay and Switching Circuits, came in 1937 and helped set the stage for the hacker revolution that came shortly thereafter at MIT. At the urging of Vannevar Bush, he got his PhD, pushing genetics forward by theorizing that you could break the genetic code down into a matrix. The structure of DNA itself would be worked out in 1953, when Watson and Crick described the double helix, aided by Rosalind Franklin's X-ray crystallography, which captured the first photos of the structure; George Gamow would later theorize how the code itself might be read. Shannon headed off to Princeton in 1940 to work at the Institute for Advanced Study, where Einstein and von Neumann were. He quickly moved over to the National Defense Research Committee as the world was moving towards World War II. A lot of computing was going into making projectiles, or bombs, more accurate. He co-wrote a paper called Data Smoothing and Prediction in Fire-Control Systems during the war. He'd gotten a primer in early cryptography reading The Gold-Bug by Edgar Allan Poe as a kid, and it struck his fancy. So he started working on theories around cryptography, everything he'd learned forming into a single theory. He would have lunch with Alan Turing during the war. And it was around this work that he first coined the term “information theory” in 1945.
A universal theory of communication gnawed at him and formed during this time, from the Institute, to the National Defense Research Committee, to Bell Labs, where he helped encrypt communications between world leaders. He hid the work from everyone, even through failed relationships. He broke information down into the smallest possible unit, a bit, short for binary digit. He worked out how to compress information that was most repetitive, similar to how Morse code compressed the number of taps on the electrical wire by making the most common letters the shortest to send. Eliminating redundant communications established what we now call compression. Today we use the term lossless compression frequently in computing. He worked out that the minimum amount of information to send would be H = -Σ pᵢ log₂ pᵢ, or entropy. His paper, put out while he was at Bell, was called “A Mathematical Theory of Communication” and came out in 1948. You could now change any data to a zero or a one and then compress it. Further, he found a way to calculate the maximum amount of information that could be sent over a communication channel before it became garbled due to loss. We now call this the Shannon Limit. And once he had that, he derived how to analyze information with math to correct for noise. That barbed wire fence could finally be useful. This would be used in all modern information connectivity. For example, when I took my Network+ we spent an inordinate amount of time learning about Carrier-Sense Multiple Access with Collision Detection (CSMA/CD) - a media access control (MAC) method that uses carrier sensing to defer transmissions until no other stations are transmitting. And as his employer, Bell Labs helped shape the future of computing. Alongside Unix, C, C++, the transistor, and the laser, information theory is a less tangible, yet, given what we all have in our pockets or on our wrists these days, arguably more impactful discovery. Having mapped the limits, Bell started looking for ways to reach them. And so the digital communication age was born when the first modem came out of his former employer, Bell Labs, in 1958. And just across the way in Boston, ARPA would begin working on the first Interface Message Processor in 1967, the humble beginnings of the Internet. His work done, he went back to MIT. His theories were applied to all sorts of disciplines. But he came in less and less. Over time we started placing bits on devices. We started retrieving those bits. We started compressing data. Digital images, audio, and more. It would take 35 or so years for all of that to play out. He consulted with the NSA on cryptography. In 1949 he published Communication Theory of Secrecy Systems, pushing cryptography to the next level. His paper Prediction and Entropy of Printed English in 1951 practically created the field of natural language processing, which evolved into various branches of machine learning. He helped give us the Nyquist–Shannon sampling theorem, used in aliasing, deriving maximum throughput, RGB, and of course signal to noise. He loved games. In 1941 he theorized the Shannon Number, or the game-tree complexity of chess. In case you're curious, the reason Deep Blue can win at chess is that it can brute-force its way through a meaningful slice of those 10 to the 120th possible games. His love of games continued, and in 1949 he presented Programming a Computer for Playing Chess. That was the first time we thought about computers playing chess. And he had a standing bet that a computer would beat a human grand master at chess by 2001. Garry Kasparov lost to Deep Blue in 1997.
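To make Shannon's entropy formula above concrete, here is a minimal sketch in TypeScript. The function name and the example probabilities are my own illustration, not anything from Shannon's paper or this episode:

// Shannon entropy: H = -Σ p(i) · log2 p(i), in bits per symbol.
// Given the probability of each symbol in a message, it returns the
// average number of bits needed per symbol - the floor below which
// no lossless compression scheme can go.
function entropyBits(probabilities: number[]): number {
  return -probabilities
    .filter((p) => p > 0) // by convention, 0 · log2(0) contributes nothing
    .reduce((sum, p) => sum + p * Math.log2(p), 0);
}

// A fair coin carries a full bit per flip...
console.log(entropyBits([0.5, 0.5])); // 1
// ...while a biased coin carries less, which is why predictable,
// repetitive data compresses so well.
console.log(entropyBits([0.9, 0.1])); // ≈ 0.469

The more skewed the probabilities, the less surprising the data, and the fewer bits you truly need to send: that is the link between entropy and compression the episode describes.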
That curiosity extended far beyond chess. He would make Theseus in 1950 - a maze with a mouse that learned how to escape, using relays from phone switches. One of the earliest forms of machine learning. In 1961 he would co-invent the first wearable computer, to help win a game of roulette. That same year he designed the Minivac 601 to help teach how computers worked. So we'll leave you with one last bit of information. Shannon's maxim is that “the enemy knows the system.” I used to think it was just a shortened version of Kerckhoffs's principle, which is that it should be possible to understand a cryptographic system, for example, modern public key ciphers, but not be able to break the encryption without a private key. Thing is, the more I know about Shannon, the more I suspect that what he was really doing was giving the principle a broader meaning. So think about that as you try and decipher what is and what is not disinformation in such a noisy world. Lots and lots of people would carry on the great work in information theory, like Kullback–Leibler divergence, or relative entropy. And we owe them all our thanks. But here's the thing about Shannon: math. He took things that could have easily been theorized - and he proved them. Because science can refute disinformation. If you let it.
Boolean algebra Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we're going to talk a little about math. Or logic. Computers are just a bunch of zeroes and ones, right? Binary. They make shirts about it. You know, there are 10 types of people in the world: those who understand binary and those who don't. But where did that come from? After centuries of trying to build computing devices that could help with math using gears that had lots of slots in them, armed with tubes and then transistors, we had to come up with a simpler form of logic. And why write your own complicated math when you can borrow it and have instant converts to your cause? Technical innovations are often comprised of a lot of building blocks from different fields of scientific or scholastic study. The 0s and 1s, which make up the flip-flop circuits computers are so famous for, are made possible by the concept that all logic can be broken down into either true or false. And so the mathematical logic that we have built trillions of dollars in industry off of began in 1847 in a book called The Mathematical Analysis of Logic, by George Boole. He would follow that up with a book called An Investigation of the Laws of Thought in 1854. He was the father of what we would later call Boolean algebra, once the science of an entire mathematical language built on true and false matured enough that Charles Sanders Peirce could write a book called The Simplest Mathematics, with a chapter titled Boolian Algebra with One Constant. By 1913, there were many more works with the name, and it became Boolean algebra. This was right around the time that the electronic research community had first started experimenting with using vacuum tubes as flip-flop switches. So there's elementary algebra, where you can have any old number with any old logical operation. Those operators can be addition, subtraction, multiplication, division, etc. But in Boolean algebra the only values available are a 0 or a 1. Later we would get abstract algebra as well, but for computing it was way simpler to just stick with those 0s and 1s, and in fact, ditching the gears from the old electromechanical computing paved the way for tubes to act as flip-flop switches, and for transistors to replace those. And the evolutions came, both to the efficiency of flip-flop switches and to the increasingly complex uses for mechanical computing devices. But they hadn't all been mashed up together yet. So set theory and statistics were evolving. And Huntington, Jevons, and Schröder basically perfected Boolean logic, paving the way for M.H. Stone to prove that Boolean algebra is isomorphic to a field of sets by 1936. And so it should come as no surprise that Boolean algebra would be key to the development of the basic mathematical functions used on the Atanasoff-Berry computer. Remember that back then, all computing was basically used for math. Claude Shannon would help apply Boolean algebra to switching circuits. This involved binary decision diagrams for synthesizing and verifying the design of logic circuits. And so we could analyze and design circuits using algebra to define logic gates. Those gates would get smaller and faster and combined using combinational logic until we got LSI circuits and, later, with the automation of the design of chips, VLSI. So to put it super-simply, let's say you are trying to do some maths. First up, you convert values to bits, which are binary digits.
Those binary digits are represented as a 0 or a 1. There's a substantial amount of information you can pack into those bits, with all major characters easily allowed for in a byte, which is 8 of those bits. So let's say you also map your algebraic operators using those 0s and 1s, another byte. Now you can operate on the number in the first byte. To do so, though, you would need to basically translate the notations from classical propositional calculus to their expression in Boolean algebra, typically done in an assembler. Much, much more logic is required to apply quantifiers. The simple truth values are 0 and 1, and simple truth tables define AND (also known as a conjunction), OR (also known as a disjunction), NOT (negation), and XOR (also known as an exclusive-or). This allows for an exponential increase in the amount of logic you can apply to a problem. The act of deciding whether a given Boolean formula can be satisfied at all is known as the Boolean satisfiability problem, or SAT. At this point, though, all problems really seem solvable using some model of computation, given the amount of complex circuitry we now have. So the computer interprets the functions and sets the state of a switch based on the input. The computer then combines all those trues and falses into the necessary logic and outputs an answer. Because entering raw 0s and 1s took too much time, the input got moved to punch cards, and modern programming was born. These days we can also add Boolean logic into higher functions, such as running an AND in Google searches. So ultimately the point of this episode is to explore what exactly all those 0s and 1s are. They're complex thoughts and formulas expressed as true and false, using complicated Boolean algebra to construct them. Now, there's a chance that some day we'll find something beyond a transistor. And then we can bring a much more complicated expression of thought broken down into different forms of algebra. But there's also the chance that Boolean algebra sitting on transistors, or other things that are the next evolution of Boolean gates or transistors, is really, well, kinda' it. So from the Atanasoff-Berry computer comes Colossus and then ENIAC in 1945. It wasn't obvious yet, but nearly 100 years after the development of Boolean algebra, it had been combined with several other technologies to usher in the computing revolution, setting up the evolution to microprocessors and the modern computer. These days, few programmers are constrained by programming in Boolean logic. Instead, we have many more options. Although I happen to believe that understanding this fundamental building block was one of the most important aspects of studying computer science and provided an important foundation to computing in general. So thank you for listening to this episode. I'm sure algebra got ya' totally interested and that you're super-into math. But thanks for listening anyways. I'm pretty lucky to have ya'. Have a great day
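Since that episode walks through AND, OR, NOT, and XOR, here is a minimal sketch of those operations in TypeScript. The names and the truth-table printout are my own illustration, not code from the show:

// The four basic Boolean operations, using booleans in place of 0s and 1s.
const AND = (a: boolean, b: boolean): boolean => a && b;  // conjunction
const OR  = (a: boolean, b: boolean): boolean => a || b;  // disjunction
const NOT = (a: boolean): boolean => !a;                  // negation
const XOR = (a: boolean, b: boolean): boolean => a !== b; // exclusive-or

// Printing the full truth table shows how little is needed to describe a
// circuit: every logic gate computes one of these rows, scaled up.
for (const a of [false, true]) {
  for (const b of [false, true]) {
    console.log(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "XOR:", XOR(a, b));
  }
}

Combining these little functions the way combinational logic combines gates is, at heart, what the episode means by analyzing and designing circuits with algebra.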
Anyone who has ever worked with computers or digital electronics has heard of Boolean algebra, which was invented by none other than George Boole.
The Microchip Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on the history of the microchip, or microprocessor. This was a hard episode, because it was the culmination of so many technologies. You don't know where to stop telling the story - and you find yourself writing a chronological story in reverse chronological order. But few advancements have impacted humanity the way the introduction of the microprocessor has. Given that most technological advances are a convergence of otherwise disparate technologies, we'll start the story of the microchip with the obvious choice: the light bulb. Thomas Edison first demonstrated the carbon filament light bulb in 1879. William Joseph Hammer, an inventor working with Edison, then noted that if he added another electrode to a heated filament bulb, it would glow around the positive pole in the vacuum of the bulb and blacken the wire and the bulb around the negative pole. 25 years later, John Ambrose Fleming demonstrated that if that extra electrode is made more positive than the filament, current flows through the vacuum, and that the current could only flow from the filament to the electrode and not the other direction. This converted AC signals to DC and could represent a Boolean gate. In 1904, Fleming was granted Great Britain's patent number 24850 for the vacuum tube, ushering in the era of electronics. Over the next few decades, researchers continued to work with these tubes. Eccles and Jordan invented the flip-flop circuit at London's City and Guilds Technical College in 1918, receiving a patent for what they called the Eccles-Jordan Trigger Circuit in 1920. Now, English mathematician George Boole, back in the earlier part of the 1800s, had developed Boolean algebra: a system where logical statements could be made in mathematical terms and then manipulated by performing math on the symbols. Only a 0 or a 1 could be used. It took awhile, but John Vincent Atanasoff and grad student Clifford Berry harnessed these circuits in the Atanasoff-Berry computer starting in 1938 at Iowa State University and, using Boolean algebra, successfully solved linear equations, though they never finished the device due to World War II, when a number of other technological advancements happened, including the development of the ENIAC by John Mauchly and J Presper Eckert from the University of Pennsylvania, funded by the US Army Ordnance Corps, starting in 1943. By the time it was taken out of operation, the ENIAC had 20,000 of these tubes. Each digit in an algorithm required 36 tubes. Ten-digit numbers could be multiplied at 357 per second, showing the first true use of a computer. John von Neumann was the first to actually use the ENIAC, when one million punch cards were run through it for the computations that helped propel the development of the hydrogen bomb at Los Alamos National Laboratory. The creators would leave the University and found the Eckert-Mauchly Computer Corporation. Out of that later would come the Univac and the ancestor of today's Unisys Corporation. These early computers used vacuum tubes to replace the gears that were in previous counting machines and represented the First Generation. But the tubes for the flip-flop circuits were expensive and had to be replaced way too often. The second generation of computers used transistors instead of vacuum tubes for logic circuits.
The integrated circuit is basically a wire set into silicon or germanium that can be set to on or off based on the properties of the material. These replaced vacuum tubes in computers to provide the foundation of Boolean logic. You know, the zeros and ones that computers are famous for. As with most modern technologies, the integrated circuit owes its origin to a number of different technologies that came before it was able to be useful in computers. This includes the three primary components of the circuit: the transistor, resistor, and capacitor. The silicon that chips are so famous for was actually discovered by Swedish chemist Jöns Jacob Berzelius in 1824. He heated potassium chips in a silica container and washed away the residue and voilà - an element! The transistor is a semiconducting device with three connections that can amplify or switch a signal. One is the source, which is connected to the negative terminal on a battery. The second is the drain, a positive terminal; when the gate (the third connection) is energized, the transistor allows electricity through. The transistor then acts as an on/off switch. The fact that they can be on or off is the foundation for Boolean logic in modern computing. The resistor controls the flow of electricity and is used to control the levels and terminate lines. An integrated circuit is also built using silicon, but you print the pattern into the circuit using lithography rather than painstakingly putting little wires where they need to go like radio operators did with the cat's whisker all those years ago. The idea of the transistor goes back to the mid-30s, when William Shockley took the idea of a cat's whisker, or fine wire touching a galena crystal. The radio operator moved the wire to different parts of the crystal to pick up different radio signals. Solid state physics was born when Shockley, who first studied at Caltech and then got his PhD in physics, started working on a way to make these usable in everyday electronics. After a decade in the trenches, Bell gave him John Bardeen and Walter Brattain, who successfully finished the invention in 1947. Shockley went on to design a new and better transistor, known as a bipolar transistor, and helped move us from vacuum tubes, which were bulky and needed a lot of power, first to germanium, which they used initially, and then to silicon. Shockley got a Nobel Prize in physics for his work and was able to recruit a team of extremely talented young PhDs to help work on new semiconductor devices. He became increasingly frustrated with Bell and took a leave of absence. Shockley moved back to his hometown of Palo Alto, California and started a new company called the Shockley Semiconductor Laboratory. He had some ideas that were way before his time and wasn't exactly easy to work with. He pushed the chip industry forward, but in the process spawned a mass exodus of employees in 1957 - he called them the “Traitorous Eight” - who left to create what would become Fairchild Semiconductor. The alumni of Shockley Labs ended up spawning 65 companies over the next 20 years that laid the foundation of the microchip industry to this day, including Intel. If he'd been easier to work with, we might not have had the innovation that we've seen - so in a way, we can thank Shockley's abrasiveness! All of these silicon chip makers being in one small area of California then led to that area getting the Silicon Valley moniker.
At this point, people were starting to experiment with computers using transistors instead of vacuum tubes. The University of Manchester created the Transistor Computer in 1953. The first fully transistorized computer came in 1955 with the Harwell CADET, MIT started work on the TX-0 in 1956, and the THOR guidance computer for ICBMs came in 1957. But the IBM 608 was the first commercial all-transistor solid-state computer. The RCA 501, Philco Transac S-1000, and IBM 7070 took us through the age of transistors, which continued to get smaller and more compact. At this point, we were really just replacing tubes with transistors. But the integrated circuit would bring us into the third generation of computers. The integrated circuit is an electronic device that has all of the functional blocks put on the same piece of silicon. So the transistor, or multiple transistors, is printed into one block. Jack Kilby of Texas Instruments patented the first miniaturized electronic circuit in 1959, which used germanium and external wires and was really more of a hybrid integrated circuit. Later in 1959, Robert Noyce of Fairchild Semiconductor invented the first truly monolithic integrated circuit, which he received a patent for. Because they did so independently, both are considered creators of the integrated circuit. The third generation of computers ran from 1964 to 1971 and saw the introduction of metal-oxide-silicon and printing circuits with photolithography. In 1965, Gordon Moore, also of Fairchild at the time, observed that the number of transistors, resistors, diodes, capacitors, and other components that could be shoved into a chip was doubling about every year, and published an article with this observation in Electronics Magazine, forecasting what's now known as Moore's Law. The integrated circuit gave us the DEC PDP and later the IBM S/360 series of computers, making computers smaller and bringing us into a world where we could write code in COBOL and FORTRAN. A microprocessor is one type of integrated circuit. They're also used in audio amplifiers, analog integrated circuits, clocks, interfaces, etc. But in the early 60s, the Minuteman missile program and US Navy contracts were practically the only ones using these chips, at this point numbering in the hundreds, bringing us into the world of the MSI, or medium-scale integration, chip. Moore and Noyce left Fairchild and founded NM Electronics in 1968, later renaming the company to Intel, short for Integrated Electronics. Federico Faggin came over in 1970 to lead the MCS-4 family of chips. These, along with other chips that were economical to produce, started to result in chips finding their way into various consumer products. In fact, the MCS-4 chips, which split RAM, ROM, CPU, and I/O, were designed for the Nippon Calculating Machine Corporation, and Intel bought the rights back, announcing the chip in Electronic News with an article called “Announcing A New Era In Integrated Electronics.” Together, they built the Intel 4004, the first microprocessor that fit on a single chip. They buried the contacts in multiple layers and introduced 2-phase clocks. Silicon oxide was used to layer integrated circuits onto a single chip. Here, the microprocessor, or CPU, splits the arithmetic and logic unit, or ALU, the bus, the clock, the control unit, and registers up so each can do what it's good at, but live on the same chip. The first generation of the microprocessor began in 1971, when these 4-bit chips were mostly used in guidance systems.
This boosted the speed by five times. The forming of Intel and the introduction of the 4004 chip can be seen as one of the primary events that propelled us into the evolution of the microprocessor and the fourth generation of computers, which lasted from 1972 to 2010. The Intel 4004 had 2,300 transistors. The Intel 4040 came in 1974, giving us 3,000 transistors. It was still a 4-bit data bus but jumped to 12-bit ROM. The architecture was also from Faggin but the design was carried out by Tom Innes. We were firmly in the era of LSI, or Large Scale Integration, chips. These chips were also used in the Busicom calculator, and even in the first pinball game controlled by a microprocessor. But getting a true computer to fit on a chip, or a modern CPU, remained an elusive goal. Texas Instruments ran an ad in Electronics with a caption that their chip was a “CPU on a Chip” and attempted to patent the concept, but couldn't make it work. Faggin went to Intel and they did actually make it work, giving us the first 8-bit microprocessor, the 8008, which was put on the market in 1972. It was then redesigned as the 8080, released in 1974. Intel made the R&D money back in 5 months and sparked the idea for Ed Roberts to build the Altair 8800. Motorola and Zilog brought competition in the 6800 and Z-80, the latter used in the Tandy TRS-80, one of the first mass-produced computers. N-MOS transistors on chips allowed for new and faster paths, and MOS Technology soon joined the fray with the 6501 and 6502 chips in 1975. The 6502 ended up being the chip used in the Apple I, Apple II, NES, Atari 2600, BBC Micro, Commodore PET and Commodore VIC-20. The MOS 6510 variant was then used in the Commodore 64. The 8086 was released in 1978 with 29,000 transistors and marked the transition to Intel's x86 line of chips, setting what would become the standard in future chips. But Intel wasn't the only place you could find chips. The Motorola 68000 was used in the Sun-1 from Sun Microsystems, the HP 9000, the DEC VAXstation, the Commodore Amiga, the Apple Lisa, the Sinclair QL, the Sega Genesis, and the Mac. The chips were also used in the first HP LaserJet and the Apple LaserWriter and in a number of embedded systems for years to come. As we rounded the corner into the 80s it was clear that the computer revolution was upon us. A number of computer companies were looking to do more than what they could do with the existing Intel, MOS, and Motorola chips. And ARPA was pushing the boundaries yet again. Carver Mead of Caltech and Lynn Conway of Xerox PARC saw the density of transistors in chips starting to plateau. So with DARPA funding they went out looking for ways to push the world into the VLSI era, or Very Large Scale Integration. The VLSI project resulted in the concept of fabless design houses, such as Broadcom, 32-bit graphics, BSD Unix, and RISC processors, or Reduced Instruction Set Computer processors. Out of the RISC work done at UC Berkeley came a number of new options for chips as well. One of these designers, Acorn Computers, evaluated a number of chips and decided to develop their own, using VLSI Technology (a company founded by more Fairchild Semiconductor alumni) to manufacture the chip in their foundry. Sophie Wilson (then known as Roger Wilson) worked on an instruction set for the RISC chip. Out of this came the Acorn RISC Machine, or ARM chip. Over 100 billion ARM processors have been produced - well over 10 for every human on the planet. You know that fancy new A13 that Apple announced? It uses a licensed ARM core.
Another chip that came out of the RISC family was the Sun SPARC. Sun being short for Stanford University Network (co-founder Andy Bechtolsheim had been close to the action), they released the SPARC in 1986. I still have a SPARC 20 I use for this and that at home. Not that SPARC has gone anywhere; they're just made by Oracle now. The Intel 80386 chip was a 32-bit microprocessor released in 1985. The first chip had 275,000 transistors, taking plenty of pages from the lessons learned in the VLSI projects. Compaq built a machine on it, but really the IBM PC/AT made it an accepted standard, although this was the beginning of the end of IBM's hold on the burgeoning computer industry. And AMD, yet another company founded by Fairchild defectors, created the Am386 in 1991, ending Intel's nearly 5-year monopoly on the PC clone industry and ending an era where AMD was merely a second source of Intel parts; now it was competing with Intel directly. We can thank AMD's aggressive competition with Intel for helping to keep the CPU industry going along with Moore's Law! At this point transistors were only 1.5 microns in size. Much, much smaller than a cat's whisker. The Intel 80486 came in 1989 and, again tracking against Moore's Law, we hit the first 1-million-transistor chip. Remember how Compaq helped end IBM's hold on the PC market? When the Intel 486 came along, they went with AMD. This chip was also important because we got L1 caches, meaning that chips didn't need to send instructions to other parts of the motherboard but could do caching internally. From then on, the L1 and later L2 caches would be listed on all chips. With the later 486 variants, we'd finally broken 100MHz! Motorola released the 68040 in 1990, hitting 1.2 million transistors, and giving Apple the chip that would define the Quadra, and also that L1 cache. The DEC Alpha came along in 1992, also a RISC chip, but really kicking off the 64-bit era. While the most technically advanced chip of the day, it never took off, and after DEC was acquired by Compaq and Compaq by HP, the IP for the Alpha was sold to Intel in 2001, by which point the PC industry had decided where all the money would go. But back to the 90s, 'cause life was better back when grunge was new. At this point, hobbyists knew what the CPU was but most normal people didn't. The concept that there was a whole Univac on one of these never occurred to most people. But then came the Pentium. Turns out that giving a chip a name and some marketing dollars not only made Intel a household name but solidified their hold on the chip market for decades to come. While the Intel Inside campaign started in 1991, after the Pentium was released in 1993, the case of most computers would have a sticker that said Intel Inside. Intel really one-upped everyone. The first Pentium, the P5 or 586 or 80501, had 3.1 million transistors, built on an 0.8-micron process. Computers kept getting smaller and cheaper and faster. Apple answered by moving to the PowerPC chip from IBM, which owed much of its design to RISC. Exactly 10 years after the famous 1984 Super Bowl commercial, Apple was using a CPU from IBM. Another advance came in 2001, when IBM developed the Power4 chip and gave the world multi-core processors, or a CPU that had multiple CPU cores inside the CPU. Once parallel processing caught up to being able to have processes that consumed the resources on all those cores, we saw Intel's Pentium D and AMD's Athlon 64 X2, released in May 2005, bringing multi-core architecture to the consumer.
This led to even more parallel processing, and an explosion in the number of cores helped us continue on with Moore's Law. There are now custom chips that reach into the thousands of cores today, although most laptops have maybe 4 cores in them. Setting multi-core architectures aside for a moment, back to Y2K, when Justin Timberlake was still a part of NSYNC. Then came the Pentium Pro, Pentium II, Celeron, Pentium III, Xeon, Pentium M, Xeon LV, Pentium 4. On the IBM/Apple side, we got the G3 with 6.3 million transistors, the G4 with 10.5 million transistors, and the G5 with 58 million transistors and 1,131 feet of copper interconnects, running at 3GHz in 2002 - so much copper that NSYNC broke up that year. The Pentium 4 that year ran at 2.4 GHz and sported 50 million transistors. This is about 1 transistor per dollar made off Star Trek: Nemesis in 2002. I guess Attack of the Clones was better, because it grossed over 300 million that year. Remember how we broke the million-transistor mark in 1989? In 2005, Intel started testing Montecito with certain customers: the Itanium 2 64-bit CPU with 1.72 billion transistors, shattering the billion mark and hitting a billion two years earlier than projected. Apple CEO Steve Jobs announced Apple would be moving to the Intel processor that year. NeXTSTEP had been happy as a clam on Intel, SPARC, or HP's PA-RISC, so given the rapid advancements from Intel, this seemed like a safe bet and allowed Apple to tell directors in IT departments “see, we play nice now.” And the innovations kept flowing for the next decade and a half. We packed more transistors in, more cache, cleaner clean rooms, faster bus speeds, with Intel owning the computer CPU market and ARM slowly growing out of Acorn Computers into the powerhouse that ARM cores are today, embedded in countless other chip designs. I'd say not much interesting has happened, but it's ALL interesting, except the numbers just sound stupid they're so big. And we had more advances along the way of course, but it started to feel like we were just miniaturizing more and more, allowing us to do much more advanced computing in general. The fifth generation of computing is all about technologies that we today consider advanced: artificial intelligence, parallel computing, very high level computer languages, the migration away from desktops to laptops and even smaller devices like smartphones. ULSI, or Ultra Large Scale Integration, chips not only tell us that chip designers really have no creativity outside of chip architecture, but also mean millions up to tens of billions of transistors on silicon. At the time of this recording, the AMD Epyc Rome is the single chip package with the most transistors, at 32 billion. Silicon is the eighth most abundant element in the universe and the second most abundant in the crust of the planet Earth. Given that there are more chips than people by a huge percentage, we're lucky we don't have to worry about running out any time soon! We skipped RAM in this episode. But it kinda' deserves its own, since RAM is still following Moore's Law, while the CPU is kinda' lagging again. Maybe it's time for our friends at DARPA to get the kids from Berkeley working on Very Ultra Large Scale chips, or VULSIs! Or they could sign on to sponsor this podcast! And now I'm going to go take a Very Ultra Large Scale nap. Gentle listeners, I hope you can do that as well. Unless you're driving while listening to this. Don't nap while driving. But do have a lovely day.
Thank you for listening to yet another episode of the History of Computing Podcast. We're so lucky to have you!
In Falken's Maze, technologist and former professor Jason Thomas explores the intersection of technology, history, and culture. Created for listeners nostalgic for the 80s but who also want to understand the complexities of today, our show demystifies the world's most compelling technologies and events through 80's movies, music, and television. This is where history, tech, and retro pop collide. If you enjoy the show, tell a friend, leave a review, click some stars!! Find us online at www.falkenspodcast.com. References: Binary and Data (Khan Academy) Jaime Escalante Biography Jaime Escalante, Inspiration for a Movie, Dies at 79 Stand and Deliver Clip Music: CBS Special Presentation Intro Street Dancing by Timecrawler 82 is Licensed under a Creative Commons Attribution (4.0) International license Paint The Sky by Dysfunction_AL (c) copyright 2015 Licensed under a Creative Commons Attribution (3.0) license. Right About Time Open Music Revolution Innovation Open Music Revolution
duration: 00:58:59 - Les Chemins de la philosophie - by: Adèle Van Reeth, Géraldine Mosna-Savoye - A portrait of the philosopher Souleymane Bachir Diagne, preoccupied by the question of translation, a concept that has run through all of his fields of research, from his thesis on the mathematician George Boole, originator of the algebra of logic, to the history of philosophy in the Islamic world… - guests: Souleymane Bachir Diagne - Souleymane Bachir Diagne: philosopher, professor of French philosophy and of philosophical questions in Africa in the departments of Philosophy and French at Columbia University, director of the Institute of African Studies - produced by: Nicolas Berger, Thomas Beau
In this episode of The Bitwise Podcast, we explore the life and work of the mathematician and logic pioneer, George Boole.
Provable Security: Conversations on Next Gen Security. We published a podcast (https://aws.amazon.com/podcasts/aws-podcast/#266) on provable security (https://aws.amazon.com/security/provable-security/) last fall, and, due to high customer interest, we decided to bring you a regular peek into this AWS initiative. This series will cover how the traditionally academic field of automated reasoning is being applied at AWS at scale to help provide higher assurances for our customers, regulators, and the broader cloud industry. We’ll talk to individuals whose minds helped shape the history of automated reasoning, as well as learn from engineers and scientists who are applying automated reasoning to help solve pressing security and privacy challenges in the cloud. In our first interview, Byron Cook, Director of the AWS Automated Reasoning Group, sits down with Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology. Moshe describes the history of logic, automated reasoning, formal verification and his legendary moustache. Learn more at the AWS Provable Security webpage (https://aws.amazon.com/security/provable-security/). Automated reasoning public figures: George Boole https://en.wikipedia.org/wiki/George_Boole Tony Hoare https://en.wikipedia.org/wiki/Tony_Hoare Robert W. Floyd https://en.wikipedia.org/wiki/Robert_W._Floyd John McCarthy https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist) Amir Pnueli https://en.wikipedia.org/wiki/Amir_Pnueli Gottlob Frege https://en.wikipedia.org/wiki/Gottlob_Frege Arthur Prior https://en.wikipedia.org/wiki/Arthur_Prior John Harrison https://www.cl.cam.ac.uk/~jrh13/ Automated techniques and algorithms: First-order logic https://en.wikipedia.org/wiki/First-order_logic Temporal logic https://en.wikipedia.org/wiki/Temporal_logic An Automata-Theoretic Approach to Automatic Program Verification https://orbi.uliege.be/bitstream/2268/116609/1/lics86.pdf Boolean satisfiability problem https://en.wikipedia.org/wiki/Boolean_satisfiability_problem Davis-Putnam algorithm https://en.wikipedia.org/wiki/Davis–Putnam_algorithm SAT Competition https://www.satcompetition.org/
Robots, religion, and really bad re-enactors -- this documentary has it all! We watched The Genius of George Boole and learned some weird things about the man who *eventually* became the father of modern information technology. #BOOOOOLE
Julie Moronuki: @argumatronic | argumatronic.com Show Notes: This episode is a follow-up episode to the one we did with Julie in September: Learn Haskell, Think Less. We talk a whole lot about monoids, and learning programming languages untraditionally. Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 93. My name is Charles Lowell, a developer here at The Frontside and I am your podcast host-in-training. With me today from The Frontside is Elrick also. Hello, Elrick. ELRICK: Hey. CHARLES: How are you doing? ELRICK: I'm doing great. CHARLES: Alright. Are you ready? ELRICK: Oh yeah, I'm excited. CHARLES: You ready to do some podcasting? Alright. Because we actually have a repeat guest on today. It was a very popular episode from last year. We have with us the author of ‘Learning Haskell: From First Principles' and a book that is coming out but is not out yet but one that we're eagerly looking forward to, Julie Moronuki. Welcome. JULIE: Hi. It's great to be back. CHARLES: What was it about, was it last October? JULIE: I think it was right before I went to London to Haskell [inaudible]. CHARLES: Yeah. JULIE: Which was in early October. So yeah… CHARLES: Okay. JULIE: Late or early October, somewhere in there. CHARLES: Okay. You went to Haskell eXchange. You gave a talk on Monoids. What have you been up to since then? JULIE: Oh wow. It's been a really busy time. I moved to Atlanta and so I've had all this stuff going on. And so, I was telling a friend last night “I'm going to be on this podcast tomorrow and I don't think I have anything to talk about.” [Laughter] JULIE: Because I feel like everything has just been like, all my energy has been sucked up with the move and stuff. But I guess… CHARLES: Is it true that everybody calls it ‘Fatlanta' there? JULIE: Yeah. [Laughs] CHARLES: I've heard the term. But do people actually be like “Yes, I'm from Fatlanta.” JULIE: I've heard it a couple of times. CHARLES: Okay. JULIE: Maybe it's mostly outsiders. I'm not sure. CHARLES: [Chuckles] JULIE: But yeah, it's a real cool city and I'm real happy to be here. But yeah, I did go in October. I went to London and I spoke at Haskell eXchange which was really amazing. It was a great experience and I hope to be able to go back. I got to meet Simon Payton Jones which was incredible. Yeah, and I gave a talk on monoids, monoids and semirings. And… CHARLES: Ooh, a semiring. JULIE: Semiring. So, a semiring is a structure where there's two monoids. So, both of them have an identity element. And the identity element of one of them is an annihilator. Isn't that a great word? It's an annihilator… CHARLES: Whoa. JULIE: Of the other. So, if you think of addition and multiplication, the identity element for addition is zero, right? But if you multiply times zero, you're always going to get to zero, so it's the annihilator of multiplication. CHARLES: Whoa. I think my mind is like annihilated. [Laughter] JULIE: So, it's a structure where you're got two monoids and one of them distributes over the other, the distributive property of addition and multiplication. And the identity of one of them is the annihilator of the other. Anyway, but yeah, I gave a history of where monoids come from and that was really fun. CHARLES: Yeah. I would actually like to get a summary of that, because I think since we last talked, I've been getting a little bit deeper and deeper into these formal type classes. 
I'm still not doing Haskell day-to-day but I've been importing these ideas into just plain vanilla JavaScript. And it turns out, it's actually a pretty straightforward thing to do. There's definitely nothing stopping these things from existing in JavaScript. It's just, I think people find type class programming can be a tough hill to climb or something like that, or find it intimidating. JULIE: Yeah. CHARLES: But I think it's actually quite powerful. And I think one of the things that I'm coming to realize is that these are well-worn pathways for composing things. JULIE: Right. CHARLES: So, what you encounter in the wild is people generating these one-off ways of composing things. And so, for a shop like ours, we did a lot of Ruby on Rails, a lot of Ember, and both of those frameworks have very strong philosophical underpinnings that's like “You shouldn't be reinventing the wheel if you don't have to.” I think that all of these patterns even though they have crazy quixotic esoteric names, they are the wheels, the gold standard of wheel. [Laughs] They're like… JULIE: Right. CHARLES: We should not be reinventing. And so, that's what I'm coming to realize, is I'm into this. And last time you were talking, you were saying “I find monoids so fascinating.” I think it took a little bit while to seep in. But now, I feel like it's like when you look at one of those stereo vision things, like I'm seeing monoids everywhere. It's like sometimes they won't leave me alone. JULIE: In ‘Real World Haskell' there's a line I've always liked. And I'm going to misquote it slightly but paraphrasing at least. “Monoids are ubiquitous in programming. It's just in Haskell we have the ability to just talk about them as monoids.” CHARLES: Yeah, yeah. JULIE: Because we have a name and we have a framework for gathering all these similar things together. CHARLES: Right. And it helps you. I feel like it helps you because if you understand the mechanics of a monoid, you can then when you encounter a new one, you're 90% there. JULIE: Right. CHARLES: Instead of having to learn the whole thing from scratch. JULIE: Right. And as you see them over and over again, you develop a kind of intuition for when something is monoidal or something looks like a semiring. And so, you get a certain intuition where you think, “Oh, this thing is like a… this is a monad.” And so, what do I know about monads? All of a sudden, this new situation like all these things that I know about monads, I can apply to this new situation. And so, you gain some intuition for novel situations just by being able to relate them to things you already do know. CHARLES: Exactly. I want to pause here for people. The other thing that I think I've come in the last three months to embrace is just embrace the terminology. JULIE: Yeah. CHARLES: You got to just get over it. JULIE: [Chuckles] CHARLES: Think about it like learning a foreign language. The example I give is like tasku is the Finnish word for pocket. JULIE: Right. CHARLES: It sounds weird, right? Tasku. But if you say it 10 times and you think “Pocket, pocket, pocket, pocket, pocket.” JULIE: Yes, yeah. [Laughs] CHARLES: Then it's like, this is a very simple, very useful concept. JULIE: Right. CHARLES: And it's two-sided. There on the one hand, the terminology is obtuse. But at the same time, it's not. It's just, it is what it is. And it's just a symbol that's referencing a concept. JULIE: Right, right. CHARLES: It's a simple concept. 
So, I just want to be… I know for our listeners, I know that there's a general admonition. Don't worry about the terminology. It's… JULIE: Right, right. Like what I just said, I said the word ‘monad'. I just threw that out there at everybody, but [chuckles] it doesn't matter which one of these words we'd be talking about or whatever I call them. We could give monads a different name and it's still this concept that once you understand the concept itself, and then you can apply it in new situations, it doesn't matter then what it's called. But it does take getting used to. The words are… well, I think functor is a pretty good word for what it is. If you know the history of functor and how it came to mean what it means, I think it's a pretty good word. CHARLES: Really? So, I would love to know the history. Because functor is mystifying to me. It sounds like, I think the analogy I use is like if George Clinton and a funk parliament had an empire, the provinces, the governors of the provinces would be functors. ELRICK: [Laughs] JULIE: Yes. CHARLES: But [Laughs] that's the closest thing to an explanation I can come up with. JULIE: I might use that. I'm about to give a talk on functors. I might use that. [Laughter] ELRICK: Isn't that the name of the library? Funkadelic? CHARLES: Well, that's the name of the library that I've been… JULIE: [Could be], yeah. ELRICK: That you'd been… CHARLES: That I'd been [writing] for JavaScript. ELRICK: Yeah. CHARLES: That imports all these concepts. JULIE: [Laughs] ELRICK: Yeah. JULIE: Yeah. ELRICK: So awesome. JULIE: Yeah. Yeah, I have… CHARLES: So, what is the etymology of functor? JULIE: Well, as far as I can tell, Rudolf Carnap, the logician, invented the word. I don't know if he got it from somewhere else. But the first time I can find a reference to it is in, he wrote a book about… he was a logician but this is sort of a linguistics book. It's called ‘The Logical Syntax of Language'. And that's the first reference I know of to the word functor. And he was trying to really make language very logically systematic, which natural language is and isn't, right? [Chuckles] CHARLES: Right. JULIE: But he was only concerned with really logically systematizing everything. And so, he used the word functor to describe some kinds of function words in language that relate one part of a sentence to another part of a sentence. CHARLES: Huh. So, what's an example? JULIE: So, the example that I've used in the past is, as far as I know this is not one that Carnap himself actually uses but it's the clearest one outside of that book… well the ones inside the book I don't really think are very good examples because they're not really how people talk. So, the one that I've used to try to explain it is the word ‘not' in English where ‘not' gets applied to the whole sentence. It doesn't really change the logical structure of the sentence. It doesn't change the meaning of the sentence except for now it negates the whole thing. CHARLES: I see. JULIE: And so, it relates this sentence with this structure to a different context, which is now the whole thing has been negated. CHARLES: I see. So, the meaning changes, but the structure really doesn't. JULIE: Right. And it changes the whole meaning. CHARLES: Right. JULIE: Not just part of the sentence. 
So, if you imagine ‘not' applying to an entire sentence because of course we can apply it just to a single word or just to a single phrase and change the meaning just of that word or that phrase, but if you imagine a context where you've applied ‘not' to a whole sentence, to an entire proposition, because of course he's a logician. So, if you've applied ‘not' to an entire proposition, then it doesn't change the structure or the meaning of that proposition per se except for it just relates it to the category of negated propositions. CHARLES: Mmhmm. JULIE: So, that's where it comes from. And… CHARLES: But I still don't understand why he called it functor. JULIE: He's sort of making up… well, actually I think the German might be the same word. CHARLES: Ah, okay. JULIE: Because he was writing in German. Because he's looking for something that evokes the idea of ‘function word'. CHARLES: Oh. JULIE: So, if you were to take the ‘func' of ‘function' [Laughs] and the, I don't know, maybe in German there's some better explanation for making this into a particular word. But that's how I think of it. So, it's ‘function word'. And then category theorists took it from Carnap to mean a way to map a function in this category or when we're talking about Haskell, a function of this type, to a function of another type. CHARLES: Okay. JULIE: And so, it takes the entire function, preserves the structure of the function just like negation preserves the structure of the sentence, and maps the whole thing to just a different context. So, if you had a function from A to B, functor can give you a function from maybe A to maybe B. CHARLES: Right. JULIE: So, it takes the function and just maps it into a different context. CHARLES: Right. So, a JavaScript example is if I've got an array of ints and a function of ints to strings, I can take any array of ints and get an array of strings. JULIE: Right. CHARLES: Or if I have a promise that has an int in it, I can take that same function to get a promise of a string. JULIE: Yeah. CHARLES: Yeah. I had no idea that it actually came from linguistics. JULIE: Yeah. [Laughs] CHARLES: So actually, the category theorists even… it digs deeper than category theory. They were actually borrowing concepts. JULIE: They were, yes. CHARLES: We just always are borrowing concepts. ELRICK: I like the borrowing of concepts. JULIE: Yeah. ELRICK: I think where people struggle with certain things, it's tying it back to something that they're familiar with. So, that's where I get… my mind is like [makes exploding sound] “I now get it,” is when someone ties it back to something that I am… CHARLES: Right. ELRICK: Familiar with. Like Charles' work with the JavaScript, tying it with JavaScript. I'm like, “Oh, now I see what they're talking about.” JULIE: Right. CHARLES: because you realize, you're using these concepts. People are using them, just they're using them anonymously. JULIE: Right. ELRICK: True. CHARLES: They don't have names for them. JULIE: Right. ELRICK: True. CHARLES: It's literally like an anonymous function and you're just taking that lambda and assigning it to a symbol. JULIE: Yeah. CHARLES: You're like “Oh wait. I've been using this anonymous function all over the place for years. I didn't realize. Boom. This is actually a formal concept.” ELRICK: True. And I think when people say like “Don't reinvent the wheel” it's a great statement for someone that has seen a wheel already. [Laughter] ELRICK: You know what I'm saying? 
If you never saw a wheel, then you're going to reinvent the wheel because you're like "Aw man. This doesn't exist." [Chuckles] JULIE: Yeah. ELRICK: But if people are exposed to these concepts, then they wouldn't reinvent the wheel. CHARLES: Right. JULIE: Right. Yeah. CHARLES: Instead of, in some context, calling it a roller. [Chuckles] It's a round thingy. [Laughter] JULIE: Right. Yeah, so that's a little bit what I tried to do in my monoid talk in London. I tried to give some history of monoid, where this idea comes from and why it's worth talking about these things. CHARLES: Yeah. JULIE: Why it's worth talking about the structure. CHARLES: So, why is it worth the… where did it come from and why is it worth talking about? JULIE: Oh, so back when Boole, George Boole, when he decided to start formalizing logic… CHARLES: George Boole also, he was a career-switcher too, right? He was a primary school teacher. JULIE: Right, yeah. CHARLES: If I recall. He actually, he was basically teaching. Primary school is like elementary school in England, right? JULIE: I believe so, yes. CHARLES: Yeah. I think he was like, he was basically the US equivalent of an elementary school teacher who then went on to a second and probably, thankfully a big career that left a big legacy. JULIE: Right. Although no one knew exactly how big the legacy was really, until Claude Shannon picked it up and then just changed the whole world. [Laughs] Anyway, so Boole, when he was trying to come up with a formal algebra of logic so that we could not care so much about the semantic content of arguments (we could just symbolize them and just by manipulating symbols we could determine if an argument was logically valid or not), he was… well, for disjunction and conjunction which is AND and OR – well, disjunction would be the OR and conjunction the AND – he had prior art. He had addition and multiplication to look at. So, addition is like disjunction in some important ways. And multiplication is like conjunction in some important ways. And I think it took me a while to see how addition and disjunction were like each other, but there are some important ways that they're like each other. One of them is that they share their identity values. If you think of, it's sort of like binary addition and binary multiplication because in boolean logic there's only two values: true or false. So, you have a zero and a one. So, if you think of them as being like binary addition and binary multiplication then it's easier to see the connection. Because when we think of addition of just integers in a normal base 10 or whatever, it doesn't seem that much like an OR. [Laughs] CHARLES: Mmhmm. No, it doesn't. JULIE: [Inaudible] like a logical OR. So, it took me a while to see that. But they're also related then to set intersection and union where intersect-… CHARLES: So can… Let's just stop on that for a little bit, because let me parse that. So, for OR I've got two values, like in an 'if' statement. This OR that. If I've got a true value then I can OR that with anything and I'll get the same anything. JULIE: Right. CHARLES: So, true is the identity value of OR, right? Is that what you're saying? So, one… JULIE: Well, it's false that's the identity of OR. CHARLES: Oh, it is? JULIE: Zero is the identity of addition. CHARLES: Wait, but if I take 'false OR one' I get… oh, I get one. JULIE: Right. CHARLES: Okay. So, if I get 'false OR true', I get true. Okay, so false is the identity. JULIE: Yeah. CHARLES: Oh right. You're right. You're right.
Because… okay, sorry. JULIE: So, just like in addition, zero is the identity. So, whatever you add to zero, that's the result, right? You're going to get [the same] CHARLES: Right. JULIE: Value back. So, with OR false is the identity and false is equivalent to zero. CHARLES: [Inaudible] ‘False OR anything' and you're getting the anything. JULIE: Right. So, the only time you'll get a false back is if it's ‘false OR false', right? CHARLES: Right. Mmhmm. JULIE: Yeah. So, false is the identity there. And then it's sort of the same for conjunction where one is the identity of multiplication and one is also the… I mean, true is then the identity of logical conjunction. CHARLES: Right. Because one AND… JULIE: ‘True AND false' will get the false back. [Inaudible] CHARLES: Right. ‘True And true' you can get the true back. JULIE: Yeah. CHARLES: Okay. JULIE: And it's also then true, getting back to what we were talking about, semirings, it's also true that false is a kind of annihilator for conjunction. That's sort of trivial, because… CHARLES: Oh, because you annihilate the value. JULIE: Right. When there's only two values it's a little bit trivial. But it is [inaudible]. So… CHARLES: But it's [inaudible]. Yeah. It demonstrates the point. JULIE: Right. CHARLES: So, if I have yeah, ‘false AND anything' is just going to be false. So, I annihilate whatever is in that position. JULIE: Right. CHARLES: And the same thing as zero is the annihilator for multiplication, right? JULIE: Right. CHARLES: Because zero times anything and you annihilate the value. JULIE: Yeah. CHARLES: And now I've got… okay, I'm seeing it. I don't know where you're going with this. [Laughter] ELRICK: Yeah. CHARLES: But I'm there with you. ELRICK: Yup. JULIE: And then it turns out there are some operations from set theory that work really similarly. So, intersection and union are similar but the ones that are closer to conjunction/disjunction are disjoint unions and cartesian products. So we don't need to talk about those a whole lot if you're not into set theory. But anyway… CHARLES: I like set theory although it's so hard to describe without pictures, without Venn diagrams. JULIE: It is. It really is, yeah. So anyway, all of these things are monoids. And they're all binary associative operations with identity elements. So, they're all monoids. And so, we've taken operations on sets, operations on logical propositions, operations on many kinds of numbers (because not all kinds of addition and multiplication I guess are associative), and we can kind of unify all of those into the same framework. And then once we have done that, then we can see that there's all these other ‘sets'. Because most of the kinds of numbers are sets and there are operations on generic sets with set theory. So, now we can say “Oh. We can do these same kinds of operations on many other kinds of sets, many other varieties of sets.” And we can see that same pattern. And then we can get a kind of intuition for “Well, if I have a disjunctive monoid where I'm adding two things or I'm OR-ing two things…” Because even though those are logically very similar, intuitively and in terms of what it means to concatenate lists versus choosing one or the other, those obviously have different practical effects. CHARLES: So, I'm going to try and come up with some concrete examples to maybe… JULIE: Okay, yeah. CHARLES: A part of them will probably be like in JavaScript, right? So, to capture the idea of a disjunctive monoid versus a conjunctive monoid. 
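[Editor's aside: the identities and annihilators worked out above, rendered as a small Haskell sketch using the Any (OR), All (AND), Sum, and Product wrappers from the base library. The particular numbers are arbitrary.]

```haskell
import Data.Monoid (All (..), Any (..), Product (..), Sum (..))

main :: IO ()
main = do
  -- False is the identity for OR, just as 0 is for addition:
  print (getAny (Any False <> Any True))                      -- True
  print (getSum (Sum 0 <> Sum 5 :: Sum Int))                  -- 5
  -- True is the identity for AND, just as 1 is for multiplication:
  print (getAll (All True <> All False))                      -- False
  print (getProduct (Product 1 <> Product 7 :: Product Int))  -- 7
  -- Annihilators: False AND-ed with anything is False,
  -- and 0 multiplied by anything is 0.
  print (getAll (All False <> All True))                      -- False
  print (getProduct (Product 0 <> Product 7 :: Product Int))  -- 0
```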
So, a disjunctive monoid is like, so in JavaScript we've got two objects. You concat them together and it's like two maps or two hashes. So, you mash them together and you get… so, for the disjunctive one you'd have all the keys from both of the hashes inside the resulting object. You take two objects. Basically we call it object assign in JavaScript where you have basically the empty object. You can take the empty object and then take any number of objects. And so, we talked about… JULIE: That would become a disjunctive monoid, right? CHARLES: That would be a disjunctive monoid because you're like basically, you're OR-ing. Yeah. JULIE: You're kind of, [inaudible] CHARLES: Hard to find the terminology. JULIE: Yeah. CHARLES: But like object assign would be a disjunctive monoid because you're like mashing these two objects. And the resulting object has all of the things from both of them. JULIE: Right. So, it's like a sum of the two, right? CHARLES: Right, right. Okay, so then another one would be like min or max where you've got this list of integers and you can basically take any two integers and you can mash them together and if you're using min, you get the one that's smaller. Basically, you're collapsing them into one value but you're actually just choosing one of them. Is that like… JULIE: Yeah. CHARLES: Would that be like a conjunctive monoid? JULIE: No, that's also disjunctive but that's more like an OR than like a sum. CHARLES: Okay. JULIE: Right. So, that's what I said. It's hard to think of disjunctive monoids I think because there's really two varieties. There's some underlying logical similarity, like the similarity in the identity values. But they're also different. Summing two things versus choosing one or the other are also very different things in a lot of ways. CHARLES: Right. Okay. JULIE: And so, I think the conjunctive monoids are all a little bit more similar, I think. [Chuckles] But the disjunctive monoids are two broad categories. And we don't really have a monoid in Haskell of lists where you're choosing one or the other. The basic list monoid is you're concatenating them. So, you're adding two lists or taking the union of them. But for maybe, the maybe type, we do have monoids in Haskell where you're just choosing either the first just value that comes up or the last just value that comes up. So, we do have a monoid of choice over the maybe type. And then we have a type class called alternative which is monoids of choice for… so, they're disjunctive monoids but instead of adding the two things together, they're choosing one or the other. CHARLES: Okay. JULIE: Though we have a type class for that. [Laughs] CHARLES: [Sighs] Oh wow. Yeah. JULIE: Mmhmm, yeah. CHARLES: I'll have to go read up on that one. JULIE: That type class comes up the most when you're parsing, because you can then parse… like if you found this thing, then parse this thing. But if you haven't found this thing, then you can keep going. And if you find this other thing later, then you can take that thing. So, you allow the possibility of choice. The first thing that you come to that matches, take that thing or parse that thing. So, that type class gets mostly used for parsing but it's not only useful for parsing. CHARLES: Okay. JULIE: So yeah. That's most of the time when I've used it. CHARLES: Is this when you're like parsing JSON? Or is this when you're just searching some stream for some value? Like you just want to run through it until you encounter this value? Or how does that…? JULIE: Right.
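[Editor's aside: the "monoid of choice over the maybe type" lives in base as the First and Last wrappers in Data.Monoid, and the Alternative type class Julie mentions provides the same left-biased choice through its <|> operator. A minimal sketch, assuming only base:]

```haskell
import Control.Applicative ((<|>))
import Data.Monoid (First (..), Last (..))

main :: IO ()
main = do
  -- First keeps the first Just it sees; Last keeps the last one.
  print (getFirst (First (Just 1) <> First Nothing <> First (Just 3)))
  -- Just 1
  print (getLast (Last (Just 1) <> Last Nothing <> Last (Just 3)))
  -- Just 3
  -- Alternative's <|> is the same left-biased choice on Maybe:
  print (Nothing <|> Just "flag" <|> Just "other")
  -- Just "flag"
```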
Say you want to run through it until you find either this value or this value. I've used it when I've been parsing command line arguments. So, let's say I have some flags that can be passed in on my command line command. There are some flags that could be passed in. So, we'll parse until we find this thing or this thing. This flag or this flag. So, if you find this flag, then we're going to go ahead and parse that and do whatever that flag says to do. If you don't find that first flag then we can keep parsing and see if you find this other flag, in which case we'll do something different. CHARLES: Okay. JULIE: It'll take the first match that it finds. Does that make sense? CHARLES: Yeah, yeah, yeah. It does. But I'm not connecting how it's a monoid. [Laughs] JULIE: How is that a monoid? Well, because it's a monoid of OR-ing. CHARLES: What's the identity value or the empty value in that case? JULIE: Well, the empty value would be… let's say you have maybes. Let's say you have some kind of maybe thing, so your parser is going to return maybe this thing, maybe whatever you're parsing. Like maybe string. CHARLES: Yeah, yeah. JULIE: So, it's going to return a maybe string. So well, nothing would be the empty. CHARLES: Okay. JULIE: But nothing is like the zero because it's a disjunction, logical OR. So, only when you have two nothings will you get back a nothing. Otherwise, it will take the first thing that it finds. CHARLES: Okay. I see. JULIE: Yeah. So, the identity then is the nothing, like false is the identity for disjunction. CHARLES: Mmhmm. Okay. JULIE: Yeah. CHARLES: [Inaudible] JULIE: Yeah. If you have nothing or this other thing, then you return this other thing. Then you return the maybe string. If you have two nothings, then you get in fact nothing. Your parsing has failed. CHARLES: Right, because you've got nothing. JULIE: Because you've got nothing. There was nothing to give you back. CHARLES: So, you concatenated all of the things together and you ended up with nothing. JULIE: Right, because there was nothing there. CHARLES: Right. [Laughs] JULIE: You found nothing. So, it's useful when you've got some possibilities that could be present and you just want to keep parsing until you find the first one that matches. And then it'll just return whatever. It'll just parse the first thing that it matches on. CHARLES: Okay, okay. JULIE: Does that make sense? CHARLES: Yeah. No, I think it makes sense. JULIE: I'm not sure. Because I feel like I kind of went down a rabbit hole there. [Laughs] CHARLES: Yeah. [Laughs] No, no. I think it makes sense. And as a quick aside, I think… so, I was, when we were talking about min and max, are min and max also like a semiring? Because negative infinity is the annihilator of min and it's the identity of max. And positive infinity is the annihilator of max but it's the identity of min. JULIE: I guess. I don't really think of min and max as having identities. Is that how [inaudible]? CHARLES: I'm just, I don't know. Well, I think if you have negative infinity and you max it with anything, you're going to get the anything, right? Negative infinity max one is one. Negative infinity max minus a billion is minus a billion. JULIE: Yeah, okay. CHARLES: I don't know. Just off the cuff. I'm just trying to… annihilators sound cool. And so… [Laughter] CHARLES: And so I'm like, I'm trying to find annihilators. JULIE: Yeah, they are cool.
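[Editor's aside: a toy version of the flag-parsing idea, not a real parser library. The findFlag helper and the flag names are invented for illustration; <|> does the choosing, and Nothing is the identity, exactly as described.]

```haskell
import Control.Applicative ((<|>))
import Data.List (isPrefixOf)

-- Hypothetical helper: return the first argument starting with a flag.
findFlag :: String -> [String] -> Maybe String
findFlag flag args =
  case filter (flag `isPrefixOf`) args of
    (hit : _) -> Just hit
    []        -> Nothing

-- Try "--verbose" first; if that yields Nothing, fall back to "--quiet".
-- Nothing <|> Nothing is Nothing: the parse has failed, as in the episode.
chooseMode :: [String] -> Maybe String
chooseMode args = findFlag "--verbose" args <|> findFlag "--quiet" args

main :: IO ()
main = do
  print (chooseMode ["--quiet", "--color"])    -- Just "--quiet"
  print (chooseMode ["--verbose", "--quiet"])  -- Just "--verbose"
  print (chooseMode ["--color"])               -- Nothing
```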
CHARLES: [Laughs] JULIE: One of my friends on Twitter was just talking about how he used the intuition at least of a semiring at work because he had this sort of monoid to concatenate schedules. So, he's got all these different schedules and he's got this kind of monoid to concatenate them, to merge the schedules together. But then he's got this one schedule that is special. And whenever something is in this schedule, it needs to hard override every other schedule. CHARLES: Right. JULIE: And so, that was like the annihilator. So, he was thinking of it as a semiring, because that hard override schedule is like the annihilator of all the other schedules. CHARLES: Yeah. JULIE: If anything else exists on this day or whatever, then it'd just get a hard override. So, there's a real world use. [Laughs] CHARLES: Yeah, a real world example. That's the thing that I'm finding, is that all these really very crystalline abstractions, they still play out very well I think in the real world. And they're useful as a tool in terms of casting a net over a problem. Because you're like… when I'm faced with something new, I'm like "Well, let's see. Can I make it a functor?" And if I can, then I've unlocked all these goodies. I've unlocked every single composition pattern that works with functor. JULIE: Right, right. CHARLES: And it's like sometimes it fits. It almost feels like when you're working on something at home and you've got some bolt and you're trying on different diameters. So you're like, "Oh, is it 15 millimeter? Is it 8 millimeter?" JULIE: Right. [Laughs] CHARLES: "Like no, okay. Maybe it'll work with this." But then when it clicks, then you can really ratchet with some serious torque. JULIE: Right, right. Yeah. CHARLES: So, yeah. Definitely trying to look for semirings [Laughs] is definitely beyond my ken at this point. But I hope to get there where it can be like, if it's a fit, it's a fit. That's awesome. JULIE: Right. Yeah, it's kind of beyond my ken too. Semirings are still a little bit new for me and I can't say that I find them in the wild as it were, as often as monoids or something. But I think it just takes seeing some concrete examples. So, now you know this idea exists. If you just have some concrete examples of it, then over time you develop that intuition, right? CHARLES: Right. JULIE: Like "Okay, I've seen this pattern before." [Chuckles] CHARLES: Yeah. Basically, every time now I want to fold a list, or like in JavaScript, any time you want to reduce something I'm like "There's a monoid here that I'm not seeing. Let me look for it." JULIE: Yeah. Oh, that's cool, yeah. CHARLES: Because like, that's basically, most of the time you're doing a reduce, then like I said that's the terminology for fold in JavaScript, is you start with some reducible thing. Then you have an initial value and a function to actually concatenate two things together. JULIE: Right. CHARLES: And so, usually that initial state, that's your identity. And then that function is just your concat function from your monoid. And so, usually anytime I do a reduce, there's the three pieces. Boom. Identity value, concatenation function, it's usually right there. And so, that's the way I've found of extracting these things, is I'm very suspicious every time I'm tempted to… JULIE: [Laughs] CHARLES: A fold. I'm like "Hmm. Where's the monoid I'm missing? Is it [under the] couch?" Like, where is it? [Laughs] Because it just, it cleans it up and it makes it so much more concise. JULIE: Oh yeah, that's awesome.
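[Editor's aside: Charles's reduce-as-monoid observation in code. A fold with the identity as its seed and the monoid's operation as its combining function gives the same answer as mconcat; only base is assumed.]

```haskell
import Data.Monoid (Sum (..))

main :: IO ()
main = do
  let xs = [1, 2, 3, 4] :: [Int]
  -- A reduce: a seed value plus a combining function...
  print (foldr (+) 0 xs)                           -- 10
  -- ...is the Sum monoid's identity and operation in disguise:
  print (getSum (mconcat (map Sum xs)))            -- 10
  -- Same shape for lists: [] is the identity, (++) the concat.
  print (foldr (++) [] [[1], [2, 3], [4 :: Int]])  -- [1,2,3,4]
```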
CHARLES: So anyhow. JULIE: Have we totally lost Elrick? ELRICK: Nope, I'm still here. JULIE: Okay. [Laughs] ELRICK: Sitting in and listening to you two break down these complex topics is really good. Because you guys break them down to a level where it's consumable by people that barely understand it. So, I'm just sitting here just soaking everything in like "Oh, that's awesome." Taking notes. Yes, okay, okay. [Laughter] JULIE: Cool. ELRICK: So, I'm like riding the train in the back just hanging out, feeling the cool breeze while you guys just pull the train ahead in… [Laughter] ELRICK: In the engine department, you know? It's awesome. CHARLES: Yeah. ELRICK: I don't know if they're related. But you were talking about semirings and I've heard of semigroups. I have no idea if those two things are related. Are they related or [inaudible]? JULIE: They're kind of related. So, a semigroup is like a monoid but doesn't have an identity value. CHARLES: What is an example of a semigroup out there in the wild? Because every time I find a semigroup, I feel like it's actually a monoid. JULIE: Well, you know I feel like that a lot, too. We do have a data type in Haskell that is a non-empty list. So, there is no empty list. CHARLES: Ah, right. Okay. JULIE: So then you can concatenate those lists, but there's never an identity value for it. CHARLES: I see. JULIE: Yeah. So, that's a case. There's actually a lot of comparison functions, greater than and less than. I think those are semigroups because they're binary, they're associative, but they don't have an identity value. Like if you're comparing two numbers, there's not really an identity value there. CHARLES: Right. Well, would the negative infinity work there? Let's see. Like, negative infinity greater than anything would be the anything. Well, okay wait. But greater than, that takes numbers and yields a boolean, right? JULIE: Yeah. CHARLES: Right. So, it couldn't be… could it be a semigroup? Don't semigroups have to… Doesn't the [inaudible] function have to yield the same type as the operands? JULIE: Yes. CHARLES: But a non-empty list, that's a good one. Sometimes it's basically not valid for you to have a list that doesn't have any elements, right? Because it's like the null value or the empty value and it could be like a shopping cart on Amazon. You can't have a shopping cart without at least something in it. JULIE: Right. CHARLES: Or, you can't check out without something. So, you might want to say like the shopping cart that I'm going to check out is a non-empty list. And so, you can put two non-empty lists together. But yeah, there's no value you can mash together, you can concat with anything, that isn't empty. JULIE: Right. CHARLES: So, I guess going back to your question Elrick, I don't know if it's related to semiring. But semigroup is just, it's like one-half of monoid. It's the part that concats two values together. JULIE: Right. Well, yeah. And so, it's supposed to be half a group, right? But I don't remember… CHARLES: [Laughs] JULIE: [Inaudible] all of the group stuff is, all the stuff that these types have to have to be a group. And similarly, I forget what the difference between semiring and ring is. [Chuckles] Because a ring and a group I know are not the same thing. But I forget what the difference is, too.
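[Editor's aside: Julie's non-empty list lives in base as Data.List.NonEmpty. It has a Semigroup instance but no Monoid instance, because there is no empty value to serve as the identity. A sketch using Charles's shopping-cart framing:]

```haskell
import Data.List.NonEmpty (NonEmpty (..), toList)

main :: IO ()
main = do
  let cart1 = "book" :| []         -- a cart can never be empty
      cart2 = "pen" :| ["ink"]
  -- (<>) concatenates two non-empty lists; the result is non-empty too.
  print (toList (cart1 <> cart2))  -- ["book","pen","ink"]
  -- But there is no mempty :: NonEmpty a, so it is a Semigroup only.
```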
So, I kind of got a handle on what semigroups are, and I know all my Haskell friends are going to, when they hear this podcast they're going to tweet all these examples of semigroups at me, especially my coauthor for 'Joy of Haskell', Chris Martin. He's really into semigroups. And so, I know he's going to be very disappointed in my inability to think… [Laughter] JULIE: To think of any good examples. But it's not something that I find myself using a lot, whereas semirings are something that I have started noticing a little bit more often. So, how a monoid relates to a group is something that I can't remember off the top of my head. And I know how semirings relate to monoids, but how monoids then relate to rings and groups, I can't really remember. And so, these things are sort of all related. But the relation is not something I can spill out off the top of my head. Sorry. [Laughs] CHARLES: No, it's no worries. You know, I feel like… ELRICK: It's all good. CHARLES: What's funny is I feel like having these discussions is exactly like the discussions people have with any framework, like the one we use a lot, which is EmberJS. But you could do it with React or something. It's like, how does the model relate to the controller, relate to the router, relate to the middleware, relate to the services? You just have these things, these moving parts that fit together. And part of… I feel like exploring this space is really, absolutely no different than exploring any other software framework where you just have these things, these cooperating concepts, and they do click together. But you just have to map out the space in your head. JULIE: Yeah. This is going to sound stupid because everybody thinks that because I know Haskell I must know all these other things. But I just had to ask people to recommend me a book that could explain the relationship of HTML and CSS, because that was completely opaque to me. CHARLES: [Laughs] Yeah. JULIE: I've been involved in the making now of several websites because of the books and stuff like that. And I have a blog. It's not WordPress or anything. I did that sort of myself. So, I've done a little bit with that. But CSS is really terrifying. And… CHARLES: Right. Like query selectors, rules, properties. JULIE: Yeah. ELRICK: [Laughs] CHARLES: Again, might as well be groups and semigroups and monoids, right? JULIE: Right, right. ELRICK: Yeah. CHARLES: [Laughs] ELRICK: That is really interesting. [Chuckles] I've never heard anyone make that comparison before. But it's totally true, now that I'm thinking about it. JULIE: Yeah, yeah. CHARLES: Yeah. In the tech world we are so steeped in our own jargon that we could be… we can reject one set of jargon and be totally fine with another set. Or be like, suspicious of one set of concepts working together and be totally fine with these other designations which are somewhat arbitrary but they work. JULIE: Right. CHARLES: So, people use them. JULIE: So, it's like what you've gotten used to and what you're familiar with and that seems normal and natural to you. [Chuckles] So, the Haskell stuff, most of it seems normal and natural to me. And then I don't understand HTML and CSS. So, I bought a book. [Laughter] CHARLES: Learning HTML and CSS from first principles. JULIE: Yes, yeah. I just wanted to understand. I could tell that they do relate to each other, that there is some way that they click together. I can tell that by banging my head against them repeatedly. But I didn't really understand how, and so yeah.
So, I've been reading this book to [Laughs] [learn] HTML and CSS and how they relate together. That's so important, just figuring out how things relate to each other, you know? CHARLES: Yeah. ELRICK: Yeah. That is very true. JULIE: Yeah. ELRICK: We can trade. I can teach you HTML and CSS and you can teach me Haskell. JULIE: Absolutely. ELRICK: [Laughs] CHARLES: There you go. JULIE: [Laughs] ELRICK: Because I'm like, "Ooh." I'm like, "Oh, CSS. Great. No problem." [Laughter] ELRICK: Haskell, I'm like "Oh, I don't know." JULIE: Yeah. CHARLES: Yeah. ELRICK: [Laughs] CHARLES: No, it's amazing [inaudible] CSS. ELRICK: Yeah. CHARLES: It is, it's a complicated system. And it's actually, it's in many ways, it's actually a pretty… it's a pretty functional system, CSS is at least. The DOM APIs are very much imperative and about mutable state. But CSS is basically yeah, completely declarative. JULIE: Right. CHARLES: Completely immutable. And yeah, the workings of the interpreter are a mystery. [Laughs] ELRICK: Yup. JULIE: Yes. And you know, for the Joy of Haskell website we use Bootstrap. And so, there was just like… there's all this magic, you know? [Laughs] ELRICK: Oh, yeah. CHARLES: Yeah. JULIE: Oh look, if I just change this little thing, suddenly it's perfectly responsive and mobile. Cool. [Laughter] JULIE: I don't know how it's doing this, but this is great. [Laughs] CHARLES: Yeah. Oh, yeah. It's an infinite space. And yeah, people forget that what seems so easy and intuitive is not, and that there's actually a lot of learning that happened there that they're just taking for granted. JULIE: I think so many people start from HTML and CSS. That's one of their first introductions to programming, or JavaScript or some combination of all three of those. And so, to them the idea that you would be learning Haskell first and then coming around and being like "Okay, I have to figure out HTML," that [seems very] strange, right? [Laughter] CHARLES: Yeah. Well, definitely probably stepping into bizarro world. JULIE: And I went backwards. But [Laughs] CHARLES: Yeah. JULIE: Not that it's backwards in terms of… just backwards in terms of the normal way, progression of [inaudible] CHARLES: Yeah. It's definitely the back door. Like coming in through the catering kitchen or something. JULIE: Yes. CHARLES: Instead of the front door. Because you know the browser, you can just open up the Dev Tools and there you are. JULIE: Exactly, yeah. CHARLES: The level of accessibility is pretty astounding. And so, I think it's why it's one of the most popular avenues. JULIE: Oh, definitely. Yeah. ELRICK: It's the back door probably for web development but not the back door for programming in general. JULIE: Mm, yeah. Yeah. CHARLES: Yeah. It seems like Haskell programming has really started taking off and that the ecosystem is starting to get some of the trappings of a less friction-filled developer experience in terms of the package management and a command line experience and being able to not make all of the tiny little decisions that need to be made before you're actually writing 'hello world'. JULIE: Right. ELRICK: Interesting. Haskell has a package manager now? CHARLES: Oh, it has for a while. ELRICK: Oh, really? What is it called? I have no idea. Do you know the name off the top of your head? CHARLES: So, I actually, I'm not that familiar with the ecosystem other than every time I try it out. So I definitely will defer this question to you, Julie. JULIE: This is going to be a dumb question, I guess.
What do we mean by package manager? CHARLES: So, in JavaScript, we have npm. The concept of these packages. It's code that you can download, a module that you can import, basically import symbols from. And Ruby has RubyGems. And Python has pip. JULIE: Okay, okay. CHARLES: Emacs has Emacs Packages. And usually, there's some repository and people could publish to them and you can specify dependencies. JULIE: Right, yeah. Okay, so we have a few things. Hackage is sort of the main package repository. And then we have another one called Stackage and the packages that are in Stackage are all guaranteed to work with each other. CHARLES: Mm, okay. JULIE: So, on Hackage, some of the packages that are on Hackage are not really maintained or they only work with some old versions of dependencies and stuff like that, so the people who made Stackage were like "Well, if we had this set of packages that were all guaranteed to work together, the dependencies were all kept updated and they all can be made to work together, then that would be really convenient." And then Cabal and Stack are the main… and a lot of people use Nix for the same purpose that you would use Cabal or Stack for, building projects and importing dependencies and all of that. CHARLES: Right. So, Cabal and Stack would be roughly equivalent then to the way we use Yarn for JavaScript and Bundler for Ruby. You're solving the equation for, here's my root set of dependencies. Go out and solve for the set of packages that satisfy. Give me at least one solution and then download those packages and [you can] run them. JULIE: Yeah, yeah. Right, so managing your dependencies and building your project. Because Haskell's compiled, so you've got to build things. And so yeah, we have both of those. CHARLES: And now there's like web frameworks and REST frameworks. JULIE: Oh there are, yeah. We have… CHARLES: All kinds of stuff now. JULIE: We had this big proliferation of web frameworks lately. And I guess some of them are very good. I don't really do web development. But the people I know who do web development in Haskell say that some of these are very good. Yesod is supposed to be very good. Servant is sort of the new hotness. And I haven't used Servant at all though, so don't ask me questions about it. [Laughter] JULIE: But yeah, we have several big web frameworks now. There are still probably some big holes in the Haskell ecosystem in terms of what people want to see. So, that's one thing that people complain about Haskell for, is that we don't have some of the libraries they'd like to see. I'd like to see something… I would really like to see in Haskell something along the lines of like NLTK from Python. CHARLES: What is that? JULIE: Natural language toolkit. CHARLES: Oh, okay. JULIE: So yeah, Python has this… CHARLES: Yeah, Python's got all the nice science things. JULIE: They really do. And Haskell has some natural language processing libraries available but nothing along the lines of, nothing as big or easy to use and stuff as NLTK yet. So, I'd really like to see that hole get filled a little bit better. And you know… CHARLES: Well, there you go. If anyone out there is seeking fame and fortune in the Haskell community. JULIE: That's actually why I started learning Python, was just so that I could figure out NLTK well enough to start writing it in Haskell. [Laughter] JULIE: So, that's sort of my ambitious long-term project. We'll see how that goes. [Laughs] CHARLES: Nice.
Before we wrap up, is there anything going on, coming up, that you want to give a shoutout to or mention or just anything exciting in general? JULIE: Yeah, so on March 30th I'm going to be giving a talk at lambda-squared which is going to be in Knoxville and is a new conference. I think it's just a single-day conference and I'm going to be giving a talk about functors. So, I'm going to try to get through all the exciting varieties of functors in a 50-minute talk. CHARLES: Ooh. JULIE: So, we'll see how that goes. Yeah. And I am still working with Chris Martin on 'The Joy of Haskell' which should be finished this year, sometime. I'm not going to… [Laughter] JULIE: Give any more specific deadline than that. And in the process of writing Joy of Haskell, I was telling him about some things that, some things that I think are really difficult. Like, in my experience teaching Haskell, some places where I find people have the biggest stumbling blocks. And I said, "What if we could do a beginner video course where instead of throwing all of these things at people at once, we separated them out?" And so, you can just worry about this set of stumbling blocks at one time and then later we can talk about this set of stumbling blocks. And so, we're doing… we're going to start a video course, a beginner Haskell video course. I think we'll be starting later this month. So, I'm pretty excited… CHARLES: Nice. JULIE: About that. Yeah. CHARLES: Yeah, I know a lot of people learn really, really well from videos. There's just some… JULIE: Yeah. [Inaudible] for me, so I'm a little nervous. But [Laughs] CHARLES: Yeah, especially if you can do… are you going to be doing live coding examples? Building out things with folks? JULIE: Yeah. CHARLES: Yeah. Well, you need look no further than the popular things like RailsCasts and some of the… yeah, there's just so much good video content out there. Yeah, we'll definitely be looking out for them. JULIE: Cool. CHARLES: Alright. Well, thank you so much, Julie, for coming on. JULIE: Well, thank you for having me on. Sorry I went down some… I went kind of down some rabbit holes. Sorry about that. [Laughs] CHARLES: You know what? You go down the rabbit holes, we spend time walking around the rabbit holes. JULIE: [Laughs] CHARLES: There's something for everybody. So… [Laughter] CHARLES: And ultimately we're strolling through the meadow. So, it's all good. JULIE: [Laughs] Yeah. CHARLES: Thank you too, Elrick. JULIE: It was nice talking to you guys again. CHARLES: Yeah. ELRICK: Yeah, thank you. CHARLES: If folks want to follow up with you or reach out to you, what's the best way to get in contact with you? JULIE: I'm @argumatronic on Twitter and my blog is argumatronic.com which has an email address and some other contact information for me. So, I'd love to hear questions, comments. [Laughs] Yeah. I always [inaudible]. CHARLES: Alright, fantastic. JULIE: To talk to new people. CHARLES: Alright. And if you want to get in touch with us, we are @TheFrontside on Twitter. Or you can just drop us an email at contact@frontside.io. Thanks everybody for listening. And we will see you all later.
Rachel White gives a biography of both Alicia Boole Stott and Mary Everest Boole, a mother and daughter from the 19th century. Alicia, the daughter of Mary Everest Boole and George Boole, developed the theory of polytopes, which remain a huge area of study today! Mary was an esteemed mathematician in her own right, with a number of interesting hobbies that Rachel speaks about. This podcast is part of Damien Adams' series Women in Math: The Limit Does Not Exist.
Boolean algebra is a set of rules for describing a problem whose outcome is either true or false. The rules were formulated by an English mathematician named George Boole. He's the namesake both of the rules and of the boolean type in statically typed languages. The post Boolean Algebra appeared first on Complete Developer Podcast.
Show notes at ocdevel.com/mlg/2 Updated! Skip to [00:29:36] for Data Science (new content) if you've already heard this episode. What is artificial intelligence, machine learning, and data science? What are their differences? AI history. Hierarchical breakdown: DS(AI(ML)). Data science: any profession dealing with data (including AI & ML). Artificial intelligence is simulated intellectual tasks. Machine Learning is algorithms trained on data to learn patterns to make predictions. Artificial Intelligence (AI) - Wikipedia Oxford Languages: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AlphaGo Movie, very good! Sub-disciplines Reasoning, problem solving Knowledge representation Planning Learning Natural language processing Perception Motion and manipulation Social intelligence General intelligence Applications Autonomous vehicles (drones, self-driving cars) Medical diagnosis Creating art (such as poetry) Proving mathematical theorems Playing games (such as Chess or Go) Search engines Online assistants (such as Siri) Image recognition in photographs Spam filtering Prediction of judicial decisions Targeting online advertisements Machine Learning (ML) - Wikipedia Oxford Languages: the use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data. Data Science (DS) - Wikipedia Wikipedia: Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from noisy, structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains. Data science is related to data mining, machine learning and big data. History Greek mythology, Golems First attempt: Ramon Llull, 13th century Da Vinci's walking animals Descartes, Leibniz 1700s-1800s: Statistics & Mathematical decision making Thomas Bayes: reasoning about the probability of events George Boole: logical reasoning / binary algebra Gottlob Frege: Propositional logic 1832: Charles Babbage & Ada Byron / Lovelace: designed Analytical Engine (1832), programmable mechanical calculating machines 1936: Universal Turing Machine Computing Machinery and Intelligence - explored AI! 1946: John von Neumann Universal Computing Machine 1943: Warren McCulloch & Walter Pitts: cogsci rep of neuron; Frank Rosenblatt uses it to create the Perceptron (-> neural networks by way of MLP) 50s-70s: "AI" coined @Dartmouth workshop 1956 - goal to simulate all aspects of intelligence. John McCarthy, Marvin Minsky, Arthur Samuel, Oliver Selfridge, Ray Solomonoff, Allen Newell, Herbert Simon Newell & Simon: Heuristics -> Logic Theorist, General Problem Solver Selfridge: Computer Vision NLP Stanford Research Institute: Shakey Feigenbaum: Expert systems GOFAI / symbolism: operations research / management science; logic-based; knowledge-based / expert systems 70s: Lighthill report (James Lighthill), big promises -> AI Winter 90s: Data, Computation, Practical Application -> AI back (90s) Connectionism optimizations: Geoffrey Hinton: 2006, optimized back propagation Bloomberg, 2015 was a whopper for AI in industry AlphaGo & DeepMind
Açık Bilinç, 22 December 2015. On this episode of Açık Bilinç, we talk with Prof. Dr. Cem Say of the Computer Engineering Department at Boğaziçi University about George Boole's contributions to logic and mathematics, on the 200th anniversary of his birth. One of the most important figures in the theoretical lineage extending to Alan Turing, Boole wrote "The Laws of Thought" (1854), a book that not only transformed the field of logic but also laid the theoretical foundations of computer science.
Professor Flood gives a fabulous overview of the lives and work of two mathematicians, Hamilton and Boole: http://www.gresham.ac.uk/lectures-and-events/hamilton-boole-and-their-algebras William Rowan Hamilton (1805-1865) revolutionized algebra with his discovery of quaternions, a non-commutative algebraic system, as well as his earlier work on complex numbers. George Boole (1815-1864) contributed to probability and differential equations, but his greatest achievement was to create an algebra of logic, 'Boolean algebra'. These new algebras were not only important to the development of algebra but remain in current use. The transcript and downloadable versions of the lecture are available from the Gresham College website: http://www.gresham.ac.uk/lectures-and-events/hamilton-boole-and-their-algebras Gresham College has been giving free public lectures since 1597. This tradition continues today, with all of our five or so public lectures a week being made available for free download from our website. There are currently over 1,800 lectures free to access or download from the website. Website: http://www.gresham.ac.uk Twitter: http://twitter.com/GreshamCollege Facebook: https://www.facebook.com/greshamcollege Instagram: http://www.instagram.com/greshamcollege
Grid cells and time: Animals navigate by calculating their current position based on how long and how far they have travelled, and a new study on treadmill-running rats reveals how this happens. Neurons called grid cells collate the information about time and distance to support memory and spatial navigation, even in the absence of visual landmarks. New research by Howard Eichenbaum at Boston University has managed to separate the space and time aspects in these cells, challenging currently held views of the role of grid cells in the brain. Boole: It's the 200th anniversary of the birth of George Boole. We speak to Professor Des MacHale, his biographer at University College Cork, and Dr Mark Hocknull, historian of science at the University of Lincoln, in the city where Boole was born, to uncover Boole's unlikely rise to Professor of Mathematics, given his lack of formal academic training. We discuss the impact of his work at the time, and his legacy for the modern digital age. How your brain shapes your life: It weighs 3lbs, takes 25 years to reach maturity and, uniquely among bits of our bodies, damage to your brain is likely to change who you are. Neuroscientist David Eagleman's new book, The Brain: The Story of You, explores the field of brain research. New technology is providing a flood of data. But what we don't have, according to Eagleman, is the theoretical scaffolding on which to hang this. Why do brains sleep and dream? What is intelligence? What is consciousness? Producer: Fiona Roberts.
The British mathematician George Boole is being honoured today by Google's Doodle. Today would have been Boole's 200th birthday. But who was George Boole? Melissa Tighy spoke with Prof. Johan Engelbrecht of the University of Pretoria, executive director of the South African Mathematics Foundation.
The inspiration for Arthur Conan Doyle's creation Professor Moriarty, literature's most ingenious villain and Sherlock Holmes's archenemy, was apparently George Boole, who was Professor of Mathematics at UCC Cork between 1849 and 1864. Joining Dave from our Cork studio are Des MacHale, Boole's biographer, and Bill Liao, co-founder of CoderDojo.com
*(Not actually episode 2, as Jason says at the beginning and end of the episode for some reason) Audio note: Rachel's wifi dropped in and out a bit. We think that nothing important was said during the brief pauses! Also, the recording quality improves after the first five minutes or so, so if the audio sounds muddy at first, stick with it. Download the podcast (mp3, ~48 minutes) News: Rachel's upcoming new gig. Show Notes: George Boole; Jason's Simmons College continuing ed classes (Zotero, Instruction Librarian Boot Camp)
Many of the fundamental ideas of computer science were invented, explored and discussed by leading philosophers and logicians long before computers themselves were invented (by logicians, of course). This presentation by Tony Hoare, Microsoft Research, looks at the ideas of philosophers and logicians such as Aristotle, Euclid, St. Thomas Aquinas, William of Ockham, Leibniz, George Boole, and of course Alan Turing, and explains their relevance to computing of the present day.
A look at the careers of some Irish scientists.