Base.cs Podcast


Beginner-friendly computer science lessons based on Vaidehi Joshi's base.cs blog series, produced by CodeNewbie.

CodeNewbie

  • Latest episode: Mar 24, 2020
  • New episodes: monthly
  • Average episode duration: 23m
  • Episodes: 66



Latest episodes from Base.cs Podcast

S9:E8 - "In the end, the code you take is equal to the code you make"

Mar 24, 2020 · 26:17


For our final episode, we answer your burning questions, including the Base.cs origin story, Saron and Vaidehi's favorite niche data structures, and some good resources to check out next. We also take a look back at some of our favorite moments from the show's history and find a couple of fun themes. Based on Vaidehi Joshi's blog post, "Base.cs".

S9:E7 - "This way to translate is le-JIT"

Mar 17, 2020 · 21:57


We've been talking a lot about the differences between compilers and interpreters, how both of them work, and how one (the compiler) led to the creation of the other (the interpreter). Now we get into the just-in-time compiler, or JIT, which is a fusion of the interpreter and the compiler, each a type of translator in its own right. A just-in-time compiler has many of the benefits of both of these translation techniques, all rolled up into one. Based on Vaidehi Joshi's blog post, "A Most Perfect Union: Just-In-Time Compilers".
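
The episode is all audio, but here is a rough sketch of the fusion idea (not from the show; the tiny expression "language", the HOT_THRESHOLD cutoff, and the class name are invented): interpret code at first, then compile and cache the pieces that turn out to be "hot".

```python
# Toy "JIT" sketch: interpret arithmetic expressions at first,
# then compile and cache the ones that become "hot".
# (Illustrative only; real JITs emit machine code, not Python bytecode.)

HOT_THRESHOLD = 3          # invented cutoff for when code counts as "hot"

class ToyJIT:
    def __init__(self):
        self.run_counts = {}   # expression -> times seen
        self.compiled = {}     # expression -> cached code object

    def run(self, expression, env):
        if expression in self.compiled:                 # fast path: already compiled
            return eval(self.compiled[expression], {}, env)
        self.run_counts[expression] = self.run_counts.get(expression, 0) + 1
        if self.run_counts[expression] >= HOT_THRESHOLD:
            # "Compile" once, then reuse the compiled version from now on.
            self.compiled[expression] = compile(expression, "<jit>", "eval")
            return eval(self.compiled[expression], {}, env)
        # Cold path: interpret the source text directly each time.
        return eval(expression, {}, env)

jit = ToyJIT()
for x in range(5):
    print(jit.run("x * x + 1", {"x": x}))   # compiled and cached after the 3rd call
```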

S9:E6 - "Two translators, both alike in dignity"

Mar 10, 2020 · 19:51


We have been talking a lot about compilers, and in this episode we discuss the differences between compilation and interpretation. An interpreter is also a translator, just like a compiler, in that it takes a high-level language (our source text) and converts it into machine code. However, it does something slightly different: it runs and executes the code immediately, as it translates it. Based on Vaidehi Joshi's blog post, "A Deeper Inspection Into Compilation And Interpretation".

S9:E5 - "Paring down our parse trees with AST"

Mar 3, 2020 · 18:36


In this episode, we take our parse tree, an illustrated, pictorial version of the grammatical structure of a sentence, and sweep away its repetitive bits with a metaphorical broom, slimming it down and leveling it up into an abstract syntax tree (AST). Based on Vaidehi Joshi's blog post, "Leveling Up One’s Parsing Game With ASTs".
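
As a loose illustration (not from the episode; the tuple shapes below are invented), here is what sweeping the grammar bookkeeping out of a parse tree for 2 * 3 + 4 might look like, leaving only the operators and operands behind in the AST:

```python
# Hypothetical parse tree for "2 * 3 + 4": every grammar rule gets its own node,
# shown here only for comparison with the AST below.
parse_tree = (
    "expression",
    ("term",
        ("term", ("factor", "2")), "*", ("factor", "3")),
    "+",
    ("term", ("factor", "4")),
)

# The corresponding AST keeps only what matters: operators and operands.
ast = ("+", ("*", 2, 3), 4)

def evaluate(node):
    """Evaluate an AST node; tuples are (operator, left, right)."""
    if isinstance(node, int):
        return node
    op, left, right = node
    if op == "+":
        return evaluate(left) + evaluate(right)
    return evaluate(left) * evaluate(right)

print(evaluate(ast))   # 10
```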

S9:E4 - "Confused about compilers?"

Feb 25, 2020 · 22:53


In this episode, we get into what a compiler is and does. In short, a compiler is a program that reads our code (or any code, in any programming language), and translates it into another language. You'll want to listen in to find out just how it does this! Based on Vaidehi Joshi's blog post, "Reading Code Right, With Some Help From The Lexer".
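
The blog post behind this episode focuses on the lexer, the compiler's very first step. A minimal sketch of that step, with made-up token names and a made-up line of source code, might look like this:

```python
import re

# A minimal lexer sketch: turn raw source text into a stream of tokens.
TOKEN_SPEC = [
    ("NUMBER",  r"\d+"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("OP",      r"[+\-*/=]"),
    ("SKIP",    r"\s+"),       # whitespace is scanned but not emitted
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def lex(source):
    tokens = []
    for match in TOKEN_RE.finditer(source):
        if match.lastgroup != "SKIP":
            tokens.append((match.lastgroup, match.group()))
    return tokens

print(lex("total = price * 3"))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'price'), ('OP', '*'), ('NUMBER', '3')]
```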

S9:E3 - "Parsing out parse trees"

Feb 18, 2020 · 23:14


In this episode, we get into parse trees, an illustrated, pictorial version of the grammatical structure of a sentence, which is key to understanding how computers make sense of the syntax of our code. Based on Vaidehi Joshi's blog post, "Grammatically Rooting Oneself With Parse Trees".

S9:E2 - "Speeding up our traveling salesperson"

Feb 11, 2020 · 21:38


We continue our journey with the Traveling Salesman Problem (TSP), where we imagine a salesperson who has to travel to every single city in an area, visiting each city only once. Additionally, they need to end up in the same city where they started their journey, and do all of this in the most efficient manner. In this episode, we speed our salesperson up by using a bottom-up approach! Based on Vaidehi Joshi's blog post, "The Trials And Tribulations Of The Traveling Salesman".
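
For a rough idea of what "bottom-up" means here, this sketch (the four-city distance matrix is made up, and Held-Karp is the textbook name for this style of dynamic programming; the code is not from the episode) builds the cheapest way to reach each city through every subset of cities, instead of re-walking whole routes:

```python
from itertools import combinations

# Made-up symmetric distance matrix for 4 cities; city 0 is home.
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tsp_held_karp(dist):
    """Bottom-up dynamic programming (Held-Karp): O(n^2 * 2^n) instead of O(n!)."""
    n = len(dist)
    # best[(subset, j)] = cheapest cost to leave city 0, visit every city in subset, and end at j
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            subset = frozenset(subset)
            for j in subset:
                best[(subset, j)] = min(
                    best[(subset - {j}, k)] + dist[k][j] for k in subset if k != j
                )
    full = frozenset(range(1, n))
    # Close the loop: come back home from whichever last city is cheapest.
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

print(tsp_held_karp(dist))   # 80 for this matrix
```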

S9:E1 - "Take a journey with the Traveling Salesman"

Feb 4, 2020 · 30:04


We start our season off with something that often pops up in technical interviews: the Traveling Salesman Problem (TSP). In this problem, a salesperson has to travel to every single city in an area, visiting each city only once. Additionally, they need to end up in the same city where they started their journey. Find out how to make our salesperson do this in the most efficient way possible! Based on Vaidehi Joshi's blog post, "The Trials And Tribulations Of The Traveling Salesman".
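
As a hedged sketch of the problem statement itself (the cities and distances below are made up), the brute-force approach simply tries every ordering of cities and keeps the cheapest round trip, which means checking (n-1)! routes for n cities:

```python
from itertools import permutations

# Made-up distances between 4 cities; "home" is where the trip starts and ends.
dist = {
    ("home", "A"): 10, ("home", "B"): 15, ("home", "C"): 20,
    ("A", "B"): 35, ("A", "C"): 25, ("B", "C"): 30,
}

def d(a, b):
    return dist.get((a, b)) or dist[(b, a)]   # distances are symmetric

def brute_force_tsp(cities):
    best_route, best_cost = None, float("inf")
    for order in permutations(cities):               # every possible visiting order
        route = ("home",) + order + ("home",)        # start and end at home
        cost = sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

print(brute_force_tsp(["A", "B", "C"]))   # (('home', 'A', 'C', 'B', 'home'), 80)
```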

S8:E8 - "Memoizing all the things in dynamic programming"

Dec 10, 2019 · 19:04


In this last episode of the season we continue our discussion of dynamic programming, and show just how efficient it can be by using the Fibonacci sequence! Based on Vaidehi Joshi's blog post, "Less Repetition, More Dynamic Programming".
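
A quick sketch of the speed-up in action (not from the episode, just a common illustration of memoization with the Fibonacci sequence):

```python
# Naive recursion recomputes the same Fibonacci numbers over and over (exponential time).
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Memoization: remember each subproblem the first time we solve it (linear time).
def fib_memo(n, memo={}):
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(50))   # 12586269025, instantly; fib_naive(50) would take ages
```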

S8:E7 - "Dynamic Programming is pretty dynamite "

Dec 3, 2019 · 21:38


In this episode we talk about different paradigms and approaches to algorithmic design: the Divide and Conquer Algorithm, the Greedy Algorithm, and the Dynamic Programming Algorithm, which remembers the subproblems that it has seen and solved before so as not to repeat doing the same thing over again. Based on Vaidehi Joshi's blog post, "Less Repetition, More Dynamic Programming".

S8:E6 - "Getting deeper into Dijkstra"

Nov 19, 2019 · 29:23


We continue our talk about Dijkstra's algorithm, which can be used to determine the shortest path from one node in a graph to every other node within the same graph data structure, provided that the nodes are reachable from the starting node. Based on Vaidehi Joshi's blog post, "Finding The Shortest Path, With A Little Help From Dijkstra".

S8:E5 - "Dijkstra's algorithm is a weighty topic"

Nov 12, 2019 · 26:07


In this episode, we talk about Dijkstra's algorithm, which can be used to determine the shortest path from one node in a graph to every other node within the same graph data structure, provided that the nodes are reachable from the starting node. It's super important, and you'll see why when you learn about the weighted graph! Based on Vaidehi Joshi's blog post, "Finding The Shortest Path, With A Little Help From Dijkstra".
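
A compact sketch of the algorithm (the weighted graph below is made up, and the heap-based priority queue is one common implementation choice, not necessarily the episode's):

```python
import heapq

# A small made-up weighted graph: node -> list of (neighbor, edge weight).
graph = {
    "a": [("b", 7), ("c", 3)],
    "b": [("d", 2)],
    "c": [("b", 1), ("d", 8)],
    "d": [],
}

def dijkstra(graph, start):
    """Return the cheapest known distance from `start` to every reachable node."""
    distances = {start: 0}
    queue = [(0, start)]                      # min-heap of (distance so far, node)
    while queue:
        dist_so_far, node = heapq.heappop(queue)
        if dist_so_far > distances.get(node, float("inf")):
            continue                          # stale entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            candidate = dist_so_far + weight
            if candidate < distances.get(neighbor, float("inf")):
                distances[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return distances

print(dijkstra(graph, "a"))   # {'a': 0, 'b': 4, 'c': 3, 'd': 6}
```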

S8:E4 - "DAG, Daniel! Back at it again..."

Nov 5, 2019 · 13:25


We end our section on the DFS algorithm with a discussion of DAGs (directed acyclic graphs), because most implementations of depth-first search will check whether any cycles exist, and checking whether a graph is a directed acyclic graph is based in large part on the DFS algorithm. DAGs are also somewhat infamous in computer science because they're pretty much everywhere in software. For example, a directed acyclic graph is the backbone of applications that handle scheduling for systems of tasks or jobs, especially those that need to be processed in a particular order. So let's dig into DAGs! Based on Vaidehi Joshi's blog post, "Spinning Around In Cycles With Directed Acyclic Graphs".
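
A small sketch of that cycle check (the task graph is made up; the three-color bookkeeping is one common way to implement it, not necessarily the episode's):

```python
# Cycle detection with DFS: a graph is a DAG only if no "back edge" is found.
# The three colors track each node's state during the traversal.
WHITE, GRAY, BLACK = "unvisited", "in progress", "done"

def is_dag(graph):
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY                     # node is on the current DFS path
        for neighbor in graph[node]:
            if color[neighbor] == GRAY:        # back edge -> cycle -> not a DAG
                return False
            if color[neighbor] == WHITE and not dfs(neighbor):
                return False
        color[node] = BLACK                    # fully explored, safe to leave behind
        return True

    return all(dfs(node) for node in graph if color[node] == WHITE)

# A made-up task graph: "install" must come before "build", which must come before "test".
tasks = {"install": ["build"], "build": ["test"], "test": []}
print(is_dag(tasks))                                   # True
print(is_dag({"a": ["b"], "b": ["a"]}))                # False: a <-> b is a cycle
```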

S8:E3 - "Living on the edge!"

Oct 28, 2019 · 20:54


Throughout our exploration of graphs, we’ve focused mostly on representing graphs, and how to search through them. We also learned about edges, the elements that connect the nodes in a graph. In this episode, we look at the different classifications of edges and how, in the context of a graph, edges can be more than just “directed” or “undirected”. Based on Vaidehi Joshi's blog post, "Spinning Around In Cycles With Directed Acyclic Graphs".

S8:E2 - "Jump around the indexes with DFS!"

Oct 22, 2019 · 24:51


Last episode, we talked about traversing through a graph with the depth-first search (DFS) algorithm, which helps us determine one (of sometimes many) paths between two nodes in the graph by traversing down one single path until we can't go any further, checking one child node at a time. Now we talk about how you actually code DFS and what tools you might use. Based on Vaidehi Joshi's blog post, "Deep Dive Through A Graph: DFS Traversal".
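
One possible set of tools, sketched here rather than taken from the episode: an explicit stack plus a visited set (the example graph is made up):

```python
# Iterative DFS: the explicit stack keeps track of where to go next.
def dfs_path_exists(graph, start, target):
    stack = [start]
    visited = set()
    while stack:
        node = stack.pop()              # take the most recently discovered node (LIFO)
        if node == target:
            return True
        if node in visited:
            continue
        visited.add(node)
        stack.extend(graph[node])       # dive deeper before backtracking
    return False

# Made-up graph: adjacency lists of neighbors.
graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": ["e"], "e": []}
print(dfs_path_exists(graph, "a", "e"))   # True
print(dfs_path_exists(graph, "c", "a"))   # False
```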

S8:E1 - "Getting deep with depth-first search"

Oct 15, 2019 · 25:34


We ended last season by starting our discussion of searching, or traversing, through a graph with breadth-first search (BFS). The breadth-first search algorithm traverses broadly into a structure, by visiting neighboring sibling nodes before visiting children nodes. Now we begin our new season with depth-first search (DFS), which also helps us determine one (of sometimes many) paths between two nodes in the graph, but this time by traversing down one single path in a graph, until we can't go any further, checking one child node at a time. Based on Vaidehi Joshi's blog post, "Deep Dive Through A Graph: DFS Traversal".

S7:E8 - "Delivering muffins with BFS"

Sep 10, 2019 · 23:01


In this episode, we start our discussion of searching, or traversing, through a graph with breadth-first search (BFS). The breadth-first search algorithm traverses broadly into a structure, by visiting neighboring sibling nodes before visiting children nodes. The power of using breadth-first search to traverse through a graph is that it can easily tell us the shortest way to get from one node to another, which you'll experience firsthand by bringing muffins to your neighbors! Based on Vaidehi Joshi's blog post, "Going Broad In A Graph: BFS Traversal".
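
A minimal sketch of that shortest-path property (the neighborhood map is made up, and queueing whole paths is just one convenient way to write it):

```python
from collections import deque

# BFS visits all neighbors at the current distance before moving outward,
# so the first time it reaches a node, it has found a shortest path to it.
def bfs_shortest_path(graph, start, target):
    queue = deque([[start]])            # queue of paths, not just nodes
    visited = {start}
    while queue:
        path = queue.popleft()          # FIFO: oldest (shortest) paths come out first
        node = path[-1]
        if node == target:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Made-up neighborhood map for the muffin delivery.
neighbors = {
    "you": ["ada", "grace"],
    "ada": ["lin"],
    "grace": ["lin", "alan"],
    "lin": [],
    "alan": [],
}
print(bfs_shortest_path(neighbors, "you", "lin"))   # ['you', 'ada', 'lin']
```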

S7:E7 - "Plotting to represent a graph? We got you."

Sep 3, 2019 · 24:37


In this episode, we continue our discussion of representing graphs with adjacency lists -- a hybrid between an edge list and an adjacency matrix, which we learned about last episode! They are also the most popular and commonly-used representation of a graph. Based on Vaidehi Joshi's blog post, "From Theory To Practice: Representing Graphs".
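
A tiny sketch of the hybrid (made-up nodes and edges): start from an edge list and fold it into an adjacency list, with an adjacency matrix alongside for comparison:

```python
# An edge list: just pairs of connected nodes.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
node_count = 4

# Fold the edge list into an adjacency list: index -> that node's neighbors.
adjacency_list = [[] for _ in range(node_count)]
for a, b in edges:                 # undirected graph, so record the edge both ways
    adjacency_list[a].append(b)
    adjacency_list[b].append(a)

print(adjacency_list)              # [[1, 2], [0, 3], [0, 3], [1, 2]]

# Compare with an adjacency matrix, which spends space on every possible pair of nodes:
matrix = [[0] * node_count for _ in range(node_count)]
for a, b in edges:
    matrix[a][b] = matrix[b][a] = 1
```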

S7:E6 - "It's laughable how easy it is to get graphical"

Aug 27, 2019 · 21:04


Graphs come from mathematics, and are nothing more than a way to formally represent a network, which is a collection of objects that are all interconnected (this is all stuff you should already know if you have been religiously listening to this podcast, which you should be). Now we're going from theory to practice and talking about how to represent graphs. Based on Vaidehi Joshi's blog post, "From Theory To Practice: Representing Graphs".

S7:E5 - "To b-tree or not to b-tree"

Aug 20, 2019 · 16:50


In the last episode, we talked about 2-3 trees, where the nodes of every tree contain data in the form of keys, as well as potential child nodes, and can contain more than one key. This takes us to b-trees, which are a generalized version of the 2-3 tree and are super efficient for storing data in an indexed database, like MySQL. Based on Vaidehi Joshi's blog post, "Busying Oneself With B-Trees".

S7:E4 - "A 2-3 tree for you and me"

Aug 13, 2019 · 20:02


We continue our discussion of tree data structures with 2-3 trees, where the nodes of every tree contain data in the form of keys, as well as potential child nodes. Not only that, but it can contain MORE THAN ONE KEY. They are also the -key- to what we'll be talking about next episode, B-trees, and you won't tree-lieve how cool those are. Based on Vaidehi Joshi's blog post, "Busying Oneself With B-Trees".

S7:E3 - "Color me logarithmic!"

Aug 6, 2019 · 23:26


In this episode, we are looking at a different type of self-balancing tree: red-black trees. By following four very important rules while we paint our tree red and black, we can make it not only self-balancing, but also make it run super efficiently in logarithmic time. Based on Vaidehi Joshi's blog post, "Painting Nodes Black With Red-Black Trees".

S7:E2 - "Stay gold, AVL tree, stay gold"

Jul 30, 2019 · 18:11


Last episode, we learned about AVL trees, a type of self-balancing binary search tree that follows a golden rule: no single leaf in the tree should have a significantly longer path from the root node than any other leaf on the tree. In this episode, we learn about a pattern that we can use to programmatically figure out the minimum number of nodes we’ll need to create any given height-balanced AVL tree, which leads us to the Fibonacci sequence, and relates to the "golden ratio" you might know about from fine art! Trust us, this is really neat stuff. Based on Vaidehi Joshi's blog post, "Finding Fibonacci In Golden Trees".
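
A small sketch of the pattern (not from the episode; it assumes a lone root node counts as height 0): the minimum node count of a height-balanced AVL tree follows a Fibonacci-like recurrence, because the smallest tree of height h is just a root plus the smallest allowed subtrees of heights h-1 and h-2.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_avl_nodes(height):
    """Fewest nodes an AVL tree of the given height can have."""
    if height == 0:
        return 1                      # a single root node
    if height == 1:
        return 2                      # a root plus one child
    # One root, plus the smallest allowed subtrees of heights h-1 and h-2.
    return 1 + min_avl_nodes(height - 1) + min_avl_nodes(height - 2)

print([min_avl_nodes(h) for h in range(8)])
# [1, 2, 4, 7, 12, 20, 33, 54] -- each value is one less than a Fibonacci number
```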

S7:E1 - "The AVL balancing act"

Jul 23, 2019 · 23:23


When you're dealing with a data structure like a tree, the balance of its "leaves" (data/nodes) matters. The moment a tree becomes unbalanced, it loses its efficiency, much like a real-life tree bending to the weight of one side, unable to efficiently stand tall and grab the light of the sun. Don't let your garden grow full of lopsided saplings, and make sure to plant some AVL trees--your efficient runtime hangs in the balance. Based on Vaidehi Joshi's blog post, "The Little AVL Tree That Could".

S6:E8 - "Meet our good friend PATRICIA"

Jun 18, 2019 · 27:10


In this episode, we continue our talk on Radix Trees and introduce the Practical Algorithm To Retrieve Information Coded In Alphanumeric trees, also known as PATRICIA trees. Yeah, I think we'll just stick with calling them PATRICIA trees. Based on Vaidehi Joshi's blog post, "Compressing Radix Trees Without (Too Many) Tears".

S6:E7 - "The cannibalistic efficiency of radix trees"

Jun 11, 2019 · 22:37


In this episode, join us as we adventure into the safari that is radix trees, where parent nodes eat their offspring nodes as they chomp them down and compress. Don't worry, with all of this new added space in the trie(b), they'll more efficiently keep their children's memory alive. Based on Vaidehi Joshi's blog post, "Compressing Radix Trees Without (Too Many) Tears".

S6:E6 - "Dear tries, you (auto)complete me"

Jun 4, 2019 · 22:55


In this episode we continue our talk on pies and tries, and how this data structure is used to power such things as auto-complete! Based on Vaidehi Joshi's blog post, "Trying to Understand Tries".
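
A bare-bones sketch of the idea (the class names and the word list are made up, not from the episode): walk down the trie to the prefix, then collect every word hanging below it.

```python
# A minimal trie with the autocomplete use the episode mentions.
class TrieNode:
    def __init__(self):
        self.children = {}        # letter -> child node
        self.is_word = False      # does a word end here?

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for letter in word:
            node = node.children.setdefault(letter, TrieNode())
        node.is_word = True

    def autocomplete(self, prefix):
        node = self.root
        for letter in prefix:                 # walk down to the prefix's node
            if letter not in node.children:
                return []
            node = node.children[letter]
        results = []

        def collect(node, word):              # gather every word below this node
            if node.is_word:
                results.append(word)
            for letter, child in node.children.items():
                collect(child, word + letter)

        collect(node, prefix)
        return results

trie = Trie()
for word in ["pie", "pier", "pin", "peach"]:
    trie.insert(word)
print(trie.autocomplete("pi"))   # ['pie', 'pier', 'pin']
```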

S6:E5 - "Tries: the golden retriever of data structures"

May 28, 2019 · 19:14


In this episode we go through some trie-als and tribulations to retrieve and build words using tries! Based on Vaidehi Joshi's blog post, "Trying to Understand Tries".

S6:E4 - "Radix sort: the patient zero of sorting algorithms "

May 21, 2019 · 27:29


This episode we're diving into radix sort! The word has no relation to Raid, so it is definitely non-toxic and you don't have to bug out. It IS, however, a great integer sorting algorithm, and the first one at that! Based on Vaidehi Joshi's blog post, "Getting To The Root Of Sorting With Radix Sort".
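
A sketch of the least-significant-digit variant (the input list is made up; it assumes non-negative integers): sort one digit at a time, from the ones place upward, using ten stable buckets per pass.

```python
# LSD radix sort sketch: sort integers one digit at a time, least significant first.
def radix_sort(numbers):
    if not numbers:
        return numbers
    place = 1
    while place <= max(numbers):
        buckets = [[] for _ in range(10)]        # one bucket per digit 0-9
        for number in numbers:
            digit = (number // place) % 10       # the digit currently being sorted on
            buckets[digit].append(number)
        numbers = [number for bucket in buckets for number in bucket]
        place *= 10                              # move to the next digit
    return numbers

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```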

S6:E3 - "You can count on counting sort"

May 14, 2019 · 29:39


You may have noticed that it's really hard to sort things efficiently. Well, that's where counting sort comes in! Based on Vaidehi Joshi's blog post, "Counting Linearly With Counting Sort".
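
A minimal sketch (assuming small non-negative integers with a known maximum, which is exactly the situation where counting sort shines): count how often each value appears, then rebuild the list in order, with no comparisons at all.

```python
# Counting sort sketch: tally each value, then emit the values in order.
def counting_sort(numbers, max_value):
    counts = [0] * (max_value + 1)
    for number in numbers:
        counts[number] += 1                  # tally each value
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)       # emit each value as many times as it appeared
    return result

print(counting_sort([4, 2, 2, 8, 3, 3, 1], max_value=8))   # [1, 2, 2, 3, 3, 4, 8]
```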

S6:E2 - "Getting to the bottom of the heap...sort."

May 7, 2019 · 24:34


We've gotten acquainted with heaps as arrays, now we're diving into heap sort with some help from a few condiments! Based on Vaidehi Joshi's blog post, "Heapify All The Things With Heap Sort".
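
A condiment-free sketch of the same idea (the input list is made up): build a max-heap in place, then repeatedly swap the largest item to the end and sift the new root back down.

```python
# Heap sort sketch: keep a max-heap at the front of the array,
# then swap the largest item to the end and shrink the heap by one.
def sift_down(items, start, end):
    root = start
    while 2 * root + 1 < end:                      # while the root has at least one child
        child = 2 * root + 1
        if child + 1 < end and items[child] < items[child + 1]:
            child += 1                             # pick the larger child
        if items[root] >= items[child]:
            return
        items[root], items[child] = items[child], items[root]
        root = child

def heap_sort(items):
    n = len(items)
    for start in range(n // 2 - 1, -1, -1):        # heapify: build a max-heap in place
        sift_down(items, start, n)
    for end in range(n - 1, 0, -1):                # pull the max off the heap, one at a time
        items[0], items[end] = items[end], items[0]
        sift_down(items, 0, end)
    return items

print(heap_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
```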

S6:E1 - "Heaps as arrays"

Apr 30, 2019 · 17:26


So we've talked about heaps, but how do you represent heaps as arrays? And why would you want to? We break it down step by step! Based on Vaidehi Joshi's blog post, "Learning to Love Heaps".
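
A tiny sketch of the trick (the sample heap is made up): the parent/child relationships live entirely in index arithmetic, so no pointers are needed.

```python
# A binary heap stored as a flat list: the tree structure lives in the index math.
heap = [1, 3, 5, 9, 4, 8]    # a small min-heap, written out level by level

def parent(i):
    return (i - 1) // 2            # where a node's parent lives

def children(i):
    return 2 * i + 1, 2 * i + 2    # where a node's left and right children live

for i, value in enumerate(heap):
    left, right = children(i)
    kids = [heap[c] for c in (left, right) if c < len(heap)]
    print(f"value {value} at index {i}: parent={heap[parent(i)] if i else None}, children={kids}")
```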

S5:E8 - "Shrinking and growing heaps with cats"

Apr 2, 2019 · 17:13


Now that you've got your heap, what do you do with it? Shrink and grow it of course! We talk about how to add and remove values from a heap with the help of a few cats. Based on Vaidehi Joshi's blog post, "Learning to Love Heaps".

S5:E7 - "A heap of heaps"

Mar 26, 2019 · 22:02


What are heaps? How are they related to binary trees? We use losers, winners, and some cards to help us get to the bottom of heaps! Based on Vaidehi Joshi's blog post, "Learning to Love Heaps".

S5:E6 - "The big O of quicksort"

Mar 19, 2019 · 28:57


How does quicksort perform? And how do variables, like the pivot number, affect it? We walk through three examples to find out! Based on Vaidehi Joshi's blog post, "Pivoting To Understand Quicksort [Part 2]".

S5:E5 - "Quick sort Queendom"

Mar 12, 2019 · 28:59


We learn all about our second "divide and conquer" algorithm, quick sort! We walk through how it works with help from a queendom, a few pointers, and a very helpful pivot number. Based on Vaidehi Joshi's blog post, "Pivoting To Understand Quicksort [Part 1]".
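
A simplified sketch of the idea (not the in-place, pointer-based version the episode walks through; the pivot choice and the input list are arbitrary): pick a pivot, split everything else into "smaller" and "larger", and recursively sort each side.

```python
# Quicksort sketch: partition around a pivot, then divide and conquer.
def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[-1]                                  # a simple (not always ideal) pivot choice
    smaller = [x for x in items[:-1] if x <= pivot]
    larger = [x for x in items[:-1] if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([29, 10, 14, 37, 13]))   # [10, 13, 14, 29, 37]
```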

S5:E4 - "Merge sort stops the suckage"

Mar 4, 2019 · 26:00


Finally, a sorting algorithm that doesn't suck! We explore how merge sort works and why it performs better than insertion, bubble, and selection sort. Based on Vaidehi Joshi's blog post, "Making Sense of Merge Sort ".
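
A short sketch of the divide-and-merge idea (the input list is made up): split the list in half, sort each half, then weave the two sorted halves back together.

```python
# Merge sort sketch: the merge step is what keeps this at O(n log n).
def merge_sort(items):
    if len(items) <= 1:
        return items
    middle = len(items) // 2
    left = merge_sort(items[:middle])
    right = merge_sort(items[middle:])

    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:                # take the smaller front item each time
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])                    # one side may still have leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]
```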

S5:E3 - "Sorting with insertion sort"

Feb 26, 2019 · 19:07


We dig into how insertion sort works, how we know where to do our inserting, and how this sorting algorithm performs, all with the help of our new boos. Based on Vaidehi Joshi's blog post, "Inching Towards Insertion Sort".
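
A small sketch of where the "inserting" happens (the input list is made up): grow a sorted section on the left, and shift larger items right until the new item finds its spot.

```python
# Insertion sort sketch: one item at a time, each dropped into its place.
def insertion_sort(items):
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:   # shift bigger items one slot right
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current                 # drop the item into its spot
    return items

print(insertion_sort([7, 3, 5, 1]))   # [1, 3, 5, 7]
```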

S5:E2 - "What's bubble sort?"

Feb 19, 2019 · 24:24


We are super bubbly about bubble sort! We dig into our second sorting algorithm and break down how it works and why it's actually not a great way of sorting things. Based on Vaidehi Joshi's blog post, "Bubbling Up With Bubble Sorts".

S5:E1 - "The simple selection sort"

Feb 12, 2019 · 23:16


What is selection sort? How does this algorithm work? And just as importantly, how does it perform? We use broken books and cookies to tell you all about it! Based on Vaidehi Joshi's blog post, "Exponentially Easy Selection Sort".

S4:E8 - "The Saron Sort"

Dec 18, 2018 · 26:24


We're at the end of the season! And to wrap things up, we're breaking down the last two ways to classify sorting algorithms: recursive vs. non-recursive and comparison vs. non-comparison. We bring it all together to talk about what we can do with all these classifications, and, in true basecs podcast fashion, we bring in seemingly unrelated topics like tomatoes! Based on Vaidehi Joshi's blog post, "Sorting Out The Basics Behind Sorting Algorithms".

S4:E7 - "Sorting the Michaels"

Dec 11, 2018 · 23:52


Last week, we talked about two ways of classifying sorting algorithms: time complexity and space usage. This episode, we dig into two more! We explore how algorithms can be internal or external, and what "stability" means for a sorting algorithm. And we do it all with the help of cards, clovers, and a pair of Michaels. Based on Vaidehi Joshi's blog post, "Sorting Out The Basics Behind Sorting Algorithms".

S4:E6 - "It's sorting time!"

Dec 4, 2018 · 24:34


You probably sort things all the time -- files, clothes, dishes. But have you thought about how to categorize your sorting? How do your sorting algorithms hold up in terms of, say, time complexity? We give you an introduction to sorting algorithms, what they are and what they're used for, and dig into the six ways we can classify them. Based on Vaidehi Joshi's blog post, "Sorting Out The Basics Behind Sorting Algorithms".

S4:E5 - "Sets, sets, everywhere"

Nov 27, 2018 · 25:12


Sets are everywhere! If you've worked with relational databases, made a Venn diagram, maybe touched some relational algebra, then you've already worked with sets. We talk about why they're so common, how well they perform (time for some Big O Notation!), and how they're actually implemented. Based on Vaidehi Joshi's blog post, "Set Theory: the Method To Database Madness".

S4:E4 - "Varon explains set theory"

Nov 20, 2018 · 18:27


Set theory might sound like a scary, super-math thing, but it's not! Well, it is a math thing, but it doesn't have to be super scary. In fact, if you already know how Venn diagrams work, then you basically already know set theory. We'll walk you through it all and show you how it connects back to computer science with the help of our favorite foods. Based on Vaidehi Joshi's blog post, "Set Theory: the Method To Database Madness".

S4:E3 - "Chaining to the rescue!"

Nov 13, 2018 · 27:08


We're back in our hash table classroom with our multiple Brians that need their own tables! But don't you worry, we've got a brand new collision resolution called chaining to help us out. We talk about how it works and how it compares to linear probing. Based on Vaidehi Joshi's blog post, "Taking Hash Tables Off The Shelf".
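
A rough sketch of chaining (the class shape, the tiny bucket count, and the Brians' desks are all made up): every bucket holds a list, so colliding keys simply share the bucket.

```python
# Chaining sketch: colliding keys (all the Brians hashing to the same index)
# just live together in the same bucket's list.
class ChainedHashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def set(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (existing_key, _) in enumerate(bucket):
            if existing_key == key:            # key already present: overwrite its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))            # otherwise chain a new entry onto the bucket

    def get(self, key):
        for existing_key, value in self.buckets[self._index(key)]:
            if existing_key == key:
                return value
        raise KeyError(key)

table = ChainedHashTable(size=2)               # a tiny table, to force collisions
table.set("Brian H.", "desk 1")
table.set("Brian K.", "desk 2")
print(table.get("Brian K."))                   # 'desk 2', even if both Brians collide
```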

S4:E2 - "Too many Brians at the (hash)table"

Nov 6, 2018 · 21:08


School is in session, and the teacher is directing students to their assigned seat. Each unique name gets its own unique table. But there's an unexpected student in the class. There's another Brian! What do we do?! In this episode, we dig into how to manage these collisions in a hashtable, and how to use our collision resolution strategy to find new Brian his own desk. Based on Vaidehi Joshi's blog post, "Taking Hash Tables Off The Shelf".

S4:E1 - "Gotta hash 'em all"

Oct 30, 2018 · 25:28


We're kicking off a new season with a brand new topic: hash tables! This episode is full of bookshelves, pizza toppings, and helpful fridge operators who are teaming up to give you the most gentle (and the most fun) introduction to the world of hash tables. Based on Vaidehi Joshi's blog post, "Taking Hash Tables Off The Shelf".

S3:E8 - "BFS is your BFF"

Oct 9, 2018 · 25:16


Let's break down how breadth-first search (BFS) actually works! We'll walk through a real example, explain the Big O notation of this algorithm, and explore how you might decide whether to use breadth-first search or depth-first search. Based on Vaidehi Joshi's blog post, "Breaking Down Breadth-First Search".

S3:E7 - "Getting in line for breadth-first search"

Oct 2, 2018 · 26:35


We're going broad with breadth-first search! Well, actually, we're getting in line, or enqueuing ;) We walk through the steps of how breadth-first search (BFS) works, complete with holiday themed analogies and reindeers that need a GPS. We also compare and contrast the steps of BFS to those in DFS (depth-first search). Based on Vaidehi Joshi's blog post, "Breaking Down Breadth-First Search".

S3:E6 - "Drowning in DFS"

Sep 24, 2018 · 28:01


In our final look at depth-first search (DFS), we explore how to implement this lovely algorithm in coding terms. We also dig into Big O notation, breaking down how to determine the time and space complexity of DFS. Based on Vaidehi Joshi's blog post, "Demystifying Depth-First Search".
