Lecture videos from 6.262 Discrete Stochastic Processes, Spring 2011. License: Creative Commons BY-NC-SA. More information at ocw.mit.edu/terms.
In this lecture, we put together many of the topics covered throughout the term: martingales, Markov chains, countable-state Markov processes, reversibility for Markov processes, random walks, and Wald's identity for two thresholds.
Sequential hypothesis testing is viewed as a random walk example. Threshold hypothesis tests are distinguished from random walk thresholds. Random walk threshold-crossing probabilities are analyzed using Chernoff bounds.
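A hedged sketch of the kind of threshold bound referred to above (a standard corollary of Wald's identity, stated here for reference rather than quoted from the lecture): for a random walk S_n = X_1 + ... + X_n with IID increments, E[X] < 0, and semi-invariant moment generating function \gamma(r) = \ln \mathbb{E}[e^{rX}], if some r^* > 0 satisfies \gamma(r^*) = 0, then for any threshold \alpha > 0,

\Pr\Bigl\{ \bigcup_{n \ge 1} \{ S_n \ge \alpha \} \Bigr\} \;\le\; e^{-r^{*}\alpha}.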
This lecture continues the discussion of martingales and covers stopped martingales, the Kolmogorov submartingale inequality, the martingale convergence theorem, and more.
After reviewing Wald's identity, we introduce martingales and show that many of the processes already studied are martingales. Next, submartingales and supermartingales are introduced, along with stopped versions of each (plain, sub, and super).
This lecture covers topics including the Kingman bound for the G/G/1 queue, large deviations for hypothesis tests, sequential detection, tilted random variables, and a proof of Wald's identity.
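For reference, Wald's identity in the two-threshold form (a summary statement, not a quotation from the lecture): if J is the stopping trial at which the random walk S_n first crosses a threshold \alpha > 0 or \beta < 0, and \gamma(r) = \ln \mathbb{E}[e^{rX}] is finite at r, then

\mathbb{E}\bigl[\exp\bigl(r S_J - J\,\gamma(r)\bigr)\bigr] = 1.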
After reviewing steady-state behavior, this lecture discusses reversibility for Markov processes and for tandem M/M/1 queues. Random walks and their applications are then introduced.
Markov processes with countable state spaces are developed in terms of the embedded Markov chain. The steady-state process probabilities and the steady-state transition probabilities are treated.
In this lecture, the professor covers the sample-time M/M/1 queue, Burke's theorem, branching processes, and Markov processes with countable state spaces.
This lecture reviews the previous 13 lectures in preparation for the upcoming quiz.
This lecture covers a variety of topics, including the elementary renewal theorem, generalized stopping trials, the G/G/1 queue, Little's theorem, ensemble averages, and more.
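As a reminder of the result named above, Little's theorem relates the time-average number of customers in a system L, the arrival rate \lambda, and the average time W a customer spends in the system:

L = \lambda W.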
After reviewing the three major renewal theorems, we introduce Markov chains with countable state spaces. The matrix approach used for finite-state chains is replaced by a renewal approach based on first-passage times.
In this lecture, we continue our discussion of renewals and cover topics such as Markov chains and renewal processes, the expected number of renewals, the elementary renewal and Blackwell theorems, and delayed renewal processes.
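For reference, the two renewal theorems named above can be summarized as follows (a reminder, not a quotation from the lecture): for a renewal process N(t) with mean inter-renewal time \bar{X},

\lim_{t \to \infty} \frac{\mathbb{E}[N(t)]}{t} = \frac{1}{\bar{X}} \quad \text{(elementary renewal theorem)},

\lim_{t \to \infty} \bigl( \mathbb{E}[N(t+\delta)] - \mathbb{E}[N(t)] \bigr) = \frac{\delta}{\bar{X}} \quad \text{(Blackwell's theorem, non-arithmetic case)}.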
This lecture begins with a discussion of convergence with probability 1 (WP1) related to a quiz problem. Then positive and null recurrence, steady state, birth-death chains, and reversibility are covered.
This lecture begins with the strong law of large numbers (SLLN) and the central limit theorem for renewal processes. This is followed by the time-average behavior of reward functions such as residual life.
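As a reminder of the residual-life result referred to above (a standard formula, stated here for reference): for inter-renewal times X with finite mean and second moment, the time-average residual life Y(\tau) satisfies

\lim_{t \to \infty} \frac{1}{t} \int_0^t Y(\tau)\, d\tau = \frac{\mathbb{E}[X^2]}{2\,\mathbb{E}[X]} \quad \text{WP1}.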
This lecture covers rewards for Markov chains, expected first-passage times, and aggregate rewards with a final reward. The professor then moves on to discuss dynamic programming and the dynamic programming algorithm.
This lecture covers the eigenvalues and eigenvectors of the transition matrix and the steady-state vector of a Markov chain. It also includes an analysis of a 2-state Markov chain and a discussion of the Jordan form.
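A minimal numerical sketch of the 2-state analysis described above; the transition probabilities below are illustrative assumptions, not values taken from the lecture.

import numpy as np

# Illustrative 2-state transition matrix: P[i, j] = Pr{next state = j | current state = i}.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# The steady-state vector pi satisfies pi P = pi, i.e. pi is a left eigenvector
# of P for eigenvalue 1 (equivalently, a right eigenvector of P transposed).
eigvals, eigvecs = np.linalg.eig(P.T)
k = int(np.argmin(np.abs(eigvals - 1.0)))   # locate the eigenvalue 1
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                          # normalize to a probability vector

print("eigenvalues:", np.round(np.real(eigvals), 4))   # here: 1.0 and 0.5
print("steady-state vector:", np.round(pi, 4))         # here: [0.8, 0.2]

For a 2-state chain, the second eigenvalue (0.5 in this example) governs how quickly the powers of the transition matrix approach their steady-state limit.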
This lecture treats joint conditional densities for Poisson processes and then defines finite-state Markov chains. Recurrent and transient states, periodic states, and ergodic chains are discussed. (Courtesy of Mina Karzand. Used with permission.)
In this lecture, many problem-solving techniques are developed by, first, combining and splitting various Poisson processes and, second, conditioning on the number of arrivals in an interval.
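A small simulation sketch (with assumed rates and time horizon, not taken from the lecture) illustrating both ideas: arrivals are generated by conditioning on the number of arrivals in an interval, and two independent Poisson processes are combined into one of summed rate.

import numpy as np

rng = np.random.default_rng(0)
lam1, lam2, T = 2.0, 3.0, 10_000.0   # illustrative rates and time horizon

def poisson_arrivals(lam, T):
    # Conditional-uniform property: given N(T) = n, the n arrival epochs
    # are independent and uniformly distributed over [0, T].
    n = rng.poisson(lam * T)
    return np.sort(rng.uniform(0.0, T, n))

# Combine (merge) the two processes and examine the interarrival gaps.
merged = np.sort(np.concatenate([poisson_arrivals(lam1, T),
                                 poisson_arrivals(lam2, T)]))
gaps = np.diff(merged)

print("empirical mean gap:", gaps.mean())   # close to 1 / (lam1 + lam2) = 0.2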
In this lecture, we learn about time-averages for renewal rewards, stopping trials for stochastic processes, and Wald's equality.
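For reference, Wald's equality mentioned above (a summary statement, not a quotation from the lecture): if X_1, X_2, ... are IID with finite mean \bar{X} and J is a stopping trial with \mathbb{E}[J] < \infty, then

\mathbb{E}[S_J] = \bar{X}\,\mathbb{E}[J], \qquad S_J = X_1 + \cdots + X_J.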
Renewal processes are introduced and their importance in analyzing other processes is explained. Proofs about convergence with probability 1 (WP1) and the SLLN are given.
The transition matrix approach to finite-state Markov chains is developed in this lecture. The powers of the transition matrix are analyzed to understand steady-state behavior. (Courtesy of Shan-Yuan Ho. Used with permission.)
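As a reminder of the steady-state behavior referred to above (stated for reference, not quoted from the lecture): for an ergodic finite-state Markov chain with transition matrix P,

\lim_{n \to \infty} [P^n]_{ij} = \pi_j \quad \text{for every starting state } i,

where \pi is the unique probability vector satisfying \pi P = \pi.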
This lecture begins with the use of the WLLN in probabilistic modeling. Next, the central limit theorem, the strong law of large numbers (SLLN), and convergence are discussed.
This lecture begins with a description of arrival processes and then describes the Poisson process from three different viewpoints.
The review of probability is continued with expectation, multiple random variables, and conditioning. We then move on to develop the weak law of large numbers (WLLN) and the Bernoulli process.
Probability, as it appears in the real world, is related to axiomatic mathematical models. Events, independence, and random variables are reviewed, stressing both the axioms and intuition.