Hamilton Institute Seminars (iPod / small)

The Hamilton Institute is a multi-disciplinary research centre established at the National University of Ireland, Maynooth in November 2001. The Institute seeks to provide a bridge between mathematics and its applications in ICT and biology. In this podcast feed, we make accessible some of the best…

Hamilton Institute

  • Latest episode: Aug 6, 2013
  • New episodes: infrequent
  • Average duration: 56m
  • Episodes: 63


Latest episodes from Hamilton Institute Seminars (iPod / small)

Periodicity of Matrix Powers in Max Algebra

Aug 6, 2013 · 55:20


Speaker: Dr. S. Sergeev Abstract: It is well known that the sequence of max-algebraic powers of irreducible nonnegative matrices is ultimately periodic. We express this periodicity in terms of CSR-representations and give new bounds on the transient time after which the max-algebraic powers become periodic.
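
For intuition, here is a small numerical sketch of the statement (my own toy example, not Dr. Sergeev's CSR machinery; the matrix, its critical cycle and the checked transient are all assumptions of mine):

```python
# Ultimate periodicity of max-plus matrix powers, checked numerically.
import numpy as np

NEG_INF = -np.inf

def maxplus_mul(A, B):
    """Max-plus product: C[i, j] = max_k (A[i, k] + B[k, j])."""
    n = A.shape[0]
    return np.array([[np.max(A[i, :] + B[:, j]) for j in range(n)]
                     for i in range(n)])

# Irreducible matrix; -inf marks a missing edge. The critical cycle
# 1 -> 2 -> 3 -> 1 has weight 4 and length 3, so lambda = 4/3.
A = np.array([[0.0,     1.0,     NEG_INF],
              [NEG_INF, 0.0,     2.0],
              [1.0,     NEG_INF, 0.0]])

powers = [A]
for _ in range(25):
    powers.append(maxplus_mul(powers[-1], A))

# Past the transient, A^(t+3) equals A^t + 3*lambda entrywise.
t, period, lam = 15, 3, 4.0 / 3.0
print(np.allclose(powers[t + period - 1], powers[t - 1] + period * lam))
```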

Very High Speed Networking in VMs and Bare Metal

Jul 4, 2013 · 72:05


Speaker: Prof. L. Rizzo Abstract: In this talk I will give a survey of solutions and tools that we have developed in recent years to achieve extremely high packet processing rates in commodity operating systems, running on bare metal and on virtual machines. Our NETMAP framework supports processing of minimum-size frames from user space at 10 Gbit/s (14.88 Mpps) with very low CPU usage. Netmap is hardware independent, supports multiple NIC types, and does not require an IOMMU or expose critical resources (e.g. device registers) to userspace. A libpcap library running on top of netmap gives instant acceleration to pcap clients without even the need to recompile applications. VALE is a software switch using the netmap API, which delivers over 20 Mpps per port, or 70 Gbit/s with 1500-byte packets. Originally designed to interconnect virtual machines, VALE is also very convenient as a testing tool and as a high-speed IPC mechanism. More recently we have extended QEMU, and with a few small changes (using VALE as a switch, paravirtualizing the e1000 emulator, and with small device driver enhancements), we reached guest-to-guest communication speeds of over 1 Mpps (with socket-based clients) and 5 Mpps (with netmap-based clients). NETMAP and VALE are small kernel modules, part of standard FreeBSD and also available as an add-on for Linux. The QEMU extensions are also available from the author and are being submitted to the qemu-devel list for inclusion in the standard distributions.

ROMA: Random Overlook Mastering ATFM

Mar 20, 2013 · 39:16


Speaker: C. Lancia Abstract: Consider the arrival process defined by t_i = i + ξ_i, where the ξ_i are i.i.d. random variables. First introduced in the 1950s, this arrival process is of remarkable importance in Air Traffic Flow Management and other transportation systems, where scheduled arrivals are intrinsically subject to random variations; other frameworks where this model has proved capable of describing actual job arrivals well include health care and crane handling systems. This talk is divided into two parts. In the first half, I will show through numerical simulations two of the most important features of the model, namely the insensitivity with respect to the law of the delays ξ_i and the remarkably good fit of the simulated queue-length distribution to the empirical distribution obtained from actual arrivals at London Heathrow airport. Further, I will show that the congestion generated by this process is very different from that of a Poisson process, owing to the negative autocorrelation of the process. In the second part, I will restrict the analysis to the case where the delays ξ_i are exponentially distributed. In this context, I will show some preliminary results on a possible strategy for finding the stationary distribution of the queue length using a bivariate generating function.
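
A quick way to see the non-Poissonian congestion is a Lindley-recursion simulation (my own sketch with arbitrary parameters, not the speaker's code): pre-scheduled arrivals t_i = i + ξ_i produce far smaller waiting times than a Poisson stream of the same rate.

```python
# Compare congestion under scheduled-plus-delay arrivals vs Poisson.
import numpy as np

rng = np.random.default_rng(0)
n, service = 200_000, 0.9                # deterministic service, load 0.9

def mean_wait(arrivals):
    arrivals = np.sort(arrivals)
    w, total = 0.0, 0.0
    for gap in np.diff(arrivals):
        w = max(0.0, w + service - gap)  # Lindley recursion for the wait
        total += w
    return total / len(arrivals)

scheduled = np.arange(n) + rng.exponential(5.0, n)  # t_i = i + xi_i
poisson = np.cumsum(rng.exponential(1.0, n))        # rate-1 Poisson stream
print(mean_wait(scheduled), mean_wait(poisson))     # scheduled << Poisson
```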

Machine-to-Machine in Smart Cities & Smart Grids: Vision, Technology & Applications

Jan 20, 2013 · 78:04


Speaker: Dr. M. Dohler Abstract: The unprecedented communication paradigm of machine-to-machine (M2M), facilitating 24/7 ultra-reliable connectivity between an a priori unknown number of automated devices, is currently gripping both industrial and academic communities. Whilst applications are diverse, the in-home market is of particular interest since it is undergoing a fundamental shift from machine-to-human communication towards fully automated M2M. The aim of this presentation is thus to provide academic, technical and industrial insights into the latest key aspects of wireless M2M networks, with particular application to the emerging smart city and smart grid verticals. I will provide an introduction to the particularities of M2M systems. Architectural, technical and privacy requirements, and thus applicable technologies, will be discussed. Notably, we will dwell on the capillary and cellular embodiments of M2M in smart homes. The focus of capillary M2M, useful for real-time data gathering in homes, will be on IEEE (802.15.4e) and IETF (6LoWPAN, ROLL, CoAP) standards-compliant low-power multihop networking designs; furthermore, for the first time, low-power Wi-Fi will be dealt with and positioned within the ecosystem of capillary M2M. The focus of cellular M2M will be on the latest activities, status and trends in leading M2M standardization bodies, with technical focus on ETSI M2M and 3GPP LTE-MTC. Open technical challenges, along with industry's vision on M2M and its shift of industries, will be discussed during the talk.

State Constrained Optimal Control

Nov 28, 2012 · 59:16


Speaker: Prof. R. Vinter Abstract: Estimates on the distance of a nominal state trajectory from the set of state trajectories that are confined to a closed set have an important unifying role in optimal control theory. They can be used to establish non-degeneracy of optimality conditions such as the Pontryagin Maximum Principle, to show that the value function describing the sensitivity of the minimum cost to changes of the initial condition is characterized as a unique generalized solution to the Hamilton-Jacobi equation, and for numerous other purposes. We discuss the validity of various presumed distance estimates and their implications, present recent counter-examples illustrating some unexpected pathologies, and pose some open questions.

Effective Information Delivery Through Opportunistic Replication in Wireless Networks

Nov 27, 2012 · 78:07


Speaker: Prof. L. Tassiulas Abstract: Increased replication of information is observed in modern wireless networks, whether in pre-planned content replication schemes, through opportunistic caching in intermediate relay nodes as the information flows to its final destination, or through overhearing of broadcast information on the wireless channel. In all cases, the information available at other nodes can be used to increase the efficiency of the information delivery process. We will first consider an information-theoretic perspective and present a scheme that exploits opportunistically overheard information to achieve the Shannon capacity of the broadcast erasure channel. Then we will consider information transport in a multi-hop flat wireless network and present schemes for spatial information replication based on popularity, in association with anycast routing schemes, that achieve asymptotically optimal performance.

Dynamics of Some Cholera Models

Nov 21, 2012 · 61:22


Speaker: Prof. P. van den Driessche Abstract: The World Health Organization estimates that there are 3 to 5 million cholera cases per year with 100,000 deaths spread over 40 to 50 countries; for example, there has been a recent cholera outbreak in Haiti. Cholera is a disease caused by the bacterium Vibrio cholerae, which can be transmitted to humans directly by person-to-person contact or indirectly via the environment (mainly through contaminated water). To better understand the dynamics of cholera, a general ordinary differential equation compartmental model is formulated that incorporates these two transmission pathways as well as multiple infection stages and pathogen states. In the model analysis, some matrix theory is used to derive a basic reproduction number, and Lyapunov functions are used to show that this number gives a sharp threshold determining whether cholera dies out or becomes endemic. In the absence of recruitment and death, a final size equation or inequality is derived, and simulations illustrate how assumptions on cholera transmission affect the final size of the epidemic. Further models that incorporate temporary immunity and hyperinfectivity using distributed delays are formulated, and numerical simulations show that oscillatory solutions may occur for parameter values taken from cholera data in the literature.
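
The two-pathway structure is easy to prototype; below is a minimal Euler-stepped sketch of an SIWR-type model (the structure is standard for cholera models, but the parameter values are my own and purely illustrative).

```python
# S: susceptible, I: infectious, W: pathogen in water, R: recovered.
beta_I, beta_W = 0.25, 0.15         # direct and waterborne transmission
gamma, xi, delta = 0.1, 0.05, 0.05  # recovery, shedding, pathogen decay

def step(S, I, W, R, dt=0.1):
    new_inf = (beta_I * I + beta_W * W) * S * dt   # both pathways
    S -= new_inf
    I += new_inf - gamma * I * dt
    W += (xi * I - delta * W) * dt
    R += gamma * I * dt
    return S, I, W, R

state = (0.99, 0.01, 0.0, 0.0)
for _ in range(2000):
    state = step(*state)
print(state)   # final size reflects the combined transmission routes
```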

Distributed Opportunistic Scheduling: A Control Theoretic Approach

Oct 9, 2012 · 59:32


Speaker: Prof. A. Banchs Abstract: Distributed Opportunistic Scheduling (DOS) techniques have recently been proposed to improve the throughput performance of wireless networks. With DOS, each station contends for the channel with a certain access probability. If a contention is successful, the station measures the channel conditions and transmits if the channel quality is above a certain threshold. Otherwise, the station does not use the transmission opportunity, allowing all stations to recontend. A key challenge with DOS is to design a distributed algorithm that optimally adjusts the access probability and the threshold of each station. To address this challenge, we first compute the configuration of these two parameters that jointly optimizes throughput performance in terms of proportional fairness. Then, we propose an adaptive algorithm based on control theory that converges to the desired point of operation. Finally, we conduct a control-theoretic analysis of the algorithm to find a setting for its parameters that provides a good tradeoff between stability and speed of convergence. Simulation results validate the design of the proposed mechanism and confirm its advantages over previous proposals.
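
The access-probability/threshold trade-off can be seen in a toy model (entirely my construction, not the paper's algorithm: contention mini-slots of length tau, unit-length data transmissions, Rayleigh fading):

```python
# Sweep the channel-quality threshold and watch throughput peak.
import numpy as np

rng = np.random.default_rng(4)
N, rho, slots = 10, 0.1, 100_000   # stations, access probability

def throughput(threshold, tau=0.1):
    bits = time = 0.0
    for _ in range(slots):
        time += tau                       # one contention mini-slot
        if (rng.random(N) < rho).sum() != 1:
            continue                      # collision or idle: recontend
        gain = rng.exponential(1.0)       # measured channel power
        if gain >= threshold:
            bits += np.log2(1 + gain)     # transmit for one unit of time
            time += 1.0
    return bits / time

for th in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(th, throughput(th))             # the optimum threshold is > 0
```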

Large-scale urban vehicular networks: mobility and connectivity

Oct 4, 2012 · 52:44


Speaker: Dr. M. Fiore Abstract: Vehicular networks are large-scale communication systems that exploit wireless technologies to interconnect moving cars. Vehicular networks are envisioned to provide drivers with real-time information on potential dangers, road traffic conditions, and travel times, thus improving road safety and traffic efficiency. Direct vehicle-to-vehicle communication is also foreseen to enable non-safety applications, such as pervasive urban sensing and fast data dissemination throughout metropolitan regions. The quantity and relevance of potential usages make pervasive inter-vehicular communication one of the highest-impact future applications of wireless technology, which explains the growing interest of both industry and academia in this research field. In this talk, we will address two intertwined topics in vehicular networks: the modeling of vehicular mobility in large-scale urban environments and the topological characterization of the vehicular network built on top of such mobility. Both are fundamental, yet often overlooked, aspects of vehicular networking, defining the strengths and weaknesses of the vehicle-to-vehicle communication system and dictating the rules for the design of dedicated protocols.

Learning Cell Cycle Variability at the Level of Each Phase

Sep 26, 2012 · 43:06


Speaker: Dr. T. Weber Abstract: Inter-cellular variability in the duration of the cell cycle is a well-documented phenomenon which has been integrated into mathematical models of cell proliferation since the 1970s. Here I present a minimalist stochastic cell cycle model that allows for inter-cellular variability at the level of each single phase, i.e. G1, S and G2M. Fitting this model to flow cytometry data from 5-bromo-2'-deoxyuridine (BrdU) pulse labeling experiments on two different cell lines shows that the mean-field predictions closely mimic the measured average kinetics. However, as indicated by Bayesian inference, scenarios with deterministic or purely stochastic waiting times, especially in the G1 phase, seem to explain the data equally well. To resolve this uncertainty, a novel experimental protocol is proposed that provides sufficient information about cell kinetics to fully determine both the inter-cellular average and the variance of the duration of each of the phases. Finally, I present a case in which this model is extended in order to estimate cell cycle parameters in germinal centers. The latter play a central role in the generation of the highly effective antibodies that protect our body against invading pathogens.

EPT functions: Non-negativity analysis, Lévy processes and Financial applications

Sep 16, 2012 · 59:22


Speaker: Prof. B. Hanzon Abstract: Exponential Polynomial Trigonometric (EPT) functions are considered as probability density functions. A specific matrix-vector representation is proposed for doing calculations with these functions. We investigate when these functions are non-negative and under which conditions the density functions are infinitely divisible, in which case there is an associated Lévy process. Applications to option price computations in finance will be presented. For background information on this topic, the website www.2-ept.com can be consulted.
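
The matrix-vector representation is essentially a state-space realization f(x) = c·exp(Ax)·b on x ≥ 0; here is a minimal sketch (the triple below encodes an Erlang(2) density and is my own example, not taken from the talk; see www.2-ept.com for the actual framework):

```python
# Evaluate an EPT-style density through its (A, b, c) realization.
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [0.0, -1.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

x = 2.5
print(c @ expm(A * x) @ b)   # matrix-vector evaluation
print(x * np.exp(-x))        # closed form of the Erlang(2) density
```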

Playing with Standards: the IEEE 802.11 case

Sep 11, 2012 · 62:47


Speaker: Dr. F. Gringoli Abstract: Experimenting in the field is a key activity for the evolution of the modern Internet: this is especially true for radio access protocols like IEEE 802.11, which are usually affected by unpredictable issues due to noise, competing stations and interference. Here we introduce OpenFWWF, an open-source firmware that implements a fully compliant 802.11 MAC on off-the-shelf WiFi boards, and we show how it can be used in conjunction with the Linux kernel to experiment with the wireless stack. To this end, we further demonstrate how the basic DCF access firmware can easily be customized to explore performance-boosting variations or to measure physical properties of the wireless channel.

In Search of Optimality: Network Coding for Wireless Networks

Aug 28, 2012 · 59:52


Speaker: Dr. M. A. Chaudry Abstract: Network coding has gained significant interest from the research community since the seminal paper by Ahlswede et al. in 2000. Network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature. We focus on network coding for wireless networks; specifically, we investigate the Index Coding problem. In wireless networks, each transmitted packet is broadcast within a certain region and can be overheard by nearby users. When a user needs to transmit packets, it employs Index Coding, which uses knowledge of what the user's neighbors have previously heard (side information) in order to reduce the number of transmissions. The objective is to satisfy the demands of all users with the minimum number of transmissions; each transmitted packet can be a combination of the original packets. Noting that the Index Coding problem is not only NP-hard but NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions saved by employing Index Coding compared to a solution without coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We investigate the computational complexity of both the multiple unicast and multiple multicast scenarios of the Complementary Index Coding problem, and provide polynomial-time approximation algorithms.
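
A single hand-worked instance shows where the savings come from (my own toy example): three users each miss one packet but have overheard the other two, so one XOR broadcast replaces three plain transmissions.

```python
# Index coding with side information, in three asserts.
p1, p2, p3 = 0b1010, 0b0110, 0b1100   # packets as bit patterns

coded = p1 ^ p2 ^ p3                  # the single coded broadcast
assert coded ^ p2 ^ p3 == p1          # user 1 holds p2, p3, wants p1
assert coded ^ p1 ^ p3 == p2          # user 2 holds p1, p3, wants p2
assert coded ^ p1 ^ p2 == p3          # user 3 holds p1, p2, wants p3
# Saved transmissions: 3 - 1 = 2 (the Complementary Index Coding objective).
```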

On Continuous Counting and Learning in a Distributed System

Aug 2, 2012 · 65:53


Speaker: Dr. B. Radunović Abstract: Consider a distributed system that consists of a coordinator node connected to multiple sites. Items from a data stream arrive at the system one by one and are arbitrarily distributed to different sites. The goal of the system is to continuously track a function of the items received so far within a prescribed relative accuracy and at the lowest possible communication cost. This class of problems is called continual distributed stream monitoring. In this talk we will focus on two problems from this class. We will first discuss the count tracking problem (counter), which is an important building block for more complex algorithms. The goal of the counter is to keep track of the sum of all the items from the stream at all times. We show that for a class of input loads a randomized algorithm guarantees to track the count accurately with high probability and has expected communication cost that is sublinear in both the data size and the number of sites. We also establish matching lower bounds. We then illustrate how our non-monotonic counter can be applied to solve more complex problems, such as tracking the second frequency moment and the Bayesian linear regression of the input stream. We will next discuss the online non-stochastic experts problem in the continual distributed setting. Here, at each time-step, one of the sites has to pick one expert from the set of experts, and the same site then receives information about the payoffs of all experts for that round. The goal of the distributed system is to minimize regret with respect to the optimal choice in hindsight, while simultaneously keeping communication to a minimum. This problem is well understood in the centralized setting, but the communication trade-off in the distributed setting is unknown. The two extreme solutions are to communicate with everyone after each payoff, or not to communicate at all. We will discuss how to achieve a trade-off between these two approaches. We will present an algorithm that achieves a non-trivial trade-off and show the difficulties of further improving its performance.
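
To fix ideas, here is the classic deterministic thresholding baseline for count tracking (not the speakers' randomized, non-monotonic algorithm; the threshold rule and parameters are my own sketch): each site reports only when its unreported increments could break the relative-accuracy guarantee.

```python
# eps-accurate distributed counting with few site-to-coordinator messages.
import random

k, eps, n_items = 10, 0.05, 100_000
local = [0] * k      # increments not yet reported
reported = [0] * k   # counts known to the coordinator
messages = 0

def estimate():
    return sum(reported)

for _ in range(n_items):
    i = random.randrange(k)        # item lands at an arbitrary site
    local[i] += 1
    # Sync when the unreported remainder could exceed eps overall.
    if local[i] > eps * max(estimate(), k) / k:
        reported[i] += local[i]
        local[i] = 0
        messages += 1

print(estimate() / n_items, messages)   # close to 1, with few messages
```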

Multi-channel MAC Protocols for Wireless Sensor Networks

Jul 30, 2012 · 40:09


Speaker: Dr. C. Cano Abstract: Wireless Sensor Networks (WSNs) are networks formed by highly constrained devices that communicate measured environmental data using low-power wireless transmissions. The increase of spectrum utilization in non-licensed bands along with the reduced power used by these nodes is expected to cause high interference problems in WSNs. Therefore, the design of new dynamic spectrum access techniques specifically tailored to these networks plays an important role for their future development. In this talk the main challenges of dynamic spectrum access in WSNs will be described and a first approach to coordinate sensor nodes will be presented.

Networking Infrastructure and Data Management for Cyber-Physical Systems

Jul 9, 2012 · 68:32


Speaker: S. Han Abstract: A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem and interconnect these physically separated subsystems to achieve secure, reliable and real-time communication is a big challenge. In this talk, I will first present a TDMA-based low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems that require flexible topology control, secure and reliable communication, and adjustable real-time service support. I will describe the network management techniques for ensuring reliable routing and real-time services inside the subsystems, and the data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate these proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. I will also present a lightweight and scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer. This approach makes the underlying connectivity technologies transparent to application developers and thus enables rapid application development and efficient migration among different CPS platforms.

Cracking the Cutoff Window

Jun 10, 2012 · 39:38


Speaker: C. Lancia Abstract: The cutoff phenomenon is the abrupt convergence to stationarity of a Markov chain. It is characterized by a narrow window centered around a cutoff time in which the distance from stationarity suddenly drops from 1 to 0. All the examples in which cutoff has been detected clearly indicate that a drift towards the opportune quantiles of the stationary measure may be held responsible for this phenomenon. In the case of birth-and-death chains this mechanism is fairly well understood. I will present a possible generalization of this picture to more general systems and show that there are two sources of randomness contributing to the size of the cutoff window. One is related to the drift towards the relevant quantiles of π and the other to the thermalization in that region of the state space. For one-dimensional systems a sufficiently strong drift ensures that the thermalization is under control, but for higher-dimensional models the thermalization contribution can widen the cutoff window and even destroy the phenomenon completely.
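
A standard textbook example makes the abrupt drop visible (my own illustration, not taken from the talk): a biased birth-and-death chain on {0, ..., n} started at the left edge has cutoff near t = n/(2p-1).

```python
# Total-variation distance to stationarity: flat near 1, then a sharp drop.
import numpy as np

n, p = 200, 0.7
P = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    P[i, min(i + 1, n)] += p       # step right (held at the right wall)
    P[i, max(i - 1, 0)] += 1 - p   # step left (held at the left wall)

r = p / (1 - p)                    # detailed balance gives pi_i ~ r^i
pi = r ** np.arange(n + 1)
pi /= pi.sum()

mu = np.zeros(n + 1)
mu[0] = 1.0
for t in range(1, 801):
    mu = mu @ P
    if t % 100 == 0:               # cutoff time is n/(2p-1) = 500 here
        print(t, round(0.5 * np.abs(mu - pi).sum(), 3))
```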

Reaching Consensus about Gossip

May 27, 2012 · 72:03


Speaker: Prof. P. Thiran Abstract: An increasingly large number of applications require networks to perform decentralized computations over distributed data. A representative problem of these “in-network processing” tasks is the distributed computation of the average of values present at the nodes of a network, known as gossip algorithms. They have recently received significant attention across different communities (networking, algorithms, signal processing, control) because they constitute simple and robust methods for distributed information processing over networks. The first part of the talk surveys some recent results on real-valued (analog) gossip algorithms. For many topologies that are realistic for wireless sensor networks, the classical nearest-neighbor gossip algorithms are slow, but a variation of these algorithms can be proven order-optimal (a cost of O(n) messages for a network of n nodes) for some random geometric graphs. A second improvement, inspired by Uniform Gossip, allows the use of uni-directional paths to compute the average, instead of routing the average back and forth along the same path (one-way paths are better suited to highly dynamic networks). The second part of the talk is devoted to quantized gossip on arbitrary connected networks. By their nature, quantized algorithms cannot produce a real, analog average, but they can (almost surely) reach consensus on the quantized interval that contains the average, in finite time. (This is a joint work with Florence Benezit, Martin Vetterli, Alex Dimakis, Vincent Blondel and John Tsitsiklis.)
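
The nearest-neighbor mechanism itself fits in a few lines (an illustrative sketch of pairwise averaging on a ring, with my own choice of topology and parameters):

```python
# Pairwise gossip: each update preserves the sum, so values contract
# around the global average.
import numpy as np

rng = np.random.default_rng(1)
n = 20
x = rng.uniform(0, 10, n)
target = x.mean()

for _ in range(5000):
    i = rng.integers(n)
    j = (i + 1) % n                    # a random ring edge
    x[i] = x[j] = (x[i] + x[j]) / 2    # local averaging step

print(target, x.min(), x.max())        # the spread shrinks to the mean
```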

The Role of Kemeny's Constant in Properties of Markov Chains

May 8, 2012 · 52:12


Speaker: Prof. J. J. Hunter Abstract: In a finite m-state irreducible Markov chain with stationary probabilities {pi_i} and mean first passage times m_{ij} (mean recurrence time when i=j), it was first shown by Kemeny and Snell that sum_{j=1}^{m} pi_j m_{ij} is a constant, K, not depending on i. This constant has since become known as Kemeny's constant. We consider a variety of techniques for finding expressions for K, derive some bounds for K, and explore various applications and interpretations of these results. Interpretations include the expected number of links that a surfer on the World Wide Web located on a random page needs to follow before reaching a desired location, as well as the expected time to mixing in a Markov chain. Various applications have been considered, including some perturbation results, mixing on directed graphs, and the relation to the Kirchhoff index of regular graphs.
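
The constancy of K is easy to verify numerically (a plain check of the definition on an example chain of my own; the fundamental-matrix formula for mean first passage times is the standard Kemeny-Snell one):

```python
# K = sum_j pi_j * m_ij is the same for every starting state i.
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

w, V = np.linalg.eig(P.T)               # stationary distribution pi
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

n = len(pi)
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
M = np.zeros((n, n))                    # mean first passage times
for i in range(n):
    for j in range(n):
        M[i, j] = 1.0 / pi[j] if i == j else (Z[j, j] - Z[i, j]) / pi[j]

print(M @ pi)   # all entries identical: Kemeny's constant K
```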

Experiences in Industrial Mathematics in Ireland

Apr 22, 2012 · 56:25


Speaker: Prof. S. O'Brien Abstract: In the context of the MACSI industrial mathematics group, we look at the types of problems which have arisen from industrial collaboration and examine a couple of these in detail. In particular, we look at a mathematical model for etching glass with acids, which arose from a Study Group with Industry problem presented by Waterford Crystal.

Geographically weighted regression: modelling spatial heterogeneity

Mar 20, 2012 · 65:02


Speaker: Martin Charlton Abstract: Geographically Weighted Regression (GWR) is a technique for exploratory spatial data analysis. In "normal" regression with data for spatial objects we assume that the relationship we are modelling is uniform across the study area - that is, the estimated regression parameters are "whole-map" statistics. In many situations this is not necessarily the case, as mapping the residuals (the differences between the observed and predicted data) may reveal. Many different solutions have been proposed for dealing with spatial variation in these relationships. GWR provides a means of modelling such relationships. This seminar outlines the characteristics of spatial data and the challenges their use poses for analysis, explains the ideas underpinning geographically weighted regression, and details the process of estimating and interpreting the outputs from GWR models. We finish with a brief survey of current issues in GWR and potential future developments.
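
The core computation is just a kernel-weighted least-squares fit repeated at every location; here is a bare-bones sketch (my own synthetic data and bandwidth choice, for illustration only):

```python
# GWR in miniature: locally weighted regression recovers a slope that
# drifts across the study area, which a "whole-map" fit would average out.
import numpy as np

rng = np.random.default_rng(6)
n = 300
coords = rng.uniform(0, 10, (n, 2))
x = rng.normal(size=n)
beta_true = 1.0 + 0.3 * coords[:, 0]        # slope varies west to east
y = beta_true * x + rng.normal(scale=0.2, size=n)

X = np.column_stack([np.ones(n), x])
bandwidth = 1.5

def local_fit(site):
    d2 = ((coords - coords[site]) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel weights
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

for site in (0, 1, 2):
    print(coords[site, 0], local_fit(site)[1])  # slope tracks 1 + 0.3*east
```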

Cascade Dynamics on Complex Networks

Mar 13, 2012 · 70:26


Speaker: Dr. A. Hackett Abstract: A cascade or avalanche is observed when interactions between the components of a system allow an initially localized effect to propagate globally. For example, the malfunction of technological systems like email networks or electrical power grids is often attributable to a cascade of failures triggered by some isolated event. Similarly, the transmission of infectious diseases and the adoption of innovations or cultural fads may induce cascades among people in society. It has been extensively demonstrated that such dynamics depend sensitively on the patterns of interaction laid out in the underlying network of the system. One of the primary goals of the study of complex networks is to provide a sound theoretical basis for this dependence. In this seminar we discuss some recent progress in modelling the interaction between network structure and dynamics. Focusing on the phenomenon of high clustering, we present two recently proposed classes of random graphs that achieve non-zero clustering coefficients. We provide an analytically tractable framework for modeling cascades in both of these classes. This framework is then used to calculate the mean cascade size and the cascade threshold for a broad class of binary-state dynamics.

Exploit prediction to handle mobility in wireless ad hoc networks

Feb 29, 2012 · 48:49


Speaker: Dr. X. Li Abstract: Node mobility often hinders the networking process in wireless ad hoc networks. In this talk, we will introduce two of our recent works that address this problem through a prediction approach. The first proposes an AutoRegressive Hello protocol (ARH) for mobile ad hoc networks. A hello protocol is a basic tool for neighborhood discovery: it requires nodes to announce their existence/aliveness by periodic ‘hello’ messages. ARH evolves along with network dynamics by predicting node mobility, and seamlessly tunes its ‘hello’ frequency using only local knowledge. The second work proposes a distributed Prediction-based Secure and Reliable routing framework (PSR) for wireless body area networks. In this protocol, each node predicts the quality of every incidental link, as well as any change in its neighbor set, based on an autoregressive model. According to the prediction result, it selects the next routing hop and decides whether to enable or disable source authentication.

Juggler's Exclusion Process

Jan 31, 2012 · 52:17


Speaker: Prof. L. Leskelä Abstract: The juggler's exclusion process describes a system of particles on the positive integers where particles drift down to zero at unit speed. After a particle hits zero, it jumps into a randomly chosen unoccupied site. I will model the system as a set-valued Markov process and show that the process is ergodic if the family of jump height distributions is uniformly integrable. In a special case where the particles perform jumps in an entropy-maximizing fashion, the process reaches its equilibrium in finite nonrandom time, and the equilibrium distribution can be represented as a Gibbs measure conforming to a linear gravitational potential. Time permitting, I will also discuss a recent result which sharply characterizes uniform integrability using the theory of stochastic orders, and allows one to interpret the dominating function in Lebesgue's dominated convergence theorem in a natural probabilistic way. This talk is based on joint work with Harri Varpanen (Aalto University, Finland) and Matti Vihola (University of Jyväskylä, Finland).

Exploratory analysis of human mobility and activities from geo-referenced communication data streams

Jan 18, 2012 · 46:47


Speaker: Dr. A. Pozdnoukhov Abstract: Communication technologies, with their very high penetration into society, can serve as a particularly rich source of information to explore and model the evolution of complex social systems. This talk presents a framework of methods useful for exploratory analysis, modelling and visualization of the data streams available from Twitter, instant messenger services and mobile phone communication logs. We apply probabilistic topic models to uncover the temporal evolution and spatial variability of the population’s response to various stimuli such as large-scale sporting, political or cultural events. We demonstrate how atypical activity levels can be identified by fitting non-homogeneous Markov-modulated Poisson processes and exploring the spatial variability of the component corresponding to unusual bursts/lulls of human activity. Finally, we present initial ideas on the combined use of the available data sources and models within a joint large-scale geocomputation framework to uncover the complex interplay of mobility and communication patterns.

Diagonal Stability and Completely Positive Matrices

Oct 16, 2011 · 39:33


Speaker: Prof. A. Berman Abstract: In this paper a general notion of common diagonal Lyapunov matrix is formulated for a collection of n×n matrices A_1,...,A_s and polyhedral cones K_1,...,K_s in R^n. Necessary and sufficient conditions are derived for the existence of a common diagonal Lyapunov matrix in this setting. This talk is based on joint work with Christopher King & Robert Shorten.

Load balancing for Markov chains

Oct 16, 2011 · 39:18


Speaker: Prof. S. Kirkland Abstract: A square matrix T is called stochastic if its entries are nonnegative and its row sums are all equal to one. Stochastic matrices are the centrepiece of the theory of discrete-time, time-homogeneous Markov chains on a finite state space. If some power of the stochastic matrix T has all positive entries, then there is a unique left eigenvector for T, known as the stationary distribution, to which the iterates of the Markov chain converge, regardless of the initial distribution for the chain. Thus, in this setting, the stationary distribution can be thought of as giving the probability that the chain is in a particular state over the long run. In many applications, the stochastic matrix under consideration is equipped with an underlying combinatorial structure, which can be recorded in a directed graph. Given a stochastic matrix T, how are the entries in the stationary distribution influenced by the structure of the directed graph associated with T? In this talk we investigate a question of that type by finding the minimum value of the maximum entry in the stationary distribution for T, as T ranges over the set of stochastic matrices with a given directed graph. The solution involves techniques from matrix theory, graph theory, and nonlinear programming.
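
To ground the notation, here is the plain computation that sits inside the question (only the evaluation of the maximum stationary entry for one fixed T; the talk's optimization over all matrices sharing a digraph is not attempted here):

```python
# Stationary distribution of a stochastic matrix by power iteration.
import numpy as np

T = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])   # zero pattern = the fixed directed graph

pi = np.ones(3) / 3
for _ in range(500):
    pi = pi @ T                   # left iteration preserves the total mass

print(pi, pi.max())               # the maximum stationary entry for this T
```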

The Symmetric Nonnegative Inverse Eigenvalue Problem

Oct 16, 2011 · 31:54


Speaker: Dr. H. Šmigoc Abstract: The question of which lists of complex numbers are the spectra of nonnegative matrices is known as the nonnegative inverse eigenvalue problem, and the same question posed for symmetric nonnegative matrices is called the symmetric nonnegative inverse eigenvalue problem. In the talk we will present an overview of some recent results on the symmetric nonnegative inverse eigenvalue problem. Joint work with T. J. Laffey.

On the Block Numerical Range of Operators in Banach Spaces

Oct 16, 2011 · 37:52


Speaker: Prof. K.-H. Förster Abstract: In this talk the following topics will be discussed: the numerical range of operators in Banach spaces; the block numerical range of operators; the block numerical range of operator functions; and the block numerical range of m-monic Perron-Frobenius matrix polynomials.

Essentially Negative News About Positive Systems

Oct 16, 2011 · 46:25


Speaker: Prof. P. Colaneri Abstract: In this paper the discretisation of switched and non-switched linear positive systems using Padé approximations is considered. Padé approximations to the matrix exponential are sometimes used by control engineers for discretising continuous-time systems and for control system design. We observe that this method of approximation is not suited to the discretisation of positive dynamic systems, for two key reasons. First, certain types of Lyapunov stability are not, in general, preserved. Secondly, and more seriously, positivity need not be preserved, even when stability is. Finally, we present an alternative approximation to the matrix exponential which preserves positivity, as well as linear and quadratic stability. This talk is based on joint work with Steve Kirkland, Annalisa Zappavigna & Robert Shorten.
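
The positivity failure is easy to reproduce (a two-state example of my own; the (1,1) diagonal Padé map is the bilinear/Tustin transform familiar to control engineers):

```python
# Exact exponential preserves positivity of a Metzler system; the
# (1,1) Pade discretisation need not.
import numpy as np
from scipy.linalg import expm, inv

A = np.array([[-10.0, 1.0],
              [1.0, -10.0]])   # Metzler, so expm(A*h) >= 0 entrywise
h = 1.0

pade = inv(np.eye(2) - h / 2 * A) @ (np.eye(2) + h / 2 * A)
print(expm(A * h))   # entrywise nonnegative, as theory guarantees
print(pade)          # diagonal entries go negative: positivity is lost
```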

Some relationships between formal power series and nonnegative matrices

Oct 16, 2011 · 44:29


Speaker: Prof. T. Laffey Abstract: Let σ = (λ_1,...,λ_n) be a list of complex numbers which we aim to realize constructively as the spectrum of a nonnegative matrix. Most constructions available in the literature rely on building matrices related to companion matrices from the polynomial f(x) = (x-λ_1)...(x-λ_n). Kim, Ormes and Roush (JAMS 2000) showed how certain formal power series related to f(x), which have all coefficients, other than the leading one, negative, can be used in finding constructions over the semiring of polynomials with nonnegative coefficients, while, in joint work, Šmigoc and this author (ELA 17 (2008) 333-342, LAMA 58 (2010), 1053-1059) have used polynomials having all their non-leading coefficients negative, to find realizations when σ has not more than two entries with positive real parts. Beginning with the observation that if λ_1,...,λ_n are all positive, then the Taylor expansion of the nth root of F(t) = (1-λ_1t)...(1-λ_nt) about t=0 has all its non-leading coefficients negative, we present a number of results on the negativity of the coefficients of power series and their applications to nonnegative matrices.

Maximal exponents of polyhedral cones

Oct 16, 2011 · 48:32


Speaker: Prof. R. Loewy Abstract: Let K be a proper (i.e., closed, pointed, full and convex) cone in R^n. We consider A ∈ R^(n×n) which is K-primitive, that is, there exists a positive integer l such that A^l x ∈ int K for every 0 ≠ x ∈ K. The smallest such l is called the exponent of A, denoted by γ(A). For a polyhedral cone K, the maximum value of γ(A), taken over all K-primitive matrices A, is denoted by γ(K). Our main result is that for any positive integers m, n with 3 ≤ n ≤ m, the maximum value of γ(K), as K runs through all n-dimensional polyhedral cones with m extreme rays, equals (n-1)(m-1) + (1 + (-1)^((n-1)m))/2. We will consider various uniqueness issues related to the main result, as well as its connections to known results. This talk is based on joint work with Micha Perles and Bit-Shun Tam.
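
For the classical cone K = R^n_+ (my simplifying assumption, not the general polyhedral setting of the talk), the exponent is just the least power with strictly positive entries, and Wielandt's classical example attains (n-1)^2 + 1:

```python
# Exponent of a primitive nonnegative matrix via its zero pattern.
import numpy as np

def exponent(A, max_l=200):
    """Least l with A^l entrywise positive (None if not primitive)."""
    pattern = (A > 0).astype(int)
    B = pattern.copy()
    for l in range(1, max_l + 1):
        if B.min() > 0:
            return l
        B = np.minimum(B @ pattern, 1)   # zero pattern of the next power
    return None

n = 4                       # Wielandt graph: an n-cycle plus one chord
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = 1.0
A[n - 1, 0] = 1.0
A[n - 1, 1] = 1.0
print(exponent(A))          # (n-1)^2 + 1 = 10
```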

From nonnegative matrices to nonnegative tensors

Oct 16, 2011 · 43:56


Speaker: Prof. S. Friedland Abstract: In this talk we will discuss a number of generalizations of results on nonnegative matrices to nonnegative tensors, such as: irreducibility and weak irreducibility, the Perron-Frobenius theorem, the Collatz-Wielandt characterization, Kingman's inequality, the Karlin-Ost and Friedland theorems, the tropical spectral radius, diagonal scaling, the Friedland-Karlin inequality, and nonnegative multilinear forms.

Fundamental delay bounds in peer-to-peer chunk-based real-time streaming systems

Aug 10, 2011 · 75:59


Speaker: Prof. G. Bianchi Abstract: In this talk we address the following question: what is the minimum theoretical delay performance achievable by an overlay peer-to-peer streaming system where the streamed content is subdivided into chunks? We first show that, when posed for chunk-based systems, and as a consequence of the store-and-forward way in which chunks are delivered across the network, this question has a fundamentally different answer from the case of systems where the streamed content is distributed through one or more flows (sub-streams). We then proceed by defining a convenient performance metric, called the "stream diffusion metric", which is directly related to the end-to-end minimum delay achievable in a P2P streaming network, but which allows us to circumvent the complexity that emerges when dealing with delay directly. We further derive a performance bound for this metric, and we show how the bound relates to two fundamental parameters: the upload bandwidth available at each node, and the number of neighbors a node may deliver chunks to. Quite interestingly, n-step Fibonacci sequences play a key role in this bound, and appear to set the laws that characterize the optimal operation of chunk-based systems. Finally, we constructively show by means of which topologies and system operation this bound is attainable.

Robot Navigation and Mapping

Aug 8, 2011 · 65:33


Speaker: Prof. J. Leonard Abstract: This talk will have two parts. In part one, we will review recent progress in mobile robotics, focusing on the problems of simultaneous mapping and localization (SLAM) and cooperative navigation of mobile sensor networks. The problem of SLAM is stated as follows: starting from an initial position, a mobile robot travels through a sequence of positions and obtains a set of sensor measurements at each position. The goal is for the mobile robot to process the sensor data to compute an estimate of its position while concurrently building a map of the environment. We will present SLAM results for several scenarios including land robot mapping of large-scale environments and undersea mapping using optical imaging sensors. We will also describe work on cooperative navigation for networks of autonomous underwater vehicles (AUVs) and autonomous sea-surface vehicles (ASVs). In the second part of the talk, we will provide an overview of MIT's entry in the 2007 DARPA Urban Challenge. The goal of this effort was to produce a car that can drive autonomously in traffic. Our team developed a novel strategy for using a large number of inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprising an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. The performance of our system in the NQE and race events will be reviewed, and ideas for future research will be discussed. For more information, see http://grandchallenge.mit.edu. Joint work with Seth Teller, Michael Bosse, Paul Newman, Ryan Eustice, Matthew Walter, Hanumant Singh, Henrik Schmidt, Mike Benjamin, Alexander Bahr, Joseph Curcio, Andrew Patrikalakis, Matt Antone, David Barrett, Mitch Berger, Ryan Buckley, Stefan Campbell, Alexander Epstein, Gaston Fiore, Luke Fletcher, Emilio Frazzoli, Robert Galejs, Jonathan How, Albert Huang, Karl Iagnemma, Troy Jones, Sertac Karaman, Olivier Koch, Siddhartha Krishnamurthy, Yoshi Kuwata, Keoni Maheloni, David Moore, Katy Moyer, Edwin Olson, Andrew Patrikalakis, Steve Peters, Stephen Proulx, Nicholas Roy, Daniela Rus, Chris Sanders, Seth Teller, Justin Teo, Robert Truax, Matthew Walter, and Jonathan Williams.

Humanoid Robot Soccer 101

Aug 8, 2011 · 78:19


Speaker: Dr. T. Röfer Abstract: Building the software for a competitive robot soccer team is a challenging task. The robots have to perceive their environment, estimate where they and the other relevant objects are located on the field, decide what to do, and execute those decisions. All this has to happen in real time, on board the robots, with limited computing power, and not only for a single robot but for the whole team. The lecture will give a survey of these tasks, using the methods of the team B-Human in the RoboCup Standard Platform League as an example.

An Introduction to R

Jun 2, 2011 · 59:01


Speaker: C. Walz Abstract: A first introduction to R.

Advances in non-linear distortion methods of synthesis and processing of musical signals

Mar 22, 2011 · 66:27


Speaker: Dr. V. Lazzarini Abstract: Non-linear distortion methods form a set of elegant and computationally economic methods of synthesis and processing for musical applications. Among these, we find the famous Frequency Modulation synthesis, as developed by Chowning and made popular by Yamaha. In addition, various other techniques, including Discrete Summation Formulae, Waveshaping and Phase Distortion, can be cast in the same group of non-linear distortion methods (and often given alternative interpretations). Research in the area was very limited from the mid-1990s until a recent series of developments spurred new interest in these ideas. In this talk, I will first briefly introduce the principles of non-linear distortion, providing an overview of the area. I will then follow this with a tour of recent work, which will include adaptive methods, virtual analogue models and analysis-synthesis applications.

Lifecycle of HIV-infected cells

Mar 4, 2011 · 54:43


Speaker: Dr. J. Petravic Abstract: In HIV dynamics models, it is commonly assumed that HIV-infected cells all have the same viral production and death rates. We explored the dynamics of viral production and death in vitro to determine the validity of this assumption. We infected human cells with HIV-1 constructs that expressed enhanced green fluorescent protein (EGFP) and determined the amount of viral proteins produced by infected cells. Analysis of the flow cytometry data showed that the productively infected cells exhibited a broad, approximately log-normal distribution of viral protein content (spanning several orders of magnitude) that changed its shape and mean fluorescence intensity over time, and that the population death rate apparently did not correlate with mean EGFP content. We assumed that the observed EGFP fluorescence level represented the balance of protein production and degradation. In our model of the infected cell population, the EGFP fluorescence distribution at any time depended on the probability distributions of four independent parameters: the time to the start of protein production, the protein production and degradation rates, and the lifespan of infected cells. After exploring possible combinations of parameter distributions, we found that a distribution of protein production rates that is negatively correlated with the time to the start of viral protein production can explain the observed time course of the distribution of EGFP intensity.

Programming stem cells: modeling stem cell dynamics and organ development

Feb 22, 2011 · 40:32


Speaker: Dr. Y. Setty Abstract: In recent years, we have used software engineering tools to develop reactive models to simulate and analyze the development of organs. The modeled systems embody highly complex and dynamic processes by which a set of precursor stem cells proliferate, differentiate and move to form a functioning tissue. Three organs from diverse evolutionary organisms have been modeled in this way: the mouse pancreas, the C. elegans gonad, and partial rodent brain development. Analysis and execution of the models provided a dynamic representation of development, anticipated known experimental results, and proposed novel testable predictions. In my talk, I will discuss challenges, goals and achievements in this direction in science.

Vehicle-2-x Communication

Feb 17, 2011 · 69:21


Speaker: Dr. I. Radusch Abstract: Future drivers and vehicles will benefit from upcoming integrated communication devices three-fold: communication will increase safety and efficiency in traffic, as well as make driving more enjoyable. Upcoming field operational tests will assess whether available standards and implementations are ready for wide-scale deployment. Additionally, simulation environments such as VSimRTI allow comprehensive pre-validation of novel vehicle functions utilizing vehicle-2-x communication.

Event-Driven Automation in Laser-Scanning Microscopy Applied to Live Cell Imaging

Dec 14, 2010 · 38:29


Speaker: Dr. J. Wenus Abstract: Microscopy of living cells is heavily employed in biomedicine to understand the mechanisms of disease progression and to develop novel pharmaceuticals. In particular, confocal microscopy, which relies on laser-based excitation of fluorescent cellular biomarkers, is frequently used for understanding the molecular actions of therapeutic drugs on abnormal cells. However, prolonged exposure to highly energetic laser radiation often leads to light-induced cell death before any spontaneous effects can occur --- an effect known as 'photo-toxicity'. To address this problem we have developed an automated live-cell imaging system, 'ALISSA', which employs online image processing and analysis to automatically detect biological events and then trigger appropriate changes in the image acquisition settings. This way we minimize photo-toxicity, obtain higher-quality imaging data and minimize direct user involvement by introducing more automation into the whole experimental process. So far, ALISSA has been used in studies on cancer cells and neurons at the Royal College of Surgeons in Ireland, and it is currently under development aimed towards applications in commercial high-content screening systems. This is joint work between the RCSI, Dublin (H. Huber, H. Duessmann, J. Prehn) and the Hamilton Institute, NUI Maynooth (J. Wenus, P. Paul, D. Kalamatianos, P. Wellstead), with involvement from Siemens and Carl Zeiss MicroImaging. We gratefully acknowledge financial support from the National Biophotonics and Imaging Platform Ireland (HEA PRTLI Cycle 4).

Spectrum Sharing in Cognitive Radio with Quantized Channel Information

Jul 14, 2010 · 59:12


Speaker: Dr. S. Dey Abstract: In this talk, we consider a wideband spectrum sharing system where a secondary user can share a number of orthogonal frequency bands, each licensed to a distinct primary user. We address the problem of optimal secondary transmit power allocation for ergodic capacity maximization, subject to an average sum (across the bands) transmit power constraint and individual average interference constraints on the primary users. The major contribution of our work lies in considering quantized channel state information (CSI) (for the vector channel space consisting of all secondary-to-secondary and secondary-to-primary channels) at the secondary transmitter, as opposed to the prevalent assumption of full CSI in most existing work. It is assumed that a band manager or a cognitive radio service provider has access to the full CSI from the secondary and primary receivers, designs (offline) an optimal power codebook based on the statistical information (channel distributions) of the channels, and feeds back the index of the codebook to the secondary transmitter for every channel realization in real time, via a delay-free noiseless limited feedback channel. A modified Generalized Lloyd-type algorithm (GLA) is designed for deriving the optimal power codebook, which is proved to be globally convergent and empirically consistent. An approximate quantized power allocation (AQPA) algorithm is presented that performs very close to its GLA-based counterpart for large numbers of feedback bits and is significantly faster. We also present an extension of the modified GLA-based quantized power codebook design algorithm for the case when the feedback channel is noisy. Numerical studies illustrate that with only 3-4 bits of feedback, the modified GLA-based algorithms provide secondary ergodic capacity very close to that achieved with full CSI, and with as little as 4 bits of feedback, AQPA provides comparable performance, making it an attractive choice for practical implementation. Various open problems and future research directions will also be discussed.

Large deviation theory and its applications in statistical mechanics

Mar 23, 2010 · 54:21


Speaker: Dr. H. Touchette Abstract: The theory of large deviations, initiated by Cramér in the 1930s and later developed by Donsker and Varadhan in the 1970s, is an active field of probability theory that finds applications in many subjects, including statistics, finance, actuarial mathematics, engineering, and physics. Its use in physics dates back to the work of Ruelle, Lanford, and the late John Lewis, among others, who used concepts of large deviations in the 1970s and 1980s to study equilibrium systems and to put statistical mechanics on a rigorous footing. I will give in this talk a survey of these applications, as well as more recent ones related to long-range equilibrium systems and nonequilibrium systems, at a level which assumes little knowledge of statistical mechanics or large deviations. As we cover these applications, we will see that large deviation theory and statistical mechanics share a common mathematical structure, which Lewis was well aware of, and which can be summarized by saying that an entropy function is to a physicist what a large deviation function (or rate function) is to a mathematician. Other connections of this sort will be discussed.
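
The rate-function idea can be checked numerically in the simplest case (a textbook coin-flip example of my own, not from the talk):

```python
# Cramer's theorem for fair coins: P(S_n >= a*n) decays like exp(-n*I(a))
# with rate function I(a) = a*log(2a) + (1-a)*log(2(1-a)).
import numpy as np

rng = np.random.default_rng(7)
a, n, trials = 0.6, 200, 500_000
S = rng.binomial(n, 0.5, size=trials)
p_emp = (S >= a * n).mean()

I = a * np.log(2 * a) + (1 - a) * np.log(2 * (1 - a))
# The empirical decay rate approaches I(a) as n grows; at n = 200 a
# polynomial prefactor still separates the two values slightly.
print(-np.log(p_emp) / n, I)
```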

Asymptotic Stability Region of Slotted Aloha

Mar 2, 2010 · 56:21


Speaker: Dr. C. Bordenave Abstract: Consider N queues with non-homogeneous packet arrivals. The queues share a common communication channel. At the beginning of each timeslot, if queue i has a packet, it attempts to access the channel with probability p_i. The attempt is successful when no other queue attempts to access the channel. For arbitrary N, the stability region of such a queuing system is a long-standing open problem. However, as the number of queues N goes to infinity, it is possible to compute the asymptotic stability region. This is joint work with David McDonald (Ottawa) and Alexandre Proutiere (Microsoft).
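
A finite-N simulation conveys the flavour of the stability question (my own toy with symmetric rates; the talk concerns the N → ∞ limit, which this sketch does not compute):

```python
# Symmetric slotted Aloha: total arrival rate 0.3 < 1/e, so backlogs
# should remain bounded in this regime.
import numpy as np

rng = np.random.default_rng(8)
N, T = 50, 200_000
p, lam = 1.0 / N, 0.3 / N
q = np.zeros(N, dtype=int)

for _ in range(T):
    q += rng.random(N) < lam                  # Bernoulli packet arrivals
    attempts = (q > 0) & (rng.random(N) < p)  # backlogged queues contend
    if attempts.sum() == 1:                   # success iff exactly one
        q[np.argmax(attempts)] -= 1

print(q.mean(), q.max())                      # small, bounded backlogs
```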

On the stabilization of discrete-time positive switched systems by means of Lyapunov based switching strategies

Feb 18, 2010 · 42:59


Speaker: Prof. M. E. Valcher Abstract: Positive switched systems typically arise to cope with two distinct modeling needs. On the one hand, switching among different models mathematically formalizes the fact that the system laws change under different operating conditions. On the other hand, the variables to be modeled may be quantities that have no meaning unless positive (temperatures, pressures, population levels, ...). In this talk we consider the class of discrete-time positive switched systems described, at each time t, by the first-order difference equation x(t+1) = A_{sigma(t)} x(t), where sigma is a switching sequence taking values in the finite set {1,2}, and for each index i, A_i is an n x n positive matrix. Assuming that neither A_1 nor A_2 is a Schur matrix, we focus on the stabilizability of the system, namely on the possibility of finding switching strategies that drive to zero the state evolution corresponding to every positive initial state x(0). To this end, we resort to state feedback switching laws, whose value at time t depends on the value of some Lyapunov function in x(t). We first explore quadratic positive definite functions, extending a technique described by De Carlo et al. Later, by taking advantage of the system positivity, we show that other classes of Lyapunov functions, such as linear copositive and quadratic copositive ones, may be used to design state-dependent stabilizing switching laws, and some of them may be designed under weaker conditions on the pair of matrices (A_1, A_2) than those required for quadratic stabilizability. Some comparisons between the performances of the switching strategies are given.

A Phylogenetic Hidden Markov Model for Immune Epitope Discovery

Dec 8, 2009 · 71:37


Speaker: Prof. C. Seoighe Abstract: We describe a phylogenetic model of protein-coding sequence evolution that includes environmental variables. We apply it to a set of viral sequences from individuals with known human leukocyte antigen (HLA) genotype and include parameters to model selective pressures affecting mutations within immunogenic (epitope) regions that facilitate viral evasion of immune responses. We combine this evolutionary model with a hidden Markov model to identify regions of the HIV-1 genome that evolve under immune pressure in the presence of specific HLA class I alleles and may therefore represent potential T cell epitopes. This phylogenetic hidden Markov model (phylo-HMM) provides a probabilistic framework that can be combined with sequence or structural information to enhance epitope prediction.

Stochastic Modelling of T Cell Repertoire Diversity

Nov 17, 2009 · 54:15


Speaker: Dr. C. Molina-París Abstract: T cells are specialised white blood cells that protect the body from infection and are also able to kill infected cells. T cells are characterised by the presence of a special receptor on their cell surface called the T cell receptor (TCR). The specificity of a T cell, namely which pathogens it can recognise, is determined by the molecular structure of its TCR. T cells can be classified according to their TCRs: all T cells that have identical TCRs are said to belong to the same clonotype. There are two types of T cells, naive and memory: naive T cells have not yet encountered pathogen, whereas memory T cells have. In this talk, I will only consider the class of naive T cells. A diverse naive T cell pool is essential to protect against novel infections, as the immune system cannot predict which pathogens the organism will be exposed to during its lifetime. A healthy adult human possesses approximately 10^11 naive T cells, which belong to about 10^7-10^8 different clonotypes. The reliability of the immune response to pathogenic challenge depends critically on the size (how many cells) and diversity (how many different TCRs or clonotypes) of the individual's naive T cell pool. Experimental evidence suggests that interactions between TCRs and self-peptides (a self-peptide is a fragment of a household protein) displayed on the surface of specialised cells, called antigen presenting cells (APCs), are important in controlling naive T cell numbers. Naive T cells undergo one round of cell division after receiving a survival stimulus from these specialised APCs. Whether or not a particular naive T cell can receive a survival signal from a specialised APC depends both on the TCR it expresses and on the array of self-peptides displayed on the surface of the APC. Competition amongst naive T cells for these interactions regulates the diversity of the naive T cell pool. We have made use of a probabilistic (stochastic) model to describe this competition. In particular, we have modeled the time evolution of the number of T cells belonging to a particular clonotype. Our results indicate that competition maximizes TCR diversity by promoting the survival of T cell clonotypes that are most different from each other in terms of the self-peptides they are able to recognise.

The Brain is an Embedding Machine

Sep 29, 2009 · 40:30


Speaker: Dr. R. Clement Abstract: Neural responses are often generated by the physical movement of an object or a limb. Each such set of responses corresponds to a point on a smooth geometrical surface. To be able to manipulate such a representation, the brain assigns coordinates to every point on the surface --- a procedure known as embedding. In the first part of this talk, the properties of the early visual system are exploited to produce a model of coordinate space based on features such as colour, orientation and movement. The feature model has the advantage over the geometric model that it is not restricted to 2- or 3-dimensional pictorial representations. The neural mechanism is highly suited to embedding. In the second part of the talk, the feature-based coordinate space will be used to explore the neural embedding of the sensory stimuli encountered in binocular vision and in the movement of the eye. In the final part of the talk, the limitations on our ability to see objects arising from the neural embedding procedures will be outlined, and in particular, what can be "seen" of the shape of surfaces embedded in more than three dimensions.

From idea to product: Best practices for improving the impact of product development in large organisations

Sep 16, 2009 · 73:25


Speaker: Dr. N. Pettit Abstract: As part of a wider improvement initiative across all parts of our value chain, Danfoss launched an initiative in 2007 to significantly improve its product development processes. The goal was to make radical improvements along the dimensions of value to customer, time to profit, unit cost and quality. In order to do this, we looked around to identify industry-wide accepted best practices to build on. When starting a similar program in production 4 years earlier, there were clear accepted practices that had proved themselves in multiple companies and industry sectors. These are centred on the manufacturing philosophy of Toyota and generally grouped under the term "lean production". They would often be merged with another set of practices termed "six sigma", which came out of Motorola and was championed by GE. In product development we found a different picture. Although many schools of thought have been adopted by industries, often trying to build on the back of lean production ideas (termed, unsurprisingly, "lean product development"), these were found to be relatively immature in their application and narrow in the dimensions they improved when applied. Many proponents backed different tools and methods out of these schools as the "best" best practice, but none appeared to have a track record of significant impact along the multiple dimensions we needed to justify their claims. We undertook a significant exercise to look at the internal processes we wanted to improve. We then separated the tools and methods from the different schools of thought to identify which were relevant to our processes and had a track record of success along at least one dimension. This led us to identify an underlying empirical set of principles that really seemed to drive true impact along all the dimensions we were looking for. Once we had these, we were able to go back and pick and choose a variety of tools and methods from the different schools of thought that embodied one or more of these principles --- stealing with pride. This gave us a set of tools that, when used together, would create the impact we were looking for. Finally, we created a system to adapt, improve and test these tools and methods before spreading them out, so that our people engaged in product development find them relevant, workable, and able to quickly deliver visible and significant improvement to their product development. The talk will outline some of these principles and methods we have built up on this journey.

On the Design of Doubly-Generalized Low-Density Parity-Check Codes

Aug 25, 2009 · 52:55


Speaker: Dr. M. Flanagan Abstract: Doubly-generalized low-density parity-check (D-GLDPC) codes offer an attractive compromise between algebraic and random code design philosophies. In this talk we introduce the concept of D-GLDPC codes, and then provide a solution for the asymptotic growth rate of the weight distribution of any D-GLDPC ensemble. This tool is then used for a detailed analysis of a case study, namely a rate-1/2 D-GLDPC ensemble where all the check nodes are (7,4) Hamming codes and all the variable nodes are length-7 single parity-check codes. It is illustrated how the variable node representations can heavily affect the code properties and how different variable node representations can be combined within the same graph to enhance some of the code parameters. The analysis is conducted over the binary erasure channel. Interesting features of the new codes include the capability of achieving a good compromise between waterfall and error-floor performance while preserving graphical regularity, and threshold values outperforming their LDPC counterparts.
