Podcasts about HMMs

  • 22 PODCASTS
  • 26 EPISODES
  • 37m AVG DURATION
  • 1 NEW EPISODE PER MONTH
  • LATEST: May 20, 2024

POPULARITY

[Popularity trend chart, 2017-2024]


Best podcasts about HMMs

Latest podcast episodes about HMMs

Data Skeptic
HMMs for Behavior

May 20, 2024 · 45:11


Théo Michelot has made a career out of tackling tough ecological questions using time-series data. How do scientists turn a series of GPS location observations over time into useful behavioral data? GPS tech has improved to the point that modern data sets are large and complex. In this episode, Théo takes us through his research and the application of Hidden Markov Models to complex time series data. If you have ever wondered what biologists do with data from those GPS collars you have seen on TV, this is the episode for you! 
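
For readers who want to see roughly what this kind of analysis looks like in code, here is a minimal sketch: derive step lengths from a GPS track and let a two-state Gaussian HMM separate slow (encamped) from fast (travelling) behaviour. The use of the hmmlearn package, the synthetic track, and all variable names are illustrative assumptions, not anything taken from the episode.

```python
# Illustrative sketch (not from the episode): classify movement behaviour
# from a GPS track with a 2-state Gaussian HMM using hmmlearn.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Synthetic GPS track: short steps (encamped) followed by long steps (travelling).
slow = rng.normal(0, 0.01, size=(200, 2)).cumsum(axis=0)
fast = slow[-1] + rng.normal(0, 0.1, size=(200, 2)).cumsum(axis=0)
track = np.vstack([slow, fast])                      # (n_fixes, 2) lon/lat-like coordinates

# Feature: log step length between consecutive fixes.
steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
X = np.log(steps + 1e-9).reshape(-1, 1)

# Fit a 2-state Gaussian HMM and decode the most likely behavioural state per step.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
model.fit(X)
states = model.predict(X)
print("Estimated state means (log step length):", model.means_.ravel())
print("Fraction of time in each state:", np.bincount(states) / len(states))
```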

The Nonlinear Library
AF - Transformers Represent Belief State Geometry in their Residual Stream by Adam Shai

Apr 16, 2024 · 20:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transformers Represent Belief State Geometry in their Residual Stream, published by Adam Shai on April 16, 2024 on The AI Alignment Forum.

Produced while being an affiliate at PIBBSS[1]. The work was done initially with funding from a Lightspeed Grant, and then continued while at PIBBSS. Work done in collaboration with @Paul Riechers, @Lucas Teixeira, @Alexander Gietelink Oldenziel, and Sarah Marzen. Paul was a MATS scholar during some portion of this work. Thanks to Paul, Lucas, Alexander, Sarah, and @Guillaume Corlouer for suggestions on this writeup.

Introduction

What computational structure are we building into LLMs when we train them on next-token prediction? In this post we present evidence that this structure is given by the meta-dynamics of belief updating over hidden states of the data-generating process. We'll explain exactly what this means in the post. We are excited by these results because:

  • We have a formalism that relates training data to internal structures in LLMs.
  • Conceptually, our results mean that LLMs synchronize to their internal world model as they move through the context window.
  • The computation associated with synchronization can be formalized with a framework called Computational Mechanics. In the parlance of Computational Mechanics, we say that LLMs represent the Mixed-State Presentation of the data-generating process.
  • The structure of synchronization is, in general, richer than the world model itself. In this sense, LLMs learn more than a world model.
  • We have increased hope that Computational Mechanics can be leveraged for interpretability and AI Safety more generally.
  • There's just something inherently cool about making a non-trivial prediction - in this case that the transformer will represent a specific fractal structure - and then verifying that the prediction is true.

Concretely, we are able to use Computational Mechanics to make an a priori and specific theoretical prediction about the geometry of residual stream activations (below on the left), and then show that this prediction holds true empirically (below on the right).

Theoretical Framework

In this post we will operationalize training data as being generated by a Hidden Markov Model (HMM)[2]. An HMM has a set of hidden states and transitions between them. The transitions are labeled with a probability and a token that it emits. Here are some example HMMs and data they generate.

Consider the relation a transformer has to an HMM that produced the data it was trained on. This is general - any dataset consisting of sequences of tokens can be represented as having been generated from an HMM. Through the discussion of the theoretical framework, let's assume a simple HMM with the following structure, which we will call the Z1R process[3] (for "zero one random"). The Z1R process has 3 hidden states: S0, S1, and SR. Arrows of the form Sx --a : p%--> Sy denote P(Sy, a | Sx) = p%, that is, the probability of moving to state Sy and emitting the token a, given that the process is in state Sx, is p%. In this way, taking transitions between the states stochastically generates binary strings of the form ...01R01R... where R is a random 50/50 sample from {0, 1}.

The HMM structure is not directly given by the data it produces. Think of the difference between the list of strings this HMM emits (along with their probabilities) and the hidden structure itself[4]. Since the transformer only has access to the strings of emissions from this HMM, and not any information about the hidden states directly, if the transformer learns anything to do with the hidden structure, then it has to do the work of inferring it from the training data. What we will show is that when they predict the next token well, transformers are doing even more computational work than inferring the hidden data-generating process! Do Transformers Learn ...
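
To make the Z1R process concrete, the sketch below simulates the three-state HMM exactly as described above (S0 emits 0 and moves to S1, S1 emits 1 and moves to SR, SR emits 0 or 1 with equal probability and moves to S0) and performs the Bayesian belief update over hidden states after each token, which is the kind of synchronization the post studies. The code is an illustrative reconstruction from the description, not the authors' implementation.

```python
# Illustrative reconstruction of the Z1R ("zero one random") process:
# S0 --0:100%--> S1, S1 --1:100%--> SR, SR --0:50%--> S0, SR --1:50%--> S0.
import numpy as np

states = ["S0", "S1", "SR"]
# T[token][i, j] = P(next state j, emit token | current state i)
T = {
    0: np.array([[0.0, 1.0, 0.0],    # S0 --0:100%--> S1
                 [0.0, 0.0, 0.0],
                 [0.5, 0.0, 0.0]]),  # SR --0:50%--> S0
    1: np.array([[0.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0],    # S1 --1:100%--> SR
                 [0.5, 0.0, 0.0]]),  # SR --1:50%--> S0
}

def sample(n, rng):
    """Generate n tokens from the Z1R process."""
    s, out = 0, []
    for _ in range(n):
        probs = np.array([T[t][s].sum() for t in (0, 1)])   # P(token | current state)
        tok = int(rng.choice([0, 1], p=probs))
        out.append(tok)
        s = int(np.argmax(T[tok][s]))   # next state is deterministic given (state, token)
    return out

def belief_update(belief, token):
    """Bayesian update of P(hidden state) after one observed token (mixed-state view)."""
    new = belief @ T[token]
    return new / new.sum()

rng = np.random.default_rng(0)
tokens = sample(12, rng)
belief = np.ones(3) / 3              # start from the uniform prior over hidden states
print("tokens:", tokens)
for tok in tokens:
    belief = belief_update(belief, tok)
    print(tok, np.round(belief, 3))
```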

The Nonlinear Library
LW - Transformers Represent Belief State Geometry in their Residual Stream by Adam Shai

Apr 16, 2024 · 20:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transformers Represent Belief State Geometry in their Residual Stream, published by Adam Shai on April 16, 2024 on LessWrong. The rest of the description repeats the AI Alignment Forum version above.

The Nonlinear Library: LessWrong
LW - Transformers Represent Belief State Geometry in their Residual Stream by Adam Shai

Apr 16, 2024 · 20:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transformers Represent Belief State Geometry in their Residual Stream, published by Adam Shai on April 16, 2024 on LessWrong. The rest of the description repeats the AI Alignment Forum version above.

Whiskey with Witcher
A Farewell to Henry Cavill

Oct 11, 2023 · 108:20


Five years. Three seasons. Twenty-four episodes. And a whole lot of “Hmms.” It's hard to overstate the impact Henry Cavill has had on The Witcher and the role of Geralt of Rivia, which makes his departure from the series difficult for so many of its fans. But rather than being sad that he's leaving, we're celebrating his time on the Continent with a special tribute episode! Tim and Valerie discuss the role Cavill had in their fandom of the show, share some of our favorite moments featuring his Geralt and read a few tributes written by listeners. And since we can't rightly toast him without a good whiskey, we uncork a bottle of Aberfeldy 12-Year Single Malt for Cavill…and a bottle of Aberfeldy Napa Valley Red Wine Cask 15-Year Single Malt for Liam Hemsworth, the man who will be replacing him.

13 Sided Die
Level 3: Episode 23 – What is Our Process to Make Creativity Interesting?

Oct 6, 2023 · 80:08


Our amazing guest: Courtney gave us the topic for this show and this is something we will ask all our future guests to do! Come listen to Jim use the technical term: hokey pokey. See if you can hear when my phone beeps (sorry)! Learn how we deal with creativity while keeping enjoyment in the hobby! Listen to us hang out and chat with Sean swearing the most! Also - join in on the controversial dice discussion we have!! All this and more in our latest episode of... 13 Sided Die!
Send comments, support and questions to: crystalball@13sideddie.com
// All music is copyright zenX13
Assorted Sound Effects:
Alright! We did it! Female Cheer for games by: SkyRaeVoicing (freesound.org)
Hmms various 1.wav by: Hmms various 1.wav (freesound.org)
Applause.wav by: Salsero_classic (freesound.org)
Salsero_classic by: FunWithSound (freesound.org)
Nokia Keypad Beep.wav by: edinc90 (freesound.org)
Shock (Funny Version) by: Beetlemuse (freesound.org)
Crowd shock.wav by: deleted_user_2104797 (freesound.org)
male what.wav by: Reitanna (freesound.org)

The AI Frontier Podcast
#20 - Hidden Markov Models (HMMs): Sequential Data Analysis and Applications

Jun 4, 2023 · 9:12


Discover the fascinating world of Hidden Markov Models (HMMs) in this episode of "The AI Frontier" podcast. Explore the fundamentals of HMMs, their applications in fields like speech recognition, bioinformatics, and finance, and learn about their limitations and alternatives. Gain insights into the theoretical concepts and real-world use cases, and stay up-to-date with emerging trends in sequential data analysis. Join us on this journey to uncover the power and potential of HMMs in artificial intelligence and machine learning. Support the Show: Keep AI insights flowing – become a supporter of the show! Click the link for details.

PaperPlayer biorxiv neuroscience
Predicting individual traits from models of brain dynamics accurately and reliably using the Fisher kernel

Mar 2, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.03.02.530638v1?rss=1 Authors: Ahrends, C., Vidaurre, D. Abstract: How specifically brain activity unfolds across time, namely the nature of brain dynamics, can sometimes be more predictive of behavioural and cognitive subject traits than both brain structure and summary measures of brain activity that average across time. Brain dynamics can be described by models of varying complexity, but what is the best way to use these models of brain dynamics for characterising subject differences and predicting individual traits is unclear. While most studies aiming at predicting subjects' traits focus on having accurate predictions, for many practical applications, it is critical for the predictions not just to be accurate but also reliable. Kernel methods are a robust and computationally efficient way of expressing differences in brain dynamics between subjects for the sake of predicting individual traits, such as clinical or psychological variables. Using the Hidden Markov model (HMM) as a model of brain dynamics, we here propose the use of the Fisher kernel, a mathematically principled approach to predict phenotypes from subject-specific HMMs. The Fisher kernel is computed in a way that preserves the mathematical structure of the HMM. This results in benefits in terms of both accuracy and reliability when compared with kernels that ignore the structure of the underlying model of brain dynamics. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
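
The Fisher kernel idea can be sketched compactly: fit a group-level HMM, represent each subject by the gradient (Fisher score) of that subject's data log-likelihood with respect to the model parameters, and take inner products of those scores as the kernel. The sketch below is a loose illustration only: it assumes hmmlearn, restricts the gradient to the state means, uses finite differences rather than analytic gradients, and approximates the Fisher information by the identity, none of which is taken from the paper.

```python
# Illustrative sketch of a Fisher kernel built from subject-specific HMM gradients.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)

# Made-up "subjects": each has a time series that alternates between two regimes.
subjects = []
for _ in range(6):
    a = rng.normal(-1.0, 0.5, size=(150, 1))
    b = rng.normal(+1.0, 0.5, size=(150, 1))
    subjects.append(np.vstack([a, b]))

# Group-level HMM fit on all subjects' data concatenated.
group = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
group.fit(np.vstack(subjects), lengths=[len(s) for s in subjects])

def fisher_score(model, X, eps=1e-4):
    """Finite-difference gradient of log p(X | model) w.r.t. the state means only."""
    base = model.means_.copy()
    g = np.zeros_like(base)
    for idx in np.ndindex(*base.shape):
        for sign in (+1, -1):
            model.means_ = base.copy()
            model.means_[idx] += sign * eps
            g[idx] += sign * model.score(X)
    model.means_ = base
    return g.ravel() / (2 * eps)

scores = np.stack([fisher_score(group, s) for s in subjects])
K = scores @ scores.T        # Fisher kernel, identity approximation of the Fisher information
print(np.round(K, 2))
```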

PaperPlayer biorxiv neuroscience
Bayesian multilevel hidden Markov models identify stable state dynamics in longitudinal recordings from macaque primary motor cortex

Oct 21, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.17.512024v1?rss=1 Authors: Kirchherr, S., Mildiner Moraga, S., Coude, G., Bimbi, M., Ferrari, P. F., Aarts, E., Bonaiuto, J. J. Abstract: Neural populations, rather than single neurons, may be the fundamental unit of cortical computation. Analyzing chronically recorded neural population activity is challenging not only because of the high dimensionality of activity in many neurons, but also because of changes in the recorded signal that may or may not be due to neural plasticity. Hidden Markov models (HMMs) are a promising technique for analyzing such data in terms of discrete, latent states, but previous approaches have either not considered the statistical properties of neural spiking data, have not been adaptable to longitudinal data, or have not modeled condition specific differences. We present a multilevel Bayesian HMM which addresses these shortcomings by incorporating multivariate Poisson log-normal emission probability distributions, multilevel parameter estimation, and trial-specific condition covariates. We applied this framework to multi-unit neural spiking data recorded using chronically implanted multi-electrode arrays from macaque primary motor cortex during a cued reaching, grasping, and placing task. We show that the model identifies latent neural population states which are tightly linked to behavioral events, despite the model being trained without any information about event timing. We show that these events represent specific spatiotemporal patterns of neural population activity and that their relationship to behavior is consistent over days of recording. The utility and stability of this approach is demonstrated using a previously learned task, but this multilevel Bayesian HMM framework would be especially suited for future studies of long-term plasticity in neural populations. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
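
The emission model named in the abstract, a multivariate Poisson log-normal distribution, is easy to sketch: given the hidden state, log firing rates are drawn from a state-specific multivariate normal (so neurons can be correlated), and spike counts are then Poisson. The states, dimensions and parameter values below are invented for illustration; this is not the authors' multilevel model.

```python
# Illustrative multivariate Poisson log-normal HMM emissions:
# given hidden state z_t, log-rates ~ MVN(mu[z_t], Sigma[z_t]) and counts ~ Poisson(exp(log-rates)).
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_states = 4, 2

mu = np.array([[0.5, 0.5, 2.0, 2.0],            # state 0: last two neurons fire more
               [2.0, 2.0, 0.5, 0.5]])           # state 1: first two neurons fire more
Sigma = np.stack([0.1 * np.eye(n_neurons) + 0.05,   # correlated log-rates within a state
                  0.1 * np.eye(n_neurons) + 0.05])
transmat = np.array([[0.95, 0.05],
                     [0.05, 0.95]])              # sticky states, like sustained population states

def sample_counts(T):
    z = 0
    states, counts = [], []
    for _ in range(T):
        log_rate = rng.multivariate_normal(mu[z], Sigma[z])
        counts.append(rng.poisson(np.exp(log_rate)))    # spike counts in one time bin
        states.append(z)
        z = rng.choice(n_states, p=transmat[z])
    return np.array(states), np.array(counts)

states, counts = sample_counts(200)
print(counts[:5])
print(states[:5])
```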

Matt and Kate
Your Little Hmms

Jul 14, 2021 · 30:36


What events are new to the Olympics this year? Will the Kate household be able to figure out the television so they can watch the Olympics? Was Michael Jackson a little weird? The answers to these questions, plus Lil Jon Tourette's, in today's show.

Ice Cup Pod
112.5. Hmms This is Good Beer

May 3, 2021 · 41:48


Since we were gone for so long we decided to hit you with a 2 for 1 deal! Of course our boy Lelo deserved a .5 episode.

LIVE! With Mike Kasem & Vernetta Lopez
Star Awards Fashion Hmms and Will Mike's Jokes Finally Tickle Vern?

Apr 20, 2021 · 56:03



PaperPlayer biorxiv biophysics
Generalizing HMMs to Continuous Time for Fast Kinetics: Hidden Markov Jump Processes

Jul 29, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.07.28.225052v1?rss=1 Authors: Kilic, Z., Sgouralis, I., Presse, S. Abstract: The hidden Markov model (HMM) is a framework for time series analysis widely applied to single molecule experiments. It has traditionally been used to interpret signals generated by systems, such as single molecules, evolving in a discrete state space observed at discrete time levels dictated by the data acquisition rate. Within the HMM framework, originally developed for applications outside the Natural Sciences, such as speech recognition, transitions between states, such as molecular conformational states, are modeled as occurring at the end of each data acquisition period and are described using transition probabilities. Yet, while measurements are often performed at discrete time levels in the Natural Sciences, physical systems evolve in continuous time according to transition rates. It then follows that the modeling assumptions underlying the HMM are justified if the transition rates of a physical process from state to state are small as compared to the data acquisition rate. In other words, HMMs apply to slow kinetics. The problem is, as the transition rates are unknown in principle, it is unclear, a priori, whether the HMM applies to a particular system. For this reason, we must generalize HMMs for physical systems, such as single molecules, as these switch between discrete states in continuous time. We do so by exploiting recent mathematical tools developed in the context of inferring Markov jump processes and propose the hidden Markov jump process (HMJP). We explicitly show in what limit the HMJP reduces to the HMM. Resolving the discrete time discrepancy of the HMM has clear implications: we no longer need to assume that processes, such as molecular events, must occur on timescales slower than data acquisition and can learn transition rates even if these are on the same timescale or otherwise exceed data acquisition rates. Copy rights belong to original authors. Visit the link for more info
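
The abstract's central point, that a discrete-time HMM only approximates a system switching in continuous time, can be illustrated with a rate matrix Q: the exact transition probabilities over one acquisition period dt are expm(Q*dt), and the one-transition-per-frame picture (I + Q*dt) is only valid when the rates are small compared to 1/dt. The rates and time steps below are arbitrary illustrative numbers.

```python
# Illustrative comparison: exact transition probabilities over one acquisition period
# (matrix exponential of the rate matrix) vs the slow-kinetics approximation I + Q*dt.
import numpy as np
from scipy.linalg import expm

def discrete_transition(Q, dt):
    return expm(Q * dt)                     # exact P(dt) for a Markov jump process

Q = np.array([[-3.0,  3.0],                 # transition rates (events per second)
              [ 5.0, -5.0]])

for dt in (0.001, 0.1, 1.0):                # data acquisition periods (seconds)
    exact = discrete_transition(Q, dt)
    approx = np.eye(2) + Q * dt             # only sensible when rates << 1/dt
    print(f"dt={dt}: exact row 0 = {np.round(exact[0], 3)}, "
          f"approx row 0 = {np.round(approx[0], 3)}")
```

With the fast rates above, the approximation already produces negative "probabilities" at dt = 1.0, which is the regime where the discrete-time HMM assumption breaks down.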

Data Science et al.
Hidden Markov Model

Jun 9, 2020 · 0:47


HMMs are used in speech recognition systems, computational molecular biology, data compression and in other areas of AI and pattern recognition. Support the show (http://paypal.me/SachinPanicker).

PaperPlayer biorxiv neuroscience
Transient resting-state network dynamics in cognitive ageing

May 21, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.05.19.103531v1?rss=1 Authors: Tibon, R., Tsvetanov, K. A., Price, D., Nesbitt, D., Cam-CAN,, Henson, R. Abstract: It is important to maintain cognitive function in old age, yet the neural substrates that support successful cognitive ageing remain unclear. One factor that might be crucial, but has been overlooked due to limitations of previous data and methods, is the ability of brain networks to flexibly reorganise and coordinate over a millisecond time-scale. Magnetoencephalography (MEG) provides such temporal resolution, and can be combined with Hidden Markov Models (HMMs) to characterise transient neural states. We applied HMMs to resting-state MEG data from a large cohort (N=594) of population-based adults (aged 18-88), who also completed a range of cognitive tasks. Using multivariate analysis of neural and cognitive profiles, we found that decreased occurrence of 'lower-order' brain networks, coupled with increased occurrence of 'higher-order' networks, was associated with both increasing age and impaired fluid intelligence. These results favour theories of age-related reductions in neural efficiency over current theories of age-related functional compensation. Copy rights belong to original authors. Visit the link for more info
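
The "occurrence" of a network state referred to in the abstract is commonly summarized as fractional occupancy, i.e. the proportion of time points assigned to each state, which is straightforward to compute from a decoded state sequence. The snippet below is a generic illustration, not the authors' pipeline.

```python
# Illustrative computation of per-state fractional occupancy from a decoded HMM state path.
import numpy as np

def fractional_occupancy(state_path, n_states):
    """Fraction of time points spent in each hidden state."""
    counts = np.bincount(state_path, minlength=n_states)
    return counts / len(state_path)

rng = np.random.default_rng(0)
path = rng.integers(0, 6, size=1000)       # stand-in for a Viterbi/posterior-decoded state path
print(np.round(fractional_occupancy(path, 6), 3))
```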

Learning Bayesian Statistics
#14 Hidden Markov Models & Statistical Ecology, with Vianey Leos-Barajas

Apr 22, 2020 · 49:01


I bet you love penguins, right? The same goes for koalas, or puppies! But what about sharks? Well, my next guest loves sharks — she loves them so much that she works a lot with marine biologists, even though she’s a statistician! Vianey Leos Barajas is indeed a statistician primarily working in the areas of statistical ecology, time series modeling, Bayesian inference and spatial modeling of environmental data. Vianey did her PhD in statistics at Iowa State University and is now a postdoctoral researcher at North Carolina State University. In this episode, she’ll tell us what she’s working on that involves sharks, sheep and other animals! Trying to model animal movements, Vianey often encounters the dreaded multimodal posteriors. She’ll explain why these can be very tricky to estimate, and why ecological data are particularly suited for hidden Markov models and spatio-temporal models — don’t worry, Vianey will explain what these models are in the episode! Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Links from the show: Vianey on Twitter: https://twitter.com/vianey_lb Hidden Markov Models in the Stan User's Guide: https://mc-stan.org/docs/2_18/stan-users-guide/hmms-section.html Tagging Basketball Events with HMM in Stan: https://mc-stan.org/users/documentation/case-studies/bball-hmm.html HMMs with Python and PyMC3: https://ericmjl.github.io/bayesian-analysis-recipes/notebooks/markov-models/ The Discrete Adjoint Method -- Efficient Derivatives for Functions of Discrete Sequences (Betancourt, Margossian, Leos-Barajas): https://arxiv.org/abs/2002.00326 Vianey will be doing an HMM 90-minute introduction at the International Statistical Ecology Conference in June 2020: http://www.isec2020.org/ Stan for Ecology -- a website for the ecology community in Stan: https://stanecology.github.io/ LatinR 2020 -- 7th to 9th October 2020: https://latin-r.com/ Migramar -- Science for the Conservation of Marine Migratory Species in the Eastern Pacific: http://migramar.org/hi/en/ Pelagios Kakunja -- Know, educate and conserve for a sustainable sea: https://www.pelagioskakunja.org/ Book recommendations: Hidden Markov Models for Time Series: https://www.routledge.com/Hidden-Markov-Models-for-Time-Series-An-Introduction-Using-R-Second-Edition/Zucchini-MacDonald-Langrock/p/book/9781482253832 Handbook of Mixture Analysis: https://www.routledge.com/Handbook-of-Mixture-Analysis-1st-Edition/Fruhwirth-Schnatter-Celeux-Robert/p/book/9781498763813 Pattern Recognition and Machine Learning: http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf --- Send in a voice message: https://anchor.fm/learn-bayes-stats/message

The Booth
Apr. 14, 2020 - Rapper H.M.M.F. (Haters Make Me Famous)

Apr 14, 2020 · 67:12


The Booth Notes – Apr. 14, 2020
Sinista1 returns to the airwaves with a vengeance this Tuesday night at 7:00 PM NY EST!!! If you want to get in on the conversation LIVE on air you can call into the show at (508) 251-5722 or join us in the LIVE Chat on FB!
Topics for the night...
Rapper H.M.M.F. (Haters Make Me Famous): Dominic Pappas joins Sinista1 to talk about his latest video "Petty Juice" and gives us a listen to his other latest single "Respect Over Fame".
Local/National News Booth: Coronavirus – updates. Brockton – 593 confirmed w/34 deaths. Bernie Rubin, co-founder of Bernie and Phyl's, dies of coronavirus at 82. Bernie and his wife built a furniture empire with a risky start in 1983, and it was much beloved in the area, as Jordan's Furniture was.
Entertainment Booth: 16-year-old actor Logan Williams, who appeared on the CW's "The Flash" as young Barry Allen, has died, according to his family, with no cause of death released. (The Wrap) WWE is deemed essential to the State of Florida under the Essential Services List EO 20-91 from the Division of Emergency Management (see pic of the list on our FB page). Producers RZA of the Wu-Tang Clan and DJ Premier squared off in a beat battle this past Saturday night on DJ Premier's IG Live, put on by Verzuz Presents, the creation of Swizz Beats & Timberland, which features live battles between some of the industry's biggest heavyweights, and this battle did not disappoint for a whole two hours!
Legal Booth: Netflix's "How to Fix a Drug Scandal" is "The Booth's" legal homework for those who have not watched. It focuses on the MA drug lab scandal around chemists Annie Dookhan & Sonja Farak. A historic story and one of the biggest behind-the-scenes scandals.
Sports Booth: NFL QB Tavaris Jackson was killed in a car accident in Alabama. Tavaris played with the Seahawks, Vikings & Bills. NHL forward Colby Cave of the Edmonton Oilers, who underwent emergency surgery on his brain after a bleed from a cyst on Tuesday, died Saturday morning. Cave had been a member of the Boston Bruins' organization and spent four-plus seasons with the AHL's Providence Bruins. Hank Steinbrenner, general partner and co-chairperson of the Yankees, died early Tuesday due to a lengthy illness at his Clearwater, Fla., residence, surrounded by family members. He was 63 and his death was NOT COVID-19 related. Steinbrenner was in his 13th year as general partner and 11th as co-chairperson.
Apology Podium: Ganassi Racing's Kyle Larson of the NASCAR 42 car dropped the "N" bomb this past Easter Sunday during the Monza Madness iRacing event live-streaming on Twitch, during lap 6 after crashing. Chip Ganassi Racing immediately suspended Kyle without pay on Monday morning while his two major sponsors, McDonald's & Credit One Bank, cut their ties. Earlier today, after doing their own investigation into the incident, Chip Ganassi Racing terminated Kyle's contract.
Trump Troubles Booth: Dems unifying – former President Barack Obama endorses presidential hopeful Joe Biden just days after Bernie Sanders endorses him, saying he has "qualities we need." To open or not to open – no one wants to rush into "Life Returns to Normal" status as more states begin to "Stay at Home". Trump tweets about Dr. Fauci on Sunday in regards to his firing, as Fauci is now taking fire for his comments back in February.
Event Reminders: Everything is still CANCELLED… LOL, LOL LOL. Veana Marie's EP "Vee" is now available on iTunes & YouTube.
These will be some of our topics on "The Booth" tonight, and don't forget, if you join in & converse in our FB Live chat you could win a FREE t-shirt courtesy of ILoveBostonSports.com! New look on OBS & FB Live with MUCH MORE to COME!!!
#Discuss #AreYouListening #DoYourHomework #TheBooth #Whoobazoo #Sinista1 #SeeYouNextTuesday #7PM #ILoveBostonSports

Like it is Podcast
The Witcher Review: A Tale of F-Bombs and Hmms

Jan 10, 2020 · 33:39


Hannibal and Carla (Book Nasty) dive into the world of the hottest Netflix show out right now, The Witcher. Toss a coin for our education system because we can barely pronounce the names and places of this universe. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/hannibal-k-j-darby/message Support this podcast: https://anchor.fm/hannibal-k-j-darby/support

Fahrradio
Podcast No. 92 – BRT 2017, Summer in Hanover - Fahrradio

Jul 25, 2017 · 43:32


The last episode of our talk show before the summer break comes from Hanover. We were at the Bundes-Radsporttreffen 2017 in Hanover, live in front of an audience. "We" in this case means Hans and his guest, the Radelmädchen Juliane Schumacher. We ran 50% over our usual time, so you get to enjoy 45 minutes of the finest interview talk (with lots of "ums" and "hmms" that we did not edit out). Have fun, and have a nice summer.
The Summer Edition: Hans talks with the Radelmädchen briefly about her book and then about the summer, loosely working through our standard segments.
What we are doing this summer
* Juliane is staying in Berlin, sewing (among other things, stylish bike bags and cycling fashion) and taking short trips.
* Hans is going away and will not be riding a bike.
* Hans actually wanted to learn manuals and wheelie tricks.
* That is not going to happen, so now he will read instead.
Technology and design
* Available now: the Mokumono bike from the Netherlands, built by robots in the Netherlands. (Better to build locally than to ship it across the sea?)
Sport
* BMXCologne in Cologne, with old school and bumper cars.
Reading and watching, special edition: We are doing a Reading & Watching special with our favourite blogs.
Podcasts, video
* The Spokesmen: the oldest cycling podcast I know, since taken over by Carlton Reid. People from the bike industry talk about business, sport and politics.
* Velocast: 00:26 This podcast is actually the reason I got started. John Galloway & Scott O'Raw chatted about cycling and bike racing. These days they report from races and the podcast is a paid subscription.
* velohome: the German Velocast. Road cyclists have to listen to it.
* GMBN, Global Mountain Bike Network: with the weekly Dirt Shed Show; Martyn Ashton, Neil Donoghue and Blake Samson bring the news every week. Compared to Bibi and Daggi B, a laughably small number of subscribers: 555,000.
* They also have recurring segments, e.g. Hacks and Bolts, Caption of the Week, Viewers Edits (viewer videos), Fails and Bails, and Bike Vault (bike critique; there is only Nice and Super Nice).
* There is also a road-bike version, GCN, the Global Cycling Network.
* Seth's Bike Hacks has only 386,000. He does it all on his own, and his style is distinctive: he works on his bike or rides somewhere and narrates from off-camera, all very low-key. Also quite interesting for MTB beginners. Pro tip from Seth: if you have to go, go before the ride.
Blogs
* The Wriders Club: (http://www.thewridersclub.cc/member/)
* DC Rainmaker: Ray Maker has been blogging since 2007 and now lives in Paris. He comes from triathlon and does a lot of product tests. Here he introduces a new drone, the Airdog ADII. *

Linear Digressions
Genetics and Um Detection (HMM Part 2)

Mar 25, 2015 · 14:49


In part two of our series on Hidden Markov Models (HMMs), we talk to Katie and special guest Francesco about more useful and novel applications of HMMs. We revisit Katie's "Um Detector," and hear about how HMMs are used in genetics research.

Linear Digressions
Introducing Hidden Markov Models (HMM Part 1)

Mar 24, 2015 · 14:54


Wikipedia says, "A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states." What does that even mean? In part one of a special two-parter on HMMs, Katie, Ben, and special guest Francesco explain the basics of HMMs, and some simple applications of them in the real world. This episode sets the stage for part two, where we explore the use of HMMs in Modern Genetics, and possibly Katie's "Um Detector."
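
For listeners who want to see the definition in action, the forward algorithm below computes the probability of an observation sequence under a tiny two-state HMM by summing over all possible hidden state paths. The parameter values are invented purely for illustration.

```python
# Illustrative forward algorithm: P(observations) for a 2-state HMM, marginalizing
# over the unobserved (hidden) state path. All parameter values are made up.
import numpy as np

startprob = np.array([0.6, 0.4])              # P(first hidden state)
transmat  = np.array([[0.7, 0.3],             # P(next hidden state | current hidden state)
                      [0.4, 0.6]])
emissions = np.array([[0.9, 0.1],             # P(observation | hidden state)
                      [0.2, 0.8]])

def forward_prob(obs):
    """Probability of the observation sequence, summed over all hidden state paths."""
    alpha = startprob * emissions[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ transmat) * emissions[:, o]
    return alpha.sum()

print(forward_prob([0, 0, 1, 0]))
```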

Computer Science (audio)
Alexey Koloydenko on a Risk-based View of Path Inference in HMMs

Apr 16, 2012 · 39:01


If you experience any technical difficulties with this video or would like to make an accessibility-related request, please send a message to digicomm@uchicago.edu. Partha Niyogi Memorial Conference: "A Risk-based View of the Conventional and New Types of Path Inference in HMMs". This conference is in honor of Partha Niyogi, the Louis Block Professor in Computer Science and Statistics at the University of Chicago. Partha lost his battle with cancer in October of 2010, at the age of 43. Partha made fundamental contributions to a variety of fields including language evolution, statistical inference, and speech recognition. The underlying themes of learning from observations and a rigorous basis for algorithms and models permeated his work.

Computer Science (video)
Alexey Koloydenko on a Risk-based View of Path Inference in HMMs

Apr 13, 2012 · 39:02


If you experience any technical difficulties with this video or would like to make an accessibility-related request, please send a message to digicomm@uchicago.edu. Partha Niyogi Memorial Conference: "A Risk-based View of the Conventional and New Types of Path Inference in HMM". This conference is in honor of Partha Niyogi, the Louis Block Professor in Computer Science and Statistics at the University of Chicago. Partha lost his battle with cancer in October of 2010, at the age of 43. Partha made fundamental contributions to a variety of fields including language evolution, statistical inference, and speech recognition. The underlying themes of learning from observations and a rigorous basis for algorithms and models permeated his work.

Fundamental Algorithms in Bioinformatics
Lecture 24: Hidden Markov models and the Viterbi algorithm

Jan 25, 2010 · 4:10


Finish the discussion of HMMs for CpG islands. Introduction to the Viterbi algorithm (really dynamic programming) to find the most likely sequence of hidden states generating a given sequence.
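
Below is a minimal sketch of the Viterbi dynamic program discussed in the lecture, using a toy two-state (CpG island vs background) model over DNA bases; the transition and emission probabilities are invented for illustration and are not the lecture's numbers.

```python
# Illustrative Viterbi decoding for a toy CpG-island HMM:
# hidden states {island, background}, observations are DNA bases A, C, G, T.
import numpy as np

bases = {"A": 0, "C": 1, "G": 2, "T": 3}
startprob = np.log([0.5, 0.5])
transmat = np.log([[0.9, 0.1],                  # island     -> island / background
                   [0.1, 0.9]])                 # background -> island / background
emissions = np.log([[0.15, 0.35, 0.35, 0.15],   # island: C/G rich
                    [0.30, 0.20, 0.20, 0.30]])  # background

def viterbi(seq):
    obs = [bases[b] for b in seq]
    n_states, n_steps = 2, len(obs)
    score = np.full((n_steps, n_states), -np.inf)
    back = np.zeros((n_steps, n_states), dtype=int)
    score[0] = startprob + emissions[:, obs[0]]
    for t in range(1, n_steps):
        for j in range(n_states):
            cand = score[t - 1] + transmat[:, j]
            back[t, j] = np.argmax(cand)
            score[t, j] = cand[back[t, j]] + emissions[j, obs[t]]
    # Trace back the most likely hidden state path.
    path = [int(np.argmax(score[-1]))]
    for t in range(n_steps - 1, 0, -1):
        path.append(back[t, path[-1]])
    return ["island" if s == 0 else "background" for s in reversed(path)]

print(viterbi("ATATCGCGCGCGATAT"))
```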

CERIAS Security Seminar Podcast
Terran Lane, Machine Learning Techniques for Anomaly Detection in Computer Security

Apr 7, 2000 · 57:52


With the recent phenomenal growth of the availability and connectivity of computing resources and the advent of e-commerce, more valuable and private data is being stored online than ever before. But with greater value and availability comes greater threat. In this talk we examine the information security problem of anomaly detection: recognizing the occurrence of "out of the ordinary" events which may prove to be hazardous. We evaluate this problem as a machine learning task and describe the application of two machine learning techniques: instance-based learning (IBL) and hidden Markov models (HMMs). This work focuses on anomaly detection at the user level (as opposed to the network or system call level), which introduces a number of interesting and complex issues from a machine learning standpoint. In particular, we explore privacy, resource constraints, non-stationarity (a.k.a. concept drift), and performance issues and give empirical analyses based on real user data. We close with some thoughts on extensions to this work and on other areas of application. About the speaker: I graduated from Ballard High School (Louisville, KY) in 1990 and entered the department of Electrical and Computer Engineering (then the department of Electrical Engineering) at Purdue University (West Lafayette, IN) in the fall of that year. I have been here ever since, attaining my bachelor's (BSCEE == Bachelor of Science in Computer and Electrical Engineering) in May of 1994. I immediately plunged into the PhD program, and am currently working toward that degree under the direction of Professor Carla Brodley. Some notes on my research are available.
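
One common way to turn an HMM into an anomaly detector, in the spirit of the talk, is to train it on a user's normal event sequences and flag new windows whose per-event log-likelihood falls below a baseline threshold. The sketch below assumes hmmlearn's CategoricalHMM (available in recent hmmlearn releases) and made-up categorical event codes; it illustrates the general technique, not the speaker's method.

```python
# Illustrative HMM-based anomaly detection: train on "normal" event sequences,
# then flag windows with unusually low per-event log-likelihood.
import numpy as np
from hmmlearn import hmm   # CategoricalHMM requires a recent hmmlearn release

rng = np.random.default_rng(0)

# Made-up "normal" behaviour: mostly events 0-2, with event 3 being rare.
normal = rng.choice(4, size=2000, p=[0.4, 0.3, 0.25, 0.05]).reshape(-1, 1)

model = hmm.CategoricalHMM(n_components=3, n_iter=100, random_state=0)
model.fit(normal)

def window_scores(events, window=50):
    """Per-event log-likelihood of each non-overlapping window."""
    scores = []
    for i in range(0, len(events) - window + 1, window):
        w = np.asarray(events[i:i + window]).reshape(-1, 1)
        scores.append(model.score(w) / window)
    return np.array(scores)

# A new trace whose second half is dominated by the rare event (a crude "anomaly").
trace = np.concatenate([rng.choice(4, size=200, p=[0.4, 0.3, 0.25, 0.05]),
                        rng.choice(4, size=200, p=[0.05, 0.05, 0.05, 0.85])])
scores = window_scores(trace)
threshold = scores[:4].mean() - 3 * scores[:4].std()   # baseline from the early windows
print("window scores:", np.round(scores, 2))
print("anomalous windows:", np.where(scores < threshold)[0])
```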

CERIAS Security Seminar Podcast
Terran Lane, "Machine Learning Techniques for Anomaly Detection in Computer Security"

Apr 7, 2000


The talk description repeats the previous entry.