Podcasts about recurrent neural network

  • 13 PODCASTS
  • 21 EPISODES
  • 33m AVG DURATION
  • ? INFREQUENT EPISODES
  • LATEST: Sep 11, 2023

POPULARITY

(popularity trend chart, 2017-2024)



Latest podcast episodes about recurrent neural network

TechStuff
Did AI Write This?

Sep 11, 2023 · 42:13 · Transcription available


Figuring out if artificial intelligence wrote a block of text can be tricky. Some companies have created tools that claim to determine if text was likely the product of a human author or AI. But as we have learned, these tools aren't reliable. What makes it so difficult to tell who wrote what?

PaperPlayer biorxiv neuroscience
When and why does motor preparation arise in recurrent neural network models of motor control?

Apr 3, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.04.03.535429v1?rss=1
Authors: Schimel, M., Kao, T.-C., Hennequin, G.
Abstract: During delayed ballistic reaches, motor areas consistently display movement-specific activity patterns prior to movement onset. It is unclear why these patterns arise: while they have been proposed to seed an initial neural state from which the movement unfolds, recent experiments have uncovered the presence and necessity of ongoing inputs during movement, which may lessen the need for careful initialization. Here, we modelled the motor cortex as an input-driven dynamical system and asked what the optimal way is to control this system to perform fast delayed reaches. We find that delay-period inputs consistently arise in an optimally controlled model of M1. By studying a variety of network architectures, we could dissect and predict the situations in which it is beneficial for a network to prepare. Finally, we show that optimal input-driven control of neural dynamics gives rise to multiple phases of preparation during reach sequences, providing a novel explanation for experimentally observed features of monkey M1 activity in double reaching.
Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC.
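
The model class named in this abstract, an input-driven recurrent network, is compact enough to sketch. Below is a minimal, illustrative simulation, not the authors' code; the network size, time constants, input pattern, and readout matrix are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    N, dt, tau = 100, 0.01, 0.15                     # units, Euler step (s), time constant (s)
    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # recurrent weights
    B = rng.normal(0.0, 1.0, (N, 2))                 # input weights
    C = rng.normal(0.0, 1.0 / np.sqrt(N), (2, N))    # readout to 2-D hand kinematics

    def step(x, u):
        # One Euler step of tau * dx/dt = -x + W tanh(x) + B u
        return x + (dt / tau) * (-x + W @ np.tanh(x) + B @ u)

    x = np.zeros(N)
    prep = np.array([1.0, 0.0])        # movement-specific delay-period input
    for _ in range(150):               # delay period: inputs shape the pre-movement state
        x = step(x, prep)
    for _ in range(100):               # movement period (ongoing inputs could continue here)
        x = step(x, np.zeros(2))
    print("final hand velocity:", C @ np.tanh(x))

The paper's question then becomes an optimal-control one: choose the input sequence u(t) that minimizes reach error plus an input-energy cost, and ask whether the optimal u(t) is nonzero during the delay.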

PaperPlayer biorxiv neuroscience
Remapping in a recurrent neural network model of navigation and context inference

Jan 26, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.01.25.525596v1?rss=1
Authors: Low, I. I., Giocomo, L. M., Williams, A. H.
Abstract: Neurons in navigational brain regions provide information about position, orientation, and speed relative to environmental landmarks. These cells also change their firing patterns ("remap") in response to changing contextual factors such as environmental cues, task conditions, and behavioral state, which influence neural activity throughout the brain. How can navigational circuits preserve their local roles while responding to global context changes? To investigate this question, we trained recurrent neural network models to track position in simple environments while at the same time reporting transiently-cued context changes. We show that these combined task constraints (navigation and context inference) produce activity patterns that are qualitatively similar to population-wide remapping in the entorhinal cortex, a navigational brain region. Furthermore, the models identify a solution that generalizes to more complex navigation and inference tasks. We thus provide a simple, general, and experimentally-grounded model of remapping as one neural circuit performing both navigation and context inference.
Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC.
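
The combined task constraints described here translate directly into a two-headed recurrent model. A minimal tf.keras sketch under assumed input/output sizes follows; it is not the authors' training code.

    import tensorflow as tf

    T = 200                                    # time steps per trial; illustrative
    inp = tf.keras.Input(shape=(T, 2))         # velocity input + transient context cue
    h = tf.keras.layers.GRU(128, return_sequences=True)(inp)
    position = tf.keras.layers.Dense(1, name="position")(h)    # track-position readout
    context = tf.keras.layers.Dense(1, activation="sigmoid", name="context")(h)
    model = tf.keras.Model(inp, [position, context])
    model.compile(optimizer="adam",
                  loss={"position": "mse", "context": "binary_crossentropy"})

Remapping-like structure would then be sought in the GRU's hidden states around context switches.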

PaperPlayer biorxiv neuroscience
Neural correlates of face perception modeled with a convolutional recurrent neural network

Jan 3, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.01.02.522523v1?rss=1
Authors: O'Reilly, J. A., Wehrman, J., Carey, A., Bedwin, J., Hourn, T., Asadi, F., Sowman, P. F.
Abstract: Event-related potential (ERP) sensitivity to faces is predominantly characterized by an N170 peak that has greater amplitude and shorter latency when elicited by human faces than images of other objects. We developed a computational model of visual ERP generation to study this phenomenon which consisted of a convolutional neural network (CNN) connected to a recurrent neural network (RNN). We used open-access data to develop the model, generated synthetic images for simulating experiments, then collected additional data to validate predictions of these simulations. For modeling, visual stimuli presented during ERP experiments were represented as sequences of images (time x pixels). These were provided as inputs to the model. The CNN transformed these inputs into sequences of vectors that were passed to the RNN. The ERP waveforms evoked by visual stimuli were provided to the RNN as labels for supervised learning. The whole model was trained end-to-end using data from the open-access dataset to reproduce ERP waveforms evoked by visual events. Cross-validation model outputs strongly correlated with open-access (r = 0.98) and validation study data (r = 0.78). Open-access and validation study data correlated similarly (r = 0.81). Some aspects of model behavior were consistent with neural recordings while others were not, suggesting promising albeit limited capacity for modeling the neurophysiology of face-sensitive ERP generation.
Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC.
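
The CNN-to-RNN pipeline the authors describe has a natural expression in tf.keras. This is a hedged sketch with invented image resolution and layer sizes, not the published model.

    import tensorflow as tf

    frames = tf.keras.Input(shape=(None, 64, 64, 1))   # stimulus as a sequence of images
    cnn = tf.keras.Sequential([                        # per-frame feature extractor
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPool2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation="relu"),
    ])
    feats = tf.keras.layers.TimeDistributed(cnn)(frames)   # sequence of feature vectors
    h = tf.keras.layers.GRU(64, return_sequences=True)(feats)
    erp = tf.keras.layers.Dense(1)(h)                  # predicted ERP amplitude per time step
    model = tf.keras.Model(frames, erp)
    model.compile(optimizer="adam", loss="mse")        # supervised on recorded ERP waveforms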

PaperPlayer biorxiv neuroscience
A recurrent neural network model of prefrontal brain activity during a working memory task

Sep 2, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.09.02.506349v1?rss=1
Authors: Piwek, E. P., Stokes, M. G., Summerfield, C. P.
Abstract: When multiple items are held in short-term memory, cues that retrospectively prioritise one item over another (retro-cues) can facilitate subsequent recall. However, the neural and computational underpinnings of this effect are poorly understood. One recent study recorded neural signals in the macaque lateral prefrontal cortex (LPFC) during a retro-cueing task, contrasting delay-period activity before (pre-cue) and after (post-cue) retrocue onset. They reported that in the pre-cue delay, the individual stimuli were maintained in independent subspaces of neural population activity, whereas in the post-cue delay, the prioritised items were rotated into a common subspace, potentially allowing a common readout mechanism. To understand how such representational transitions can be learnt through error minimisation, we trained recurrent neural networks (RNNs) with supervision to perform an equivalent cued-recall task. RNNs were presented with two inputs denoting conjunctive colour-location stimuli, followed by a pre-cue memory delay, a location retrocue, and a post-cue delay. We found that the orthogonal-to-parallel geometry transformation observed in the macaque LPFC emerged naturally in RNNs trained to perform the task. Interestingly, the parallel geometry only developed when the cued information was required to be maintained in short-term memory for several cycles before readout, suggesting that it might confer robustness during maintenance. We extend these findings by analysing the learning dynamics and connectivity patterns of the RNNs, as well as the behaviour of models trained with probabilistic cues, allowing us to make predictions for future studies. Overall, our findings are consistent with recent theoretical accounts which propose that retrocues transform the prioritised memory items into a prospective, action-oriented format.
Copyright belongs to the original authors. Visit the link for more info. Podcast created by PaperPlayer.
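
The cued-recall task has a simple input/output encoding. The sketch below is one plausible reading of the trial structure with invented dimensions, not the authors' setup.

    import tensorflow as tf

    # Trial: two colour-location stimuli -> pre-cue delay -> location retrocue -> post-cue delay
    T, n_colour = 60, 8
    inp = tf.keras.Input(shape=(T, 2 * n_colour + 2))   # one colour channel per location + 2 cue lines
    h = tf.keras.layers.SimpleRNN(200)(inp)             # vanilla RNN, final hidden state
    report = tf.keras.layers.Dense(n_colour, activation="softmax")(h)   # recall the cued colour
    model = tf.keras.Model(inp, report)
    model.compile(optimizer="adam", loss="categorical_crossentropy")

The geometry analyses then compare the hidden-state subspaces occupied by the two items before and after cue onset.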

Machine learning
Recurrent neural network and brain code translation

Feb 18, 2022 · 8:00


Think it do it technology

iCritical Care: Pediatric Critical Care Medicine
SCCMPod-442 Continuous Prediction of Mortality in the PICU: A Recurrent Neural Network Model in a Single-Center Dataset

Sep 2, 2021 · 30:50


As a proof of concept, a recurrent neural network (RNN) model capable of continuously assessing a child's risk of mortality throughout an ICU stay, as a proxy measure of illness severity, was developed using electronic medical record (EMR) data.
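
"Continuous prediction" maps naturally onto a recurrent layer that emits a probability at every time step. A hedged sketch follows, with an invented feature count and layer width rather than the study's actual architecture.

    import tensorflow as tf

    n_features = 40                                  # EMR variables per time step; illustrative
    stay = tf.keras.Input(shape=(None, n_features))  # variable-length ICU stay
    h = tf.keras.layers.LSTM(64, return_sequences=True)(stay)
    risk = tf.keras.layers.Dense(1, activation="sigmoid")(h)   # mortality risk at each step
    model = tf.keras.Model(stay, risk)
    model.compile(optimizer="adam", loss="binary_crossentropy")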

iCritical Care: All Audio
SCCMPod-442 Continuous Prediction of Mortality in the PICU: A Recurrent Neural Network Model in a Single-Center Dataset

Sep 2, 2021 · 30:50


As a proof of concept, a recurrent neural network (RNN) model capable of continuously assessing a child's risk of mortality throughout an ICU stay, as a proxy measure of illness severity, was developed using electronic medical record (EMR) data.

Thinking Elixir Podcast
47: Crypto Trading in Elixir with Kamil Skowron

May 11, 2021 · 26:50


We talk with Kamil Skowron about his YouTube channel that walks people through building a cryptocurrency trading bot in Elixir. We learn how that led him to start a free online book sharing that content. He covers what people will learn from the process, his goal of helping people see a larger working Elixir system, and his experience writing the book. A fun chat!

Elixir Community News
- https://spawnfest.org/ – SpawnFest 2021, a free-to-enter contest with cash prizes! Gather your team!
- https://twitter.com/sean_moriarity/status/1388532916221878272 – Axon gets recurrent neural network support
- https://en.wikipedia.org/wiki/Recurrent_neural_network
- https://github.com/thbar/ex-portmidi – Thibaut Barrère is bringing new life to ex-portmidi
- https://twitter.com/fhunleth/status/1388864429052289025 – Frank Hunleth plays with Nerves and Livebook
- https://twitter.com/fhunleth/status/1388557426283229188 – Erlang and Elixir running on the RISC-V BeagleBoard, the BeagleV
- https://github.com/phoenixframework/phoenix_live_view/pull/1440 – LiveView is getting an HTMLEngine and "heex" templates for components

Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email the show at show@thinkingelixir.com

Discussion Resources
- https://www.youtube.com/c/Frathon – YouTube channel with the tutorial videos
- https://www.elixircryptobot.com/
- https://github.com/frathon/create-a-cryptocurrency-trading-bot-in-elixir – Book in online format
- https://github.com/frathon/create-a-cryptocurrency-trading-bot-in-elixir-source-code – Source code for the book examples
- https://thinkingelixir.com/podcast-episodes/031-crawling-the-web-using-elixir-with-oleg-tarasenko-and-tze-yiing/

Guest Information
- https://twitter.com/kamilskowron – on Twitter
- https://github.com/frathon/ – Frathon on GitHub
- https://github.com/Cinderella-Man – Kamil on GitHub

Find us online
- Message the show: @ThinkingElixir (https://twitter.com/ThinkingElixir)
- Email the show: show@thinkingelixir.com
- Mark Ericksen: @brainlid (https://twitter.com/brainlid)
- David Bernheisel: @bernheisel (https://twitter.com/bernheisel)
- Cade Ward: @cadebward (https://twitter.com/cadebward)

PaperPlayer biorxiv neuroscience
PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks

Oct 1, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.30.321752v1?rss=1
Authors: Ehrlich, D. B., Stone, J. T., Brandfonbrener, D., Atanasov, A., Murray, J. D.
Abstract: Task-trained artificial recurrent neural networks (RNNs) provide a computational modeling framework of increasing interest and application in computational, systems, and cognitive neuroscience. RNNs can be trained, using deep learning methods, to perform cognitive tasks used in animal and human experiments, and can be studied to investigate potential neural representations and circuit mechanisms underlying cognitive computations and behavior. Widespread application of these approaches within neuroscience has been limited by technical barriers in use of deep learning software packages to train network models. Here we introduce PsychRNN, an accessible, flexible, and extensible Python package for training RNNs on cognitive tasks. Our package is designed for accessibility, for researchers to define tasks and train RNN models using only Python and NumPy without requiring knowledge of deep learning software. The training backend is based on TensorFlow and is readily extensible for researchers with TensorFlow knowledge to develop projects with additional customization. PsychRNN implements a number of specialized features to support applications in systems and cognitive neuroscience. Users can impose neurobiologically relevant constraints on synaptic connectivity patterns. Furthermore, specification of cognitive tasks has a modular structure, which facilitates parametric variation of task demands to examine their impact on model solutions. PsychRNN also enables task shaping during training, or curriculum learning, in which tasks are adjusted in closed-loop based on performance. Shaping is ubiquitous in training of animals in cognitive tasks, and PsychRNN allows investigation of how shaping trajectories impact learning and model solutions. Overall, the PsychRNN framework facilitates application of trained RNNs in neuroscience research.
Copyright belongs to the original authors. Visit the link for more info.
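
PsychRNN's documentation describes a quickstart along these lines; treat the snippet as an approximate sketch of that workflow rather than version-checked code.

    from psychrnn.tasks.perceptual_discrimination import PerceptualDiscrimination
    from psychrnn.backend.models.basic import Basic

    task = PerceptualDiscrimination(dt=10, tau=100, T=2000, N_batch=128)

    params = task.get_task_params()   # task parameters seed the network parameters
    params['name'] = 'demo_model'
    params['N_rec'] = 50              # number of recurrent units

    model = Basic(params)             # vanilla RNN backend (TensorFlow underneath)
    model.train(task)                 # train with default hyperparameters

    x, y, mask, trial_params = task.get_trial_batch()
    output, state = model.test(x)     # model responses and hidden-state trajectories
    model.destruct()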

PaperPlayer biorxiv bioinformatics
Identifying protein subcellular localisation in scientific literature using bidirectional deep recurrent neural network

Sep 10, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.09.290577v1?rss=1
Authors: David, R., Menezes, R.-J. D., Klerk, J. D., Castleden, I. R., Hooper, C. M., Carneiro, G., Gilliham, M.
Abstract: With the advent of increased diversity and scale of molecular data, there has been a growing appreciation for the applications of machine learning and statistical methodologies to gain new biological insights. An important step in achieving this aim is the Relation Extraction process, which specifies whether an interaction exists between two or more biological entities in a published study. Here, we employed natural-language processing (CBOW) and a deep recurrent neural network (bi-directional LSTM) to predict relations between biological entities that describe protein subcellular localisation in plants. We applied our system to 1700 published Arabidopsis protein subcellular studies from the SUBA manually curated dataset. The system was able to extract relevant text, and the classifier predicted interactions between protein name, subcellular localisation, and experimental methodology. It obtained a final precision, recall rate, accuracy, and F1 score of 0.951, 0.828, 0.893, and 0.884 respectively. The classifier was subsequently tested on a similar problem in crop species (CropPAL) and demonstrated a comparable accuracy measure (0.897). Consequently, our approach can be used to extract protein functional features from unstructured text in the literature with high accuracy. The developed system will improve dissemination of protein functional data to the scientific community and unlock the potential of big data text analytics for generating new hypotheses from diverse datasets.
Copyright belongs to the original authors. Visit the link for more info.
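
The described classifier, word vectors feeding a bidirectional LSTM, can be sketched in a few lines. Vocabulary size, embedding dimension, and sequence length below are assumptions, and pretrained CBOW vectors would be loaded as initial embedding weights.

    import tensorflow as tf

    vocab, dim, max_len = 20000, 100, 120   # illustrative sizes
    inp = tf.keras.Input(shape=(max_len,))
    emb = tf.keras.layers.Embedding(vocab, dim)(inp)    # slot for pretrained CBOW vectors
    h = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(emb)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(h)   # relation present / absent
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])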

InnoPodcast
#22 With a swipe to your dream job - with Matthes Dohmeyer, founder and managing director at «truffls»

Aug 27, 2020 · 61:23


Anyone who has ever used the dating app Tinder knows the principle: a profile is suggested. If it looks interesting, you swipe right. If not, you swipe left. If both swipe right, it's a match, and with it comes the chance of a great new connection. «truffls» works exactly the same way, except not for finding love, big or small, but for filling open job positions. Why recruiters should come to terms with this new reality, and why online dating and recruiting have more in common than we realize, is what Khalil and Matthes Dohmeyer, founder and managing director of «truffls», discuss in the latest episode of the InnoPodcast. Matthes Dohmeyer first studied business administration in Cologne. On the side, he worked as an online marketer at Siteranger. Toward the end of his studies he spent a short time as Entrepreneur in Residence at Hanse Ventures and founded MeCruiting in 2012. Since 2013 he has devoted himself wholeheartedly, as founder and managing director, to the recruiting app «truffls». Interested? Download the app here: Apple: http://espacelab.co/truffls_apple Android: http://espacelab.co/truffls_android
*****
All topics in this episode at a glance:
0:00 Introduction of Matthes Dohmeyer
2:45 truffls in a nutshell
4:30 Data flow in the truffls app
8:35 Who uses truffls
11:35 Numbers on downloads, MAU, and swipes
18:41 What happens after a swipe to the right
20:00 #mussegalsein #diversität
27:11 Speed and interaction instead of time to hire
40:25 truffls management summary
42:16 Deep learning, mobile recruiting, recurrent neural networks, disrupting headhunters
51:05 Three recommendations for HR teams
57:50 Outro and a message to the EspaceLab community
*****
Enjoy this episode of the #InnoPodcast. Follow our channel and share this episode with your network. You can find us wherever podcasts are available. Send us your feedback on the podcast as a comment or via e-mail to espacelab@post.ch.

PaperPlayer biorxiv bioinformatics
Deep Recurrent Neural Network and Point Process Filter Approaches in Multidimensional Neural Decoding Problems

Aug 10, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.08.10.244368v1?rss=1
Authors: Rezaei, M. R., Nazari, B., Sadri, S., Yousefi, A.
Abstract: Recent technological and experimental advances in recording from neural systems have led to a significant increase in the type and volume of data being collected in neuroscience experiments. This brings an increasing demand for development of appropriate analytical tools to analyze large-scale neuroscience data. Simultaneously, advances in deep neural networks (DNNs) and statistical modeling frameworks have provided new techniques for analysis of diverse forms of neuroscience data. DNNs like Long Short-Term Memory (LSTM) networks, or statistical modeling approaches like state-space point-process (SSPP) models, are widely used in the analysis of neural data, including neural coding and inference analysis. Despite wide utilization of these techniques, there is a lack of comprehensive studies which systematically assess attributes of LSTM and SSPP approaches on a common neuroscience data analysis problem. As a result, this occasionally leads to inconsistent and divergent conclusions on the strength or weakness of either of the methodologies, and also on the statistical significance of the analytical outcomes. In this research, we focus on providing a more systematic and multifaceted assessment of LSTM and SSPP techniques in a neural decoding problem. We examine different settings and modeling specifications to attain the optimal modeling solutions. We propose new LSTM network topologies and an approximate filter solution to estimate a rat's movement trajectory in a 2-D space using an ensemble of place cells' spiking activity. We then study the performance, computational efficiency, and generalizability of each technique in this decoding problem. Using these results, we provide a succinct picture of the strengths and weaknesses of each modeling approach and suggest how each of these techniques can be properly utilized in neural decoding problems.
Copyright belongs to the original authors. Visit the link for more info.
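
As an illustration of the LSTM side of this comparison, decoding a 2-D trajectory from binned spike counts is a sequence-regression problem. The sketch below uses invented dimensions and is not one of the paper's proposed topologies.

    import tensorflow as tf

    n_cells, n_bins = 50, 100                          # place cells, time bins; illustrative
    spikes = tf.keras.Input(shape=(n_bins, n_cells))   # binned spike counts
    h = tf.keras.layers.LSTM(128, return_sequences=True)(spikes)
    xy = tf.keras.layers.Dense(2)(h)                   # decoded 2-D position per bin
    model = tf.keras.Model(spikes, xy)
    model.compile(optimizer="adam", loss="mse")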

PaperPlayer biorxiv neuroscience
Recurrent Neural Network-based Acute Concussion Classifier using Raw Resting State EEG Data

Jul 10, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.07.07.192138v1?rss=1
Authors: Thanjavur, K., Babul, A., Foran, B., Bielecki, M., Gilchrist, A., Hristopulos, D. T., Brucar, L. R., Virji-Babul, N.
Abstract: Concussion is a global health concern. Despite its high prevalence, a sound understanding of the mechanisms underlying this type of diffuse brain injury remains elusive. It is, however, well established that concussions cause significant functional deficits; that children and youths are disproportionately affected and have longer recovery times than adults; and that recovering individuals are more prone to suffer additional concussions, with each successive injury increasing the risk of long-term neurological and mental health complications. Currently, concussion management faces two significant challenges: there are no objective, clinically accepted, brain-based approaches for determining (i) whether an athlete has suffered a concussion, and (ii) when the athlete has recovered. Diagnosis is based on clinical testing and self-reporting of symptoms and their severity. Self-reporting is highly subjective, and symptoms only indirectly reflect the underlying brain injury. Here, we introduce a deep learning Long Short-Term Memory (LSTM)-based recurrent neural network that is able to distinguish between healthy and acute post-concussed adolescent athletes using only a short (i.e. 90 seconds long) sample of resting state EEG data as input. The athletes were neither required to perform a specific task nor subjected to a stimulus during data collection, and the acquired EEG data was neither filtered, cleaned of artefacts, nor subjected to explicit feature extraction. The LSTM network was trained and tested on data from 27 male, adolescent athletes with sports-related concussion, benchmarked against 35 healthy, adolescent athletes. During rigorous testing, the classifier consistently identified concussions with an accuracy of >90% and its ensemble-median Area Under the Curve (AUC) corresponds to 0.971. This is the first instance of a high-performing classifier that relies only on easy-to-acquire resting state EEG data. It represents a key step towards the development of an easy-to-use, brain-based, automatic classification of concussion at an individual level.
Copyright belongs to the original authors. Visit the link for more info.
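
A raw-EEG LSTM classifier of the kind described reduces to a small model definition. Sampling rate, channel count, and layer width here are assumptions, and a real pipeline would likely downsample the 90-second window.

    import tensorflow as tf

    fs, seconds, n_channels = 256, 90, 64                    # assumed acquisition parameters
    eeg = tf.keras.Input(shape=(fs * seconds, n_channels))   # raw, unfiltered samples
    h = tf.keras.layers.LSTM(64)(eeg)                        # sequence -> single summary vector
    p = tf.keras.layers.Dense(1, activation="sigmoid")(h)    # P(concussed)
    model = tf.keras.Model(eeg, p)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])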

PaperPlayer biorxiv neuroscience
Decoding spontaneous pain from brain cellular calcium signals using deep learning

Jun 30, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.06.30.179374v1?rss=1
Authors: Yoon, H., Bak, M. S., Kim, S. H., Lee, J. H., Chung, G., Kim, S. J., Kim, S. K.
Abstract: We developed AI-bRNN (Average training, Individual test-bidirectional Recurrent Neural Network) to decipher spontaneous pain information from brain cellular calcium signals recorded by two-photon imaging in awake, head-fixed mice. The AI-bRNN determines the intensity and time point of spontaneous pain even during the chronic pain period and evaluates the efficacy of analgesics. Furthermore, it could be applied to different cell types and brain areas, and it distinguished between itch and pain, proving its versatility.
Copyright belongs to the original authors. Visit the link for more info.

PaperPlayer biorxiv neuroscience
Reverse-engineering Recurrent Neural Network solutions to a hierarchical inference task for mice

Jun 11, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.06.09.142745v1?rss=1
Authors: Schaeffer, R., Khona, M., Meshulam, L., International Brain Laboratory, Fiete, I. R.
Abstract: We study how recurrent neural networks (RNNs) solve a hierarchical inference task involving two latent variables and disparate timescales separated by 1-2 orders of magnitude. The task is of interest to the International Brain Laboratory, a global collaboration of experimental and theoretical neuroscientists studying how the mammalian brain generates behavior. We make four discoveries. First, RNNs learn behavior that is quantitatively similar to ideal Bayesian baselines. Second, RNNs perform inference by learning a two-dimensional subspace defining beliefs about the latent variables. Third, the geometry of RNN dynamics reflects an induced coupling between the two separate inference processes necessary to solve the task. Fourth, we perform model compression through a novel form of knowledge distillation on hidden representations, Representations and Dynamics Distillation (RADD), to reduce the RNN dynamics to a low-dimensional, highly interpretable model. This technique promises a useful tool for interpretability of high-dimensional nonlinear dynamical systems. Altogether, this work yields predictions to guide exploration and analysis of mouse neural data and circuitry.
Copyright belongs to the original authors. Visit the link for more info.
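
RADD itself is the paper's contribution; a generic stand-in for "distill hidden states into a low-dimensional, interpretable model" is principal-component projection followed by a linear dynamics fit, sketched here on toy data.

    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.normal(size=(1000, 256))   # stand-in for RNN hidden states (timesteps x units)

    # 1) Project onto the top-k principal components.
    k = 2
    Hc = H - H.mean(axis=0)
    _, _, Vt = np.linalg.svd(Hc, full_matrices=False)
    Z = Hc @ Vt[:k].T                  # low-dimensional trajectories

    # 2) Fit linear dynamics z_{t+1} ~= z_t @ X by least squares.
    X, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
    print("fitted k x k dynamics matrix:\n", X.T)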

PaperPlayer biorxiv neuroscience
RippleNet: A Recurrent Neural Network for Sharp Wave Ripple (SPW-R) Detection

May 12, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.05.11.087874v1?rss=1
Authors: Hagen, E., Chambers, A. R., Einevoll, G. T., Pettersen, K. H., Enger, R., Stasik, A. J.
Abstract: Hippocampal sharp wave ripples (SPW-R) have been identified as key biomarkers of important brain functions such as memory consolidation and decision making. SPW-R detection typically relies on hand-crafted feature extraction, and laborious manual curation is often required. In this multidisciplinary study, we propose a novel, self-improving artificial intelligence (AI) method in the form of deep Recurrent Neural Networks (RNN) with Long Short-Term Memory (LSTM) layers that can learn features of SPW-R events from raw, labeled input data. The algorithm is trained using supervised learning on hand-curated data sets with SPW-R events. The input to the algorithm is the local field potential (LFP), the low-frequency part of extracellularly recorded electric potentials from the CA1 region of the hippocampus. The output prediction can be interpreted as the time-varying probability of SPW-R events for the duration of the input. A simple thresholding applied to the output probabilities is found to identify times of events with high precision. The reference implementation of the algorithm, named 'RippleNet', is open source, freely available, and implemented using a common open-source framework for neural networks (tensorflow.keras), and can be easily incorporated into existing data analysis workflows for processing experimental data.
Copyright belongs to the original authors. Visit the link for more info.
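
The abstract itself names the stack (tensorflow.keras, LSTM layers, per-sample probabilities, thresholding), so a skeletal version is easy to hedge together. Layer sizes and the bidirectional choice below are guesses; the real reference implementation is the open-source RippleNet repository.

    import numpy as np
    import tensorflow as tf

    lfp_in = tf.keras.Input(shape=(None, 1))         # raw LFP trace, one channel
    h = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(32, return_sequences=True))(lfp_in)
    p = tf.keras.layers.Dense(1, activation="sigmoid")(h)   # P(SPW-R) at each sample
    model = tf.keras.Model(lfp_in, p)
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # After training, thresholding the output probability identifies event times.
    segment = np.random.randn(1, 2500, 1).astype("float32")  # placeholder LFP segment
    prob = model.predict(segment)[0, :, 0]
    event_samples = np.flatnonzero(prob > 0.5)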

For Inquisitive Minds
14 Understanding customer satisfaction

Oct 24, 2019 · 43:44


Organisations which sell services or products to mass audiences struggle to understand the reasons why their customers are happy, and especially how those preferences evolve over time. The problem of understanding the reasons for customer satisfaction over time stems from the methodological difficulty of analysing written customer reviews as time-series data. This project trials the use of sentiment analysis to understand the evolution of feedback over time. Sentiment models trained using a Recurrent Neural Network, Naive Bayes, and Maximum Entropy are compared, and the best model is selected to predict feedback in the future. The difference in predictive accuracy over time is assessed for the selected model. Moreover, visuals are developed to depict how text features and themes vary in importance when it comes to accurate prediction of satisfaction over time. The objective is to enable real-time visualization and understanding of patterns in customer feedback over time from big text corpora.

Tags: sentiment, organisations, customer satisfaction, Naive Bayes, recurrent neural network
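
The episode compares a recurrent network against Naive Bayes and Maximum Entropy sentiment models. The scikit-learn sketch below shows the shape of that comparison for the two classical baselines (logistic regression standing in for Maximum Entropy); the review data is a placeholder.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression   # maximum-entropy classifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    reviews = ["great service", "terrible wait times", "loved it", "never again"]
    labels = [1, 0, 1, 0]                                 # placeholder sentiment labels

    for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
        pipe = make_pipeline(TfidfVectorizer(), clf)
        scores = cross_val_score(pipe, reviews, labels, cv=2)
        print(type(clf).__name__, scores.mean())
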
Data Skeptic
Doctor AI

Jun 23, 2017 · 41:50


When faced with medical issues, would you want to be seen by a human or a machine? In this episode, guest Edward Choi, co-author of the study titled Doctor AI: Predicting Clinical Events via Recurrent Neural Network, shares his thoughts. Edward presents his team's efforts in developing a temporal model that can learn from human doctors based on their collective knowledge, i.e. the large amount of Electronic Health Record (EHR) data.

Tags: electronic health record (EHR), recurrent neural network
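
Doctor AI's published architecture is, at heart, a recurrent network over sequences of per-visit medical-code vectors that predicts the codes of the following visit. The tf.keras sketch below mirrors that shape with an invented vocabulary size; it is not the authors' original implementation.

    import tensorflow as tf

    n_codes = 1000                                   # medical-code vocabulary; illustrative
    visits = tf.keras.Input(shape=(None, n_codes))   # multi-hot codes per visit
    h = tf.keras.layers.GRU(256, return_sequences=True)(visits)
    nxt = tf.keras.layers.Dense(n_codes, activation="sigmoid")(h)  # codes at the next visit
    model = tf.keras.Model(visits, nxt)
    model.compile(optimizer="adam", loss="binary_crossentropy")
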
NLP Highlights
04 - Recurrent Neural Network Grammars, with Chris Dyer

May 12, 2017 · 24:36


An interview with Chris Dyer. https://www.semanticscholar.org/paper/Recurrent-Neural-Network-Grammars-Dyer-Kuncoro/1594d954abc650bce2db445c52a76e49655efb0c

Tags: neural networks, Chris Dyer, grammars, recurrent neural network
Learning Machines 101
LM101-046: How to Optimize Student Learning using Recurrent Neural Networks (Educational Technology)

Feb 22, 2016 · 23:19


In this episode, we briefly review Item Response Theory and Bayesian Network methods for assessing and optimizing student learning, then describe a poster presented on the first day of the Neural Information Processing Systems conference (December 2015, Montreal) on "Deep Knowledge Tracing", a recurrent neural network approach to the same assessment and optimization problem. For more details check out: www.learningmachines101.com and follow us on Twitter at @lm101talk
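
Deep Knowledge Tracing, as presented in the poster mentioned here, feeds a student's answer history into a recurrent network and reads out a per-skill probability of answering correctly. Below is a hedged tf.keras sketch with invented sizes; the published model also masks the loss to the skill actually attempted, which is omitted here.

    import tensorflow as tf

    n_skills = 100                                         # distinct exercises; illustrative
    history = tf.keras.Input(shape=(None, 2 * n_skills))   # one-hot (skill, correct/incorrect)
    h = tf.keras.layers.LSTM(200, return_sequences=True)(history)
    p_correct = tf.keras.layers.Dense(n_skills, activation="sigmoid")(h)
    model = tf.keras.Model(history, p_correct)             # P(correct) per skill at each step
    model.compile(optimizer="adam", loss="binary_crossentropy")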