Podcasts about Wavelet

  • 22 podcasts
  • 31 episodes
  • 1h average duration
  • Infrequent episodes
  • Latest episode: Sep 22, 2022
Wavelet

POPULARITY

(Popularity trend chart, 2017–2024)


Best podcasts about Wavelet

Latest podcast episodes about Wavelet

Astro arXiv | all categories
Wavelet analysis of the transient QPOs in MAXI J1535 - 571 with Insight-HXMT

Astro arXiv | all categories

Sep 22, 2022 · 0:49


Wavelet analysis of the transient QPOs in MAXI J1535$-$571 with Insight-HXMT, by X. Chen et al., Thursday 22 September. Using wavelet analysis and the power density spectrum, we investigate two transient quasi-periodic oscillations (QPOs) observed in MAXI J1535$-$571 with Insight-HXMT. The transient QPOs have a centroid frequency of $\sim 10$ Hz with a FWHM of $\sim 0.6$ Hz and an rms amplitude of $\sim 14\%$. Energy spectra of the QPO and non-QPO regimes are also separated and analyzed; the spectra become softer, with a higher $E_{\rm cut}$, in the non-QPO regime compared to the QPO regime. Our results suggest that the transient QPOs detected on MJD 58016 and 58017 are still type-C QPOs and that the source remains in its HIMS. The duration of all type-C QPO signals based on the wavelet analysis is positively correlated with the mean count rate above $\sim 10$ keV, implying that the appearance of QPOs on different time scales should be coupled with the corona. The transient QPO properties could be related to the jet or flares; perhaps the partial ejection of the corona is responsible for the disappearance of the type-C QPO. arXiv: http://arxiv.org/abs/2209.10408v1
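
The abstract relies on a continuous wavelet transform of an X-ray light curve to localise a ~10 Hz QPO in time. As a rough illustration of that idea (not the authors' pipeline), the PyWavelets sketch below computes a Morlet CWT of a synthetic light curve and flags intervals where power near 10 Hz stands out; the sampling rate, scale grid and threshold are arbitrary choices for the example.

```python
import numpy as np
import pywt

# Synthetic light curve: Poisson-like noise plus a 10 Hz oscillation that
# switches on only in the middle of the observation (a "transient QPO").
fs = 128.0                       # sampling rate in Hz (assumed for the example)
t = np.arange(0, 64, 1 / fs)     # 64 s of data
rate = 200 + 30 * np.sin(2 * np.pi * 10 * t) * ((t > 20) & (t < 40))
counts = np.random.poisson(rate / fs).astype(float)

# Continuous wavelet transform with a Morlet wavelet; pywt.scale2frequency
# converts wavelet scales to physical frequencies for a given sampling rate.
freqs_wanted = np.linspace(2, 20, 80)                        # Hz
scales = pywt.scale2frequency("morl", 1) / (freqs_wanted / fs)
coeffs, freqs = pywt.cwt(counts - counts.mean(), scales, "morl",
                         sampling_period=1 / fs)
power = np.abs(coeffs) ** 2

# Crude detection: time bins where wavelet power near 10 Hz is well above
# the median power in that band.
band = (freqs > 9) & (freqs < 11)
band_power = power[band].mean(axis=0)
detected = band_power > 3 * np.median(band_power)
print("QPO-like intervals (s):", t[detected][:5], "...")
```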

Astro arXiv | all categories
Exploration of 3D wavelet scattering transform coefficients for line-intensity mapping measurements

Astro arXiv | all categories

Sep 15, 2022 · 0:51


Exploration of 3D wavelet scattering transform coefficients for line-intensity mapping measurements, by Dongwoo T. Chung, Thursday 15 September. The wavelet scattering transform (WST) has recently gained attention in the context of large-scale structure studies as a possible generator of summary statistics encapsulating non-Gaussianities beyond the reach of the conventional power spectrum. This work examines the three-dimensional solid harmonic WST in the context of a three-dimensional line-intensity mapping measurement to be undertaken by current and proposed phases of the CO Mapping Array Project (COMAP). The WST coefficients demonstrate interpretable behaviour in noiseless CO line-intensity simulations. The contribution of the cosmological $z \sim 3$ signal to these coefficients is in principle detectable even in the Pathfinder phase of COMAP. Using the peak-patch method to generate large numbers of simulations and incorporating observational noise, we numerically estimate covariance matrices and show that careful choices of WST hyperparameters and rescaled or reduced coefficient sets are both necessary to keep covariances well-conditioned. Fisher forecasts show that even a reduced 'shapeless' set of $\ell$-averaged WST coefficients has constraining power that can exceed that of the power spectrum alone, even with similar detection significance. The full WST could improve parameter constraints even over the combination of the power spectrum and the voxel intensity distribution, showing that it uniquely encapsulates shape information about the line-intensity field. However, practical applications urgently require further understanding of the WST in key contexts like covariances and cross-correlations. arXiv: http://arxiv.org/abs/2207.06383v2
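
The forecasting step the abstract describes (estimate a covariance matrix from many mock realisations of a summary statistic, then propagate it through a Fisher matrix) is a standard recipe. Below is a generic NumPy sketch of that recipe; the parameter names, step sizes and mock data are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder setup: 2 model parameters and a 20-element summary statistic
# (think of it as a reduced set of WST coefficients or power-spectrum bins).
n_params, n_stat, n_sims = 2, 20, 500

def model_statistic(theta):
    """Toy summary statistic as a smooth function of the parameters theta."""
    a, b = theta
    x = np.linspace(0.1, 2.0, n_stat)
    return a * np.exp(-x) + b * x

theta_fid = np.array([1.0, 0.5])

# Covariance estimated from mock realisations (here: Gaussian scatter around
# the fiducial statistic; in practice these would come from simulations).
mocks = model_statistic(theta_fid) + 0.05 * rng.standard_normal((n_sims, n_stat))
cov = np.cov(mocks, rowvar=False)
cov_inv = np.linalg.inv(cov)

# Numerical derivatives of the statistic with respect to each parameter.
eps = 1e-3
derivs = np.empty((n_params, n_stat))
for i in range(n_params):
    dtheta = np.zeros(n_params)
    dtheta[i] = eps
    derivs[i] = (model_statistic(theta_fid + dtheta)
                 - model_statistic(theta_fid - dtheta)) / (2 * eps)

# Fisher matrix F_ij = dmu/dtheta_i . C^{-1} . dmu/dtheta_j, and the
# resulting marginalised 1-sigma forecasts.
fisher = derivs @ cov_inv @ derivs.T
sigma = np.sqrt(np.diag(np.linalg.inv(fisher)))
print("Forecast 1-sigma errors:", sigma)
```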

Astro arXiv | all categories
The Propagation of Coherent Waves Across Multiple Solar Magnetic Pores

Astro arXiv | all categories

Sep 14, 2022 · 1:05


The Propagation of Coherent Waves Across Multiple Solar Magnetic Pores, by S. D. T. Grant et al., Wednesday 14 September. Solar pores are efficient magnetic conduits for propagating magnetohydrodynamic wave energy into the outer regions of the solar atmosphere. Pore observations often contain isolated and/or unconnected structures, preventing the statistical examination of wave activity as a function of atmospheric height. Here, using high resolution observations acquired by the Dunn Solar Telescope, we examine photospheric and chromospheric wave signatures from a unique collection of magnetic pores originating from the same decaying sunspot. Wavelet analysis of high cadence photospheric imaging reveals the ubiquitous presence of slow sausage mode oscillations, coherent across all photospheric pores through comparisons of intensity and area fluctuations, producing statistically significant in-phase relationships. The universal nature of these waves allowed an investigation of whether the wave activity remained coherent as the waves propagate. Utilizing bisector Doppler velocity analysis of the Ca II 8542 Å line, alongside comparisons of the modeled spectral response function, we find fine-scale 5 mHz power amplification as the waves propagate into the chromosphere. Phase angles approaching zero degrees between co-spatial bisectors spanning different line depths indicate standing sausage modes following reflection against the transition region boundary. Fourier analysis of chromospheric velocities between neighboring pores reveals the annihilation of the wave coherency observed in the photosphere, with examination of the intensity and velocity signals from individual pores indicating they behave as fractured waveguides, rather than monolithic structures. Importantly, this work highlights that wave morphology with atmospheric height is highly complex, with vast differences observed at chromospheric layers, despite equivalent wave modes being introduced into similar pores in the photosphere. arXiv: http://arxiv.org/abs/2209.06280v1
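
The phase relationships described here (in-phase intensity and area fluctuations, near-zero phase angles between bisector velocities) are the kind of quantity one reads off a cross-spectrum. A minimal SciPy sketch of that measurement on two synthetic signals is shown below; the 5 mHz frequency, the 15 s cadence and the noise level are assumptions for the example, not the observation parameters.

```python
import numpy as np
from scipy.signal import csd

# Two synthetic oscillation signals at 5 mHz with a small phase offset,
# standing in for e.g. pore intensity and pore area time series.
fs = 1.0 / 15.0                     # one sample every 15 s (assumed cadence)
t = np.arange(0, 2 * 3600, 1 / fs)  # two hours of data
f0 = 5e-3                           # 5 mHz oscillation
rng = np.random.default_rng(1)
sig_a = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)
sig_b = np.sin(2 * np.pi * f0 * t + 0.1) + 0.3 * rng.standard_normal(t.size)

# Cross-spectral density; its complex argument at each frequency is the
# phase angle between the two signals at that frequency.
freqs, p_ab = csd(sig_a, sig_b, fs=fs, nperseg=256)
idx = np.argmin(np.abs(freqs - f0))
phase_deg = np.degrees(np.angle(p_ab[idx]))
print(f"Phase angle near 5 mHz: {phase_deg:.1f} degrees")
```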

The American Mind
Red Wavelet | The Roundtable Ep. 93

The American Mind

Nov 5, 2021 · 61:09


Republicans Glenn Youngkin, Winsome Sears, and Jason Miyares have turned Virginia red, and everyone is closely scrutinizing what this might signal for other elections in 2022 and 2024. Or at least, sensible people are scrutinizing it, while the Left screeches absurdly about racism and refuses to even consider introspection. Our editors consider what conservatives could do to capitalize on this initial victory.

UEMA Podcast
UEMA Series 089 by Sigma_ ALgebra

UEMA Podcast

May 7, 2021 · 65:21


Installment number 89 of our podcast series. Sigma_ALgebra is in charge of this musical experience. Involved in electronic music since the 90s, and under the alias dg!, our guest has dived deep into styles such as Electro and IDM, searching for that sometimes invisible dividing line with Techno, while always staying at the vanguard. Sigma_ALgebra is a veteran of the DJ craft and is closely tied to the Valencian underground scene. In 2012 he took the name Sigma_ALgebra and made the most sophisticated Electro his hallmark, leaving releases on labels such as Digital Distortions, Anti Gravity Device, Section27, Urban Connections and Elektrodos. He has played as a DJ at numerous editions of We Are The Robots in Valencia and has recently launched a label, Tensor Norm, where together with Wavelet he aims to showcase an Electro sound loaded with IDM, all wrapped in a halo of darkness and punch very close to Techno, and above all designed to make people sweat. As the first release on Tensor Norm, Sigma_ALgebra has delivered two incredible tracks which, together with the two tracks by ADJ, make this split a must-have vinyl for any good lover of Electro and of the plastic format. Without further ado, we leave you with this great artist, a mathematician of Electro, methodical in selection and execution: Sigma_ALgebra and his combative Electro. Turn up the volume and enjoy! UEMA loves you, peace and love! Tracklist: 1.-Cult 48 – NoRevolution (C48) 2.-Sigma_Algebra – Convergence in order (Upcoming Diffuse Reality) 3.-Chino-Infrared (Syntetyk) 4.-Alpha+ - Deranged (Urban Connections) 5.-Adam Jay-Discrete Event (Detroit Underground) 6.-ADJ-Sonar Trasmissions (Tensor Norm) 7.-Umwelt- State of Matter (Shipwrec) 8-. Hadamard- City to city (RotterHague) 9.- Neonicle – Spook Country (Electro music Coalition) 10.-The Adapt-Transform Message Component (Urban Connections) 11.-Sigma_Algebra-Epsilon (Urban Connections) 12.-Jauzas The Shinning & Foreign Sequence- Talkin’ Machines (New Flesh Records) 13.-Error Beauty & Serge Geyzel-Cul-De-Sac (Zodiac) 14.-James Shinra-Gyorgy (Craigie Knowes) 15.-Jeremiah R-Syncronization Of The Soul (BAKK) 16.-Richard Divine- Tilmetrics (Bl_k Noise) https://www.facebook.com/edjimfer/ https://m.soundcloud.com/sigma_algebra https://tensornorm.bandcamp.com/album/delta-ring-ep

UEMA Podcast
UEMA Series 082 By Wavelet

UEMA Podcast

Jan 12, 2021 · 66:26


Happy New Year to everyone! We have been slow to send our greetings, but we wanted to do it with a gift in the form of a set: today we restart the UEMA Series, and Wavelet is responsible for installment number 82. Wavelet is Valencian, a mathematician, and a lover, connoisseur and defender of the most purist Electro. Well known are the We Are The Robots parties, of which he was a co-founder and which, for years under the umbrella of Hypnotica Colectiva, became a true stronghold of Electro resistance in the Levante. Beyond being a great connoisseur of Electro and IDM, Wavelet is a true activist. One fruit of his knowledge and his level of involvement with the Electro scene is the new project our dear Wavelet has embarked on: he will soon launch a label, Tensor Norm, whose first vinyl is signed by ADJ and Sigma Álgebra (co-founder of the label). Wavelet is a great friend of the collective, and the truth is we had long wanted to publish one of his masterful sets. He has not disappointed: Wavelet brings us a perfect all-vinyl selection, very well mixed. Peace and Love. Here is the tracklist for the more curious minds: 1. Fret – Salfort Priors_ Fausten Version. 2. Gescom – Keynell 3. 3. ADJ – Sonar Transmission (upcoming EP on Tensor Norm Label). 4. Skyfarma – Detroit. 5. Saint Thomas LeDoux – Chud. 6. Sigma_Algebra – Mu (Upcoming EP on Tensor Norm Label). 7. AFX – Boxing Day. 8. Exterminator- Noble Train. 9. Kamikaze Space Programme – An Empty Sky. 10. ADJ – Keep It Real (upcoming EP on Tensor Norm Label). 11. Sepehr – Darklord. 12. Versalife – Isotropic. 13. DJ K-1 – Modular World. 14. Shiver – Subsonic Soundscape. 15. Errorbeauty / Serge Geyzel – Cul-De-Sac. 16. Split Horizon – Activator. 17. Automat – Enemyz. 18. Clone Theory – Micro Dissection. 19. SDEM - PuwPuwPuw Wavelet: https://soundcloud.com/waveletar Tensor Norm: https://soundcloud.com/user-373963620

Buried Interfaces
Wavelet analysis

Buried Interfaces

Oct 29, 2020 · 5:34


A short explanation of the history of wavelet transform analysis.

The GonnaGeek Show
GonnaGeek.com Show #344 – Gonna Talk Wavelet

The GonnaGeek Show

Sep 5, 2020 · 66:11


News points include an announcement from Gamescom Live, Apple opening up an appeal process, Boeing’s Starliner launching a second uncrewed test, and an update on the Android TV Dongle. Finally, Chris provides his thoughts on the Wavelet app. This week’s hosts: Stephen Jondrew, Chris Ferrell and Stargate Pioneer. This episode was recorded on Monday, August […]

PaperPlayer biorxiv neuroscience
Temporal codes provide additional category-related information in object category decoding: a systematic comparison of informative EEG features

PaperPlayer biorxiv neuroscience

Sep 3, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.02.279042v1?rss=1 Authors: Karimi-Rouzbahani, H., Shahmohammadi, M., Vahab, E., Setayeshi, S., Carlson, T. Abstract: Humans are remarkably efficient at recognizing objects. Understanding how the brain performs object recognition has been challenging. Our understanding has been advanced substantially in recent years with the development of multivariate pattern analysis, or brain decoding, methods. Most state-of-the-art decoding procedures make use of the mean signal activation to extract object category information, which overlooks temporal variability in the signals. Here, we studied category-related information in 30 mathematically different features from electroencephalography (EEG; the largest set ever) across three independent and highly varied datasets using multivariate pattern analyses. While the event-related potential (ERP) components N1 and P2a were among the most informative features, the informative original signal samples and wavelet coefficients, down-sampled through principal component analysis, outperformed them. Informative features showed more pronounced effects in the theta frequency band, which has been shown to support feed-forward processing of visual information. Correlational analyses showed that the features which provided the most information about object categories could predict participants' performance (reaction time) more accurately than the less informative features. These results provide researchers with new avenues to study how the brain encodes object category information and how we can read out object category to study the temporal dynamics of the neural code. Copyright belongs to the original authors. Visit the link for more info
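
As a rough illustration of the kind of pipeline the abstract describes (wavelet coefficients as EEG features, dimensionality reduction by PCA, then category decoding), here is a small PyWavelets/scikit-learn sketch on synthetic epochs. The epoch size, wavelet and classifier are arbitrary choices for the example, not the authors' settings.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic "EEG": 200 single-channel epochs of 256 samples, two object
# categories that differ slightly in their low-frequency content.
n_epochs, n_samples = 200, 256
labels = rng.integers(0, 2, n_epochs)
t = np.arange(n_samples) / 256.0
epochs = rng.standard_normal((n_epochs, n_samples))
epochs += 0.5 * labels[:, None] * np.sin(2 * np.pi * 6 * t)   # category effect

def wavelet_features(epoch):
    """Concatenate discrete wavelet coefficients across decomposition levels."""
    coeffs = pywt.wavedec(epoch, "db4", level=4)
    return np.concatenate(coeffs)

X = np.array([wavelet_features(e) for e in epochs])

# PCA-reduced wavelet coefficients fed into a linear classifier, scored by
# cross-validated decoding accuracy.
clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```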

PaperPlayer biorxiv biophysics
Wavelet coherence phases decode the universal switching mechanism of Ras GTPase superfamily

PaperPlayer biorxiv biophysics

Aug 15, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.08.15.252247v1?rss=1 Authors: Motiwala, Z., Sandholu, A. S., Sengupta, D., Kulkarni, K. Abstract: Ras superfamily GTPases are molecular switches which regulate critical cellular processes. Extensive structural and computational studies on this G protein family have tried to establish a general framework for their switching mechanism. The current understanding of the mechanism is that two loops, Switch I and Switch II, undergo conformational changes upon GTP binding and hydrolysis, which results in alteration of their functional state. However, because of variation in the extent of conformational changes seen across the members of the Ras superfamily, there is no generic modus operandi defining their switching mechanism in terms of loop conformation. Here, we have developed a novel method employing wavelet transformation to dissect the structures of these molecular switches and explore indices that define a unified working principle. Our analysis shows that the structural coupling between the Switch I and Switch II regions is manifested in terms of wavelet coherence phases. The resultant phase pertaining to these regions serves as a functional identity of the GTPases. The coupling defined in terms of wavelet coherence phases is conserved across the Ras superfamily. In oncogenic mutants of the GTPases the phase coupling gets disentangled; this perhaps provides an alternative explanation for their aberrant function. Although similar observations were made using MD simulations, there was no structural parameter to define the coupling, as delineated here. Furthermore, the technique reported here is computationally inexpensive and can provide significant functional insights into the GTPases by analyzing as few as two structures. Copyright belongs to the original authors. Visit the link for more info
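
Wavelet coherence phase between two signals is, at its core, the argument of a cross-wavelet spectrum. The PyWavelets sketch below computes that quantity for two synthetic 1D profiles, as a simplified stand-in for the paper's analysis (it omits the smoothing and significance testing a full coherence calculation would include); the wavelet, scales and signals are arbitrary.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)

# Two synthetic 1D profiles (e.g. a structural property sampled along two
# loop regions) sharing an oscillatory component with a fixed phase lag.
n = 256
x = np.arange(n)
sig1 = np.sin(2 * np.pi * x / 32) + 0.2 * rng.standard_normal(n)
sig2 = np.sin(2 * np.pi * x / 32 + np.pi / 4) + 0.2 * rng.standard_normal(n)

# Continuous wavelet transforms on a common set of scales, using a complex
# Morlet wavelet so the coefficients carry phase information.
scales = np.arange(4, 64)
w1, _ = pywt.cwt(sig1, scales, "cmor1.5-1.0")
w2, _ = pywt.cwt(sig2, scales, "cmor1.5-1.0")

# Cross-wavelet spectrum and its phase; the phase at each (scale, position)
# encodes the local lead/lag relationship between the two profiles.
cross = w1 * np.conj(w2)
phase = np.angle(cross)

# Average phase at the scale closest to the shared period of 32 samples.
scale_idx = np.argmin(np.abs(scales - 32))
print("Mean phase (degrees) at period ~32:",
      np.degrees(phase[scale_idx].mean()))
```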

PaperPlayer biorxiv animal behavior and cognition
Practical Design and Implementation of Animal Movements Tracking System for Neuroscience Trials

PaperPlayer biorxiv animal behavior and cognition

Jul 26, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.07.26.221754v1?rss=1 Authors: Memarian Sorkhabi, M. Abstract: Background: The nervous system functions of an animal are predominantly reflected in its behaviour and movement; therefore, movement-related data and quantitative measurement of behaviour are crucial for behavioural analyses. Animal movement is traditionally recorded while human observers follow the animal's behaviours; if they recognize a certain behaviour pattern, they note it manually, a process which may suffer from observer fatigue or drift. Objective: Automating behavioural observations with computer-vision algorithms is becoming essential for brain function characterization in neuroscience trials. In this study, the proposed tracking module is able to measure locomotor behaviour (such as speed, distance and turning) over longer time periods than an operator can precisely evaluate. For this aim, a novel animal cage is designed and implemented to track the animal's movement. The frames received from the camera are analyzed using the 2D bior3.7 wavelet transform and SURF feature points. Results: The implemented video tracking device can report the location, duration, speed, frequency and latency of each behaviour of an animal. Validation tests were conducted on the auditory stimulation trial and the magnetic stimulation treatment of hemi-Parkinsonian rats. Conclusion/Significance: The proposed toolkit can provide qualitative and quantitative data on animal behaviour in an automated fashion, precisely summarize an animal's movement at an arbitrary time, and allow operators to analyse movement patterns without having to check the full records of every experiment. Copyright belongs to the original authors. Visit the link for more info
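
To make the frame-processing idea concrete, here is a small OpenCV/PyWavelets sketch that decomposes a video frame with the bior3.7 wavelet and detects keypoints on the low-frequency approximation. It uses ORB rather than SURF (SURF lives in the non-free opencv-contrib module), so treat it as an illustrative stand-in for the paper's pipeline rather than a reproduction of it.

```python
import cv2
import numpy as np
import pywt

def analyze_frame(frame_gray: np.ndarray):
    """Wavelet-decompose a grayscale frame and detect keypoints on the
    approximation band; returns the approximation image and keypoints."""
    # 2-level 2D DWT with the bior3.7 biorthogonal wavelet.
    coeffs = pywt.wavedec2(frame_gray.astype(np.float32), "bior3.7", level=2)
    approx = coeffs[0]

    # Rescale the approximation band to 8-bit for the feature detector.
    approx_u8 = cv2.normalize(approx, None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)

    # ORB keypoints stand in for the SURF points used in the paper.
    orb = cv2.ORB_create(nfeatures=200)
    keypoints = orb.detect(approx_u8, None)
    return approx_u8, keypoints

if __name__ == "__main__":
    # Synthetic frame: dark background with a bright blob as the "animal".
    frame = np.zeros((240, 320), dtype=np.uint8)
    cv2.circle(frame, (160, 120), 20, 255, -1)
    approx, kps = analyze_frame(frame)
    print(f"Approximation band shape: {approx.shape}, keypoints: {len(kps)}")
```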

SYNTAX
WAVELET - SYNTAX Podcast #009 :: JUNE 2020

SYNTAX

Jun 26, 2020 · 126:26


Number #009 is for WAVELET, a well-known defender of the electroid sound in Valencia City. We could say that he is a professor of this style and its branches. In his history we find "We Are The Robots", one of the last projects dedicated to this genre in Valencia, in which he was a 50% active member. WAVELET lands on SYNTAX Podcasts with 2 hours of pure sound architecture: robots, breaks, aliens and endless drum machines. A purist selector, he leaves us the superb tracklist he put together for SYNTAX. Thank you for your time and welcome to the green swamp. Format: DJ Set Genre: Electronic Style: Electro Platform: Syntax Podcasts #009 Date: June 2020 Video: https://youtu.be/kiFNqy8WEU8 __________________________________ 1 The Exaltics - 00066.15.00 2 Solotempo -Dro 3 Versalife - Udap Hydra Gamma 4 Sigma_Algebra - He 5 Autechre - Cipater 6 Nomadic - High Tone 7 ΠΕΡΑ ΣΤΑ ΟΡΗ - Gone Beautiful_Brenecki Version 8 Junq - Daylight Never Came 9 S--D - Devil's Tower 10 Shorai - Spectral Ovelap II 11 Uexkull - Exer 12 Stormfield & Nonima - Digiborrrow 13 214 - Auto Parts 14 Uexkull - All rac 15 Neonicle - Rotation 16 Sigma_Algebra - Unreleased 17 Daniel Andréasson - Less End 18 ScanOne - Some Machine 19 Astrobotnia - Miss June 20 SC-164 - Separate Follower - Sync24 Remix 21 Solid Blake - Yagharek 22 /DL/MS/ - Yep 23 Passarani - New cities 24 A Credible Eye-Witness - Episode1 25 Dj Emile - Beast From The Middle East 26 Hydraulix vs. DOC Nasty - Upset The Setup 27 No Moon - Aoe Rushin 28 Hydraulix - 305 to 315 29 Aux 88 - Moonwalker 30 Versalife - Exosuit 31 Faceless Mind - Ocean Movers - VCS2600 Science Remix 32 DJ Di'jital - Bang 33 Illektrolab - Overdrive 35 Mandroid - Ant-Gravity Machines(Aux 88 Remix) 36 Mas 2008 - X-perience The Reality 37 Spinks and Kalbata - Contact Jerusalem - Dexorcist remix 38 Kronos Device -Conscious Robots 39 Sprawl - Electrome 40 Phoenecia - Odd Job - Rhythm Box 41 Dmx Krew - Experiment 5 42 Hosmoz - Zarnica 43 Leo Anibaldi - Untitled 44 Sons Of Melancholia - Rapaz - Out Of The Blue

PaperPlayer biorxiv neuroscience
Improved estimation of hemodynamic response in fNIRS using protocol constraint and wavelet transform decomposition based adaptive algorithm

PaperPlayer biorxiv neuroscience

Apr 27, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.04.25.062000v1?rss=1 Authors: Jahani, S., Setarehdan, K. Abstract: Background: Near infrared spectroscopy allows monitoring of oxy- and deoxyhemoglobin concentration changes associated with the hemodynamic response function (HRF). The HRF is mainly affected by physiological interferences which occur in the superficial layers of the head, which makes extracting the HRF a very challenging task. Recent studies have used an additional near channel which is sensitive to the systemic interferences of the superficial layers; this additional information can be used to remove the systemic interference from the HRF. New Method: This paper presents a novel wavelet-based constrained adaptive procedure to define the proportion of the physiological interferences in the brain hemodynamic response. The proposed method decomposes the near-channel signal into several wavelet transform (WT) scales and adaptively estimates proper weights for each scale to extract their share in the HRF. The weights are estimated by applying the data acquisition protocol as a coefficient on recursive least squares (RLS), normalized least mean squares (NLMS) and Kalman filter methods. Results: Performance of the proposed algorithm is evaluated in terms of the mean square error (MSE) and Pearson's correlation coefficient (R2) between the estimated and the simulated HRF. Comparison with Existing Methods: Results showed that the proposed method is significantly superior to previous adaptive filters, such as EMD/EEMD-based RLS/NLMS, at estimating HRF signals. Conclusions: We recommend the WT-based constrained Kalman filter in dual-channel fNIRS studies with a defined protocol paradigm, and the WT-based Kalman filter in studies without any pre-defined protocol. Copyright belongs to the original authors. Visit the link for more info
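
The core idea (decompose the short-separation "near" channel into wavelet scales, then adaptively estimate and subtract the interference each scale contributes to the long channel) can be sketched with a hand-rolled NLMS filter, as below. This is a generic illustration under assumed signal shapes, not the paper's constrained algorithm.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)

# Synthetic fNIRS-like data: a slow hemodynamic response buried in the long
# channel, plus systemic interference that also appears in the near channel.
fs, n = 10.0, 2000
t = np.arange(n) / fs
hrf = 0.5 * np.exp(-((t % 40) - 6) ** 2 / 8)          # repeating toy HRF
systemic = np.sin(2 * np.pi * 0.1 * t) + 0.3 * np.sin(2 * np.pi * 1.0 * t)
near = systemic + 0.05 * rng.standard_normal(n)
long_ch = hrf + 0.8 * systemic + 0.05 * rng.standard_normal(n)

def nlms(reference, target, mu=0.5, order=8, eps=1e-6):
    """Plain NLMS: predict `target` from `reference`, return the residual."""
    w = np.zeros(order)
    residual = np.zeros_like(target)
    for i in range(order, len(target)):
        x = reference[i - order:i][::-1]
        e = target[i] - w @ x
        w += mu * e * x / (x @ x + eps)
        residual[i] = e
    return residual

# Stationary wavelet decomposition of the near channel; each scale is used
# as a separate NLMS reference and its predictable part is removed.
swt = pywt.swt(near, "db4", level=3)
cleaned = long_ch.copy()
for approx, detail in swt:
    for scale_sig in (approx, detail):
        cleaned = nlms(scale_sig, cleaned)

print("Correlation with true HRF before:", np.corrcoef(long_ch, hrf)[0, 1].round(2))
print("Correlation with true HRF after: ", np.corrcoef(cleaned, hrf)[0, 1].round(2))
```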

The Video Insiders
Video coding retrospective with codec expert Pankaj Topiwala.

The Video Insiders

Jan 24, 2020 · 54:08


Click to watch: SPIE Future Video Codec Panel Discussion
Related episode with Gary Sullivan at Microsoft: VVC, HEVC & other MPEG codec standards
Interview with MPEG Chairman Leonardo Chiariglione: MPEG Through the Eyes of its Chairman
Learn about FastVDO here
Pankaj Topiwala LinkedIn profile
--------------------------------------
The Video Insiders LinkedIn Group is where thousands of your peers are discussing the latest video technology news and sharing best practices. Click here to join.
Would you like to be a guest on the show? Email: thevideoinsiders@beamr.com
Learn more about Beamr
--------------------------------------
TRANSCRIPT:
Pankaj Topiwala: 00:00 With H.264, H.265 HEVC in 2013, we were now able to do up to 300 to one to up to 500 to one compression on a, let's say a 4K video. And with VVC we have truly entered a new realm where we can do up to 1000 to one compression, which is three full orders of magnitude reduction of the original size. If the original size is say 10 gigabits, we can bring that down to 10 megabits. And that's unbelievable. And so video compression truly is a remarkable technology and you know, it's a, it's a marvel to look at. Announcer: 00:39 The Video Insiders is the show that makes sense of all that is happening in the world of online video as seen through the eyes of a second generation codec nerd and a marketing guy who knows what I-frames and macro blocks are. And here are your hosts, Mark Donnigan and Dror Gill. Dror Gill: 01:11 Today we're going to talk with one of the key figures in the development of video codecs and a true video insider, Pankaj Topiwala. Hello Pankaj and welcome to The Video Insiders podcast. Pankaj Topiwala: 01:24 Gentlemen, hello, and thank you very much for this invite. It looks like it's going to be a lot of fun. Mark Donnigan: 01:31 It is. Thank you for joining Pankaj. Dror Gill: 01:33 Yeah, it sure will be a lot of fun. So can you start by telling us a little bit about your experience in codec development? Pankaj Topiwala: 01:41 Sure, so I should say that unlike a number of the other people that you have interviewed or may interview, my background is a fair bit different. I really came into this field by a back door and almost by chance. My PhD degree is actually in mathematical physics, from 1985. And I actually have no engineering, computer science or even management experience. So naturally I run a small research company working in video compression and analytics, and that makes sense, but that's just the way things go in the modern world. But the entry point for me was that even though I was working in very, very abstract mathematics, I decided to leave. I worked in academia for a few years and then I decided to join industry. And at that point they were putting me into applied mathematical research. Pankaj Topiwala: 02:44 And the topic at that time that was really hot in applied mathematics was the topic of wavelets. And I ended up writing and editing a book called Wavelet Image and Video Compression in 1998, which was a lot of fun, along with quite a few other co-authors on that book. But wavelets had their biggest contribution in the compression of images and video. And so that led me, finally, to enter that field, and I noticed that video compression was a far larger field than image compression. I mean, by orders of magnitude. It is probably a hundred times bigger in terms of market size than image compression.
And as a result I said, okay, if the sexiest application of this new fangled mathematics could be in video compression I entered that field roughly with the the book that I mentioned in 1998. Mark Donnigan: 03:47 So one thing that I noticed Pankaj cause it's really interesting is your, your initial writing and you know, research was around wavelet compression and yet you have been very active in ISO MPEG, all block-based codecs. So, so tell us about that? Pankaj Topiwala: 04:08 Okay. Well obviously you know when you make the transition from working on the wavelets and our initial starting point was in doing wavelet based video compression. When I started first founded my company fastVDO in 1998, 1999 period we were working on wavelet based video compression and we, we pushed that about as much as we could. And at that, at one point we had what we felt was the world's best a video compression using wavelets in fact, but best overall. And it had the feature that you know, one thing that we should, we should tell your view or reader listeners is that the, the value of wavelets in particular in image coding is that not only can you do state of the art image coding, but you can make the bitstream what is called embedded, meaning you can chop it off at anywhere you like, and it's still a decodable stream. Pankaj Topiwala: 05:11 And in fact it is the best quality you can get for that bit rate. And that is a powerful, powerful thing you can do in image coding. Now in video, there is actually no way to do that. Video is just so much more complicated, but we did the best we could to make it not embedded, but at least scalable. And we, we built a scalable wavelet based video codec, which at that time was beating at the current implementations of MPEG4. So we were very excited that we could launch a company based on a proprietary codec that was based on this new fangled mathematics called wavelets. And lead us to a state of the art codec. The facts of the ground though is that just within the first couple of years of running our company, we found that in fact the block-based transformed codecs that everybody else was using, including the implementers of MPEG4. Pankaj Topiwala: 06:17 And then later AVC, those quickly surpassed anything we could build with with wavelets in terms of both quality and stability. The wavelet based codecs were not as powerful or as stable. And I can say quite a bit more about why that's true. If you want? Dror Gill: 06:38 So when you talk about stability, what exactly are you referring to in, in a video codec? Pankaj Topiwala: 06:42 Right. So let's let's take our listeners back a bit to compare image coding and video coding. Image coding is basically, you're given a set of pixels in a rectangular array and we normally divide that into blocks of sub blocks of that image. And then do transforms and then quantization and than entropy coding, that's how we typically do image coding. With the wavelet transform, we have a global transform. It's a, it's ideally done on the entire image. Pankaj Topiwala: 07:17 And then you could do it multiple times, what are called multiple scales of the wavelet transform. So you could take various sub sub blocks that you create by doing the wavelet transfer and the low pass high pass. Ancs do that again to the low low pass for multiple scales, typically about four or five scales that are used in popular image codecs that use wavelets. But now in video, the novelty is that you don't have one frame. 
You have many, many frames, hundreds or thousands or more. And you have motion. Now, motion is something where you have pieces of the image that float around from one frame to another and they float randomly. That is, it's not as if all of the motion is in one direction. Some things move one way, some things move other ways, some things actually change orientations. Pankaj Topiwala: 08:12 And they really move, of course, in three dimensional space, not in our two dimensional space that we capture. That complicates video compression enormously over image compression. And it particularly complicates all the wavelet methods to do video compression. So, wavelet methods that try to deal with motion were not very successful. The best we tried to do was using motion compensated video you know, transformed. So doing wavelet transforms in the time domain as well as the spatial domain along the paths of motion vectors. But that was not very successful. And what I mean by stability is that as soon as you increase the motion, the codec breaks, whereas in video coding using block-based transforms and block-based motion estimation and compensation it doesn't break. It just degrades much more gracefully. Wavelet based codecs do not degrade gracefully in that regard. Pankaj Topiwala: 09:16 And so we of course, as a company we decided, well, if those are the facts on the ground. We're going to go with whichever way video coding is going and drop our initial entry point, namely wavelets, and go with the DCT. Now one important thing we found was that even in the DCT- ideas we learned in wavelets can be applied right to the DCT. And I don't know if you're familiar with this part of the story, but a wavelet transform can be decomposed using bits shifts and ads only using something called the lifting transform, at least a important wavelet transforms can. Now, it turns out that the DCT can also be decomposed using lifting transforms using only bit shifts and ads. And that is something that my company developed way back back in 1998 actually. Pankaj Topiwala: 10:18 And we showed that not only for DCT, but a large class of transforms called lab transforms, which included the block transforms, but in particular included more powerful transforms the importance of that in the story of video coding. Is that up until H.264, all the video codec. So H.261, MPEG-1, MPEG-2, all these video codecs used a floating point implementation of the discrete cosign transform and without requiring anybody to implement you know a full floating point transform to a very large number of decimal places. What they required then was a minimum accuracy to the DCT and that became something that all codecs had to do. Instead. If you had an implementation of the DCT, it had to be accurate to the true floating point DCT up to a certain decimal point in, in the transform accuracy. Pankaj Topiwala: 11:27 With the advent of H.264, with H.264, we decided right away that we were not going to do a flooding point transform. We were going to do an integer transform. That decision was made even before I joined, my company joined, the development base, H.264, AVC, But they were using 32 point transforms. We found that we could introduce 16 point transforms, half the complexity. And half the complexity only in the linear dimension when you, when you think of it as a spatial dimension. So two spatial dimensions, it's a, it's actually grows more. 
And so the reduction in complexity is not a factor of two, but at least a factor of four and much more than that. In fact, it's a little closer to exponential. The reality is that we were able to bring the H.264 codec. Pankaj Topiwala: 12:20 So in fact, the transform was the most complicated part of the entire codec. So if you had a 32 point transform, the entire codec was at 32 point technology and it needed 32 points, 32 bits at every sample to process in hardware or software. By changing the transform to 16 bits, we were able to bring the entire codec to a 16 bit implementation, which dramatically improved the hardware implementability of this transfer of this entire codec without at all effecting the quality. So that was an important development that happened with AVC. And since then, we've been working with only integer transforms. Mark Donnigan: 13:03 This technical history is a really amazing to hear. I, I didn't actually know that Dror or you, you probably knew that, but I didn't. Dror Gill: 13:13 Yeah, I mean, I knew about the transform and shifting from fixed point, from a floating point to integer transform. But you know, I didn't know that's an incredible contribution Pankaj. Pankaj Topiwala: 13:27 We like to say that we've saved the world billions of dollars in hardware implementations. And we've taken a small a small you know, a donation as a result of that to survive as a small company. Dror Gill: 13:40 Yeah, that's great. And then from AVC you moved on and you continued your involvement in, in the other standards, right? That's followed. Pankaj Topiwala: 13:47 in fact, we've been involved in standardization efforts now for almost 20 years. My first meeting was a, I recall in may of 2000, I went to a an MPEG meeting in Geneva. And then shortly after that in July I went to an ITU VCEG meeting. VCEG is the video coding experts group of the ITU. And MPEG is the moving picture experts group of ISO. These two organizations were separately pursuing their own codecs at that time. Pankaj Topiwala: 14:21 ISO MPEG was working on MPEG-4 and ITU VCEG was working on H.263, and 263 plus and 263 plus plus. And then finally they started a project called 263 L for longterm. And eventually it became clear to these two organizations that look, it's silly to work on, on separate codecs. They had worked once before in MPEG-2 develop a joint standard and they decided to, to form a joint team at that time called the joint video team, JVT to develop the H.264 AVC video codec, which was finally done in 2003. We participate participated you know fully in that making many contributions of course in the transform but also in motion estimation and other aspects. So, for example, it might not be known that we also contributed the fast motion estimation that's now widely used in probably nearly all implementations of 264, but in 265 HEVC as well. Pankaj Topiwala: 15:38 And we participated in VVC. But one of the important things that we can discuss is these technologies, although they all have the same overall structure, they have become much more complicated in terms of the processing that they do. And we can discuss that to some extent if you want? Dror Gill: 15:59 The compression factors, just keep increasing from generation to generation and you know, we're wondering what's the limit of that? Pankaj Topiwala: 16:07 That's of course a very good question and let me try to answer some of that. 
And in fact that discussion I don't think came up in the discussion you had with Gary Sullivan, which certainly could have but I don't recall it in that conversation. So let me try to give for your listeners who did not catch that or are not familiar with it. A little bit of the story. Pankaj Topiwala: 16:28 The first international standard was the ITU. H.261 standard dating roughly to 1988 and it was designed to do only about 15 to one to 20 to one compression. And it was used mainly for video conferencing. And at that time you'd be surprised from our point of view today, the size of the video being used was actually incredibly tiny about QCIP or 176 by 144 pixels. Video of that quality that was the best we could conceive. And we thought we were doing great. And doing 20 to one compression, wow! Recall by the way, that if you try to do a lossless compression of any natural signal, whether it's speech or audio or images or video you can't do better than about two to one or at most about two and a half to one. Pankaj Topiwala: 17:25 You cannot do, typically you cannot even do three to one and you definitely cannot do 10 to one. So a video codec that could do 20 to one compression was 10 times better than what you could do lossless, I'm sorry. So this is definitely lossy, but lossy with still a good quality so that you can use it. And so we thought we were really good. When MPEG-1 came along in, in roughly 1992 we were aiming for 25 to one compression and the application was the video compact disc, the VCD. With H.262 or MPEG-2 roughly 1994, we were looking to do about 35 to one compression, 30 to 35. And the main application was then DVD or also broadcast television. At that point, broadcast television was ready to use at least in some, some segments. Pankaj Topiwala: 18:21 Try digital broadcasting. In the United States, that took a while. But in any case it could be used for broadcast television. And then from that point H.264 AVC In 2003, we jumped right away to more than 100 to one compression. This technology at least on large format video can be used to shrink the original size of a video by more than two orders of magnitude, which was absolutely stunning. You know no other natural signal, not speech, not broadband, audio, not images could be compressed that much and still give you high quality subjective quality. But video can because it's it is so redundant. And because we don't understand fully yet how to appreciate video. Subjectively. We've been trying things you know, ad hoc. And so the entire development of video coding has been really by ad hoc methods to see what quality we can get. Pankaj Topiwala: 19:27 And by quality we been using two two metrics. One is simply a mean square error based metric called peak signal to noise ratio or PSNR. And that has been the industry standard for the last 35 years. But the other method is simply to have people look at the video, what we call subjective rating of the video. Now it's hard to get a subjective rating. That's reliable. You have to do a lot of standardization get a lot of different people and take mean opinion scores and things like that. That's expensive. Whereas PSNR is something you can calculate on a computer. And so people have mostly in the development of video coding for 35 years relied on one objective quality metric called PSNR. And it is good but not great. And it's been known right from the beginning that it was not perfect, not perfectly correlated to video quality, and yet we didn't have anything better anyway. 
Pankaj Topiwala: 20:32 To finish the story of the video codecs with H.265 HEVC in 2013, we were now able to do up to 300 to one to up to 500 to one compression on let's say a 4K. And with VVC we have truly entered a new realm where we can do up to 1000 to one compression, which is three full orders of magnitude reduction of the original size. If the original size is say, 10 gigabits, we can bring that down to 10 megabits. And that's unbelievable. And so video compression truly is a remarkable technology. And you know, it's a, it's a marvel to look at. Of course it does not, it's not magic. It comes with an awful lot of processing and an awful lot of smarts have gone into it. That's right. Mark Donnigan: 21:24 You know Pankaj, that, is an amazing overview and to hear that that VVC is going to be a thousand to one. You know, compression benefit. Wow. That's incredible! Pankaj Topiwala: 21:37 I think we should of course we should of course temper that with you know, what people will use in applications. Correct. They may not use the full power of a VVC and may not crank it to that level. Sure, sure. I can certainly tell you that that we and many other companies have created bitstreams with 1000 to one or more compression and seeing video quality that we thought was usable. Mark Donnigan: 22:07 One of the topics that has come to light recently and been talked about quite a bit. And it was initially raised by Dave Ronca who used to lead encoding at Netflix for like 10 years. In fact you know, I think he really built that department, the encoding team there and is now at Facebook. And he wrote a LinkedIn article post that was really fascinating. And what he was pointing out in this post was, was that with compression efficiency and as each generation of codec is getting more efficient as you just explained and gave us an overview. There's a, there's a problem that's coming with that in that each generation of codec is also getting even more complex and you know, in some settings and, and I suppose you know, Netflix is maybe an example where you know, it's probably not accurate to say they have unlimited compute, but their application is obviously very different in terms of how they can operate their, their encoding function compared to someone who's doing live, live streaming for example, or live broadcast. Maybe you can share with us as well. You know, through the generation generational growth of these codecs, how has the, how has the compute requirements also grown and has it grown in sort of a linear way along with the compression efficiency? Or are you seeing, you know, some issues with you know, yes, we can get a thousand to one, but our compute efficiency is getting to the, where we could be hitting a wall. Pankaj Topiwala: 23:46 You asked a good question. Has the complexity only scaled linearly with the compression ratio? And the answer is no. Not at all. Complexity has outpaced the compression ratio. Even though the compression ratio is, is a tremendous, the complexity is much, much higher. And has always been at every step. First of all there's a big difference in doing the research, the research phase in development of the, of a technology like VVC where we were using a standardized reference model that the committee develops along the way, which is not at all optimized. But that's what we all use because we share a common code base. And make any new proposals based on modifying that code base. 
Now that code base is always along the entire development chain has always been very, very slow. Pankaj Topiwala: 24:42 And true implementations are anywhere from 100 to 500 times more efficient in complexity than the reference software. So right away you can have the reference software for say VVC and somebody developing a, an implementation that's a real product. It can be at least 100 times more efficient than what the reference software, maybe even more. So there's a big difference. You know, when we're developing a technology, it is very hard to predict what implementers will actually come up with later. Of course, the only way they can do that is that companies actually invest the time and energy right away as they're developing the standard to build prototype both software and hardware and have a good idea that when they finish this, you know, what is it going to really cost? So just to give you a, an idea, between, H.264 and Pankaj Topiwala: 25:38 H.265, H.264, only had two transforms of size, four by four and eight by eight. And these were integer transforms, which are only bit shifts and adds, took no multiplies and no divides. The division in fact got incorporated into the quantizer and as a result, it was very, very fast. Moreover, if you had to do, make decisions such as inter versus intra mode, the intra modes there were only about eight or 10 intra modes in H.264. By contrast in H.265. We have not two transforms eight, four by four and eight by, but in fact sizes of four, eight, 16 and 32. So we have much larger sized transforms and instead of a eight or 10 intra modes, we jumped up to 35 intra modes. Pankaj Topiwala: 26:36 And then with a VVC we jumped up to 67 intro modes and we just, it just became so much more complex. The compression ratio between HEVC and VVC is not quite two to one, but let's say, you know, 40% better. But the the complexity is not 40% more. On the ground and nobody has yet, to my knowledge, built a a, a, a fully compliant and powerful either software or hardware video codec for VVC yet because it's not even finished yet. It's going to be finished in July 2020. When it, when, the dust finally settles maybe four or five years from now, it will be, it will prove to be at least three or four times more complex than HEVC encoder the decoder, not that much. The decoder, luckily we're able to build decoders that are much more linear than the encoder. Pankaj Topiwala: 27:37 So I guess I should qualify as discussion saying the complexity growth is all mostly been in the encoder. The decoder has been a much more reasonable. Remember, we are always relying on this principle of ever-increasing compute capability. You know, a factor of two every 18 months. We've long heard about all of this, you know, and it is true, Moore's law. If we did not have that, none of this could have happened. None of this high complexity codecs, whatever had been developed because nobody would ever be able to implement them. But because of Moore's law we can confidently say that even if we put out this very highly complex VVC standard, someday and in the not too distant future, people will be able to implement this in hardware. Now you also asked a very good question earlier, is there a limit to how much we can compress? Pankaj Topiwala: 28:34 And also one can ask relatively in this issue, is there a limit to a Moore's law? And we've heard a lot about that. 
That may be finally after decades of the success of Moore's law and actually being realized, maybe we are now finally coming to quantum mechanical limits to you know how much we can miniaturize in electronics before we actually have to go to quantum computing, which is a totally different you know approach to doing computing because trying to go smaller die size. Well, we'll make it a unstable quantum mechanically. Now the, it appears that we may be hitting a wall eventually we haven't hit it yet, but we may be close to a, a physical limit in die size. And in the observations that I've been making at least it seems possible to me that we are also reaching a limit to how much we can compress video even without a complexity limit, how much we can compress video and still obtain reasonable or rather high quality. Pankaj Topiwala: 29:46 But we don't know the answer to that. And in fact there are many many aspects of this that we simply don't know. For example, the only real arbiter of video quality is subjective testing. Nobody has come up with an objective video quality metric that we can rely on. PSNR is not it. When, when push comes to shove, nobody in this industry actually relies on PSNR. They actually do subjective testing well. So in that scenario, we don't know what the limits of visual quality because we don't understand human vision, you know, we try, but human vision is so complicated. Nobody can understand the impact of that on video quality to any very significant extent. Now in fact, the first baby steps to try to understand, not explicitly but implicitly capture subjective human video quality assessment into a neural model. Those steps are just now being taken in the last couple of years. In fact, we've been involved, my company has been involved in, in getting into that because I think that's a very exciting area. Dror Gill: 30:57 I tend to agree that modeling human perception with a neural network seems more natural than, you know, just regular formulas and algorithms which are which are linear. Now I, I wanted to ask you about this process of, of creating the codecs. It's, it's very important to have standards. So you encode a video once and then you can play it anywhere and anytime and on any device. And for this, the encoder and decoder need to agree on exactly the format of the video. And traditionally you know, as you pointed out with all the history of, of development. Video codecs have been developed by standardization bodies, MPEG and ITU first separately. And then they joined forces to develop the newest video standards. But recently we're seeing another approach to develop codecs, which is by open sourcing them. Dror Gill: 31:58 Google started with an open source code, they called VP9 which they first developed internally. Then they open sourced it and and they use it widely across their services, especially in, YouTube. And then they joined forces with the, I think the largest companies in the world, not just in video but in general. You know those large internet giants such as Amazon and Facebook and and Netflix and even Microsoft, Apple, Intel have joined together with the Alliance of Open Media to jointly create another open codec called AV1. And this is a completely parallel process to the MPEG codec development process. And the question is, do you think that this was kind of a one time effort to, to to try and find a, or develop a royalty free codec, or is this something that will continue? 
And how do you think the adoption of the open source codecs versus the committee defined codecs, how would that adoption play out in the market? Pankaj Topiwala: 33:17 That's of course a large topic on its own. And I should mention that there have been a number of discussions about that topic. In particular at the SPIE conference last summer in San Diego, we had a panel discussion of experts in video compression to discuss exactly that. And one of the things we should provide to your listeners is a link to that captured video of the panel discussion where that topic is discussed to some significant extent. And it's on YouTube so we can provide a link to that. My answer. And of course none of us knows the future. Right. But we're going to take our best guesses. I believe that this trend will continue and is a new factor in the landscape of video compression development. Pankaj Topiwala: 34:10 But we should also point out that the domain of preponderance use preponderant use of these codecs is going to be different than in our traditional codecs. Our traditional codecs such as H.264 265, were initially developed for primarily for the broadcast market or for DVD and Blu-ray. Whereas these new codecs from AOM are primarily being developed for the streaming media industry. So the likes of Netflix and Amazon and for YouTube where they put up billions of user generated videos. So, for the streaming application, the decoder is almost always a software decoder. That means they can update that decoder anytime they do a software update. So they're not limited by a hardware development cycle. Of course, hardware companies are also building AV1. Pankaj Topiwala: 35:13 And the point of that would be to try to put it into handheld devices like laptops, tablets, and especially smartphones. But to try to get AV1 not only as a decoder but also as an encoder in a smartphone is going to be quite complicated. And the first few codecs that come out in hardware will be of much lower quality, for example, comparable to AVC and not even the quality of HEVC when they first start out. So that's... the hardware implementations of AV1 that work in real time are not going to be, it's going to take a while for them to catch up to the quality that AV1 can offer. But for streaming we, we can decode these streams reasonably well in software or in firmware. And the net result is that, or in GPU for example, and the net result is that these companies can already start streaming. Pankaj Topiwala: 36:14 So in fact Google is already streaming some test streams maybe one now. And it's cloud-based YouTube application and companies like Cisco are testing it already, even for for their WebEx video communication platform. Although the quality will not be then anything like the full capability of AV1, it'll be at a much reduced level, but it'll be this open source and notionally, you know, royalty free video codec. Dror Gill: 36:50 Notionally. Yeah. Because they always tried to do this, this dance and every algorithm that they try to put into the standard is being scrutinized and, and, and they check if there are any patents around it so they can try and keep this notion of of royalty-free around the codec because definitely the codec is open source and royalty free. Dror Gill: 37:14 I think that is, is, is a big question. So much IP has gone into the development of the different MPEG standards and we know it has caused issues. 
Went pretty smoothly with AVC, with MPEG-LA that had kind of a single point of contact for licensing all the essential patents and with HEVC, that hasn't gone very well in the beginning. But still there is a lot of IP there. So the question is, is it even possible to have a truly royalty free codec that can be competitive in, in compression efficiency and performance with the codec developed by the standards committee? Pankaj Topiwala: 37:50 I'll give you a two part answer. One because of the landscape of patents in the field of video compression which I would describe as being, you know very, very spaghetti like and patents date back to other patents. Pankaj Topiwala: 38:09 And they cover most of the, the topics and the most of the, the tools used in video compression. And by the way we've looked at the AV1 and AV1 is not that different from all the other standards that we have. H.265 or VVC. There are some things that are different. By and large, it resembles the existing standards. So can it be that this animal is totally patent free? No, it cannot be that it is patent free. But patent free is not the same as royalty free. There's no question that AV1 has many, many patents, probably hundreds of patents that reach into it. The question is whether the people developing and practicing AV1 own all of those patents. That is of course, a much larger question. Pankaj Topiwala: 39:07 And in fact, there has been a recent challenge to that, a group has even stood up to proclaim that they have a central IP in AV1. The net reaction from the AOM has been to develop a legal defense fund so that they're not going to budge in terms of their royalty free model. If they do. It would kill the whole project because their main thesis is that this is a world do free thing, use it and go ahead. Now, the legal defense fund then protects the members of that Alliance, jointly. Now, it's not as if the Alliance is going to indemnify you against any possible attack on IP. They can't do that because nobody can predict, you know, where somebody's IP is. The world is so large, so many patents in that we're talking not, not even hundreds and thousands, but tens of thousands of patents at least. Pankaj Topiwala: 40:08 So nobody in the world has ever reviewed all of those patent. It's not possible. And the net result is that nobody can know for sure what technology might have been patented by third parties. But the point is that because such a large number of powerful companies that are also the main users of this technology, you know, people, companies like Google and Apple and Microsoft and, and Netflix and Amazon and Facebook and whatnot. These companies are so powerful. And Samsung by the way, has joined the Alliance. These companies are so powerful that you know, it would be hard to challenge them. And so in practice, the point is they can project a royalty-free technology because it would be hard for anybody to challenge it. And so that's the reality on the ground. Pankaj Topiwala: 41:03 So at the moment it is succeeding as a royalty free project. I should also point out that if you want to use this, not join the Alliance, but just want to be a user. Even just to use it, you already have to offer any IP you have in this technology it to the Alliance. 
So, all users around the world: if tens of thousands and eventually millions of users around the world, including tens of thousands of companies, start to use this technology, they will all have automatically yielded any IP they have in AV1 to the Alliance.
Dror Gill: 41:44 Wow. That's really fascinating. I mean, first, the distinction you made between royalty free and patent free. So the AOM can keep this technology royalty free even if it's not patent free, because they don't charge royalties, and they can help with the legal defense fund against patent claims and still keep it royalty free. And second is the fact that when you use this technology, you are giving up any IP claims against the creators of the technology, which means that any party who wants to make IP claims against the AV1 encoder cannot use it in any form or shape.
Pankaj Topiwala: 42:25 That's at least my understanding. And I've tried to look at it, but of course I'm not a lawyer, and you have to take that as just the opinion of a video coding expert rather than a lawyer dissecting the legalities of this. But be that as it may, my understanding is that any user would have to yield any IP they have in the standard to the Alliance. And the net result will be, if this technology truly does get widely used, that more IP than just that of the Alliance members will have been folded into it, so that eventually it would be hard for anybody to challenge this.
Mark Donnigan: 43:09 Pankaj, what does this mean for development? So much of the technology has been enabled by the financial incentive of small groups of people, you know, or medium-sized groups of people forming together, building a company usually, hiring other experts, and being able to derive some economic benefit from the research and the work and the effort that's put in. If all of this sort of consolidates to a handful, or a couple of handfuls, of, you know, very, very large companies, I guess I'm asking, from your view: will video encoding technology development and advancements proliferate? Will it sort of stay static? Because basically all these companies will hire or acquire, you know, all the experts, and now everybody works for Google and Facebook and Netflix, and you know... Or do you think it will ultimately decline? Because that's something that comes to mind here: if the economic incentives sort of go away, well, people aren't going to work for free!
Pankaj Topiwala: 44:29 So that's of course another question, and one relevant, in fact, to many of us working in video compression right now, including my company. And I faced this directly back in the days of MPEG-2. There was a two-and-a-half-dollar ($2.50) per-unit license fee for using MPEG-2. That created billions of dollars in licensing. In fact, the patent pool, MPEG-LA itself, made billions of dollars; even though they took only 10% of the proceeds, they already made billions of dollars, you know, huge amounts of money. With the advent of H.264 AVC, the patent license went from two and a half dollars down to 25 cents a unit. And now with HEVC, it's a little bit less than that per unit. Of course, the number of units has grown exponentially, but then the big companies don't continue to pay per unit anymore.
Pankaj Topiwala: 45:29 They just pay a yearly cap, for example 5 million or 10 million, which to these big companies is peanuts.
So there's a yearly cap for the big companies that have, you know, hundreds of millions of units. You know, imagine the number of Microsoft Windows installations that are out there, or the number of Google Chrome browsers. And if you have a codec embedded in the browser, there are hundreds of millions of them, if not billions. And so they just pay a cap and they're done with it. But even then, there was, up till now, an incentive for smart engineers to develop exciting new ideas for future video coding. And that has been the story up till now. But if it happens that this AOM model, with AV1 and then AV2, really becomes the dominant codec and takes over the market, then there will be no incentive for researchers to devote any time and energy.
Pankaj Topiwala: 46:32 Certainly my company, for example, can't afford to, you know, just twiddle thumbs and create technologies for which there is absolutely no possibility of a royalty stream. So we cannot be in the business of developing video coding when video coding doesn't pay. The only thing that makes money is applications, for example a streaming application or some other such thing. And so Netflix and Google and Amazon will be streaming video, and they'll charge you per stream, but not for the codec. So that's an interesting thing, and it certainly affects the future development of video. It's clear to me it has a negative impact on the research that we've got going. I can't expect that Google and Amazon and Microsoft are going to continue to devote the same energy to developing future compression technologies in their royalty-free environment as companies have in the open standards development environment.
Pankaj Topiwala: 47:34 It's hard for me to believe that they will devote that much energy. They'll devote energy, but it will not be at the same level. For example, developing a video standard such as HEVC took up to 10 years of development by on the order of 500 to 600 experts, well, let's say 400 to 500 experts, from around the world, meeting four times a year for 10 years.
Mark Donnigan: 48:03 That is so critical. I want you to repeat that again.
Pankaj Topiwala: 48:07 Well, I mean, very clearly we've been putting out a video codec roughly on a schedule of once every 10 years. MPEG-2 was 1994, AVC was 2003 and 2004, and then HEVC in 2013. Those were roughly 10 years apart. But with VVC we've accelerated the schedule to put one out in seven years instead of 10. But even then, you should realize that we had been working right since HEVC was done.
Pankaj Topiwala: 48:39 We've been working all this time to develop VVC, and so on the order of 500 experts from around the world have met four times a year at international locations, spending on the order of $100 million per meeting. So billions of dollars have been spent by industry to create these standards, many billions, and it can't happen, you know, without that. It's hard for me to believe that companies like Microsoft, Google, and whatnot are going to devote billions to develop their next incremental, you know, AV1, AV2 and AV3. But maybe they will. It's just that if there's no royalty stream coming from the codec itself, only the application, then the incentive, supposing they start to dominate, to create even better technology will not be there. So there really is a financial issue in this, and it's at play right now.
Dror Gill: 49:36 Yeah, I find it really fascinating.
And of course, Mark and I are not lawyers, but all this, you know, royalty free versus committee developed, open source versus a standard, those large companies whose dominance some people fear, not only in video codec development but in many other areas, versus, you know, dozens of companies and hundreds of engineers working for seven or 10 years on a codec. So, you know, these are really different approaches, different methods of development, eventually applied to the exact same problem of video compression. And how this turns out, I mean, we cannot forecast for sure, but it will be very interesting, especially next year in 2020, when VVC is ratified, and at around the same time EVC is ratified, another codec from the MPEG committee.
Dror Gill: 50:43 And then AV1, and once AV1 starts hitting the market, we'll hear all the discussions of AV2. So it's going to be really interesting and fascinating to follow, and we promise to bring you all the updates here on The Video Insiders. So Pankaj, I really want to thank you. This has been a fascinating discussion, with very interesting insights into the world of codec development and compression and wavelets and DCT and all of those topics, and the history and the future. So thank you very much for joining us today on The Video Insiders.
Pankaj Topiwala: 51:25 It's been my pleasure, Mark and Dror. And I look forward to interacting in the future. I hope this is useful for your audience. If I can give you one parting thought, let me give this...
Pankaj Topiwala: 51:40 H.264 AVC was developed in 2003 and 2004. That is, you know, some 16 or 17 years ago, and it is now close to being nearly royalty-free itself. And if you look at the market share of video codecs currently being used in the market, for example even in streaming, AVC dominates that market completely. Even though VP8 and VP9 and VP10 were introduced, and now AV1, none of those have any sizeable market share. AVC currently dominates, with 70 to 80% of that marketplace right now. And it fully dominates broadcast, where those other codecs are not even in play. And so there, 16 or 17 years later, it is still the dominant codec, even much over HEVC, which, by the way, has also taken an uptick in the last several years. So the standardized codecs developed by ITU and MPEG are not dead. They may just take a little longer to emerge as dominant forces.
Mark Donnigan: 52:51 That's a great parting thought. Thanks for sharing that. What an engaging episode, Dror. Yeah. Really interesting. I learned so much. I got a DCT primer. I mean, that in and of itself was amazing.
Dror Gill: 53:08 Yeah. Thank you.
Mark Donnigan: 53:11 Yeah, amazing, Pankaj. Okay, well, good. Well, thanks again for listening to The Video Insiders, and as always, if you would like to come on this show, we would love to have you. Just send us an email. The email address is thevideoinsiders@beamr.com, and Dror or myself will follow up with you, and we'd love to hear what you're doing. We're always interested in talking to video experts who are involved in really every area of video distribution. So it's not only encoding and not only codecs; whatever you're doing, tell us about it. And until next time, what do we say, Dror? Happy encoding! Thanks everyone.

Bigdata Hebdo
Episode 89 : Si AWS ne fait pas un service managé avec ton produit tu n'existes pas

Bigdata Hebdo

Play Episode Listen Later Dec 10, 2019 66:45


Episode 89: Si AWS ne fait pas un service managé avec ton produit, tu n'existes pas (If AWS doesn't build a managed service around your product, you don't exist)
The Christmas FAQ for episode 90: https://trkit.io/s/BDHFAQNOEL

Cocorico, or almost
Dataiku: Florent Douetteau [Podcast / Interview] https://pca.st/vdt5xiut
"Avec Dataiku, la France se dote d'une nouvelle « licorne »" (With Dataiku, France gains a new "unicorn") https://www.lemonde.fr/economie/article/2019/12/04/avec-dataiku-la-france-se-dote-d-une-nouvelle-licorne_6021687_3234.html

Time series
Time Series Prediction - A short introduction for pragmatists https://www.liip.ch/en/blog/time-series-prediction-a-short-comparison-of-best-practices
Using Gradient Boosting for Time Series prediction tasks https://towardsdatascience.com/using-gradient-boosting-for-time-series-prediction-tasks-600fac66a5fc
Time series features extraction using Fourier and Wavelet transforms on ECG data https://blog.octo.com/time-series-features-extraction-using-fourier-and-wavelet-transforms-on-ecg-data/

NoSQL
Cassandra at AWS:
https://aws.amazon.com/fr/mcs/
https://aws.amazon.com/fr/blogs/aws/new-amazon-managed-apache-cassandra-service-mcs/
https://www.scylladb.com/2019/12/04/managed-cassandra-on-aws-our-take/

SQL forever
Modern Data Practice and the SQL Tradition https://tselai.com/modern-data-practice-and-the-sql-tradition.html
Advent of Code with Google BigQuery https://towardsdatascience.com/advent-of-code-sql-bigquery-31e6a04964d4
https://adventofcode.com
https://www.reddit.com/r/adventofcode/

AI
Symbol recognition with AI, Programmez! no. 235, December 2019
Practical AI https://practicalai.me/
Googler Zack Akil discusses machine learning and AI advances at Google https://www.gcppodcast.com/post/episode-206-ml-ai-with-zack-akil/
TPU dev board https://coral.ai/products/dev-board/

Tools of all kinds
Making Git and Jupyter Notebooks play nice http://timstaley.co.uk/posts/making-git-and-jupyter-notebooks-play-nice/
IntelliJ IDEA 2019.3: Better Performance and Quality https://blog.jetbrains.com/idea/2019/11/intellij-idea-2019-3-better-performance-and-quality/

Announcements
Bigdatapero in January, date to be decided
17/12/2019: PTSM #3 RedisTimeSeries & TSL https://www.meetup.com/fr-FR/Paris-Time-Series-Meetup/events/266610627/

Join the Bigdata Hebdo Slack: http://trkit.io/s/invitebdh
http://www.bigdatahebdo.com https://twitter.com/bigdatahebdo
Nicolas: https://www.cerenit.fr/ and https://twitter.com/_CerenIT and https://twitter.com/nsteinmetz
Jérôme: https://twitter.com/jxerome and https://www.zeenea.com
Vincent: https://twitter.com/vhe74

This episode is sponsored by Affini-Tech and Cerenit.
Need to design, industrialize or automate your platforms? Write to us at contact@cerenit.fr ( https://www.cerenit.fr/ and https://twitter.com/_CerenIT )
Affini-Tech supports you in all your Cloud and Data projects, to imagine, experiment and execute your services! ( http://affini-tech.com https://twitter.com/affinitech )
We're hiring! Come crunch data with us! Write to us at recrutement@affini-tech.com
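The Fourier and wavelet feature-extraction link above is the most hands-on item in this list. As a rough illustration of the idea only (not the code from the OCTO article), here is a minimal Python sketch; it assumes NumPy and the PyWavelets package, a hypothetical 1-D array signal holding one ECG trace sampled at fs Hz, and illustrative band edges and wavelet settings chosen by us, not by the article.

import numpy as np
import pywt

def spectral_and_wavelet_features(signal, fs, wavelet="db4", level=4):
    # Fourier side: energy in a few fixed frequency bands of the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    bands = [(0.5, 4), (4, 15), (15, 40)]  # illustrative band edges, not clinical ones
    band_energy = [float(np.sum(spectrum[(freqs >= lo) & (freqs < hi)] ** 2)) for lo, hi in bands]

    # Wavelet side: multi-level discrete wavelet transform, then the energy per sub-band.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    wavelet_energy = [float(np.sum(c ** 2)) for c in coeffs]

    return np.array(band_energy + wavelet_energy)

# Example on a synthetic "ECG-like" signal.
fs = 250.0
t = np.arange(0, 10, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(spectral_and_wavelet_features(signal, fs))

The resulting feature vector (band energies plus wavelet sub-band energies) is the kind of input one would then feed to a downstream classifier or regressor.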

Themisia Podcast
Ep 003: A Digital Democracy Built On Truth

Themisia Podcast

Play Episode Listen Later Sep 5, 2019 39:40


In the coming future, Truth will be safeguarded by citizens of an entirely self-governed and autonomous digital community that spans the world. In this episode I talk about:
The coming Themisia whitepaper
Progressing towards pre-seed investment
Dispatch Labs, Perlin and Wavelet technology
The Investigations public utility architecture
Building investigations using fact and reason modules
One day giving up control so that Themisia can decide its future
The continued contribution of my tech company after handing over the reins
Bicameral parliament governance
The Great Compromise in the founding of the U.S. Congress
Balancing short-term vs long-term interests of the country
Representation for Themisian institutions, communities and districts
Support the show (https://www.themisia.org/invest)

Binance Podcast
Supporting Greater Environmental Accountability with Blockchain Technology

Binance Podcast

Play Episode Listen Later Aug 27, 2019 26:32


How can blockchain technology contribute to carbon emission reduction? With over a decade of experience in environmental service, Dorjee Sun proposed a solution after he entered the blockchain industry. In this episode, Jill Ni from Binance Charity Foundation and Dorjee Sun, Perlin CEO, discuss the Global Ledger Initiative co-headed by Perlin and BCF, which aims to bring greater environmental transparency and accountability through the Wavelet distributed ledger.

Wyre Talks
Ep 45, The DAG Ledger Powering Wasm Smart Contracts—with Cofounders Kenta Iwasaki & Dorjee Sun of Perlin

Wyre Talks

Play Episode Listen Later Aug 8, 2019 68:46


The creators of Perlin still hold the long-term vision of developing a revenue-generating distributed ledger system that closes the wealth gap. But until smartphones are powerful enough and broadband speeds are fast enough, the team has expanded its scope to include Wavelet, a DAG ledger that powers WebAssembly smart contracts. Kenta Iwasaki and Dorjee Sun are the cofounders of Perlin, a crypto project powering the future of trade with enterprise solutions built on top of the world's fastest public ledger. Today, Kenta and Dorjee join us to share their ten-year plan to realize a decentralized cloud compute platform and explain how the network evolved to include smart contracts, with international trade applications. They introduce us to DAG architecture, describing how it leverages alternatives to the longest-chain rule and discussing Wavelet's unique snowball sampling solution for identifying the valid chain. Kenta weighs in on the main design flaw of Avalanche as a consensus mechanism and offers a high-level overview of how Wavelet achieves total ordering of transactions. He also walks us through the recent Wavelet benchmark, revealing the system's impressive numbers in terms of transactions per second and time to finality. Listen in to understand the key considerations for smart contract developers building on Perlin and to learn how its leaderless proof-of-stake governance model addresses the safety concerns associated with other modern proof-of-stake ledgers.
Follow Thomas on Twitter: https://twitter.com/tomscaria
Follow Louis on Twitter: https://twitter.com/louAboudHogben
Follow Perlin on Twitter: https://twitter.com/PerlinNetwork
Follow Kenta on Twitter: https://twitter.com/xtwokei
Follow Dorjee on Twitter: https://twitter.com/dorjeesun
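The snowball sampling mentioned above comes from the Avalanche family of consensus protocols. The episode does not spell out Wavelet's exact variant, so the following is only a generic, simplified Python sketch of a Snowball-style binary decision loop over a simulated, static peer set; the function name and the parameter values (sample size k, quorum alpha, decision threshold beta) are illustrative choices of ours, not Wavelet's.

import random

def snowball(peers, my_pref, k=10, alpha=7, beta=15):
    # Generic Snowball-style loop: repeatedly sample k peers, track which value
    # keeps winning alpha-majorities, and decide once the same value has won
    # beta consecutive rounds.
    confidence = {0: 0, 1: 0}
    last_winner, streak = None, 0
    while streak < beta:
        sample = random.sample(peers, k)
        ones = sum(sample)  # peers answer with their preferred value (0 or 1)
        if ones >= alpha:
            winner = 1
        elif (k - ones) >= alpha:
            winner = 0
        else:
            streak = 0  # no alpha-majority this round: reset the streak and resample
            continue
        confidence[winner] += 1
        if confidence[winner] > confidence[my_pref]:
            my_pref = winner
        streak = streak + 1 if winner == last_winner else 1
        last_winner = winner
    return my_pref

# Simulated network: 100 peers, 80% of which currently prefer value 1.
peers = [1] * 80 + [0] * 20
print(snowball(peers, my_pref=0))

In a real ledger the sampled peers also run the protocol and update their own preferences, which is what drives the whole network to converge on one chain; the static list here only illustrates the sampling and counting logic.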

Laris Maker presenta: ELEKTRODOS
ELEKTRODOS. Special Electrocamp (2). Interview Dark vektor and DJ Set from Wavelet

Laris Maker presenta: ELEKTRODOS

Play Episode Listen Later Aug 7, 2017 118:34


Elektrodos show of the 7th of August 2017, dedicated to the ELECTROCAMP 2.0 electro party held in Valencia, Spain (Part 2). With an interview and a new song from Dark Vektor, and a DJ set from Wavelet (Mario Ortega Pérez). Playlist: 01. Dark Vektor interview 02. Dark Vektor - No More (Sóc Un Frik Sóc Un Tècnic) 03. Wavelet DJ Set

Laris Maker presenta: ELEKTRODOS
ELEKTRODOS 29th aug 16. Special ELECTROCAMP

Laris Maker presenta: ELEKTRODOS

Play Episode Listen Later Aug 30, 2016 160:31


Elektrodos special show of the 29th of August 2016: almost 3 hours dedicated to the Spanish electro festival ELECTROCAMP 2016, held on the 12th, 13th and 14th of August 2016 at The Hole in the Valley, Turís, Valencia, with 12 artists: Laris Maker, Electromorphosed, Honexters, Spy DJ, Xabi the Butcher, Kima, Vema, Elektrógena, Blanet, Wavelet, Negocius Man and Gerson. With a live set from Kima and DJ sets from Electromorphosed, Xabi The Butcher, Elektrógena, Blanet, Wavelet and Jose Spy. PLAYLIST: 01. Electromorphosed DJ Set 02. Xabi The Butcher DJ Set 03. Elektrógena DJ Set 04. Kima LIVE 05. Blanet DJ Set 06. Wavelet DJ Set 07. SPY DJ Set

Boots-n-Cast
036 - Wavelet

Boots-n-Cast

Play Episode Listen Later Jul 20, 2016 68:34


Tracklist: [0:00] Puppet & Foria - I'm Here {MONSTERCAT} [3:18] Vicetone - Nevada {MONSTERCAT} [6:38] Reez - Arena {MAINSTAGE} [9:33] Retrovision - Puzzle {FREE} [13:10] Max Freegrant & Digital Sketch - Aura {BIG TOYS} [18:18] Wrenchiski - Diamond Eyes {ZEROTHREE} [23:03] Yotto - The Owls {ANJUNABEATS} [27:44] Jerome Isma-Ae & Alastor - Fiction {JEE} [30:50] Estiva - Power Core {STATEMENT!} [35:39] Paris & Simo ft. Emol Reid - Can You See Me (Miska K Remix) {REVEALED/FREE} [38:50] Ashley Wallbridge - Es Vedra {GARUDA} [44:39] Adrian Alexander & Skylane - Stratosphere {ELLIPTICAL SUN} [49:54] Adip Kiyoi - Panther {LANGE} [55:17] Purelight - Infinity {ESSENTIALIZM} [59:21] Darryn M - Back Home {STATE CONTROL} [1:02:50] Michael Woods ft. Andrea Martin - Sleep (Michael Woods VIP Mix) {DIFFUSED} [1:05:35] Arty & Andrew Bayer - Follow The Light {ANJUNABEATS} Podcast cover image courtesy of Tachibana Lita.

Laris Maker presenta: ELEKTRODOS
ELEKTRODOS 14 March 16. new songs from Spanish producers and DJ Set Wavelet

Laris Maker presenta: ELEKTRODOS

Play Episode Listen Later Mar 15, 2016 120:00


ELEKTRODOS show of the 14th of March 2016, a special program dedicated to new electro songs from Spanish producers such as Amper Clap, Avidya Djvtr, Carlos Native, VeMa, Kima, Negocius Man, Spectrums Data Forces and Meka. With a DJ set from Wavelet. Playlist: 01. Amper Clap - Steigung 02. Quinta Columna - Part Of Genome 03. Carlos Native - Computadoras 04. Carlos Native - Tainted Memory 05. Vema - Emma 06. Kima - Impacto 07. Negocius Man - Asian Rats 08. O.T.R.S. - Fusion (Spectrums Data Forces Remix) 09. Meka - System Optymus 10. Wavelet DJ Set Wavelet DJ Set Tracklist: 1) Klorex55 - I Can't You 2) DJ K-1 - Erase The Time 3) Alpha 303 - Kontrol 4) ADJ - Rabbisht Master 5) Umwelt - Perception Of Heaven 6) Alavux - Baikonur Cosmodrome 7) The Martian - Base Station 303 8) Morphology - Magnetospheric 9) Anthony Rother - Destroy Him My Robots 10) Voiski - Rollerblade On The Grass 11) Mikron - Dry Sense II 12) Mas 2008 - X-perience The Reality 13) Aux 88 - Break It Down 14) Fleck ESC - Nice Guy (Pip Williams remix) 15) Zeta Reticula - Pratoma 16) Electronome - Bass Commander 17) Lory D - Bluff City 18) Chaos - Afrogermanic

Laris Maker presenta: ELEKTRODOS
ELEKTRODOS Special We Are The Robots Event

Laris Maker presenta: ELEKTRODOS

Play Episode Listen Later Feb 2, 2016 120:00


ELEKTRODOS special show of the 1st of February 2016, dedicated to the We Are The Robots electro event organized by the Hypnotica Colectiva collective and held in Valencia on the 22nd of January 2016. With interviews with Wavelet (Mario Ortega Pérez), Raúl Blanet, Amper Clap and DJDG (Guayo Jimfer), and DJ sets from Wavelet, Gerson Tarin and DJDG. Tracklist: 01. Wavelet Interview 02. Wavelet DJ Set 03. Amper Clap Interview 04. Amper Clap - History Of Violence 05. Raul Blanet Interview 06. Gerson DJ Set 07. DJDG Interview 08. DJDG DJ Set

Modellansatz
InSAR - SAR-Interferometrie

Modellansatz

Play Episode Listen Later Sep 24, 2015 40:14


On the occasion of the first alumni meeting in the newly renovated mathematics building, our alumnus Markus Even gives us an insight into his work as a mathematician at Fraunhofer IOSB, the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation in Ettlingen, in the working group for the analysis and visualization of SAR image data. He works on the development of algorithms for remote sensing, more precisely for deformation analysis using SAR interferometry (InSAR). Deformation here refers to movements of the Earth's crust or of structures located on it, for example buildings. Behind the term SAR interferometry lies a variety of remote-sensing techniques that are based on Synthetic Aperture Radar (SAR) and that exploit the sensor's ability to process a coherent signal in order to generate so-called interferograms. It is essential for SAR that the sensor is moving. For this purpose it is mounted on a satellite, an aircraft, or a sled running on rails. For the majority of applications it is moved along an approximately straight trajectory and emits electromagnetic signals in the microwave range at fixed time intervals, recording their returns subdivided into very short time intervals. In doing so it "looks" obliquely downwards so as not to systematically mix signals returning from two different locations on the Earth's surface. It is worth emphasizing that it can acquire images independently of the time of day (it illuminates the scene itself) and largely independently of the weather (the atmosphere delays the signal, but with rare exceptions it is transparent for these wavelengths of roughly 3 cm to 85 cm). This is an advantage over sensors operating in the optical or infrared part of the spectrum, which cannot deliver the desired information at night or under cloud cover. Besides the magnitude of the backscattered signal, the SAR sensor also records its phase shift relative to a reference oscillator, which forms the basis for interferometry and offers many possible applications. From the recorded signal the so-called focused image is computed. (Mathematically speaking, this task is an inverse problem.) The axes of this complex-valued image correspond, one to the position of the satellite along its track and the other to the travel time of the signal. The value of a pixel can, in simplified terms, be regarded as the average of the recorded backscatter from the volume defined by the respective pair of track interval and travel-time interval. This is the core of SAR: the radar beam covers a rather large area on the ground, so the recorded signal consists of the superposition of all returning waves. This superposition is undone by the focusing, which uses the fact that a resolution element on the ground contributes to all returns as long as it is covered by the radar beam, following a known range history as it does so. At first glance, the magnitude of the resulting image resembles a black-and-white photograph for high-resolution acquisitions. On closer inspection, however, differences quickly become apparent. Elevated objects lean towards the sensor, since their higher points are closer to it.
High magnitude values, i.e. strong backscatter, are usually associated with favourable geometric constellations: a flat surface, for instance, has to be oriented perpendicular to the incoming signal, which is rarely the case. Going to the limit of what is currently possible and looking at an image of an urban environment taken by an airborne sensor with a resolution of a few centimetres, the scene almost seems to decompose into point-like scatterers. These are produced by dihedral (posts) and, more frequently, trihedral structures. Trihedral structures reflect the incoming signal back parallel to its direction of incidence (familiar from the reflectors, known as cat's eyes, used on vehicles). Very low backscatter is mostly due to no signal with the corresponding travel time returning to the sensor, either because no scatterers are reached (shadow) or because the signal is reflected away from the satellite by smooth surfaces. For wavelengths of a few centimetres, asphalted or paved surfaces are smooth, for example, and so is water in calm conditions. In addition there are more complicated scattering mechanisms that lead to intermediate magnitudes, such as volume scattering in vegetation, snow and sand, distributed scattering from surfaces with many small, homogeneously distributed objects (e.g. gravel or other surfaces with sparse vegetation) or with a certain roughness. Beyond these there are many further possibilities, such as multiple reflections or scatterers located at different heights falling into the same range cell. The information essential for SAR interferometry, however, is the phase. It can only be used if two or more acquisitions from approximately the same position are available. The basic idea is to consider double differences of the phase of two pixels at two acquisition times. To understand them, assume first that each of the two resolution cells contains one dominant, point-like scatterer, meaning that the phase corresponds to a travel time. Since the sub-pixel positions are unknown and the size of a resolution cell is much larger than the wavelength, the phase difference of two pixels within a single image cannot be used. In the double difference, however, the unknown sub-pixel positions cancel out. In this idealized situation the double difference is the sum of three contributions: the travel-time difference due to the different acquisition geometries, the travel-time difference due to a relative change in position of the scatterers during the time elapsed between the acquisitions, and the travel-time difference due to the spatial and temporal variation of the atmospheric delay. Each of these three contributions can represent useful information in its own right. The first is used to derive elevation models, the second to detect deformations of the Earth's surface, and the third, although usually regarded as a nuisance term, can be used to determine the distribution of water vapour in the atmosphere. This raises the question of how to separate these terms, especially since the ambiguity arising from the phase being known only up to integer multiples of two pi still has to be resolved. Further questions arise because in real data these assumptions are not satisfied for many pixels. If one imagines, for example, a resolution cell with several or many smaller scatterers (e.g. covered with rubble), the phase of the superimposed returns changes with the incidence angle of the signal. It also changes if some of the scatterers have moved or if the two acquisitions have not been co-registered accurately enough. As a result, the phase changes by an amount that is hard to quantify; one then speaks of decorrelation. After a change in the physical conditions within the resolution cell there may no longer be any relationship at all between the phase values of a pixel. This is the case, for instance, when a dominant scatterer appears or is no longer present, or when terrain is flooded or falls dry. So the question arises which pixels carry information at all, what their quality is and how it can be extracted. The history of SAR interferometry began after the launch of the ESA satellite ERS-1 in 1991 with simple differential interferograms. The most famous is surely that of the 1992 Landers earthquake in California. For the first time in the history of science it was possible to measure the deformation field of an earthquake over a whole area, albeit only the component along the sensor's line of sight. Instead of values from hundreds of measuring stations installed in the region, the interferogram provided an image of the earthquake with millions of data points. Only SAR interferometry has this ability to record deformations of the Earth's surface over large areas! It should be noted, however, that this result also owes its existence to favourable circumstances. Landers lies in the Mojave Desert, so the variation of the atmospheric delay and the decorrelation were negligible. Thanks to the availability of a good elevation model, the contribution of the travel-time difference due to the different acquisition geometries could be eliminated (one then speaks of a differential interferogram). Another milestone was the Shuttle Radar Topography Mission of the Space Shuttle Endeavour in February 2000, during which the data for an elevation model of the entire land mass between 54 degrees southern and 60 degrees northern latitude were recorded. For this purpose Endeavour was equipped with two SAR antennas, one on the fuselage and one on a 60-metre boom. Thanks to simultaneous acquisitions, the phase contributions due to deformation and atmospheric delay were negligible, and decorrelation due to changes in physical conditions plays no role here either. Since 2010, DLR's TanDEM-X mission has been meeting the demand for a worldwide and significantly higher-resolution elevation model, with the two SAR antennas carried by two satellites flying in formation. There have also been decisive advances on the algorithmic side. One of the most fruitful was the invention of Permanent Scatterer Interferometric SAR (PSInSAR) around the year 2000, which solved the problem of separating the terms mentioned above by using a longer time series of differential interferograms together with some new ideas. The starting point was the discovery that a fairly large number of scatterers that remain phase-stable over long periods, the so-called permanent scatterers (also persistent scatterers or PS), can often be found; in simplified terms one may think of them as pixels whose resolution cell contains a dominant, point-like scatterer that is unchanged over the time series.
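Before the processing steps described next, a small code sketch may help make the interferogram and the double difference concrete. This is our own illustration, not material from the episode; it assumes two co-registered single-look complex acquisitions slc1 and slc2 stored as NumPy complex arrays, and the pixel coordinates are arbitrary examples.

import numpy as np

def interferogram(slc1, slc2):
    # Pixelwise product with the complex conjugate: the phase of the result is
    # the (wrapped) phase difference between the two acquisitions.
    return slc1 * np.conj(slc2)

def double_difference(ifg, pixel_a, pixel_b):
    # Double difference between two pixels of one interferogram, wrapped to (-pi, pi].
    # In the idealized point-scatterer model this is the sum of the topographic,
    # deformation and atmospheric contributions (plus noise).
    return np.angle(ifg[pixel_a] * np.conj(ifg[pixel_b]))

# Tiny synthetic example.
rng = np.random.default_rng(0)
slc1 = rng.standard_normal((100, 100)) + 1j * rng.standard_normal((100, 100))
slc2 = slc1 * np.exp(1j * 0.3)  # pretend the whole scene shifted by a constant phase
ifg = interferogram(slc1, slc2)
print(double_difference(ifg, (10, 10), (50, 50)))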
The analysis is then restricted to these pixels and, in simplified form, runs through the following steps: definition of a graph with the PS as nodes and pairs of neighbouring PS as edges; estimation of a model phase for deformation and elevation-model error from the double differences of all differential interferograms used, for all edges; unwrapping of the original phase minus the model phase, i.e. resolving the ambiguities; spatio-temporal filtering to eliminate the variation of the atmospheric delay. The products are, for every PS, its motion along the sensor's line of sight and a correction of its elevation relative to the elevation model used to generate the differential interferograms. Since then these basic ideas have been modified and refined. Above all, the inclusion of distributed scatterers (DS) in the deformation analysis must be mentioned, which can drastically increase the information density especially in arid regions, as well as SAR tomography, which allows an analysis even when two or three comparably strong scatterers are present in one resolution cell (e.g. when a scatterer on the ground, a window niche and a roof structure are at the same distance from the sensor). SAR interferometry, and deformation analysis in particular, mainly uses mathematical methods from the fields of stochastics, signal processing, optimization theory and numerics. Particular challenges arise from the fact that the variety of natural phenomena can only partly be described by simple statistical models, and from the circumstance that the data sets are usually very large (a stack of 30 acquisitions of 600 complex-valued megapixels each is quite typical). Linear systems of equations with several tens of thousands of unknowns arise and need to be solved robustly. The most advanced algorithms use integer optimization to resolve the ambiguities. Wavelet-based filtering methods are used to separate the atmospheric delay from the useful signal. Geostatistical methods such as kriging are employed in connection with estimating the variation of the atmospheric delay. Statistical tests are used for the selection of the DS as well as for the detection of bad pixels. Estimators of the covariance matrix play a prominent role in the processing of the DS. SAR tomography uses compressive sensing and many further methods. In summary, SAR interferometry is a rich and exciting field of work, also from a mathematician's perspective. An important application is deformation analysis with the InSAR method: SAR interferometry stands out from all other techniques in that, given suitable terrain, it can map very large-scale phenomena with very high information density. However, it delivers relative measurements, so it is usually combined with levelling or high-precision GPS measurements. Its accuracy depends on the wavelength as well as on the quality of the data; at a wavelength of 3 cm it typically shows a standard deviation of only a few millimetres per year. Even very subtle movements, such as the uplift of the Upper Rhine Graben (about 2 mm/y), can thus be detected. Because of the phase ambiguity, however, movements can also be too strong to still be analysed with PSInSAR.
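A common first step in PS-type processing chains of the kind described above is to pre-select PS candidates by their amplitude dispersion over the time stack. The following NumPy sketch is only an illustration of that one step, not the pipeline from the episode; the 0.25 threshold is a typical value from the PSInSAR literature, used here purely as an example, and the random stack stands in for a real stack of co-registered single-look complex images.

import numpy as np

def ps_candidates(slc_stack, threshold=0.25):
    # slc_stack: complex array of shape (n_acquisitions, rows, cols).
    # Returns a boolean mask of persistent-scatterer candidates based on the
    # amplitude dispersion index D_A = std(|A|) / mean(|A|) computed per pixel.
    amplitude = np.abs(slc_stack)
    dispersion = amplitude.std(axis=0) / amplitude.mean(axis=0)
    return dispersion < threshold

# Example on random data; real input would be a co-registered SLC time series.
rng = np.random.default_rng(1)
stack = rng.standard_normal((30, 200, 200)) + 1j * rng.standard_normal((30, 200, 200))
mask = ps_candidates(stack)
print(mask.sum(), "candidate pixels")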
In that case longer wavelengths, higher temporal sampling or correlation techniques can help. Despite the limitations discussed, deformation analysis with InSAR can be employed profitably in many contexts, since the causes of deformations of the Earth's surface are just as varied. Besides geological and other natural phenomena, they are triggered by mining, extraction of water, natural gas and oil, by geothermal drilling, tunnelling or other construction activities. Most applications focus on risk assessment. Earthquakes and volcanism, but also damage to critical infrastructure such as dikes, dams or nuclear power plants, can have catastrophic consequences. Another important topic is the detection or monitoring of ground movements that could potentially develop into a landslide. In the Alps alone there are thousands of mountain flanks where larger areas are in slow motion and could end in slope failures endangering lives or infrastructure. Due to progressing global warming this threat is increasing wherever permafrost, which previously stabilized the ground, begins to thaw. InSAR is used to produce risk maps that serve to assess the hazard situation and to decide on countermeasures. In many regions of the Earth, deformations of the surface are caused by changing groundwater levels. If the groundwater decreases, for instance due to extraction for irrigation or industrial use, the surface subsides. If the groundwater increases during rainy periods, the surface rises. Monitoring with InSAR is interesting here for several reasons. Surface movements can damage buildings or other structures (e.g. Mexico City). Excessive water extraction can lead to irreversible compaction of the water-bearing layers, with consequences for the future availability of this vital liquid. In times of scarcity the extraction has to be regulated and monitored (e.g. Central Valley, California). Deformations of the Earth's surface caused by geological phenomena such as volcanism or tectonic movements are of particular importance. The data obtained by SAR satellites are used for risk assessment, even though a reliable, early and precisely timed prediction of earthquakes or volcanic eruptions is not possible with today's methods. They are, however, the basis for extensive research activity that steadily improves our understanding of the processes in the Earth's crust and allows ever more accurate predictions. This is owed first and foremost to ESA's SAR satellites (ERS-1, ERS-2, Envisat and currently Sentinel-1A), which have been imaging the entire Earth continuously since 1991, with only a two-year gap (2012-2014). The idea is that every point on Earth is imaged at a fixed interval (every 35 days for ERS). This has created a large archive that makes it possible to investigate a geological event with the methods of SAR interferometry after the fact, since the prior history is available. A development of recent years is the use of InSAR in the exploration and production of natural gas and oil.
The deformations made visible with InSAR make it possible to gain new insight into the structure of the reservoirs, to calibrate geomechanical models and ultimately, thanks to optimized positioning of boreholes, to extract the resources more effectively and at lower cost. Anyone who wants to understand InSAR even better will find the basics very well explained in ESA's InSAR guidelines. A somewhat broader overview of possible applications can be found on the homepage of TRE, a company founded by the creators of PSInSAR that is still a leader in InSAR analyses. The competitors ADS and e-GEOS offer further applications of SAR data beyond InSAR. From a scientific and political perspective, the DLR brochure provides information on the thematic fields of Earth observation. Further information is available on the specific topic of ground motion due to groundwater-level decline in the USA. Literature and further information:
A. Ferretti, A. Monti-Guarnieri, C. Prati, F. Rocca, D. Massonnet: InSAR Principles: Guidelines for SAR Interferometry Processing and Interpretation, TM-19, ESA Publications, 2007.
M. Fleischmann, D. Gonzalez (eds): Erdbeobachtung – Unseren Planeten erkunden, vermessen und verstehen, Deutsches Zentrum für Luft- und Raumfahrt e.V., 2013.
Land Subsidence, U.S. Geological Survey.
M. Even, A. Schunert, K. Schulz, U. Soergel: Atmospheric phase screen estimation for PSInSAR applied to TerraSAR-X high resolution spotlight data, Geoscience and Remote Sensing Symposium (IGARSS), IEEE International, 2010.
M. Even, A. Schunert, K. Schulz, U. Soergel: Variograms for atmospheric phase screen estimation from TerraSAR-X high resolution spotlight data, SPIE Proceedings Vol. 7829, SAR Image Analysis, Modeling, and Techniques X, 2010.
M. Even: Advanced InSAR processing in the footsteps of SqueeSAR
Podcast: Raumzeit RZ037: TanDEM-X
Podcast: Modellansatz Modell010: Positionsbestimmung
Podcast: Modellansatz Modell012: Erdbeben und Optimale Versuchsplanung
Podcast: Modellansatz Modell015: Lawinen

StatLearn 2010 - Workshop on "Challenging problems in Statistical Learning"
1.1 Ultrametric wavelet regression of multivariate time series: application to Colombian conflict analysis (Fionn Murtagh)

StatLearn 2010 - Workshop on "Challenging problems in Statistical Learning"

Play Episode Listen Later Dec 4, 2014 47:24


We first pursue the study of how hierarchy provides a well-adapted tool for the analysis of change. Then, using a time sequence-constrained hierarchical clustering, we develop the practical aspects of a new approach to wavelet regression. This provides a new way to link hierarchical relationships in a multivariate time series data set with external signals. Violence data from the Colombian conflict in the years 1990 to 2004 are used throughout. We conclude with some proposals for further study on the relationship between social violence and market forces, viz. between the Colombian conflict and the US narcotics market.
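As a loose illustration of the two ingredients named in the abstract (a time-sequence-constrained hierarchical clustering followed by regression on wavelet coefficients), and emphatically not Murtagh's actual method, here is a small Python sketch. It assumes scikit-learn, SciPy and PyWavelets, a toy multivariate series of shape (n_timepoints, n_variables) and a toy external signal y; the number of clusters, the Haar wavelet and the decomposition level are arbitrary illustrative choices.

import numpy as np
import pywt
from scipy.sparse import diags
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(2)
n_timepoints, n_variables = 256, 5
series = rng.standard_normal((n_timepoints, n_variables)).cumsum(axis=0)  # toy multivariate time series
y = series[:, 0] + 0.5 * rng.standard_normal(n_timepoints)                # toy external signal

# 1) Time-sequence-constrained hierarchical clustering: only neighbouring
#    time points may be merged, enforced via a banded connectivity matrix.
ones = np.ones(n_timepoints - 1)
connectivity = diags([ones, ones], offsets=[-1, 1]).tocsr()
segments = AgglomerativeClustering(n_clusters=8, connectivity=connectivity,
                                   linkage="ward").fit_predict(series)

# 2) Wavelet regression: regress the wavelet coefficients of the external
#    signal on the wavelet coefficients of each series (orthogonal Haar DWT).
def haar_coeffs(x):
    return np.concatenate(pywt.wavedec(x, "haar", level=4))

X = np.column_stack([haar_coeffs(series[:, j]) for j in range(n_variables)])
beta, *_ = np.linalg.lstsq(X, haar_coeffs(y), rcond=None)
print(segments[:20], beta)

The connectivity matrix is what makes the clustering respect the time order, and working in the wavelet domain is what allows hierarchical, multi-scale structure in the series to be linked to the external signal.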

Inference for Change-Point and Related Processes
Wavelet-based Bayesian Estimation of Long Memory Models - an Application to fMRI Data

Inference for Change-Point and Related Processes

Play Episode Listen Later Feb 12, 2014 58:36


Vannucci, M (Rice University) Tuesday 04 February 2014, 14:00-15:00

Mathematical and Statistical Approaches to Climate Modelling and Prediction
Development of wavelet methodology for weather Data Assimilation

Mathematical and Statistical Approaches to Climate Modelling and Prediction

Play Episode Listen Later Sep 24, 2010 55:05


Fournier, A (UCAR) Friday 17 September 2010, 11:00-12:00

Soft Active Materials: From Granular Rods to Flocks, Cells and Tissues
Foraging Strategies for Starving and Feeding Amoeba

Soft Active Materials: From Granular Rods to Flocks, Cells and Tissues

Play Episode Listen Later May 18, 2009 57:14


This talk discusses how different environmental conditions affect the growth strategies of amoeba cells.

ORNL's Computer Science and Mathematics Division Seminar
Estimates for Green's functions of Schrödinger operators; also, a pure mathematician's adventures in wavelet applications

ORNL's Computer Science and Mathematics Division Seminar

Play Episode Listen Later Dec 31, 1969 67:00