Podcasts about Resampling

  • 22 podcasts
  • 22 episodes
  • 46m avg duration
  • 1 monthly new episode
  • Aug 14, 2024 latest



Latest podcast episodes about Resampling

Audionautic | Covering the Latest in Music Production, Marketing and Technology
125: Sampling as a Creative Tool | MNTRA Borealis Sound Design Tips | Artium Instruments Swarming the Synth Market

Audionautic | Covering the Latest in Music Production, Marketing and Technology

Play Episode Listen Later Aug 14, 2024 1:03


MNTRA are BACK with a cool lil' reverb plugin. We're diving under the hood and looking at how it can be used to create some very unique sounds. Artium Instruments have a sick polysynth on the horizon that's looking to be super affordable. In the Round Robin, we're focusing on Resampling, looking at methods for doing it and how it can be such a useful tool. Today we welcome another member to the Audionautic Roster! Please join us as we embrace Lars Haur with his new EP 'Faces Stained With Tears'. You can check it out below: https://larshaur.bandcamp.com/album/faces-stained-with-tears Help Support the Channel: Patreon: https://www.patreon.com/audionautic Time Stamps: 0:00 Introduction | 10:00 MNTRA Borealis | 19:00 Our Review of Borealis | 24:00 Discord Challenge: Track in 60 Minutes | 38:00 Artium Instruments Swarm Polysynth | 49:00 Help Support the Show on Patreon | 58:00 Sampling as a Creative Tool

The Nonlinear Library
AF - Calculating Natural Latents via Resampling by johnswentworth

The Nonlinear Library

Play Episode Listen Later Jun 6, 2024 17:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Calculating Natural Latents via Resampling, published by johnswentworth on June 6, 2024 on The AI Alignment Forum. So you've read some of our previous natural latents posts, and you're sold on the value proposition. But there are some big foundational questions still unanswered. For example: how do we find these natural latents in some model, if we don't know in advance what they are? Examples in previous posts conceptually involved picking some latents out of the ether (like e.g. the bias of a die), and then verifying the naturality of that latent. This post is about one way to calculate natural latents, in principle, when we don't already know what they are. The basic idea is to resample all the variables once simultaneously, conditional on the others, like a step in an MCMC algorithm. The resampled variables turn out to be a competitively optimal approximate natural latent over the original variables (as we'll prove in the post). Toward the end, we'll use this technique to calculate an approximate natural latent for a normal distribution, and quantify the approximations. The proofs will use the graphical notation introduced in Some Rules For An Algebra Of Bayes Nets.
Some Conceptual Foundations
What Are We Even Computing?
First things first: what even is "a latent", and what does it even mean to "calculate a natural latent"? If we had a function to "calculate natural latents", what would its inputs be, and what would its outputs be? The way we use the term, any conditional distribution (λ, x) ↦ P[Λ=λ|X=x] defines a "latent" variable Λ over the "observables" X, given the distribution P[X]. Together, P[X] and P[Λ|X] specify the full joint distribution P[Λ,X].
We typically think of the latent variable as some unobservable-to-the-agent "generator" of the observables, but a latent can be defined by any extension of the distribution over X to a distribution over Λ and X. Natural latents are latents which (approximately) satisfy some specific conditions, namely that the distribution P[X,Λ] (approximately) factors over these Bayes nets: Intuitively, the first says that Λ mediates between the Xi's, and the second says that any one Xi gives approximately the same information about Λ as all of X. (This is a stronger redundancy condition than we used in previous posts; we'll talk about that change below.) So, a function which "calculates natural latents" takes in some representation of a distribution x ↦ P[X=x] over "observables", and spits out some representation of a conditional distribution (λ, x) ↦ P[Λ=λ|X=x], such that the joint distribution (approximately) factors over the Bayes nets above. For example, in the last section of this post, we'll compute a natural latent for a normal distribution. The function to compute that latent:
- Takes in a covariance matrix Σ_XX for X, representing a zero-mean normal distribution P[X].
- Spits out a covariance matrix Σ_ΛΛ for Λ and a cross-covariance matrix Σ_ΛX, together representing the conditional distribution of a latent Λ which is jointly zero-mean normal with X.
… and the joint normal distribution over Λ,X represented by those covariance matrices approximately factors according to the Bayes nets above.
Why Do We Want That, Again?
Our previous posts talk more about the motivation, but briefly: two different agents could use two different models with totally different internal (i.e. latent) variables to represent the same predictive distribution P[X]. Insofar as they both use natural latents, there's a correspondence between their internal variables - two latents over the same P[X] which both approximately satisfy the naturality conditions must contain approximately the same information about X.
So, insofar as the two agents both use natural latents internally, we have reason to expect that the internal latents of one can be faithfully translated int...
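The "resample all the variables once, simultaneously, conditional on the others" construction is concrete enough to sketch for the normal-distribution case the episode describes. Below is an illustrative sketch only (the function name and toy covariance are ours, not the post's): each coordinate X_i is redrawn from P[X_i | X_{-i}] computed from the covariance matrix, all conditioned on the original sample.

```python
import numpy as np

def resample_all_once(x, cov, rng):
    """Redraw every coordinate of x from its conditional distribution
    given the *original* values of the other coordinates, under a
    zero-mean normal with covariance `cov`. The vector of redrawn values
    is the candidate natural latent described in the post (illustrative
    sketch; names are ours)."""
    n = len(x)
    x_new = np.empty(n)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        S_rr = cov[np.ix_(rest, rest)]            # Σ_{-i,-i}
        S_ir = cov[i, rest]                       # Σ_{i,-i}
        mean = S_ir @ np.linalg.solve(S_rr, x[rest])   # E[X_i | X_{-i}]
        var = cov[i, i] - S_ir @ np.linalg.solve(S_rr, S_ir)
        x_new[i] = mean + np.sqrt(max(var, 0.0)) * rng.standard_normal()
    return x_new

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.8, 0.8],
                [0.8, 1.0, 0.8],
                [0.8, 0.8, 1.0]])
x = rng.multivariate_normal(np.zeros(3), cov)
lam = resample_all_once(x, cov, rng)   # one draw of the latent Λ
```

Note the redraws all condition on the original x, not on each other — simultaneous resampling, unlike a sequential Gibbs sweep.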

Object Worship
Andrew Tasselmyer and the Elektron Octatrack MKII

Object Worship

Play Episode Listen Later Feb 22, 2024 90:47


Today we're joined by Andrew Tasselmyer, a prolific musician you may know from his solo work or as a member of Hotel Neon, Gray Acres, and Mordançage. Andrew and Andy have a long friendship, and Dan gets to meet Andrew for the first time. After some chat about technical difficulties and our favorite instrument mods, we get right into the Elektron Octatrack MKII, and Andy and Dan nod their heads along as if they understand half of what's being talked about. It's a deep dive on a deep device from the deep mind of Andrew Tasselmyer. Dig in! Check out all things Andrew: https://www.andrewtasselmyer.com/ Buy Old Blood pedals: http://www.oldbloodnoise.com Join the conversation in Discord: https://discord.com/invite/PhpA5MbN5u Follow us on the socials: @andrewtasselmyer, @oldbloodnoise, @andyothling, @danfromdsf

Free To Choose Media Podcast
Episode 188 – New Statistics – Without Tears (Podcast)

Free To Choose Media Podcast

Play Episode Listen Later Mar 16, 2023


Today's podcast is titled, “New Statistics – Without Tears.” Peter C. Bruce, Director of the Resampling Project at the University of Maryland and Julian L. Simon, Professor of Business Administration at the University of Maryland present the history and ramifications of the new statistics of resampling. Listen now, and don't forget to subscribe to get updates each week for the Free To Choose Media Podcast.

R Weekly Highlights
Issue 2023-W07 Highlights

R Weekly Highlights

Play Episode Listen Later Feb 15, 2023 38:28


A glimpse into the day-to-day of maintaining an R package, exploring gender effects in art history data with the power of resampling, and a huge win for accessible SVG plots with R Markdown.
Episode Links
This week's curator: Jon Carroll - @carroll_jono (https://twitter.com/carroll_jono) (Twitter) & @jonocarroll@fosstodon.org (https://fosstodon.org/@jonocarroll) (Mastodon)
What Does It Mean to Maintain a Package? (https://ropensci.org/blog/2023/02/07/what-does-it-mean-to-maintain-a-package/)
Resampling to understand gender in #TidyTuesday art history data (https://juliasilge.com/blog/art-history/)
Manipulate SVG Plots with JavaScript in R Markdown (https://yihui.org/en/2023/02/manipulate-svg/)
Entire issue available at rweekly.org/2023-W07 (https://rweekly.org/2023-W07.html)
Supplement Resources
Bootstrap resampling and tidy regression models: https://www.tidymodels.org/learn/statistics/bootstrap/
Accessible Data Science Beyond Visual Models: Non-Visual Interactions with R and RStudio Packages (JooYoung Seo from rstudio::global(2021)): https://www.rstudio.com/resources/rstudioglobal-2021/accessible-data-science-beyond-visual-models-non-visual-interactions-with-r-and-rstudio-packages
Eric's adventures with Shiny modules and SVG interactions: https://community.rstudio.com/t/passing-module-namespace-to-embedded-javascript-function/26988
Supporting the show
Use the contact page at https://rweekly.fireside.fm/contact to send us your feedback
Get a New Podcast App and send us a boost! https://podcastindex.org/apps?elements=Boostagrams%2CValue
Support creators with boostagrams using Podverse and Alby: https://blog.podverse.fm/support-creators-with-boostagrams-and-streaming-sats-using-podverse-and-alby/
A new way to think about value: https://value4value.info
Get in touch with us on social media
Eric Nantz: @theRcast (https://twitter.com/theRcast) (Twitter) and @rpodcast@podcastindex.social (https://podcastindex.social/@rpodcast) (Mastodon)
Mike Thomas: @mike_ketchbrook (https://twitter.com/mike_ketchbrook) (Twitter) and @mike_thomas@fosstodon.org (https://fosstodon.org/@mike_thomas) (Mastodon)
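The bootstrap resampling idea behind the tidymodels tutorial linked above fits in a few lines; here is a generic percentile-bootstrap sketch, in Python rather than R for brevity (the data and function name are ours, purely illustrative):

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a statistic:
    resample the data with replacement, recompute the statistic, and
    take quantiles of the replicates (generic sketch; names are ours)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([stat(rng.choice(data, size=n, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

data = np.array([2.1, 2.5, 2.2, 3.0, 2.8, 2.6, 2.4, 2.9])
lo, hi = bootstrap_ci(data, np.mean)
# lo < data.mean() < hi for this sample
```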

Neural Information Retrieval Talks — Zeta Alpha
Evaluating Extrapolation Performance of Dense Retrieval: How does DR compare to cross encoders when it comes to generalization?

Neural Information Retrieval Talks — Zeta Alpha

Play Episode Listen Later Jul 20, 2022 58:30


How much of the training and test sets in TREC or MS Marco overlap? Can we evaluate on different splits of the data to isolate the extrapolation performance? In this episode of Neural Information Retrieval Talks, Andrew Yates and Sergi Castella i Sapé discuss the paper "Evaluating Extrapolation Performance of Dense Retrieval" by Jingtao Zhan, Xiaohui Xie, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma.
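One simple way to get at "different splits of the data to isolate the extrapolation performance" is to resplit so that no test query was ever seen in training. A toy sketch of that idea (not the paper's actual resplitting protocol; names and data are ours):

```python
import random

def query_disjoint_split(pairs, test_frac=0.2, seed=0):
    """Split (query, doc) relevance pairs so train and test share no
    queries -- one simple way to measure extrapolation rather than
    interpolation (a sketch of the idea, not the paper's exact scheme)."""
    rng = random.Random(seed)
    queries = sorted({q for q, _ in pairs})
    rng.shuffle(queries)
    n_test = max(1, int(test_frac * len(queries)))
    test_q = set(queries[:n_test])
    train = [(q, d) for q, d in pairs if q not in test_q]
    test = [(q, d) for q, d in pairs if q in test_q]
    return train, test

pairs = [("q1", "d1"), ("q1", "d2"), ("q2", "d3"), ("q3", "d4"), ("q4", "d5")]
train, test = query_disjoint_split(pairs)
# no query string appears on both sides of the split
```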

A Short Story Long Podcast
EPISODE #58 : DABS & DOOBIES PART #4 HE TOOK MY B***CH! Story Times w/Nate & Ju, Hip-Hop & And The Cons To Resampling Songs

A Short Story Long Podcast

Play Episode Listen Later May 24, 2022 67:11


DABS & DOOBIES PART #4 HE TOOK MY B***CH! Story Times w/Nate & Ju, Hip-Hop & And The Cons To Resampling Songs Follow us on Instagram https://www.instagram.com/hesgoals/ https://www.instagram.com/http.gemini https://www.instagram.com/ssl.podcast Follow us on youtube https://www.youtube.com/channel/UC_bdwTcNXHXZHQYgpJCZy8w --- Send in a voice message: https://anchor.fm/nathaniel-walker5/message Support this podcast: https://anchor.fm/nathaniel-walker5/support

PaperPlayer biorxiv neuroscience
Representational Connectivity Analysis: Identifying Networks of Shared Changes in Representational Strength through Jackknife Resampling

PaperPlayer biorxiv neuroscience

Play Episode Listen Later May 30, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.05.28.103077v1?rss=1 Authors: Coutanche, M. N., Buckser, R. R., Akpan, E. Abstract: The structure of information in the brain is crucial to cognitive function. The representational space of a brain region can be identified through Representational Similarity Analysis (RSA) applied to functional magnetic resonance imaging (fMRI) data. In its classic form, RSA collapses the time-series of each condition, eliminating fluctuations in similarity over time. We propose a method for identifying representational connectivity (RC) networks, which share fluctuations in representational strength, in an analogous manner to functional connectivity (FC), which tracks fluctuations in BOLD signal, and informational connectivity, which tracks fluctuations in pattern discriminability. We utilize jackknife resampling, a statistical technique in which observations are removed in turn to determine their influence. We applied the jackknife technique to an existing fMRI dataset collected as participants viewed videos of animals (Nastase et al., 2017). We used ventral temporal cortex (VT) as a seed region, and compared the resulting network to a second-order RSA, in which brain regions' representational spaces are compared, and to the network identified through FC. The novel representational connectivity analysis identified a network comprising regions associated with lower-level visual processing, spatial cognition, perceptual-motor integration, and visual attention, indicating that these regions shared fluctuations in representational similarity strength with VT. RC, second-order RSA and FC identified areas unique to each method, indicating that analyzing shared fluctuations in the strength of representational similarity reveals previously undetectable networks of regions. The RC analysis thus offers a new way to understand representational similarity at the network level. Copyright belongs to the original authors.
Visit the link for more info
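Jackknife resampling as the abstract describes it — remove each observation in turn and see how the statistic moves — is easy to sketch for a simple statistic. This is a generic illustration, not the authors' RC pipeline; the data and names are ours:

```python
import numpy as np

def jackknife(data, stat):
    """Leave-one-out jackknife: recompute `stat` with each observation
    removed in turn (the 'removed in turn to determine their influence'
    technique from the abstract; generic sketch)."""
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    theta = stat(data)
    influence = (n - 1) * (theta - loo)   # (n-1)-scaled shift per point
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return loo, influence, se

data = np.array([3.0, 2.5, 4.0, 3.5, 10.0])
loo, infl, se = jackknife(data, np.mean)
# the outlying 10.0 has by far the largest |influence|
```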

Linux Headlines
2020-05-27

Linux Headlines

Play Episode Listen Later May 27, 2020 2:51


Ardour 6 is out with major changes under the hood, CoreOS Container Linux is officially unmaintained, TeleIRC version 2.0.0 lands with a complete rewrite, the FIDO Alliance launches an instructional campaign, and PeerTube outlines its newest fundraising goals.

Bass Camp
023: Resampling in Serum (and with Serum) [Video Audio]

Bass Camp

Play Episode Listen Later May 4, 2020 17:17


Watch the full video breakdown at BassCampPodcast.com in the show notes for Episode 023, as well as the free downloads (Serum Patch + Ableton Project) I built while filming this episode. This week I wanted to dive into how I made the bass patch I gave away in the free Experimental Bass Drop template a few weeks back. I also used this as an opportunity to show how I resample both within Serum and in the Ableton project to get some dope sounds. Full disclosure: this week's bass patch isn't exactly the same, but the process I walk through to get the new patch is.  

Behind The DAW
43.5 | Culprate In The DAW - Subsonics

Behind The DAW

Play Episode Listen Later Jan 14, 2019 44:43


Sound Design through Resampling, Constructing phrases with a sampler, Mixing through Monitors Vs. Mixing through Headphones with Culprate
Patreon: https://m.me/itdbtd?ref=btd_Patreon
Artist Suggestions: https://m.me/itdbtd?ref=BTDsuggestion
Private lessons: https://m.me/itdbtd?ref=privatelesssons
Free Consultation: https://m.me/itdbtd?ref=FreeConsultation
Culprate Patches: https://m.me/itdbtd?ref=CulpratePatches
Deliverance EP: https://m.me/itdbtd?ref=CulprateLinks
Culprate Subsonics: https://m.me/itdbtd?ref=CulprateLinks
FabFilter Pro-MB: https://m.me/itdbtd?ref=CulprateLinks
Antares Warm: https://m.me/itdbtd?ref=CulprateLinks
FabFilter Saturn: https://m.me/itdbtd?ref=CulprateLinks
Cable Guys Volumeshaper: https://m.me/itdbtd?ref=CulprateLinks
Waves Scheps 73: https://m.me/itdbtd?ref=CulprateLinks
Speak & Spell VST: https://m.me/itdbtd?ref=CulprateLinks
iZotope Vocal synth: https://m.me/itdbtd?ref=CulprateLinks
Subpac: https://m.me/itdbtd?ref=CulprateLinks
Voxango Mid Side Plugin: https://m.me/itdbtd?ref=CulprateLinks
Noisia - Square Feet: https://m.me/itdbtd?ref=CulprateLinks
Omnisphere: https://m.me/itdbtd?ref=CulprateLinks
Heavyocity Gravity: https://m.me/itdbtd?ref=CulprateLinks
iZotope Trash 2: https://m.me/itdbtd?ref=CulprateLinks
In The DAW Playlist: https://tinyurl.com/instagramitdplaylist
Behind The DAW Playlist: https://tinyurl.com/btdinstagram

Fakultät für Psychologie und Pädagogik - Digitale Hochschulschriften der LMU
Vergleich von Methoden zur Strukturfindung in der Psychometrie mit Hilfe echter Daten

Fakultät für Psychologie und Pädagogik - Digitale Hochschulschriften der LMU

Play Episode Listen Later Jan 22, 2015


This dissertation deals with the evaluation of structure-finding methods that group the items of psychological questionnaire data into homogeneous groups of similar items. A key difference between the methods used for this purpose is whether they assume an underlying measurement model or merely aim for a maximally useful grouping of the items. On the one hand there is model-based factor analysis (FA), which rests on the factor model. Its mathematical approach is similar to principal component analysis (PCA). Unlike PCA, FA additionally assumes that the responses to the items are causally explained by underlying factors plus an item-specific residual term, and this specific residual term of each item is assumed to be entirely uncorrelated with all other items. A method that makes no model assumptions is cluster analysis (CA): here, objects are simply merged that are more similar to each other on some criterion than to others. Just as methods can be distinguished by whether they assume an underlying model, the same distinction can be drawn for the evaluation of methods. An evaluation technique that assumes a model is Monte Carlo simulation. A technique that does not necessarily rest on a model is resampling: samples are drawn from a real dataset and the method's behavior in these samples is studied. In the first study, such a resampling procedure, which we call Real World Simulation, was applied. It is meant to remedy the existing problem of the lacking validity of Monte Carlo studies on FA. A Real World Simulation was carried out on two large datasets, and the estimates of the model parameters from the real datasets were then used as model parameters for the Monte Carlo simulation.
In this way one can test how a dataset's specific characteristics, as well as controlled changes to them, affect how the methods perform. The results suggest that the outcomes of simulation studies always depend strongly on particular specifications of the model and its violations, so no universally valid statements can be made; analyzing real data is important for understanding how different methods work. In the second study, this new evaluation technique was used to test a new k-means clustering approach for clustering items. The two proposed procedures are k-means with a scaled distance measure (k-means SDM) and k-means cor. The analyses showed that the new procedures are better suited to assigning items to constructs than EFA; only in determining the number of underlying constructs were the EFA procedures equally good. For this reason, a combination of the two approaches is proposed. A major advantage of the new methods is that they can solve the problem of the indeterminacy of factor scores in EFA, since persons' cluster scores on the clusters can be determined uniquely. The dissertation closes with a discussion of the different evaluation and validation techniques for model-based and non-model-based procedures. For the future, it is proposed to evaluate the new k-means CA procedure for clustering items with Real World Simulations as well as validations of the cluster scores against external criteria.
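The resampling idea at the heart of the thesis — draw subsamples from a real dataset and study how a structure-finding method behaves across them — can be sketched generically. Everything below (names, toy data, and the deliberately crude grouping rule standing in for a real clustering method) is illustrative, not the thesis's procedure:

```python
import numpy as np

def same_group_rate(data, group_fn, n_rep=100, frac=0.8, seed=0):
    """Repeatedly subsample rows (respondents) of an item-response
    matrix and record how often each pair of items lands in the same
    group under `group_fn` (generic sketch of resampling-based
    evaluation, not the thesis's Real World Simulation)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    together = np.zeros((p, p))
    for _ in range(n_rep):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        g = np.asarray(group_fn(data[idx]))     # one group label per item
        together += g[:, None] == g[None, :]
    return together / n_rep

# Toy data: items 0-1 load on one factor, items 2-3 on another.
rng = np.random.default_rng(2)
f1, f2 = rng.normal(size=400), rng.normal(size=400)
data = np.column_stack([f1, f1, f2, f2]) + 0.3 * rng.normal(size=(400, 4))

# Crude grouping rule: split items by the sign of their correlation
# with item 0 (stand-in for a real clustering method).
group_fn = lambda d: np.corrcoef(d, rowvar=False)[0] > 0
rates = same_group_rate(data, group_fn)
# rates[0, 1] is (near) 1.0: items 0 and 1 group together in essentially
# every subsample, while the unrelated pairing (0, 2) is unstable.
```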

StatLearn 2010 - Workshop on "Challenging problems in Statistical Learning"
4.3 Data-driven penalties for optimal calibration of learning algorithms (Sylvain Arlot)

StatLearn 2010 - Workshop on "Challenging problems in Statistical Learning"

Play Episode Listen Later Dec 4, 2014 69:50


Learning algorithms usually depend on one or several parameters that need to be chosen carefully. In this talk we tackle the question of designing penalties for an optimal choice of such regularization parameters in non-parametric regression. First, we consider the problem of selecting among several linear estimators, which includes model selection for linear regression, the choice of a regularization parameter in kernel ridge regression or spline smoothing, and the choice of a kernel in multiple kernel learning. We propose a new penalization procedure which first estimates consistently the variance of the noise, based upon the concept of minimal penalty, which was previously introduced in the context of model selection. Then, plugging our variance estimate into Mallows' C_L penalty is proved to lead to an algorithm satisfying an oracle inequality. Second, when data are heteroscedastic, we can show that dimensionality-based penalties are suboptimal for model selection in least-squares regression. So the shape of the penalty itself has to be estimated. Resampling is used for building penalties robust to heteroscedasticity, without requiring prior information on the noise level. For instance, V-fold penalization is shown to improve on V-fold cross-validation for a fixed computational cost.
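For reference, the baseline that V-fold penalization is said to improve on — plain V-fold cross-validation for least-squares regression — looks like this (a generic sketch, not Arlot's procedure; function names and the toy data are ours):

```python
import numpy as np

def vfold_cv_error(X, y, fit, predict, V=5, seed=0):
    """Plain V-fold cross-validation estimate of mean squared prediction
    error -- the baseline the talk's V-fold penalization refines
    (generic sketch; names are ours)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), V)
    errs = []
    for v in range(V):
        test = folds[v]
        train = np.concatenate([folds[u] for u in range(V) if u != v])
        model = fit(X[train], y[train])
        errs.append(np.mean((predict(model, X[test]) - y[test]) ** 2))
    return float(np.mean(errs))

# Least-squares linear regression as the learning procedure.
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda beta, X: X @ beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(60), rng.normal(size=60)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=60)
err = vfold_cv_error(X, y, fit, predict)   # close to the noise variance 0.01
```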

Mathematik, Informatik und Statistik - Open Access LMU - Teil 03/03
A variance decomposition and a Central Limit Theorem for empirical losses associated with resampling designs

Mathematik, Informatik und Statistik - Open Access LMU - Teil 03/03

Play Episode Listen Later Nov 1, 2014


The mean prediction error of a classification or regression procedure can be estimated using resampling designs such as the cross-validation design. We decompose the variance of such an estimator associated with an arbitrary resampling procedure into a small linear combination of covariances between elementary estimators, each of which is a regular parameter as described in the theory of U-statistics. The enumerative combinatorics of the occurrence frequencies of these covariances govern the linear combination's coefficients and, therefore, the variance's large-scale behavior. We study the variance of incomplete U-statistics associated with kernels which are partly but not entirely symmetric. This leads to asymptotic statements for the prediction error's estimator, under general non-empirical conditions on the resampling design. In particular, we show that the resampling-based estimator of the average prediction error is asymptotically normally distributed under a general and easily verifiable condition. Likewise, we give a sufficient criterion for consistency. We thus develop a new approach to understanding small-variance designs as they have recently appeared in the literature. We exhibit the U-statistics which estimate these variances. We present a case from linear regression where the covariances between the elementary estimators can be computed analytically. We illustrate our theory by computing estimators of the studied quantities in an artificial data example.

Modellansatz
Zeitreihen

Modellansatz

Play Episode Listen Later Apr 24, 2014 32:50


We encounter time series in stock prices, seismic measurements, weather observations, elections, and much more. Franziska Lindner therefore works on time series analysis and on how mathematical methods and statistics can be used to determine the structure of time series, events over time, and forecasts with confidence regions. In conversation with Gudrun Thäter she explains the principle of stationarity as well as the use of Fourier analysis and statistical resampling in time series analysis. Literature and further information: R. Dahlhaus: Fitting time series models to nonstationary processes, The Annals of Statistics 25.1, 1-37, 1997. C. Kirch, D. Politis: TFT-bootstrap: Resampling time series in the frequency domain to obtain replicates in the time domain, The Annals of Statistics 39.3: 1427-1470, 2011.
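The Kirch and Politis reference resamples time series in the frequency domain. A much simpler relative of that idea, the phase-randomization surrogate, conveys the flavor in a few lines (this is not the TFT-bootstrap itself; the function name and test signal are ours):

```python
import numpy as np

def phase_surrogate(x, seed=0):
    """Phase-randomization surrogate: keep the amplitude spectrum of a
    series, randomize the phases, transform back. A simple relative of
    frequency-domain resampling, not the TFT-bootstrap itself."""
    rng = np.random.default_rng(seed)
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=len(X))
    phases[0] = 0                        # keep the mean component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

t = np.arange(512)
x = np.sin(2 * np.pi * t / 32) + 0.2 * np.random.default_rng(1).normal(size=512)
y = phase_surrogate(x)   # same power spectrum as x (up to the edge bins),
                         # scrambled temporal structure
```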

Medientechnik - SoSe 2011
Audiotechnik Teil 2: Audiotechnik digital, Digitalrundfunk

Medientechnik - SoSe 2011

Play Episode Listen Later Jul 5, 2011 114:54


This lecture introduces digital audio technology.

Medientechnik - SoSe 2011
Audiotechnik Teil 2: Audiotechnik digital, Digitalrundfunk

Medientechnik - SoSe 2011

Play Episode Listen Later Jul 5, 2011 115:02


This lecture introduces digital audio technology.

LGM 2011 [Video]
Better and faster image resizing and resampling

LGM 2011 [Video]

Play Episode Listen Later May 31, 2011 32:24


resizing resampling
LGM 2011 [Audio]
Better and faster image resizing and resampling

LGM 2011 [Audio]

Play Episode Listen Later May 30, 2011 32:25


resizing resampling
Dixero - Technology channel
Review: The Elektron Machinedrum

Dixero - Technology channel

Play Episode Listen Later Jun 30, 2009


The Machinedrum is not something you can easily acquire. It's hard to find Elektron in your local drum machine shop - not that there are many drum machine shops out there but fortunately all their products can be ordered online - if the price isn't too steep for you. Anyway, I was eager to try this little beast out so let's see what it can do. The Machinedrum is a drum synthesizer with a small sample memory. It has 6 audio outputs and 2 inputs making it a device capable of serious routing and sound designing on stage or in studio. The heart of the machine is the 16 track sequencer which is very easy to use and unlike other devices out there it can in fact be used for live production. You get 64 step patterns out of the box - some are great, some are not - and 130 "machines" that are basically the sounds you can use and edit as you wish. Look and feel. It's a brick. I like the simple design and the square buttons although they are quite loud. The body is made of steel but the unit only weighs around 6 pounds. The screen could be bigger in my opinion. When you are performing live you often need to glance at the screen and it can be hard to read. In fact the black/white area around the screen could be used for a bigger screen making it easier to see things. Then again, you can read most of the stuff from the step sequencer and surely you won't be spending time editing sounds on stage. All the buttons and knobs are made of plastic and they react without latency. Control. Sound selection is done with the big black wheel. You can see which sound is selected on the nearby LED bar. Changing patterns is a piece of cake and I love how the Machinedrum handles that. By selecting a bank from A to H and pressing a number you can change the pattern on the next beat. Recording beats can be done in different ways. The original drum machine method is to select a sound and select when it should be played. The alternate method is live recording with quantize of course.
It works like a charm. Effects are handled on the screen and it's easy to do some weird stuff with it. You can record the effect changes you made to your sound although it is hard to control them with a little knob. I love that the device responds immediately to everything. Sound. The Machinedrum sounds great. Superb quality kits can be achieved in a small amount of time (see video). Audio signals can be routed to any of the 6 outputs. A great thing to mess around with is recording external audio and resampling it. 2.5 MB of sample memory however raises some questions. Why not include bigger memory or a CF card slot? Because of the lack of storage I'm refusing to call it a sampler. Resampling can be applied to the built-in sounds as well, opening a whole new world of sounds without limits. The sound is punchy, deep, delicate and very dynamic. Different kits will give you different styles from nu jazz to techno. Overall. The Machinedrum SPS-1 UW will cost you $1790, shipping included. In the package Elektron's midi interface, a TM-1, is included. It might be a little too high for "just" a drum synthesizer. This device sounds great, it's easy to use and it's very compact. You can take it with you in your backpack and you can lay superb beats and patterns down in a minute. If you are looking for a sampler, I suggest you look somewhere else. However, to be honest the Machinedrum is the best drum machine out there. Period.

InDesign Secrets
InDesignSecrets Podcast 102

InDesign Secrets

Play Episode Listen Later May 8, 2009 37:30


Listener survey (with prizes!); Upcoming ID webinars and seminars by DB and AMC; Scaling and Resampling in ID; Obscurity of the Week: Snap-to Flag ----- Details below, or go to http://indesignsecrets.com/indesignsecrets-podcast-102.php for Show Notes, links, and to leave a comment! ----- Listen in your browser: InDesignSecrets-102.mp3 (17.2 MB, 33:13 minutes) The transcript of this podcast will be posted soon. News: Reader/Listener Survey, Upcoming Seminars and Webinars All about scaling and resampling in InDesign Obscure InDesign Feature of the Week: Snap-to Flag -- News and special offers from our sponsors: >> In-Tools has a special deal for InDesignSecrets fans: $20 off the price of either one of their InDesign plug-in bundles, InBook or InSefer, if you purchase from this special page on their site. The InBook Plug-in Pack is a suite of plug-ins that turns InDesign into an advanced automated pagination system for laying out books. The InSefer suite of plug-ins does the same for Hebrew publications. We talked about a few of the InBook plug-ins in the podcast, but the full list is here. -- Links mentioned in this podcast: InDesignSecrets Listener Survey now with prizes! InDesign Seminar Tour in San Francisco, Seattle, and Los Angeles (DB); and Boston, Minneapolis, and Chicago (AMC). Use AMCTIME09 coupon code for 15% off registration. InDesign Troubleshooting & Repair Webinar, May 21 (Anne-Marie); Introduction to GREP in InDesign Webinar, May 28 (David); Zevrix LinkOptimizer plug-in; Resample Images to 100% script (CS3/CS4 only)
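Image resampling, as discussed in the scaling segment, comes down to recomputing pixel values on a new grid. A minimal box-filter downsampler shows the simplest case (a generic sketch, unrelated to InDesign's internals or the linked script):

```python
import numpy as np

def box_downsample(img, k):
    """Downsample a grayscale image by an integer factor k with a box
    (average) filter -- the simplest resampling kernel (generic sketch)."""
    h, w = img.shape
    h2, w2 = h - h % k, w - w % k               # crop to a multiple of k
    return img[:h2, :w2].reshape(h2 // k, k, w2 // k, k).mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
small = box_downsample(img, 2)
# small == [[2.5, 4.5], [10.5, 12.5]]
```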

The Cliff Ravenscraft Show - Mindset Answer Man
014 Podcast Answer Man – Two USB Mics? Resampling in Audacity.

The Cliff Ravenscraft Show - Mindset Answer Man

Play Episode Listen Later Jul 6, 2007 16:40


[audio:http://recordings.talkshoe.com/TC-9668/TS-32833.mp3] Is it possible to use two USB mics to record audio for my podcast? How does Cliff get his audio clips into podcast recordings? How much does the Broadcast Host cost? How can I resample my voice mails from 8000 Hz to 44100 Hz in Audacity? Right Click Here To Download This Episode The post 014 Podcast Answer Man – Two USB Mics? Resampling in Audacity. appeared first on The Cliff Ravenscraft Show.
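The 8000 Hz to 44100 Hz conversion asked about here is done through Audacity's UI in the episode, but the same operation is a one-liner in code. A sketch using SciPy's polyphase resampler (the synthetic "voicemail" signal is ours):

```python
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 8000, 44100
t = np.arange(sr_in) / sr_in                  # one second at 8 kHz
voice = np.sin(2 * np.pi * 440 * t)           # stand-in for the voicemail
# 44100/8000 reduces to 441/80; resample_poly low-pass filters and
# interpolates by exactly that rational factor.
resampled = resample_poly(voice, up=441, down=80)
```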