Podcasts about histograms

  • 84 podcasts
  • 121 episodes
  • 22m average episode duration
  • Infrequent episodes
  • Latest episode: Jan 8, 2025

POPULARITY

2017-2024 (popularity trend chart)


Latest podcast episodes about histograms

Uncommon Sense - Tools to Improve your Work Forever
How Are Histograms Different to Bar Charts?

Jan 8, 2025 · 9:55


The patterns in your data can hold the key to better business decisions. Histograms go beyond simple bar charts, offering a clear view of data distribution to uncover trends, probabilities, and hidden opportunities in your processes. Join PMI Director Consultant, Warren Knight, as he shares real-world examples and actionable tips to help you use histograms to uncover issues, enhance performance and deliver greater value to your customers. Tune in and discover the insights you've missed.
More resources: Podcast: SPC Charts: Driving Sustainable Performance Gains | On Demand Webinar: Data Collection Techniques | Blog: Future-Proof Your Organisation
More from PMI: Dive into our Knowledge Hub for more tools, videos, and infographics. Join us for a PMI LIVE Webinar. Follow us on LinkedIn. Take your improvement career to the next level with PMI's Lean Six Sigma Certifications, now available in two new and accessible formats, built around you. Explore On Demand >> Explore Distance Learning >>
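
To make the episode's point concrete, here is a minimal Python sketch of what a histogram adds over a bar chart: it bins a continuous measurement and counts how many observations land in each bin, revealing the shape of the distribution. The cycle-time data below are simulated, not taken from the episode.

```python
import numpy as np

rng = np.random.default_rng(42)
cycle_times = rng.normal(loc=12.0, scale=2.5, size=500)    # hypothetical process data, minutes

counts, edges = np.histogram(cycle_times, bins=10)         # bin a continuous measurement
for left, right, n in zip(edges[:-1], edges[1:], counts):
    bar = "#" * (int(n) // 5)                              # crude text rendering of the bars
    print(f"{left:5.1f}-{right:5.1f} min | {bar} {n}")
```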

Real Science Exchange
Journal Club Bach Changes in milk production and estimated income over feed cost of group-housed dairy cows when moved between pens

Oct 10, 2023 · 48:50


Guests: Dr. Alex Bach, ICREA (Catalan Institution for Research and Advanced Studies), and Dr. Bill Weiss, The Ohio State University.
In this journal club episode, Dr. Alex Bach with the Catalan Institution for Research and Advanced Studies joins Dr. Bill Weiss from the Ohio State University. Dr. Weiss introduces the paper as one that's immediately applicable to the industry and answers a question he received a lot during his Extension career: What's the cost of moving cows? This research gives us some real data to help producers on cow management. (3:49)
Dr. Bach states that grouping cows is necessary, and the goal is to feed cows as close to their requirements as possible. But in a practical world, that can be difficult, and producers may resist moving cows due to the increased work and perceived drop in milk production. Dr. Bach gathered data from the field to see if that's the case or not by evaluating three farms with different diets and evaluating income over feed cost. (4:33)
Dr. Bach goes on to describe the farms and the methods his team used for estimating individual cow intakes in a group pen setting. Cow pen/group changes coincided with a diet change. Individual farms made their own ration decisions and pen movement decisions. (8:17)
In general, cows moved from a high to a medium to a low diet over the course of lactation. Primiparous cows moved from the fresh pen to the medium diet. If diet differences were adequate between groups, the loss in milk was compensated by the lower cost ration, and producers made an additional 20-30 cents per cow per day in income over feed costs. However, if the diets were more similar, lower feed costs did not compensate for the loss in milk production. (15:30)
Dr. Weiss asks Dr. Bach if he could only build two rations, a high and a low, how would he do that? Dr. Bach's approach is to look at a histogram of milk production in the pen and split that into quantiles. His goal is to make a ration that satisfies at least 70% of the animals in the pen for the high diet and around 60% of the animals in the pen for the low diet. (24:36)
Dr. Bach also ran a sensitivity analysis evaluating how results would change if milk prices or feed costs (or both) went up or down. He found that the higher the milk price, the more resilient a farm will be to a single diet, and that feed cost is the opposite. The most interesting scenario is high feed costs and low milk prices - that's where it's almost mandatory to make groups if you want to survive on a dairy. (27:23)
Dr. Bach evaluated the change in nutrient intake for the diet switch and projected the milk production change from that nutrient change compared to how the cows actually performed. The cows always lost less milk production than predicted. Dr. Bach thinks the main reason is that the cows were overfed before moving. (37:46)
Dr. Bach invites the audience to experiment a little bit with grouping cows. Don't be afraid of losing milk, and look beyond milk. Put in place mechanisms on the farm that allow you to measure income over feed costs as the ultimate goal. Cows are flexible, so don't be afraid of making a mistake. If something goes wrong, it will go wrong for a short period of time. You can correct it. You can change the diet right away, and the cows will recover. (46:14)
You can find this episode's journal club paper here: https://doi.org/10.3168/jds.2022-22875 Author: Dr. Alex Bach
Please subscribe and share with your industry friends to bring more people to join us around the Real Science Exchange virtual pub table. If you want one of our new Real Science Exchange t-shirts, screenshot your rating, review, or subscription, and email a picture to anh.marketing@balchem.com. Include your size and mailing address, and we'll get a shirt in the mail to you.
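
The quantile-splitting idea Dr. Bach describes at 24:36 can be illustrated with a few lines of Python. This is only a sketch with made-up milk yields, not the paper's method or data: it picks the yield percentile each ration would be balanced for.

```python
import numpy as np

rng = np.random.default_rng(0)
milk_kg = rng.normal(loc=38, scale=6, size=120)   # hypothetical daily yields for one pen, kg

# Formulating the "high" ration for the 70th percentile of the pen means about
# 70% of cows produce at or below that level, so the ration meets or exceeds
# their requirement; the "low" ration targets roughly 60% of its pen.
high_diet_target = np.percentile(milk_kg, 70)
low_diet_target = np.percentile(milk_kg, 60)
print(f"High-diet formulation target: {high_diet_target:.1f} kg/day")
print(f"Low-diet formulation target:  {low_diet_target:.1f} kg/day")
```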

Better Fuji Photos
Using the Histogram for Perfect Exposures

Oct 10, 2023 · 15:22


Better Fuji Photos Episode 36: Using the Histogram for Perfect Exposures Traditional "metering" is still available in today's digital cameras. But why not use an easier, better tool to get more accurate exposures? That's the histogram! This graph, which you can enable in different ways in your camera (outlined in the podcast & article), is easiest to use when broken up into three "zones": the left third representing the shadows, the mid third representing the midtones, and the right third representing the highlights. Then, you simply adjust your exposure to "place" things where they belong in the histogram, depending on how much light they should reflect - or how bright you want them to be! See this breakdown, along with a walkthrough, in the accompanying web article: https://www.jmpeltier.com/using-mirrorless-histogram/ Please subscribe, rate, and review wherever you prefer to listen to your podcasts so we can keep this show going. If you ever have any questions that you'd like to have answered in an episode, please send me an email at mail@jmpeltier.com. End music: Dylan Sitts - Tahoe Trip
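
As a rough illustration of the three-zone reading described above (mine, not from the episode), the sketch below splits an 8-bit luminance histogram into thirds and reports how much of the frame falls into shadows, midtones and highlights; the pixel values are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
luminance = rng.normal(110, 45, size=(480, 640)).clip(0, 255)   # stand-in for a real frame

hist, _ = np.histogram(luminance, bins=256, range=(0, 256))
total = hist.sum()
shadows = hist[:85].sum() / total        # left third of the histogram  ~ shadows
midtones = hist[85:170].sum() / total    # middle third                 ~ midtones
highlights = hist[170:].sum() / total    # right third                  ~ highlights
print(f"shadows {shadows:.0%}, midtones {midtones:.0%}, highlights {highlights:.0%}")
```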

Photography Explained
Understanding Histograms In Photography

Sep 29, 2023 · 21:52 · transcript available


Histogram. Doesn't sound like a photography thing. Sounds complicated. What is it, Rick? Do I need to know about this? Yes, you do need to know about this. A histogram can help you get the best exposure you can and tell if you have not got the best exposure.
In this episode, I tell you:
What a histogram is.
How you can use a histogram to get the best exposure.
How you can use a histogram to tell you if you got the exposure correct.
What if I use a phone to take photos and not a camera?
What if I use a film camera?
And finally, what I do.
All explained in plain English, without the irrelevant detail, in less than 27 (ish) minutes! What is not to love?
Support the show
Get your question answered: This is what my podcast is all about, answering your photography questions - just click here. Not only will I answer your question, but I will also give you a lovely, big shout out, which is nice.
Find out more about the podcast on the Photography Explained Podcast website, and find out all about me on my photography website.
Thanks very much for listening. Cheers from me, Rick

PurePerformance
The De-Facto Standard of Metrics Capture and Its Untold Histogram Story with Björn Rabenstein

Jun 19, 2023 · 54:10


As far as we know, besides Kubernetes there is only Prometheus that belongs to the prestigious group of open-source projects that have their own documentary. Now why is that? Prometheus has emerged as the go-to solution for capturing metrics in modern software stacks, earning its status as the de facto standard. With its widespread adoption and a constantly expanding ecosystem of companion tools, Prometheus has become a pivotal component in the software development landscape.
Join us as we sit down with Björn Rabenstein, an accomplished engineer at Grafana, who has dedicated nearly a decade to actively contributing to the Prometheus project. Björn takes us on a journey through the project's early days, unravels the reasons behind its meteoric rise, and provides us with insightful technical details, including his personal affinity for histograms.
Here are the links we discussed during the podcast for you to follow up:
Prometheus Documentary: https://www.youtube.com/watch?v=rT4fJNbfe14
First Prometheus talk at SRECon 2015: https://www.youtube.com/watch?v=bFiEq3yYpI8
The Zen of Prometheus: https://the-zen-of-prometheus.netlify.app/
Talk from Observability Day KubeCon 2023: https://www.youtube.com/watch?v=TgINvIK9SYc
Secret History of Prometheus Histograms: https://archive.fosdem.org/2020/schedule/event/histograms/
Prometheus Histograms: https://promcon.io/2019-munich/talks/prometheus-histograms-past-present-and-future/
Native Histograms: https://promcon.io/2022-munich/talks/native-histograms-in-prometheus/
PromQL for Histograms: https://promcon.io/2022-munich/talks/promql-for-native-histograms/
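
For readers who want to see a Prometheus histogram in practice, here is a minimal Python sketch using the official prometheus_client library; the metric name, bucket boundaries and the toy request handler are my own choices, not something taken from the episode.

```python
import random
import time

from prometheus_client import Histogram, start_http_server

# Metric name and buckets are illustrative choices, not from the episode.
REQUEST_LATENCY = Histogram(
    "request_latency_seconds",
    "Latency of handled requests",
    buckets=(0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0),
)

def handle_request() -> None:
    with REQUEST_LATENCY.time():               # records the elapsed time as one observation
        time.sleep(random.uniform(0.01, 0.3))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)   # exposes /metrics with *_bucket, *_sum and *_count series
    while True:
        handle_request()
```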

Face2Face Series
How to use MACD Histogram to make Profitable Trading Strategy?

Sep 15, 2022 · 13:26


According to Wikipedia, Causality (also referred to as causation, or cause and effect) is the agency or efficacy that connects one process (the cause) with another process or state (the effect), where the first is understood to be partly responsible for the second, and the second is dependent on the first. In general, a process has many causes, which are said to be the causal factors for it, and all lie in its past. An effect can, in turn, be a cause of many other effects. You must be wondering why I am saying all these things, right? It is just to increase your curiosity. You will soon be able to relate to it. To read more visit: https://www.elearnmarkets.com/blog/macd-histogram-demystified/
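
Since the episode is about the MACD histogram, here is the standard 12/26/9 construction in Python with pandas. The price series is synthetic and the zero-line-crossing count is only a simplistic illustration, not a recommended trading strategy.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
close = pd.Series(100 + rng.normal(0, 1, 300).cumsum())   # made-up closing prices

ema_fast = close.ewm(span=12, adjust=False).mean()
ema_slow = close.ewm(span=26, adjust=False).mean()
macd = ema_fast - ema_slow
signal = macd.ewm(span=9, adjust=False).mean()
histogram = macd - signal            # the bars plotted above/below the zero line

sign = np.sign(histogram)
crossings = int((sign.shift(1) * sign < 0).sum())   # strict sign flips of the histogram
print(f"MACD histogram zero-line crossings in this sample: {crossings}")
```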

Astro arXiv | all categories
Magnetic field properties in star formation: a review of their analysis methods and interpretation

Sep 12, 2022 · 0:59


Magnetic field properties in star formation: a review of their analysis methods and interpretation, by Junhao Liu et al. on Monday 12 September. Linearly polarized emission from dust grains and molecular spectroscopy is an effective probe of the magnetic field topology in the interstellar medium and molecular clouds. The longstanding Davis-Chandrasekhar-Fermi (DCF) method and the recently developed Histogram of Relative Orientations (HRO) analysis and the polarization-intensity gradient (KTH) method are widely used to assess the dynamic role of magnetic fields in star formation based on the plane-of-sky component of field orientations inferred from the observations. We review the advances and limitations of these methods and summarize their applications to observations. Numerical tests of the DCF method, including its various variants, indicate that its largest uncertainty may come from the assumption of energy equipartition, which should be further calibrated with simulations and observations. We suggest that the ordered and turbulent magnetic fields of particular observations are local properties of the considered region. An analysis of the polarization observations using DCF estimations suggests that magnetically trans-to-super-critical and averagely trans-to-super-Alfvénic clumps/cores form in sub-critical clouds. High-mass star-forming regions may be more gravity-dominant than their low-mass counterparts due to higher column density. The observational HRO studies clearly reveal that the preferential relative orientation between the magnetic field and density structures changes from parallel to perpendicular with increasing column densities, which, in conjunction with simulations, suggests that star formation is ongoing in trans-to-sub-Alfvénic clouds. There is a possible transition back from perpendicular to random alignment at higher column densities. Results from observational studies using the KTH method broadly agree with those of the HRO and DCF studies. arXiv: http://arxiv.org/abs/2208.06492v2
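
For orientation, the classic DCF estimate the review discusses can be written as B_pos ≈ Q·sqrt(4πρ)·σ_v/σ_θ in CGS units. The sketch below evaluates that textbook form with a commonly adopted correction factor Q ≈ 0.5; all input values are illustrative and not taken from the paper.

```python
import math

Q = 0.5                           # commonly adopted correction factor
m_H = 1.67e-24                    # hydrogen mass, g
mu = 2.8                          # mean molecular weight per H2 molecule
n_H2 = 1e5                        # number density, cm^-3 (made-up clump value)
rho = mu * m_H * n_H2             # mass density, g cm^-3

sigma_v = 0.5e5                   # non-thermal velocity dispersion, cm/s (0.5 km/s)
sigma_theta = math.radians(10.0)  # dispersion of polarization position angles, radians

B_pos = Q * math.sqrt(4.0 * math.pi * rho) * sigma_v / sigma_theta   # Gauss
print(f"Plane-of-sky field estimate: {B_pos * 1e6:.0f} microgauss")
```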

Camera Shake Photography Podcast
LIGHT METER vs HISTOGRAM vs EYEBALLING

Sep 8, 2022 · 23:39


In today's episode Kersten discusses the advantages of using a light meter versus the histogram or simply eyeballing the correct exposure.
00:00:00 Intro
00:03:02 How to get a correct exposure using light meter, histogram or eyeballing
00:18:14 Apple announces iPhone 14
00:21:00 Hasselblad X2D 100C
THIS WEEK'S LINKS:
JOIN THE CAMERA SHAKE COMMUNITY for the latest news and some behind the scenes insights: www.camerashakepodcast.com
CAMERA SHAKE PODCAST ON YOUTUBE: https://www.youtube.com/camerashake
FULL EPISODE 119 IS ALSO AVAILABLE ON: YouTube - Apple Podcasts - https://apple.co/2Y2Lmfm - Spotify - https://spoti.fi/304sm2G
FOLLOW US ON
Instagram: https://www.instagram.com/camerashakepodcast/
Facebook: https://www.facebook.com/groups/camerashakepodcast
Twitter: https://twitter.com/ShakeCamera
TikTok: https://www.tiktok.com/@camerashakepodcast
Kersten's website: www.kerstenluts.com
Kersten on Instagram: https://www.instagram.com/kerstenluts/ and https://www.instagram.com/threeheadsinarow/
Nick on Instagram: https://www.instagram.com/nickkirbymedia/

Lexman Artificial
Histogram: The Poster Child for Behavioral Economics with Jeff Atwood

Aug 13, 2022 · 4:42


Lexman interviews Jeff Atwood, co-founder of Bitcoin Magazine and O'Reilly Media's Head of Technology. During the interview, they discuss Atwood's new book "Histogram: The Poster Child for Behavioral Economics", which outlines how behavioral economics can be used to improve business decision-making.

Outdoor Photography Podcast
The Exposure Pyramid (uh, Triangle…) Explained

Aug 2, 2022 · 15:49


Episode 70: Today's Tidbit Tuesday topic was inspired by one of our listeners who wanted to understand exposure better.  So if you wish to get a better grasp of the exposure settings of aperture, shutter speed, and ISO and even start photographing in manual mode, then this episode is for you.  Enjoy!LINKS MENTIONED:OPS Articles:  What is Aperture in Photography?Aperture and F-Stops ExplainedWhich Aperture Should I Use in Outdoor Photography?ISO: It's Not What You ThinkISO: Why All the Noise?Episode 30: Understanding Histograms, ETTR, and ETTLEpisode 38: Should You Buy Filters? Landscape Photography Filters ExplainedEpisode 42: How and When to Use Exposure CompensationFull Show Notes***HAVE A QUESTION?Record a Question for Tidbit TuesdayLOVE THE OUTDOOR PHOTOGRAPHY PODCAST?Ways you can support the show:Buy Me a CoffeeLeave a Rating and ReviewSign up for the Outdoor Photography School NewsletterShare the show with others!CONFUSED ABOUT WHERE TO FOCUS?Download my FREE Hyperfocal Distance Made Easy EbookABOUT BRENDA PETRELLA (host)Learn more about meVisit my online portfolioConnect with me on InstagramTo register for the Nature Photographer's Network “Ask Me Anything” with Brenda Petrella on August 3rd at 9:00am (EST), go to https://npn.link/brenda-ama.To become a member of Nature Photographer's Network (NPN), go to https://npn.link/ops and get 10% off your first annual subscription with the coupon code "OPS10" at checkout.

Photography Explained
What Is A Histogram And How Can It Help Us When We Shoot?

Apr 5, 2022 · 15:00 · transcript available


What Is A Histogram And How Can It Help Us When We Shoot? Hi and welcome to Episode 114 of the Photography Explained podcast. I'm your host Rick, and in each episode I will try to explain one photographic thing to you in plain English in less than 10 minutes (ish) without the irrelevant details. What I tell you is based on my lifetime of photographic experience. And not Google. No Google required, but I did need my camera, which was nice.
Before I go on I need your help. I need your questions to answer. More on this at the end.
Here is the answery bit. A histogram is a visual representation of the tones in a photo. A histogram is a graph which shows the distribution of the range of tones from black to white. Histograms can be found in most cameras, and also in image editing software such as Lightroom. A histogram will tell you if a photo has been underexposed or overexposed, or if the exposure is OK. Every histogram is different, and the data in a histogram can help with image capture and processing.
Listen for more, or check out the transcript and even the blog post - so many ways to find out more!
What's next? Glad you asked! In Photography Explained Podcast 115 - Listener Question: My Gear Is Covered In Soot From The Great Dorset Steam Fair.
Get your question answered. This is what my podcast is all about, answering your photography questions. So please get in touch with your question, and not only will I answer your question, but I will give you a shout out on that episode, which is nice. And better than me giving me a shout out!! Just head over to photographyexplainedpodcast.com/start.
Check out my blog. Check out my photography blog where you will find lots more photography stuff, all written by me.
Did you enjoy this episode? If you did please do the following, which will help me and not take too much time:
· Rate and review my podcast
· Subscribe
· Tell anyone you think might like my podcast
Thank you very much for listening and see you on the next episode.
Rick McEvoy - Photography Explained Podcast
Support the show (https://www.buymeacoffee.com/Rickmcevoy)
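
A small sketch of the "is the exposure OK?" question in code (my illustration, not Rick's): count the pixels piled up at pure black or pure white in the tone histogram and flag likely clipping. The test frame is simulated.

```python
import numpy as np

def exposure_report(pixels: np.ndarray, clip_threshold: float = 0.02) -> str:
    """Flag likely clipping from an 8-bit luminance histogram."""
    hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
    frac_black = hist[0] / hist.sum()      # pixels piled up at pure black
    frac_white = hist[255] / hist.sum()    # pixels piled up at pure white
    if frac_black > clip_threshold:
        return f"likely underexposed ({frac_black:.1%} of pixels pure black)"
    if frac_white > clip_threshold:
        return f"likely overexposed ({frac_white:.1%} of pixels pure white)"
    return "exposure looks OK (no significant clipping)"

rng = np.random.default_rng(3)
frame = rng.normal(200, 60, size=(480, 640)).clip(0, 255).astype(np.uint8)  # bright test frame
print(exposure_report(frame))
```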

The Research Like a Pro Genealogy Podcast
RLP 193: DNA Proof Arguments SLIG Course

Mar 21, 2022 · 43:53


Today's episode of Research Like a Pro is about what Nicole learned at the SLIG course, DNA Dreamers in Action: Writing Proof Arguments in January. Karen Stanbary coordinated the course and was assisted by Tom Jones, Nancy Peters, and Scott Wilds. The course was a chance to practice writing a DNA proof argument while learning essential skills. Peer review was included as part of the course as well. Join Diana and Nicole as they discuss five takeaways from the course and gain ideas for studying proof arguments in the NGSQ. Links NGS Conference 2022 Program - Nicole and Diana are speaking - https://conference.ngsgenealogy.org/wp-content/uploads/pdf/NGS-2022-ConfRegBroch-01102022-Final.pdf Salt Lake Institute of Genealogy - SLIG website - https://slig.ugagenealogy.org/index.php SLIG Academy 2023 -  DNA Dreamers in Action: Writing Proof Arguments - https://slig.ugagenealogy.org/cpage.php?pt=643 Standard Deviation - Explained and Visualized - Youtube video - https://www.youtube.com/watch?v=MRqtXL2WX2M What is a Histogram? - YouTube Video - https://www.youtube.com/watch?v=YLPDPglvePY Shared cM Project PDF - by Blaine Bettinger - https://thegeneticgenealogist.com/wp-content/uploads/2020/03/Shared-cM-Project-Version-4.pdf Submit to the Shared cM Project: Spreadsheet submissions for those submitting over 100 rows: https://www.facebook.com/groups/geneticgenealogytipsandtechniques/posts/1280634072400290/ Individual submissions: https://tinyurl.com/SharedcentiMorganProject Research Like a Pro Resources Research Like a Pro: A Genealogist's Guide book by Diana Elder with Nicole Dyer on Amazon.com - https://amzn.to/2x0ku3d Research Like a Pro eCourse - independent study course -  https://familylocket.com/product/research-like-a-pro-e-course/ RLP Study Group - upcoming group and email notification list - https://familylocket.com/services/research-like-a-pro-study-group/ Research Like a Pro with DNA Resources Research Like a Pro with DNA: A Genealogist's Guide to Finding and Confirming Ancestors with DNA Evidence book by Diana Elder, Nicole Dyer, and Robin Wirthlin - https://amzn.to/3gn0hKx Research Like a Pro with DNA eCourse - independent study course -  https://familylocket.com/product/research-like-a-pro-with-dna-ecourse/ RLP with DNA Study Group - upcoming group and email notification list - https://familylocket.com/services/research-like-a-pro-with-dna-study-group/ Thank you Thanks for listening! We hope that you will share your thoughts about our podcast and help us out by doing the following: Share an honest review on iTunes or Stitcher. You can easily write a review with Stitcher, without creating an account. Just scroll to the bottom of the page and click "write a review." You simply provide a nickname and an email address that will not be published. We value your feedback and your ratings really help this podcast reach others. If you leave a review, we will read it on the podcast and answer any questions that you bring up in your review. Thank you! Leave a comment in the comment or question in the comment section below. Share the episode on Twitter, Facebook, or Pinterest. Subscribe on iTunes, Stitcher, Google Play, or your favorite podcast app. Sign up for our newsletter to receive notifications of new episodes - https://familylocket.com/sign-up/ Check out this list of genealogy podcasts from Feedspot: Top 20 Genealogy Podcasts - https://blog.feedspot.com/genealogy_podcasts/

Photography Side Hustle
Ep 40 Why do I need a Histogram?

Jan 30, 2022 · 11:23 · transcript available


This week's episode is about histograms. A histogram is a graph that shows the brightness of the pixels in your image. The horizontal axis of the histogram goes from pure black on the left side to the brightest white on the right. The vertical axis shows how many pixels are in that tone.
In this episode I cover:
What is a Histogram?
How do I see the Histogram?
Is it necessary to use a Histogram?
How do you read a Histogram?
Not in all cases
Mistakes and Pitfalls
A transcript of this podcast is available here. Visit 50mmframework.com and become a Member. It's free and you get access to all the downloads, including the Pricing Calculator, mini-courses, and videos of how I process my RAW images.
Photo-a-Day for 2022 is on Instagram - #50mmFramework
Join the Facebook Group and ask as many questions as you like.
Leave a voice message for Andy at SpeakPipe.com/pqa

The Thought Leader Revolution Podcast | 10X Your Impact, Your Income & Your Influence
EP331: Thought Leader Nuggets #40 - Building & Nurture An Audience

Jan 21, 2022 · 11:56


“Live simply, love generously, care deeply, speak kindly and leave the rest to God.” - Ronald Reagan
How do you build and nurture an audience? Author Kevin Kelly has said that if you put enough value in your business to attract 1,000 true fans, you will have a 6-, 7- or even 8-figure business. But you've got to be willing to do the work.
In this episode, you'll learn:
How to create a powerful message that speaks to the people who need to hear it.
How to keep people engaged and consuming your content.
How to make use of social media to deliver your message and your content.
Also mentioned in this episode: Kevin Kelly's 1000 True Fans; Russ Rofino, CEO of Clients On Demand; The Thought Leaders Journey: A Fable Of Life.
Visit eCircleAcademy.com and book a free success call with Nicky to take your practice to the next level.

Outdoor Photography Podcast
How and When To Use Exposure Compensation

Jan 18, 2022 · 17:42


Episode 42: Today we return to our normal, more technically oriented Tidbit Tuesday episode with an explanation of exposure compensation - what it is, and how and when to use it in your outdoor photography.  I tie together important photography concepts around exposure, priority modes, why histograms are better than camera meters, and more.  I also announce the winner of the Backblaze license giveaway!  A huge thank you to all who participated!LINKS MENTIONED:OPS Articles:What is Aperture in Outdoor Photography: Key Concepts ExplainedAperture and F-Stops ExplainedWhich Aperture Should I Use for Outdoor Photography?ISO: It's Not What You ThinkBackblaze Online Backup (affiliate link)Episode 30: Tidbit Tuesday: Histograms, ETTR, and ETTL ExplainedFull Show Notes***HAVE A QUESTION?Record a Question for Tidbit TuesdayLOVE THE OUTDOOR PHOTOGRAPHY PODCAST?Ways you can support the show:Buy Me a CoffeeLeave a Rating and ReviewSign up for the Outdoor Photography School NewsletterShare the show with others!CONFUSED ABOUT WHERE TO FOCUS?Download my FREE Hyperfocal Distance Made Easy EbookABOUT BRENDA PETRELLA (host)Learn more about meVisit my online portfolioConnect with me on Instagram

my way on medicine
whole tumor histogram analysis discussion

Jan 14, 2022 · 8:53


my way on medicine
whole tumor histogram results

Dec 30, 2021 · 12:12


my way on medicine
whole tumor histogram image processing

Dec 29, 2021 · 6:02


my way on medicine
whole tumor histogram ab and intro

Dec 28, 2021 · 8:08


The Nonlinear Library: LessWrong Top Posts
Pseudorandomness contest: prizes, results, and analysis by UnexpectedValues

Dec 11, 2021 · 35:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pseudorandomness contest: prizes, results, and analysis , published by UnexpectedValues on the LessWrong. This is a linkpost for/ (Previously in this series: Round 1, Round 2) In December I ran a pseudorandomness contest. Here's how it worked: In Round 1, participants were invited to submit 150-bit strings of their own devising. They had 10 minutes to write down their string while using nothing but their own minds. I received 62 submissions. I then used a computer to generate 62 random 150-bit strings, and put all 124 strings in a random order. In Round 2, participants had to figure out which strings were human-generated (I'm going to call these strings fake from now on) and which were “truly” random (I'm going to call these real). In particular, I asked for probabilities that each string was real, so participants could express their confidence rather than guessing “real” or “fake” for each string. I received 27 submissions for Round 2. This post is long because there are lots of fascinating things to talk about. So, feel free to skip around to whichever sections you find most interesting; I've done my best to give descriptive labels. But first: Prizes Round 1 Thank you to the 62 of you who submitted strings in Round 1! Your strings were scored by the average probability of being real assigned by Round 2 participants, weighted by their Round 2 score. (Entries with negative Round 2 scores received no weight). The top three scores in Round 1 were: Jenny Kaufmann, with a score of 69.4%. That is, even though Jenny's string was fake, Round 2 participants on average gave her string a 69.4% chance of being real. For winning Round 1, Jenny was given the opportunity to allocate $50 to charity, which she chose to give to the GiveWell Maximum Impact Fund. Reed Jacobs, with a score of 68.8%. Reed allocated $25 to Canada/USA Mathcamp. Eric Fletcher, with a score of 68.6%. Eric allocated $25 to the Poor People's Campaign. Congratulations to Jenny, Reed, and Eric! Round 2 A big thanks to the 27 of you (well, 28 — 26 plus a team of two) who submitted Round 2 entries. I estimate that the average participant put in a few hours of work, and that some put in more than 10. Entries were graded using a quadratic scoring rule (see here for details). When describing Round 2, I did a back-of-the-envelope estimate that a score of 15 on this round would be good. I was really impressed by the top two scores: Scy Yoon and William Ehlhardt, who were the only team, received a score of 28.5, honestly higher than I thought possible. They allocated $150 to the GiveWell Maximum Impact Fund. Ben Edelman received a score of 25.8. He allocated $75 to the Humane League. Three other participants received a score of over 15: simon received a score of 21.0. He allocated $25 to the Machine Intelligence Research Institute. Adam Hesterberg received a score of 19.5. He allocated $25 to the Sierra Club Beyond Coal campaign. Viktor Bowallius received a score of 17.3. He allocated $25 to the EA Long Term Future Fund. Congratulations to Scy, William, Ben, simon, Adam, and Viktor! All right, let's take a look at what people did and how well it worked! Round 1 analysis Summary statistics Recall that the score of a Round 1 entry is a weighted average of the probabilities assigned by Round 2 participants to the entry being real (i.e. truly random). 
The average score was 39.4% (this is well below 50%, as expected). The median score was 45.7%. Here's the full distribution: Figure 1: Histogram of Round 1 scores Interesting: the distribution is bimodal! Some people basically succeeded at fooling Round 2 participants, and most of the rest came up with strings that were pretty detectable as fakes. Methods I asked participants to describe the method they used to generate their string. Of the 58 participants who told me what the...
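
For readers curious about the scoring, here is a generic quadratic (Brier-style) scoring sketch in Python. It is not necessarily the exact rule the contest used, just the standard idea: you are rewarded for putting probability on the correct label and penalized quadratically for overconfidence; the guesses below are invented.

```python
def quadratic_score(p_real: float, is_real: bool) -> float:
    """Score one string: p_real is the stated probability that the string is truly random."""
    outcome = 1.0 if is_real else 0.0
    return 1.0 - (p_real - outcome) ** 2    # 1 for a perfect call, 0 for the worst possible one

# (stated probability, actual label) pairs -- invented, just to show the bookkeeping
guesses = [(0.9, True), (0.2, False), (0.5, True)]
total = sum(quadratic_score(p, real) for p, real in guesses)
print(f"total score over {len(guesses)} strings: {total:.2f}")
```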

Outdoor Photography Podcast
Understanding Histograms, ETTR, and ETTL

Oct 26, 2021 · 14:43


#030: Have you heard that using the histogram is a better way to assess exposure than your camera's metering system? Are you still a bit confused about how to properly use the histogram, and what it means to expose to the right (ETTR) or expose to the left (ETTL)? Do you know the difference between the luminosity and RGB histograms and what each can tell you about your exposure? In today's Tidbit Tuesday, I break down what histograms are, how to read them, how to use them to assess your exposure settings, and more. Enjoy!
* Loving the podcast? Please consider leaving a short review on Apple Podcasts (outdoorphotographyschool.com/apple-podcasts). It only takes a minute, and ratings and reviews are extremely helpful in getting the word out about the show, convincing hard-to-get guests, and are greatly appreciated by me! I read each and every one of them, so thank you!
* Episode 30 Show Notes: outdoorphotographyschool.com/episode30
* Submit a question for Tidbit Tuesdays: speakpipe.com/OutdoorPhotographyPodcast
* Confused about where to focus in landscape photography? Download your FREE Hyperfocal Distance Made Easy Ebook! (outdoorphotographyschool.com/hyperfocaldistance/)
Support the show (https://www.buymeacoffee.com/brendapetrella)

Adafruit Industries
EYE ON NPI - ST VL53L5CX Time-of-Flight Ranging Sensor

Oct 21, 2021 · 8:26


This weeks' EYE ON NPI is right on time, it's the ST VL53L5CX Time-of-Flight Ranging Sensor, (https://www.digikey.com/en/product-highlight/s/stmicroelectronics/vl53l5cx-time-of-flight-ranging-sensor) the latest in a long line of successful and popular ToF ranging sensors. These sensors have gone through many revisions, and each version has improved on the prior one. Starting with the VL6180X (https://www.digikey.com/en/products/detail/stmicroelectronics/VL6180XV0NR-1/4854909?s=N4IgTCBcDaIGoBkBsBGAHABgBogLoF8g), the VL53L0X (https://www.digikey.com/en/products/detail/stmicroelectronics/VL53L0CXV0DH-1/6023691?s=N4IgTCBcDaIG4BsCsBmBAGAxgDxAXQF8g) and VL53L1X (https://www.digikey.com/en/products/detail/stmicroelectronics/VL53L1CXV0FY-1/8258055?s=N4IgTCBcDaIGoBkCsBmBBGAwgDRAXQF8g), these sensors increase the range, speed, accuracy with each generation. Each sensor works the same way: they have a tiny near-IR laser that emits light and is bounced off nearby objects to determine the distance. The sensor has extra-ordinary time sensing, so it can measure the ‘time of flight' between the photon emitted by the laser, and when it arrives bounced back. Thus, they are called “Time of Flight / ToF” sensors and the series of boards are called ST FlightSense (https://www.digikey.com/en/products/filter/optical-sensors-distance-measuring/542?s=N4IgjCBcoLQCxVAYygMwIYBsDOBTANCAG4B2aWehA9lANohxwDsATBALqEAOALlCCAC%2BwoA). STMicroelectronics' VL53L5CX is a state of the art, time-of-flight (ToF), laser-ranging sensor enhancing the ST FlightSense product family. Housed in a miniature reflowable package, it integrates a single-photon avalanche diode (SPAD) array, physical infrared filters, and diffractive optical elements (DOE) to achieve the best-ranging performance in various ambient lighting conditions with a range of cover glass materials. The use of a DOE above the vertical cavity surface-emitting laser (VCSEL) allows a square FoV to be projected onto the scene. The reflection of this light is focused by the receiver lens onto a SPAD array. Unlike conventional IR sensors, the VL53L5CX uses ST's latest generation, direct ToF technology, which allows absolute distance measurement whatever the target color and reflectance. It provides accurate ranging up to 400 cm and can work at fast speeds (60 Hz), which makes it the fastest, multizone, miniature ToF sensor currently on the market...the VL53L5CX is able to detect different objects within the FoV with a distance information range up to 60 cm. The Histogram also provides immunity to cover glass crosstalk beyond 60 cm. Multizone distance measurements are possible up to 8x8 zones with a wide 61° diagonal FoV that can be reduced by software. The big updates in the L5 version is a new diffraction element which allows the SPAD to measure an 8x8 zone of distance measurements, which makes it more like a spatial LIDAR than a point-LIDAR. It's not super-fast, but you can use it for gesture recognition and possibly simple robotics-navigation. The price is not much different than before, and you still get 4 meter range. Another nice thing about this sensor is ST is starting to release 'platform agnostic' sensor drivers, including an 'ultra lite driver' for this chip that is easy to port to other platforms (https://www.st.com/en/imaging-and-photonics-solutions/vl53l5cx.html#tools-software) without heavy IDE/toolchain/chipset dependencies. The ST VL53L5CX is in stock right now for immediate shipment from Digi-Key! 
(https://www.digikey.com/short/hwb7rd0m) There's also eval/dev boards if you want to get started instantly. Order today and you can add a micro-LiDAR to your project or product by tomorrow afternoon. See on digikey.com at https://www.digikey.com/short/hwb7rd0m
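
The "time of flight" principle the description mentions reduces to simple arithmetic: distance is half the photon round-trip time multiplied by the speed of light. A tiny sketch, with an illustrative round-trip time rather than real sensor output:

```python
C = 299_792_458.0   # speed of light, m/s

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance to the target given the measured photon round-trip time."""
    return C * round_trip_seconds / 2.0

# A target near the sensor's ~4 m maximum range returns light after roughly 26.7 ns.
print(f"{tof_distance_m(26.7e-9):.2f} m")
```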

Python Bytes
#255 Closember eve, the cure for Hacktoberfest?

Oct 20, 2021 · 46:49


Watch the live stream: Watch on YouTube
About the show: Sponsored by us: check out the courses over at Talk Python, and Brian's book too! Special guest: Will McGugan
Michael #1: Wrapping C++ with Cython, by Anton Zhdan-Pushkin. A small series showcasing the implementation of a Cython wrapper over a C++ library. The C++ library, yaacrl (Yet Another Audio Recognition Library), is a small Shazam-like library which can recognize songs using a small recorded fragment. For Cython to consume yaacrl correctly, we need to "teach" it about the API using `cdef extern`; it is convenient to put such declarations in *.pxd files. One of the first features of Cython that I find extremely useful is aliasing: with aliasing, we can use names like Storage or Fingerprint for Python classes without shadowing the original C++ classes. Implementing a wrapper: pyaacrl. The most common way to wrap a C++ class is to use extension types. As an extension type is just a C struct, it can have an underlying C++ class as a field and act as a proxy to it. The Cython documentation has a whole page dedicated to the pitfalls of "Using C++ in Cython." Distribution is hard, but there is a tool designed specifically for such needs: scikit-build. PyBind11 too.
Brian #2: tbump: bump software releases, suggested by Sephi Berry. It limits the manual process of updating a project version. tbump init 1.2.2 initializes a tbump.toml file with customizable settings (--pyproject will append to pyproject.toml instead). tbump 1.2.3 will patch files wherever the version is listed, (optionally) run configured commands before commit (failing commands stop the bump), commit the changes with a configurable message, add a version tag, push the code, push the tag, and (optionally) run a post-publish command. It tells you what it's going to do before it does it (you can opt out of this check). Pretty much everything is customizable and configurable. I tried this on a flit-based project; it only required one change: a [[file]] config section containing the path of the file, relative to the tbump.toml location, e.g. [[file]] src = "pytest_srcpaths.py" search = '__version__ = "{current_version}"'. Cool example of a pre-commit check: [[before_commit]] name = "check changelog" cmd = "grep -q {new_version} Changelog.rst"
Will #3: Closember, by Matthias Bussonnier
Michael #4: scikit-learn goes 1.0, via Brian Skinn. The library has been stable for quite some time; releasing version 1.0 is recognizing that and signalling it to our users. Features: Keyword and positional arguments - to improve the readability of code written based on scikit-learn, users now have to provide most parameters by name, as keyword arguments, instead of positional arguments. Spline Transformers - one way to add nonlinear terms to a dataset's feature set is to generate spline basis functions for continuous/numerical features with the new SplineTransformer. Quantile Regressor - quantile regression estimates the median or other quantiles of Y conditional on X. Feature Names Support - when an estimator is passed a pandas dataframe during fit, the estimator will set a feature_names_in_ attribute containing the feature names. A more flexible plotting API. Online One-Class SVM. Histogram-based Gradient Boosting models are now stable. Better docs.
Brian #5: Using devpi as an offline PyPI cache, by Jason R. Coombs. This is the devpi tutorial I've been waiting for: a single-machine local server mirror of PyPI (mirroring needs priming), usable in offline mode.
$ pipx install devpi-server
$ devpi-init
$ devpi-server
Now in another window, prime the cache by grabbing whatever you need, with the index redirected:
(venv) $ export PIP_INDEX_URL=http://localhost:3141/root/pypi/
(venv) $ pip install pytest, ...
Then you can restart the server anytime, or even offline:
$ devpi-server --offline
The tutorial includes examples, proving how simple this is.
Will #6: PyPI command line
Extras. Brian: I've started using pyenv on my Mac just for downloading Python versions; the verdict is still out on whether I like it better than just downloading from python.org. Also started using Starship with no customizations so far; I'd like to hear from people if they have nice Starship customizations I should try. vscode.dev is a thing, announcement just today. Michael: PyCascades Call for Proposals is currently open. Got your M1 Max? Prediction: tools like Crossover for Windows apps will become more of a thing. Will: GIL removal https://docs.google.com/document/u/0/d/18CXhDb1ygxg-YXNBJNzfzZsDFosB5e6BfnXLlejd9l0/mobilebasic?urp=gmail_link https://lwn.net/SubscriberLink/872869/0e62bba2db51ec7a/ vscode.dev
Joke: The torture never stops. IE ("Safari") Eating Glue
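
The histogram-based gradient boosting estimators mentioned in the scikit-learn 1.0 item can be tried in a few lines; this is a minimal usage sketch with synthetic data, and the dataset and parameters are my own rather than anything discussed on the show.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The estimator bins each feature into histograms (up to 255 bins by default),
# which is what makes split finding fast on larger datasets.
clf = HistGradientBoostingClassifier(max_iter=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```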

Open Book Unbound
October 2021: Histogram

Oct 4, 2021 · 32:48


This episode features the story 'Histogram' by Katharine Macfarlane and 'Crossing The Loch', a poem from our new Scottish Makar Kathleen Jamie.

Deploy Friday: hot topics for cloud technologists and developers

MySQL is an open source, multi-user, and multi-threaded database management system. What's more, it's still growing, as our guests, Airton Lastori and Dave Stokes, longtime users of MySQL, emphasize.
We asked Airton and Dave about the newest, most exciting, and lesser-known features of MySQL. They responded enthusiastically. All quotes in this list are from Dave Stokes.
Recursive CTEs: as Dave explains it, this is "...an easy, painless way to write subqueries. They're easier to comprehend than standard subqueries too."
Dual passwords: saves and discards secondary passwords, helping you avoid downtime.
Hash joins: make joins go much more quickly.
Contention Aware Transaction Scheduler (CATS): "If you have columns and rows in your data at a certain load level, this automatically switches on and knows how to handle the very hot contention there."
Invisible indexes: "Now, you can make an index invisible to the optimizer, run your explain again, make the index visible again, all without having to go back and rebuild that index."
Histograms: "A histogram is a bunch of buckets that know where the range of your data is, and the optimizer knows how to get your data much faster."
And others, including: materialized columns, JSON support, and a dedicated key value interface.
MySQL's future: While they are huge MySQL fans, Airton and Dave do have some wishes for its future, such as improved replication, analytics, and help for beginners. Dave says, "It's a steep learning curve to get someone to use a relational database. The more I look at the beginner stuff, we're not doing enough to help people get on their way to becoming database developers."
MySQL isn't going anywhere: As an extremely popular database management system, MySQL is embedded in the very fabric of the web, and it's here to stay. Dave says, "MySQL is used by Booking.com, Ticketmaster, Twitter, Facebook, local YMCAs, big government organizations, and flight operations for the US Navy. MySQL is everywhere."
Airton adds, "Our job is to make MySQL even easier to use and continue to be reliable. So we try to implement features that customers are looking for, that developers are looking for, and keep the roots for people that are already using MySQL as a database."
Try MySQL on Platform.sh
Platform.sh: Learn more about us. Get started with a free trial. Have a question? Get in touch!
Platform.sh on social media: Twitter @platformsh, Twitter (France) @platformsh_fr, LinkedIn: Platform.sh, LinkedIn (France): Platform.sh, Facebook: Platform.sh
Watch, listen, subscribe to the Platform.sh Deploy Friday podcast: YouTube, Apple Podcasts, Buzzsprout
Platform.sh is a robust, reliable hosting platform that gives development teams the tools to build and scale applications efficiently. Whether you run one or one thousand websites, you can focus on creating features and functionality with your favorite tech stack.
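
As a concrete illustration of the histogram feature Dave describes, here is a hedged Python sketch that asks MySQL 8.0 to build an optimizer histogram and then reads it back from information_schema; the table, column, credentials and bucket count are placeholders, and mysql-connector-python is assumed to be installed.

```python
import mysql.connector  # assumes the mysql-connector-python package

# Placeholder connection details.
conn = mysql.connector.connect(host="localhost", user="app", password="secret", database="shop")
cur = conn.cursor()

# MySQL 8.0 syntax: build (or rebuild) an optimizer histogram for one column.
cur.execute("ANALYZE TABLE orders UPDATE HISTOGRAM ON amount WITH 64 BUCKETS")
print(cur.fetchall())   # status rows returned by ANALYZE TABLE

# The stored histogram is visible as JSON in information_schema.
cur.execute(
    "SELECT HISTOGRAM FROM information_schema.COLUMN_STATISTICS "
    "WHERE SCHEMA_NAME = 'shop' AND TABLE_NAME = 'orders' AND COLUMN_NAME = 'amount'"
)
print(cur.fetchone())

cur.close()
conn.close()
```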

Photaudiography
Episode 17 – Histograms

Apr 3, 2021 · 7:29


EDIT/OR
Episode 65 - Relative to the Histogram

Mar 2, 2021 · 60:22


Episode 65 - Relative to the Histogram. Kyle and Gilbert went on a trip.
Show Notes:
Sony FX3
Correction: Gilbert thought the XLR module on the FX3's included top handle (AKA "XLR handle") was moveable. It's definitely not. It's clearly permanently affixed and early reports were wrong. Know who was right though? Kyle!
Sony a1
Canon C70
Potato Jet's long-term take on the C70
Henry Rollins-ish Jr. on the C70 vs the FX6
Sony FX6
Blackmagic Pocket Cinema Cam 6K Pro
Kyle's super vlog covering the Yosemite Firefall and much more
Shalaco's Firefall vlog
Fuji instax SHARE SP-2 printer, which Gilbert is promptly purchasing

Frame Focus Foto
Episode 7: Histograms and The Zone System

Sep 13, 2020 · 29:38


Want to learn more about the histogram and Zone System? Listen to this episode to find out more! Our free Black & White Photography Bootcamp! Follow the link to find out more and sign up: https://www.framefocusfoto.com/bootcamp
Links:
https://www.framefocusfoto.com/post/zoning-in-for-perfect-exposures
The Zone System: https://en.wikipedia.org/wiki/Zone_System
How does the camera's histogram work: https://photographylife.com/understanding-histograms-in-photography
ANNOUNCEMENT!!! If you would like us to talk about your work on the show, upload your photograph with the hashtag #fff_reviewmyphoto and tag us @framefocusfoto, and we will be happy to offer some feedback!

Practical Operations Podcast Episode Feed
Episode 98 - Histograms and Service Level Objectives

Sep 11, 2020 · 56:06


Where we talk about what Service Level Objectives actually are and why they are so important in the field of Site Reliability Engineering. We cover the definition of an SLO, how SLOs relate to error budgets, and take a look at how various time series database implementations support calculating accurate percentiles. Comments for the episode are welcome: at the bottom of the show notes for the episode there is a Disqus setup, or you can email us at feedback@operations.fm.
Sponsors for Episode 98: 42 Lines is a DevOps consulting firm specializing in Observability, Cloud Migration, Cost Control, Security Practices, and Team Mentoring.
Links for Episode 98: Atlassian Incident Management; High Availability Percentage Calculation; Google SRE Book: Embracing Risk; Quantile Definition; Four Golden Signals; Histograms at Scale; VictoriaMetrics Histograms; Circonus Log-Linear Histograms; T-Digests
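
The percentile question at the heart of this episode comes down to interpolating within histogram buckets. Below is a small, self-contained sketch of that idea (roughly what PromQL's histogram_quantile does); the bucket counts are invented, and real implementations such as log-linear histograms or t-digests differ in detail.

```python
def percentile_from_buckets(buckets: list[tuple[float, int]], q: float) -> float:
    """buckets: (upper_bound, cumulative_count) pairs in ascending order; q in [0, 1]."""
    total = buckets[-1][1]
    rank = q * total
    lower_bound, lower_count = 0.0, 0
    for upper_bound, cumulative in buckets:
        if cumulative >= rank:
            span = cumulative - lower_count
            frac = (rank - lower_count) / span if span else 0.0
            return lower_bound + (upper_bound - lower_bound) * frac   # linear interpolation
        lower_bound, lower_count = upper_bound, cumulative
    return buckets[-1][0]

# Invented latency buckets, in seconds, with cumulative request counts.
latency_buckets = [(0.1, 800), (0.25, 950), (0.5, 990), (1.0, 1000)]
print(f"estimated p99 latency: {percentile_from_buckets(latency_buckets, 0.99):.3f}s")
```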

PaperPlayer biorxiv biophysics
Dynamic signatures of lipid droplets as new markers to quantify cellular metabolic changes

Jul 29, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.07.28.225235v1?rss=1 Authors: Zhang, C., Boppart, S. A. Abstract: The metabolic properties of live cells are very susceptible to intra- or extra-cellular perturbations, making their measurements challenging tasks. We show that the dynamics of lipid droplets (LDs) carry information to measure the lipid metabolism of live cells. Coherent anti-Stokes Raman scattering microscopy was used to statistically quantify LD dynamics in living cells in a label-free manner. We introduce dynamic signatures of cells derived from the LD displacement, speed, travel length, and directionality, which allows for the detection of cellular changes induced by stimuli such as fluorescent labeling, temperature change, starvation, and chemical treatment. Histogram fittings of the dynamic signatures using lognormal distribution functions provide quantification of changes in cellular metabolic states. The LD dynamics also enable separation of subpopulations of LDs correlated with different functions. We demonstrate that LD dynamics are new markers to quantify the metabolic changes in live cells. Copy rights belong to original authors. Visit the link for more info
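
The lognormal fitting step described in the abstract can be sketched generically with SciPy; the "speeds" below are simulated stand-ins, not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
speeds = rng.lognormal(mean=-1.0, sigma=0.6, size=1000)   # simulated stand-in for LD speeds

shape, loc, scale = stats.lognorm.fit(speeds, floc=0)     # fix loc=0 for a pure lognormal
mu, sigma = np.log(scale), shape                          # parameters of the underlying normal
print(f"fitted lognormal: mu={mu:.2f}, sigma={sigma:.2f}")
```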

Journal Club
Open Source AI for Everyone, Diagnosing Blindness and Histogram Reweighting

Jun 24, 2020 · 32:07


Another week, another episode! We are back again with our regular panelists. George brings us a clinical field study with an AI that is being used to diagnose blindness. Lan discusses the article titled "AI Infrastructure for Everyone, Now Open Sourced." Last but not least, Kyle brings us our paper for the week: "Extending Machine Learning Classification Capabilities with Histogram Reweighting."
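
For context on the paper's title, histogram reweighting is a statistical-mechanics trick for reusing samples drawn at one temperature to estimate observables at a nearby one. The sketch below shows the single-histogram (Ferrenberg-Swendsen style) bookkeeping with synthetic samples; it is an illustration of the general technique, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(13)
beta0 = 0.4                                       # inverse temperature of the original samples
energies = rng.normal(-100.0, 5.0, size=10_000)   # synthetic per-sample energies
observable = energies ** 2                        # any per-sample observable

def reweight(beta: float) -> float:
    """Estimate <observable> at inverse temperature beta from samples drawn at beta0."""
    log_w = -(beta - beta0) * energies            # log of the reweighting factors, up to a constant
    log_w -= log_w.max()                          # stabilize the exponentials
    w = np.exp(log_w)
    return float(np.sum(w * observable) / np.sum(w))

print(f"<O> at beta = 0.41: {reweight(0.41):.1f}")
```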

The Tri Pod
Understanding Histograms, Film Photography with Ben Horne and POSITIVITY!

May 26, 2020 · 80:55


Hello and welcome! In the fourth installment of the Tri Pod we sat down with Ben Horne to discuss film photography. We explain histograms and how to get the most out of them. Our first ever Your Shot goes to Jochen aka Photobowman, and Kevin spreads some positive vibes in Kevs Korner. Let us know your two choices over on our socials! We'll have free cheat sheets for histograms available in our Facebook Group; just search Tri Pod Community to find us! If you enjoyed the show, we'd love it if you could leave a review on iTunes. Don't forget to get in touch with us on Facebook, Twitter and Instagram. As always, a big thanks for listening from hosts Sean O'Riordan, Kevin Hennessy and Ronan HD.

Photo Nerds Photography Podcast
Episode 25 - Missing Camera Features, Pixel Shifting & using the histogram .

Apr 2, 2020 · 58:51


Gary and Gareth are flying solo on this one. The title says it all ..

Hands-On Photography (Video HD)
HOP 13: Understanding The Histogram

Jan 23, 2020 · 20:29


This week on Hands-On Photography, Ant Pruitt discusses the histogram in photography. The histogram can be a very useful tool with regard to understanding your image exposure levels. Also, Ant will dispel the myths behind the histogram. Be sure to complete our annual TWiT listener survey. Go to twit.to/survey20
Host: Ant Pruitt
Find Hands-On Photography on your favorite podcatcher: https://twit.tv/shows/hands-on-photography
Follow Ant Pruitt on Instagram: https://instagram.com/ant_pruitt
Follow TWiT on Instagram: https://instagram.com/twit
Join the TWiT forums: https://twit.community
Sponsor: expressvpn.com/hop


AI HINDI SHOW | HINDI PODCAST ON ARTIFICIAL INTELLIGENCE
Ep#30 | Difference between a BAR CHART and HISTOGRAM| AI HINDI SHOW|

Jan 3, 2020 · 5:52


People often get confused between bar charts and histograms. Listen to the podcast to learn the exact difference between a bar chart and a histogram. Follow my Instagram account: instagram.com/@aihindishow --- This episode is sponsored by Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/aihindishow/message Support this podcast: https://anchor.fm/aihindishow/support

Physical Activity Researcher
feat Dr Timo Rantalainen - Accelerometry | Gait | Elderly

Sep 6, 2019 · 62:21


In this episode, we have Dr Timo Rantalainen, who is a Research Fellow at the Gerontology Research Center, Faculty of Sport and Health Sciences, University of Jyväskylä, Finland. Dr Rantalainen’s area of expertise pertains to neuromuscular and skeletal systems, specifically skeletal adaptation to mechanical loading and neural control of muscles, with a focus on estimating lower limb loading and bone responsiveness. His current research interests include: 1) the effect of loading on bone geometry, 2) neuromuscular function of muscle, 3) muscle and bone adaptations with age, and 4) the measurement of bone geometry using peripheral quantitative computed tomography (pQCT).
This podcast episode is sponsored by Fibion Inc. | The New Gold Standard for Sedentary Behaviour and Physical Activity Monitoring. Learn more about Fibion: https://fibion.com/research
Physical Activity Researcher Podcast have created a ‘Purchase Guide for Researchers: Accelerometer-based Activity Trackers’. You can download it from here.
Timestamps:
02:00 Funding, success rate and the part of luck in the process
04:20 Physical state prediction from physical activity
13:40 Activity parameters indicative of problems in rehabilitation
16:35 Complexity of gait and cognition
17:30 Dual task paradigm - a simple way of detecting early Alzheimer's
19:30 Variability of gait as an indicator of health
20:15 Dual task and variability of gait
21:13 Exerting hard and thinking
23:20 Dual task paradigm during pedalling
24:45 Advantages of using an ergometer compared to gait
25:40 Analysis of accelerometers on thorax and thigh
32:10 Relativity of activity intensity classification
35:00 Histogram analysis of physical activity
36:45 Absolute accelerations as predictors
37:50 Why walking is so important to study
41:40 Static posturography
43:10 Heart rate variability and arrhythmias
43:50 Variability of walking
45:10 How to detect walking from the accelerometer signal
56:30 Counts and proprietary algorithms
57:40 How to test whether the accelerometer you use has been calibrated
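
The histogram analysis mentioned at 35:00 can be illustrated with a few lines of numpy: take the vector magnitude of the three accelerometer axes and bin it into intensity bands. The signal and the cut-points below are invented, not from Dr Rantalainen's work.

```python
import numpy as np

rng = np.random.default_rng(11)
acc = rng.normal(0.0, 0.3, size=(3, 100 * 60 * 10))   # 3 axes at 100 Hz for 10 minutes (simulated)
magnitude = np.linalg.norm(acc, axis=0)               # vector magnitude, in g

bands = [0.0, 0.1, 0.5, 1.0, 8.0]                     # arbitrary intensity cut-points, in g
counts, _ = np.histogram(magnitude, bins=bands)
for lo, hi, n in zip(bands[:-1], bands[1:], counts):
    print(f"{lo:.1f} - {hi:.1f} g : {n / magnitude.size:.1%} of samples")
```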

CM Smart Innovation Podcast
EP57: 6 Sigma and DMAIC

Jun 2, 2019 · 15:35


Six Sigma is another world-class approach to developing an organisation. Continuing from the previous episode, EP 57 introduces the remaining three steps of DMAIC when applying Six Sigma.
3. Analyze. This step analyses the causes of the main problem, using statistical analysis to identify the root causes that directly drive it. There are three parts: a. Analyse the data according to how it was collected, whether counted or measured, using tools such as the Scatter Diagram and the Histogram. b. Analyse the process to find steps that add no value; Lean thinking can help here, which is why the combination is called Lean Six Sigma. c. Identify the true root cause, known as the KPIV (Key Process Input Variable); you must be able to state clearly what the KPIV of the problem is and link it to the key output of the process, the KPOV (Key Process Output Variable). Statistical methods used in this analysis include Hypothesis Testing, Scatter Diagrams, and Regression Analysis.
4. Improve. a. Develop solutions: adjust the settings of the main causes (KPIVs) so that the process output meets requirements, using Design of Experiments (DOE) to tune the various process conditions. b. Select the solution to implement.
5. Control. This final step designs a quality control system for the process to ensure the problem does not come back. a. Define the technical control methods, such as how to communicate, how to control production/service, the critical points to watch in the work, and the statistical tools to use, such as Control Charts. b. Prepare a response plan: the most important metrics for the new process, new specifications and targets, and newly recorded results, which are monitored continuously.
We hope this gives you another idea for managing your organisation successfully. If you have suggestions or questions, send them in to the CM SMART INNOVATION channel on Podbean, and if you find this useful, please follow and share.
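
As a small companion to the Control step above, here is a generic individuals control chart sketch in Python. It is simplified (it uses the sample standard deviation rather than the usual moving-range estimate of sigma) and the measurements are simulated.

```python
import numpy as np

rng = np.random.default_rng(9)
measurements = rng.normal(50.0, 2.0, size=60)    # simulated process output

mean = measurements.mean()
sigma = measurements.std(ddof=1)                 # simplified: sample SD, not the moving-range estimate
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma    # upper/lower control limits

out_of_control = np.flatnonzero((measurements > ucl) | (measurements < lcl))
print(f"mean={mean:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, out-of-control points: {out_of_control.tolist()}")
```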

Photo Geek Weekly
Photo Geek Weekly Episode 61 – False Hope and a Better Histogram

Photo Geek Weekly

Play Episode Listen Later Mar 23, 2019 59:21


On this episode of Photo Geek Weekly, Chris Marquardt offers his sage opinions on a great many topics including photo contests, artistic and technical imaging education, what waveforms are for and beer that can(’t) develop film. Still probably pretty tasty, though! Thanks for listening! Story 1: The Winning Photo of the $120K HIPA Prize Was [...]

Experiencing Data with Brian O'Neill
006 - Julien Benatar (PM for Pandora's data service, Next Big Sound) on analytics for musicians, record labels and performing artists

Experiencing Data with Brian O'Neill

Play Episode Listen Later Feb 13, 2019 45:01


We’re back with a special music-related analytics episode! Following Next Big Sound’s acquisition by Pandora, Julien Benatar moved from engineering into product management and is now responsible for the company’s analytics applications in the Creator Tools division. He and his team of engineers, data scientists and designers provide insights on how artists are performing on Pandora and how they can effectively grow their audience. This was a particularly fun interview for me since I have music playing on Pandora and occasionally use Next Big Sound’s analytics myself. Julien and I discussed: How Julien’s team accounts for designing for a huge range of customers (artists) that have wildly different popularity, song plays, and followers How the service generates benchmark values in order to make analytics more useful to artists How email notifications can be useful or counter-productive in analytics services How Julien thinks about the Data Pyramid when building out their platform Having a “North Star” and driving analytics toward customer action The types of predictive analytics Next Big Sound is doing Resources and Links: Julien Benatar on Twitter Next Big Sound website Next Big Sound blog The Data Pyramid model Quotes from Julien Benatar "I really hope we get to a point where people don’t need to be data analysts to look at data." "People don’t just want to look at numbers anymore, they want to be able to use numbers to make decisions." "One of our goals was to basically check every artist in the world and give them access to these tools and by checking millions of artists, it allows us to do some very good and very specific benchmarks" “The way it works is you can thumb up or thumb down songs. If you thumb up a song, you’re giving us a signal that this is something that you like and something you want to listen to more. That’s data that we give back to artists.” “I think the great thing today is that, compared to when Next Big Sound started in 2009, we don’t need to make a point for people to care about data. Everyone cares about data today.” Episode Transcript Brian: I’m really excited today for this episode. We have Julien Benatar on the show and he’s from a company that I’m sure a lot of people here know. You probably have had headphones on at your desk, at home, or wherever you are listening to Pandora for music. Julien , correct me if I’m wrong, you were the product manager for artist tools and insights at Next Big Sound, which is a type of data product that provides information on music listening stats to, I assume, artists’ labels as well to help them understand where their fans are and social media engagement. I love this topic. I’m also a musician, I have a profile on Next Big Sound and I feel music’s a fun way to talk about analytics and design as well because everybody can relate to the content and the domain. Welcome to the show. Did I get all that correct? Julien: Yeah, it was perfect. Brian: Cool. Tell us a little about your background. You’re from France originally? Julien: Yes, exactly. I grew up next to Paris, in Versailles more specifically, and moved to New York in 2014 to join Next Big Sound. Brian: Cool, nice. You’ve been there for about four years, something like that. You have a software engineering background and then now you’re on the product side, is that right? Julien: Exactly yes. I joined the company back when we were a startup. Software engineering was perfect, there was so much to do. 
To our move to Pandora, I moved to a product manager role around a year ago. Brian: Next Big Sound was independent and then they were acquired by Pandora. I assume there is good stuff about your data. Why did Pandora acquire you and how did they see you guys improving their service? Julien: We got acquired in 2015. The thing is, Next Big Sound was already really involved in the music industry. We already had clients like the three major labels and a lot of artists were using us to get access to their social data. I think it was a very natural move for Pandora as they wanted to get closer to creators and provide better analytics tools. Brian: For people that aren’t on the service, I always like to know who are the actual end users, the people logging in, not necessarily the management, but who sits down and what are some of the things that they would do? Who would log in to Next Big Sound and why? Julien: Honestly, it’s really anyone having any involvement into the music industry, so that can be an artist, obviously, try looking to try their socials and their audience on Pandora. But you can also be a booker trying to book artists in their town. We have a product that can really be used by many different user personas. But our core right now is really artists and labels, having contents on Pandora and trying to tell them the most compelling story about what they’re doing on the platform. Brian: When you think about designs, it’s hard to design and we talk about this on the mailing list sometimes but it’s really hard to design one great thing that’s perfect for everybody so usually you have to make some choices. Do you guys favor the artist, or the label, or as you call them,the bookers or whom I know as presenters,in the performing arts industry? Do you have a sweet spot, like you favor one of those in terms of experience? Julien: I think it’s something we’re moving towards, but it hasn’t always been this way. Like I told you, we used to be a startup or grow us to make a product that could work for as many people as possible. What is funny is we used to have an entity on Next Big Sound called Next Big Book where we used to provide the same type of service for the book industry. If anything, it’s been great to join Pandora because then we could really refocus on creators and it really allowed us to, I believe, create much better and more targeted analytics tools to really fulfill needs for specific people like artists and labels. Brian: I would assume individual artists are your biggest audience or is it really heavily used by the labels or who tends to... Julien: I think it’s pretty much the same honestly. I think the great thing today is that, compared to when Next Big Sound started in 2009, we don’t need to make a point for people to care about data. Everyone cares about data today. I think that everyone has reasons to look at their dashboards and especially for a platform like Pandora with millions of users every month. Our goal is really just telling them a story about what does it mean to be spinning on the platform and the opportunities it opens. Brian: You talked about opportunities, do you have any stories about a particular artist or a label that may have learned something from your data and maybe they wrote to you or you found out like in an interview how they reacted like, “Hey, we changed our tool routing,” or, “Hey, we decided to focus on this area instead of that area.” Do you know anything about how it’s been put into use in the wild? 
Julien: Yeah, it’s used for so many different reasons. For the people who don’t use Pandora, something I really like about the platform is it’s really about quality. As you use Pandora, you have the opportunity to thumb up or thumb down songs and as you do, you’re going to get recommended more songs like the ones you like. It’s really about making sure that you get the best songs at all times. The reality then is that for artists, their top songs on Pandora can be pretty different than their top songs on other platforms because sometimes their fans are going to be just reacting more to some part of their catalog than another one. I’ve heard many times of artists changing their playlists by looking at which songs their fans were thumbing up the most on Pandora. Brian: Could you go through that again? How would they adjust their playlist? Julien: Usually, people use Pandora as a radio service. While we already have internet today, most people are listening to the radio because they’re usually very targeted and it just works really well. The way it works is you can thumb up or thumb down songs. If you thumb up a song, you’re giving us a signal that this is something that you like and something you want to listen to more. That’s data that we give back to artists. We tell them, “These are your most thumbed songs on Pandora. These are the songs that people engage with the most on the platform.” Looking at this data, they can actually inform the songs that they believe they should be playing more on tour. Brian: I see. A lot of it has to do with the favoriting aspect to give them an idea of what’s resonating with their audiences. Julien: Qualitative feedback, yes. Brian: Got it. Actually, it’s funny you mentioned the qualitative feedback. In preparation for this, I was reading an article that you guys put out back in March about a new feature called weekly performance insights, which is really cool and this actually reminds me of something that I talked about in the Designing for Analytics mailing list, which is the act of providing qualitative guides with your analytics. A lot of times analytics tools just turn out quantitative data, and whenever there’s an opportunity to put stuff into context or provide qualifiers, I think that’s a really good thing and you guys look like you’ve done some really nice things here. I’ll paraphrase it and then you can jump in and maybe give us some backstory on it. One of the things that I think is really cool is there are concepts of normalcy in here so that, if I’m an artist and I look at my numbers, I have an idea. For your Twitter mentions, for example, you say, “For artists with 26,000 followers, we expect you to get around 44 mentions.” When you show me that I have 146 mentions, I can tell that I’m substantially higher than what my social group would be. I think that’s a really fantastic concept that people not in music could try to apply as well, which is, are there normalcy bands where you’d want to sit? Is there some other type of group, maybe an industry, or a parent group, or another business unit, whatever it may be, to provide some context for what these out of the blue numbers mean that don’t have any context? How did you guys come up with that, and can you tell us a bit about the design process of going from maybe just showing, “You’re at 826 apples,” as compared to what? How did you move from just a number into these kinds of logical groupings where you provide the comparisons?
Julien: I think what’s really fascinating is, we really live in an age of data. As an artist, you need to be on social media for the most part. There still a lot of artists I listen to but just decide not to. It’s part of things but at the same time, real big success in the music industry didn’t change. It’s still being on the Billboard chart, getting a Grammy and all these things. But as we see this, we have millions of artists looking at their data every day and just are not able to understand, like is it good or is it not good. Everyone starts at zero. We have a strong belief that data can only be useful when put in context. Looking at the number on its own can give you a sense of how things are doing but that can also be dismissive. An example is, a very common way to look at data is to look at a number and look at the percent changing comparison to the previous week. You’ve got a bunch of tables and you look at, am I growing or am I not growing. The reality is it’s actually impossible to always have a positive percent change. There’s no artist in the world that always does better week by week. Even Beyonce, I can assure you that the week she released Lemonade, she had more engagement on Twitter than the week after. With that in mind, we really try to give a way for artists to understand how are they doing for who they are and where they are currently in their career. Next Big Sound started in 2009. One of our goals was to basically check every artist in the world and give them access to these tools and by checking millions of artists, it allows us to do some very good and very specific benchmarks. For an artist, like the example you said, for instance an artist with a thousand Twitter mentions in a week, is it good or bad in comparison to their audience size? This feature comes because that’s just the question we’re asked. Artists want to know is it any good? What does this number actually mean for me? That’s why we really wanted to, in some ways, get out of being a content aggregator platform and really be a data analytics platform. How can we actually give information that can help artist make better decisions? Brian: I remember the first time I got what I would call an anomaly detection email from your service and it was about some spike in YouTube views or something like that. I thought it’s fantastic in two reasons. First of all, you identify an anomalous change and I think in this case it’s a positive anomalous change. That tells me that I should log in the tool. Secondly, you proactively delivered that to me. On the Designing for Analytics mailing list, we talk about is that user experience does not necessarily live inside your web browser interface or your hard client or whatever you’re using to show your analytics. Email and notifications are a big part of that. Can you tell me about how you guys also arrived at when you pushed these things out and maybe talk about this little anomaly detection service that you have? Julien: It all started when we got acquired by Pandora. We decided to just invite a bunch of users and just talk to them, understand how to use our product and what did they think about it. We had artists, managers, and label people come over and we just talk to them and basically they all said, “We love it.” But then, by looking at their actual usage, they don’t use it that much. I guess one of their questions was when should I be looking at my data? Everyone is very busy. 
As you’re an artist, you need to perform, you need to write music, you need to engage with your fans and same goes with everyone. When should I look at data? The reality is by being a data company, we do get all the data, we have all the numbers. We have ways to know when things are supposed to be known, when artists should be acting on something. We just turn this into this email notifications. Anytime we notice that an artist is doing better than expected, we just let them know right away. Brian: That’s great. Do you do it on the opposite end too? If there’s an unexpected drop or maybe like, “Oh, you put a new track out and your socials dropped,” or something like that, do you look at the negative side too or do you tend to only promote the positive changes? Julien: As far as pushes, we decided to only do push for positive. But as you mentioned weekly performance, weekly performance can give you some negative insights, like, “You’re not doing as well as artists with the same size of audience as yours.” The reason we didn’t do it for our notification is, anomalies are really hard to completely control. A reason, for instance, is Twitter removing bots. Basically, every single artist would have had an email telling them, “You lost Twitter followers this week.” It was a lot of work to really tune our anomaly factor to actually only send emails when something legitimate happens. That’s the reason we only decided so far to do it for positive but we actually have been thinking about doing the same for negative but that’s another type of work. Brian: Yeah, you’re right. You have to mature these things over time. You don’t want to be a noise generator. Julien: Exactly. Brian: Too many, then people start to ignore you. I’ve seen that with other data products I’ve worked on which just have really dumb alerting mechanisms that are very binary or they’re set at a hard threshold and just shootout noise and people just tune it out. Julien: I’m glad you mentioned this because this feature was in beta for a year for that specific reason. Brian: Got it. Julien: We had to learn the hard way. We had like a hundred beta users. We’ve got way too many emails because anytime there were an anomaly anywhere, they would just get an email. For the most part, it was things that were supposed to help them. If a notification becomes noise, then that’s absolutely against its purpose. Brian: I don’t know if everybody knows how the music business works, at least from the popular music side, but just to summarize. You have individual artists that are actually performers. They may or may not have an artist manager which takes care of their business affairs, represents them like negotiations with people that book shows. Then you have labels which are sort of like an artist manager except they’re really focused on the recording assets that the artist makes and they actually tend to own the recordings outright at the beginning and then over time, the artist may recoup through sales they make it the ownership act and the sound recordings they make. Of those kinds of three major groups, is there a one that’s particularly hungry or you’re the squeaky wheel that is most interested in what you’re doing? Julien: I really think that into these three groups, we have a subset of users that are really into the data and into the actionability of it. I don’t think it’s one specific group of user. It could be all around the industry like we have the data-savvy, they really want to know. 
We have some users that actually would rather get more notifications even if they need to on their end to figure what is right from what is wrong. But since we have such a wide user base of different type of people, we decided to go on the conservative side and make sure to only share things that we thoroughly validated through all of our filters. Brian: I assume that your group reports into some division of Pandora, I’m not sure of that. Are you reporting into a technology, like an IT, or a business unit, or marketing? Where do you guys fit in the Pandora world? Julien: We’re part of the creator’s tools. I don’t really have a perfect answer to this. Brian: Okay. I guess my main question being, because when we talk about designing services, we talk about both user experience, which is the end user thing and about business success or organization success. I’m curious, how does Pandora measure that Next Big Sound as delivering value? I can understand, I’m sure our artist can understand how the artists value it through understanding how is my music moving my audiences, et cetera. Is there a way that Pandora looks at it? Are they interested in just time spent? The analytics on the analytics, so to speak, is what I’m asking about. How do you guys look at it like, “Hey, this is really doing a good job,” or whatever? Do you know how that’s looked at? Julien: To be honest, I think you said it right. Our goal is to help artists make their decisions through data and having artists use the platform is currently the way Pandora sees us doing a good job. Actually, it hasn’t changed that much since our acquisition. One of our main KPI for the past and couple of years is something I would call insights consumes. Just making sure that our users, artists, anyone using Next Big Sound are consuming data. That can be them logging into the website or that can be them opening one of our notifications. But so far that was our main KPI. We’re trying to work on some more targeted KPI, potentially like actions taken, that would be the North Star, but we're still working on how to do that right. Brian: Do you guys facilitate actions, so to speak, directly in the tool or are there things people can do with those actions really take place outside of the context of Next Big Sound? Julien: There are actions that artists can take to the other creator’s tools provided by Pandora. For instance, artists have the ability to send audio messages to anyone listening to them. If they go on tour into the US, they can have targeted messages in every single song they’re going to play. If anyone listens to them there, they can just click and buy a ticket. We’re working to make sure that artists are aware of these tools because they are free and they’re generally helping them grow at their careers. But regarding external actions, so far we don’t have any one-click way to tweet at the right time to the right people or with the right content or anything like this. Brian: Sure and that’s understood. Not every analytics product is going to have a direct actionable insight that comes right out of it. You guys may be feeling a longer term picture about trending and maybe for a certain artist to get an idea if they’re releasing music fairly frequently, what stuff is working and resonating, and what stuff is not. I can understand that. There may not be a button to click as a result immediately. Julien: That’s the goal though. Everything we do right now is going towards this objective. 
Maybe I can tell you a little about the way we think about data and that can give more sense to it. In order to work on any new feature, we follow this concept called the data pyramid. It’s something that you can Google. There’s a Wikipedia page for it. Let me explain to you how it works. The data pyramid is a pyramid formed of four layers. They build upon each other, each representing an increasingly useful application of data. At the bottom of the pyramid we have the data layer. Any sort of data that we may have. For our case, Pandora data, Twitter, Facebook, just getting the numbers, getting the raw data. On top of it, we have the information layer. The information layer is going to be the ways you have to visualize this data. I guess it’s like the very broad sense of analytics. We’re going to give you tables, graphs, pie charts, you name it. We’re giving you ways to craft stories about this data but it’s on you to figure it out. Then on top of it we have what we call the knowledge layer. That’s where things start to get interesting. The knowledge layer is the contextual part of it. It’s like, “What does this number actually mean?” It has industry expertise. For instance, the way we’re going to work with it for musicians and their data may be different than any other industry. The knowledge layer is where something like weekly performance goes. It’s a perfect answer to it. It’s what does it mean for me as a musician with a hundred fans to get two mentions this week. Same for notifications. It’s telling you that you should be looking at your data right now because something is happening. That’s how we get to the North Star and the last part of the data pyramid which is intelligence. The goal of intelligence is actionability. Now that I understand what this number means in my specific context, what should I be doing? Following your question, everything we’re trying to do here is to get to a point where we can just send an email to an artist and tell them, “Hey, you should be doing this right now because, with all the data that we have, we believe that this is going to have the highest impact for you.” Brian: It’s really fascinating that you just outlined this data pyramid. I actually haven’t heard of this before. It made me think of one of the kind of, it’s not a joke but in the music community, I’m also a composer and when we write stuff, the kind of running joke is that nothing is new. Your idea for this new song or this new melody I’m composing, it probably came before you. You heard it somewhere before. I wrote a post on my list that was pretty much exactly the same thing, except the knowledge layer I was calling insight. Data being the raw format, and information being the first human-readable format, that’s like, say, going from raw data to a chart, a histogram. Now I have a line on a chart, and then the insight layer being, I have a line on the chart and another line comparing it to, like you said, an average, or my social group, or a parent group, or some taxonomy, or an index. Then the action layer being the prescription for what to do, or the prediction that kind of leads you toward action, which would be that fourth state. You’re like, “Oh, is this really a new concept?” It’s like, “Nope. Someone else already thought of that.” I totally want to go read about this data pyramid. Julien: That’s amazing. Brian: I’ll find that link to the data pyramid and I’ll put that in the show notes for sure. I thought that was really funny.
Julien: It’s funny that you called it insight because that’s the way we call a lot of our features are working out. The way we define insight is bite-size, noteworthy, sharable content. How can we get into the noise of all of the data that only gives you exactly what you should be looking at. That’s how we got into notification and weekly performances. This is the one thing you should be looking at. Brian: I understand what you’re getting at there. The insights are, like you said, bite-size chunks of interesting stats that someone can put some kind of context around. That’s great and it’s good. One of the things I liked, too, that you talked about was you said, “Oh we got like a hundred users, like a beta group and that kind of inspired some of this.” Your product response to how do we help people know when to come and look at our service. I think this is really good because one of the problems that I see with clients and people on the list, I think is low engagement. This is especially true for internal analytics companies. Low engagement can be a symptom of a difficult product, it doesn’t provide the right information at the right time, it may not have a lot of utility, or it’s a resistance to change. People have done something the old way and they don't want to do it the new way. One of the recipes you can follow if you’re trying to do a redesign or increase engagement is to involve the people that are going to use the service in the design process, both the stakeholders as well as the end customers. This is especially true again for the internal analytics people. Your customers or other employees and your colleagues. By engaging them in the design process, they’re much more likely to want to change whatever they’re doing now. I loved how you guys did some research. Now I want to ask, do you frequently do either usability testing or interviews? Is that an ongoing thing at your company or is it really just in front of a big feature release or something like that? How do you guys do this research? Can you tell me about that? Julien: Of course. It’s consent. We haven’t released any major feature without doing some heavy user testing. I’m very lucky to be working with two designers, Justin and Anabelle who are very user-focused. Honestly, if you come to our office, at least every week we’re going to have some user interview and just talking to them, showing them prototypes, and just see how do they play with it. Brian: So you’re doing a lot of testing it sounds like. That’s fantastic. Julien: At the same time it’s always to find the right balance because you could be overtesting things too. We really are focusing on user testing for new things and make sure that the future that we are working on actually answers their user story that we intended. Brian: I don’t know how involved you get participating in these, but do you have any interesting stories or anecdotes that you got from one of those that you could share? Julien: Let me think. I do participate into a lot of them but I’m not sure I have an example right now. Brian: Are most of the people you interview, are they current users of Next Big Sound or do you tend to focus on maybe artists that haven’t experienced the service yet or you mix it up? Julien: We mix it up. We mostly engage with users that we already have but then we can decide to go with users that haven’t used the platform for a while, or more active users if you want to understand how we’re useful into their day to day. 
What I would say is that, surprisingly, it’s very easy to get users to chat about their experience with the product. I didn’t assume that we would get so many responses when we tried to have people come over or just hop on the zoom to check a new feature. Brian: I’m glad you actually mentioned that because I think in some places, recruiting is perceived to be difficult and it probably isn’t. Maybe you haven’t done it before but as I tell a lot of my clients, a lot of people love to have someone listen to them talk, tell them all about their life and what’s wrong with it, and how it could be better with their tools. They love having someone listen to them and especially if they know that their feedback is going to influence a tool or a service that they’re using. They tend to be pretty engaged with it. I find it’s really rare that I do an interview with a client’s customer and they don’t want to be included in the future round like, “Hey, when we redesign the service, can we come back to you and show you what we’ve done?” “Oh, I love to do that!” Everybody wants to get engaged with it. There are places where recruiting can be difficult when it’s hard to access the users, some of the enterprise software space that can be an issue sometimes. But generally, if you can get access to them, they tend to be pretty willing to participate. I’m glad you mentioned that. Julien: I think the great part about testing with current users on the platform is to actually show them prototypes with real data, not just show them an abstract idea that we want to work on. As soon as they can see what we’re working on apply to their own career as musicians, for instance, that can lead to fascinating discussions. Brian: You made a really good point on the real data thing. I remember as far back as 10 years ago or whenever, I use to work at Fidelity Investments, we would see this issue when we’re working on the retail site for investors. When you show a portfolio that, for example, has Apple stock trading at $22 in it, you’re not really there to test what is the price of Apple stock but you might be testing something entirely different and the customer cannot bear what is going on? They’re so stuck on this thing. It’s all fake seed data in the prototype. The story here being if you’re a listener, when you test it’s important to have at least realistic data. You don’t want to have noise in the test or whatever your studying or else you can end up on this tangent. Try to make the numbers looks somewhat realistic if you’re using quantitative data. In some cases, people can be taught to roleplay. Pretend you’re Drake or pretend you’re some big artist and then they can get their head around why they have billions of streams instead of thousands which they’re used to. Julien: Absolutely. That also helps us just build better products because the reality is we have a lot of artists with maybe 10 plays in a month. As we build visualizations like something that we built a line of looking at Drake’s data, it’s not going to work as intended for a smaller artist sometimes. Having real data involved as soon as possible into the design process has been such a game changer for us. We really have a multidisciplinary team involved into the research and design of everything we do. I’m working with a data scientist, data engineer, a web engineer, and designer on a daily basis. Obviously, we all have our things to do. 
But as we get into creating something new, we just make sure to have someone helping us get the real data, interview the right user, and just create prototypes as soon as possible. Working with prototypes is essential into building useful data analytics tools. Brian: Yes, you do learn a lot more with a working prototype. It’s not to say you can’t test with lower fidelity goods, especially early on but for a service like yours when the range of possible use both the personas and also you’ve got the Drakes of the world, big major label artist and then down to really small independents, it’s really important to have an idea how your charts are going to scale, and what’s going to happen with data. Even just small stuff like how many decimal points should you be showing on a mobile device, some of the numbers might cram up. Julien: Exactly. Brian: All this stuff that you never think, if you only look at one version of everything, you can end up with a mess. I’m glad that you brought that up. Julien: I couldn’t say better. The decimal is actually something that we’ve had to discover through real data. Brian: To all of you in the technical people out there, I will say this. If I’ve seen one trend with engineers, is they love precision and there’s a lot of times when there’s very unnecessary precision being added to numbers. Such as charts and histograms. Histograms are usually about the trend, they’re not about identifying what was the precise value on this date at this time. It’s about the change over time. Showing what’s my portfolio worth down to three digits of micro-cents or something like that is just unnecessary detail. You can probably just round up to the dollar or even hundreds of dollars or even thousands of dollars in some cases. It actually is worse. The reason it’s worse is that adds unnecessary noise to the interface, you’re providing all these inks that someone has to mentally process, and it’s actually not really meaningful ink because the change is what’s important. Think about precision when you’re printing values. Julien: This concept of noise is so essential today for any data analytics tools. There is so much data today. There is data for everything. I think it’s our responsibility as a data analytics company to make sure what are we actually trying to help our user with this data set is not just about adding new metrics. Adding new metrics usually is just going to add noise and not be helpful in comparison to fairing what do they need to make the right decision. Brian: Right. Complexity obviously goes up. The single verb, ‘add,’ as soon as you do that, you’re generally adding complexity. One of the design tools that is not used a lot, and this is something I try to help clients with is, what can we take away? If we're not going to cut it out entirely, can we move this feature, maybe this comparison to a different level of detail? Maybe it’s hidden behind a button click, or it’s not the default. But removing some stuff is a way to obviously simplify as well, especially if you do need to add new things. Your only weapon is not the pencil, you’ve got the eraser as well in the battle so to speak. Julien: I couldn’t agree more. On Next Big Sound we have this concept of artist stages. It’s a way for us to put artist into buckets and by looking at their social instrument data. It goes from undiscovered to epic. We do that by looking at all of the data we have and looking at it in context. 
I don’t have the numbers right now because they update on a daily basis but every artist starts undiscovered. For instance, as they get 1000 Facebook likes, maybe they’re going to get to a promising stage. We have all of these thresholds moving everyday looking at trends among social services. But what is interesting is that for instance, for a booker, a booker doesn’t need to look at the exact number of Twitter followers for an artist. He needs to know that he’s booking for a midsized venue in the city he’s in and he’s probably going to be looking for promising to established artists and not looking for the mainstream to epic artists. It’s always about figuring a way to use the numbers to tell the story. Brian: I’m totally selfishly asking for myself here, but I was immediately curious. I live in Cambridge which is in the Boston area, and I am curious who are the big artists in our area and what is the concentration? I’m in a niche. I’m more in the performing arts market, in the jazz, in world music, and classical music but I’m just curious. Is there a way to look at it by the city and know what your artist community looks like? You guys do anything like that? Julien: We don’t currently. But I think YouTube has actually a C-level chart available. It’s not part of something we do because I think the users it would benefit are not the users we specifically try to work on new features. It’s more something for bookers than artists ,specifically ,but it’s exactly the type of thing that we need to think about when we prioritize new features. Brian: I’m curious just because the topic’s fairly hot. Everybody is trying to do machine learning projects these days. I don’t like the term AI because it tends to be a little bit overloaded but are you guys using machine learning to accomplish any particular problems or add any new value to your service right now? Is that on your horizon? Julien: How do you think about machine learning? Brian: A lot of times I associate it with predictive analytics or understanding where you might be running instead of just using statistics. I don’t know what kind of data you might have for your learning that you can feed in but maybe there’s aspects about artists that can predict. Especially, I would think like in the pop music world where there tends to be more commercialization of the music, I would say, where it’s like we need a two-minute dance track at this tempo specifically because DJs are going to play it. It’s a very commercial thing. It’s very different than what I’m used to. So I’m curious if there’s a way to predict out how an artist may do or what kinds of tracks are performing well. Like these tempo songs, we predict over the next six months that tech house music at 160 beats for a minute is going to do really well based on the trending. I don’t know. I’m throwing stuff out there. The goal, obviously, is not to try to use like, “Oh Home Depot has this new hammer, let’s run out and get it. We don’t even know what it’s for but everyone else is buying it.” That’s how I joke about machine learning. It’s like you need to have a problem that necessitates that particular tool. I don’t ask such that, “Oh there should be some.” I’m more curious as to whether or not it’s a tool that you guys are leveraging at this time. Julien: The Next Big Sound team doesn’t worked on features following the musical aspects of things. We really are focused on the user data. Brian: Engagement and social. Julien: Engagement data mostly, yes. 
But at the same time, I’m sure teams have worked on this because of the way that genome works. We have a lot of data about the way songs are made. Regarding machine learning, on the Next Big Song team, we actually have something that is called the prediction chart. You said predictions. We have this chart that is available every week. Basically, it really goes back to having data for a long time. The fact that we’ve had data since 2009, we’ve been able to see artists actually get from starting to charting on the Billboard 200. By having all of these data, we’ve been able to see some trends, some things that usually happen for artists at specific times in their career up until they get into the Billboard 200. We actually do have some algorithms that allow us to apply this learning to all of the artists on Next Big Sound right now and have a list every week of artist that we believe are most likely to appear on the Billboard 200 chart next year. Brian: I see. Got it. Do you track your accuracy rate on that internally and change it over time? Do you adjust the model? Julien: Yeah, we do. Brian: Cool That’s really neat. Tell me, this chat has been super fun. I’ve selfishly got a little indulgent because being a musician, it’s fun to talk about these two worlds that I’m really passionate about so I could go on forever with you about this. But I’m curious. Do you have any advice for other product managers or analytics practitioners about how to design good data products and services? How to make either your own organization happy or your customers happy? Do you have any advice to them? Julien: Yeah, of course. I guess it’s all about asking questions, honestly. What is very good with working at Next Big Sound is that it all started in 2009. Maybe actually I can go back and tell you the story about how it started and why it’s so different today. It started in 2009. It was actually a project, a university project by the three co-founders. Basically, they were wondering about one thing. How many plays does a major artist get on the biggest music platform in the world? At that time, it was MySpace. The artist they picked was Akon. Basically, they just built a crawler, went to bed, woke up, and discovered that an artist like Akon was getting 500,000 plays on MySpace in one night in 2009. The challenge in 2009 was to get the data. That’s why for the most part in Next Big Sound as it started was, I really think a data aggregation tool. Our goal was to get as many sources as possible and just make them easily accessible into the same place. We really are much into the information layer here. We’re giving you all the numbers and you can compare Tumblr to Vimeo, to YouTube, to Twitter, to Facebook, to Vine, to you name it into a table or a graph that you want to. The reality is, today things change. We don't need to fight to get data anymore. We don’t need to hike our way into getting the numbers. Now, data is accessible to everyone in a very easy way. It’s kind of a contract. You, by being an artist, you know you’re going to get access to your Spotify, YouTube, Pandora, Apple Music or any other platform data very easily just by signing up and authenticating as an artist. That’s where our goal changes. Thankfully, we don’t need to convince people to care about data, we know they do already. But now the challenge is different. 
Now, the challenge is to make them understand what does their data mean and how can they turn it into getting even more data, getting into having even more engagement, and having even more plays. I think that’s something that is very interesting because it really resonates into the question we’ve been asked in the past few years like, “What does my data mean and when should I be looking at my data?” If anything, these two things correlated pretty well. People don’t just want to look at numbers anymore, they want to be able to use numbers to make decisions. That’s the core of what we’re trying to achieve today. We couldn’t be there if we didn’t have users that ask us the right questions. Brian: Cool that’s really insightful. Just to maybe tie it off at the end and maybe you can’t share this but what’s your home run? What is your holy grail look like? Is there a place you guys know you want to get? Maybe it’s the lack of data or you don’t have access to the data in order to provide that service. Do you guys have kind of a picture of where it is you want to take the service? Julien: What is very noble about our goal at Next Big Sound specifically is we’re here to help artists. The North Star would be to make sure that any artist at any time in their career is doing everything they can do to play more shows, to reach to more people, and to make sure their music is heard. Brian: Nice. I guess it’s like you’re already there, just maybe the level of quality and improving that experience over time, that’s your goal. It’s not so much that there’s so much unobtainable thing at this moment. Is that kind of how you see it? Julien: I think the more we don’t feel just a data analytics tool, the more we’re getting to that goal. I really hope we get to a point where people don’t need to be data analysts to look at data. We’re always going to provide a very customizable tool for the data-savvy because they know what they need more than we can ever do it for them. We want to make sure that for everyone else, we can just make it very easy and as simple as a click for them to do something that’s going to impact them positively. Brian: Cool, man. This has been really exciting to have you on the show. Julien, can you tell the listeners where can they find you on the interwebs? Are you on Twitter or LinkedIn? How do they find you? Julien: For sure. @julienbenatar on Twitter, nextbigsound.com is free for everyone. Actually, we made our data public recently, so if you ever want to learn more about what we do, please check it out. We try to post on our blog about what we learn through data science, through design, and share more about why we build what we build. I recommend to just check blog and do some commitment to learn more about what we do. Brian: I definitely recommend people check out the site. The fun thing is again, as you said, it’s public. If there’s a band you like or whatever, you can type in any group that you like to listen to and you can get access to those insights. Just kind of get a flavor of what the service does. I’ll put those links in the show notes as well as the data pyramid. Julien, cool. Thanks for coming on. Is there anything else do you like to add before we wrap it up? Julien: No, thank you so much. I love reading your newsletters and I’m very happy to be here. Brian: Cool. Thank you so much. Let’s do it again. Julien: Cool. Brian: Cool. Thank you. We hope you enjoyed this episode of Experiencing Data with Brian O’Neill. 
If you did enjoy it, please consider sharing it with #experiencingdata. To get future podcast updates or to subscribe to Brian’s mailing list where he shares his insights on designing valuable enterprise data products and applications, visit designingforanalytics.com/podcast.
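The benchmarking and notification ideas discussed in the episode (for example, “for artists with 26,000 followers, we expect you to get around 44 mentions”) can be sketched as a simple peer-group comparison. The Python snippet below is only an editorial illustration of the concept: the data is invented, and the +/-25% follower band and 90th-percentile trigger are arbitrary choices, not Next Big Sound’s actual methodology.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented peer data: weekly Twitter mentions for many artists, plus their follower counts.
followers = rng.lognormal(mean=9, sigma=1.5, size=10_000).astype(int)
mentions = rng.poisson(lam=np.clip(followers * 0.002, 1, None))

def benchmark_mentions(my_followers: int, my_mentions: int,
                       peer_followers: np.ndarray, peer_mentions: np.ndarray) -> str:
    """Compare an artist's weekly mentions against peers of similar audience size."""
    # Peer group: artists within +/- 25% of the same follower count.
    in_band = (peer_followers > my_followers * 0.75) & (peer_followers < my_followers * 1.25)
    peers = peer_mentions[in_band]
    expected = np.median(peers)          # typical value for this audience size
    p90 = np.percentile(peers, 90)       # threshold for a "better than expected" nudge
    verdict = "notify: outperforming peers" if my_mentions > p90 else "within the normal range"
    return (f"For artists with ~{my_followers:,} followers we expect ~{expected:.0f} mentions; "
            f"you had {my_mentions} ({verdict}).")

print(benchmark_mentions(26_000, 146, followers, mentions))
```

Running it prints a sentence in the spirit of the weekly performance insight, and the percentile check stands in for the “better than expected” condition behind the notification emails.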

Experiencing Data with Brian O'Neill
006 – Julien Benatar (PM for Pandora’s data service, Next Big Sound) on analytics for musicians, record labels and performing artists

Experiencing Data with Brian O'Neill

Play Episode Listen Later Feb 12, 2019 45:01


We’re back with a special music-related analytics episode! Following Next Big Sound’s acquisition by Pandora, Julien Benatar moved from engineering into product management and is now responsible for the company’s analytics applications in the Creator Tools division. He and his team of engineers, data scientists and designers provide insights on how artists are performing on Pandora and how they can effectively grow their audience. This was a particularly fun interview for me since I have music playing on Pandora and occasionally use Next Big Sound’s analytics myself. Julien and I discussed: How Julien’s team accounts for designing for a huge range of customers (artists) that have wildly different popularity, song plays, and followers How the service generates benchmark values in order to make analytics more useful to artists How email notifications can be useful or counter-productive in analytics services How Julien thinks about the Data Pyramid when building out their platform Having a “North Star” and driving analytics toward customer action The types of predictive analytics Next Big Sound is doing Resources and Links: Julien Benatar on Twitter Next Big Sound website Next Big Sound blog The Data Pyramid model Quotes from Julien Benatar “I really hope we get to a point where people don’t need to be data analysts to look at data.” “People don’t just want to look at numbers anymore, they want to be able to use numbers to make decisions.” “One of our goals was to basically check every artist in the world and give them access to these tools and by checking millions of artists, it allows us to do some very good and very specific benchmarks” “The way it works is you can thumb up or thumb down songs. If you thumb up a song, you’re giving us a signal that this is something that you like and something you want to listen to more. That’s data that we give back to artists.” “I think the great thing today is that, compared to when Next Big Sound started in 2009, we don’t need to make a point for people to care about data. Everyone cares about data today.” Episode Transcript Brian: I’m really excited today for this episode. We have Julien Benatar on the show and he’s from a company that I’m sure a lot of people here know. You probably have had headphones on at your desk, at home, or wherever you are listening to Pandora for music. Julien , correct me if I’m wrong, you were the product manager for artist tools and insights at Next Big Sound, which is a type of data product that provides information on music listening stats to, I assume, artists’ labels as well to help them understand where their fans are and social media engagement. I love this topic. I’m also a musician, I have a profile on Next Big Sound and I feel music’s a fun way to talk about analytics and design as well because everybody can relate to the content and the domain. Welcome to the show. Did I get all that correct? Julien: Yeah, it was perfect. Brian: Cool. Tell us a little about your background. You’re from France originally? Julien: Yes, exactly. I grew up next to Paris, in Versailles more specifically, and moved to New York in 2014 to join Next Big Sound. Brian: Cool, nice. You’ve been there for about four years, something like that. You have a software engineering background and then now you’re on the product side, is that right? Julien: Exactly yes. I joined the company back when we were a startup. Software engineering was perfect, there was so much to do. 
To our move to Pandora, I moved to a product manager role around a year ago. Brian: Next Big Sound was independent and then they were acquired by Pandora. I assume there is good stuff about your data. Why did Pandora acquire you and how did they see you guys improving their service? Julien: We got acquired in 2015. The thing is, Next Big Sound was already really involved in the music industry. We already had clients like the three major labels and a lot of artists were using us to get access to their social data. I think it was a very natural move for Pandora as they wanted to get closer to creators and provide better analytics tools. Brian: For people that aren’t on the service, I always like to know who are the actual end users, the people logging in, not necessarily the management, but who sits down and what are some of the things that they would do? Who would log in to Next Big Sound and why? Julien: Honestly, it’s really anyone having any involvement into the music industry, so that can be an artist, obviously, try looking to try their socials and their audience on Pandora. But you can also be a booker trying to book artists in their town. We have a product that can really be used by many different user personas. But our core right now is really artists and labels, having contents on Pandora and trying to tell them the most compelling story about what they’re doing on the platform. Brian: When you think about designs, it’s hard to design and we talk about this on the mailing list sometimes but it’s really hard to design one great thing that’s perfect for everybody so usually you have to make some choices. Do you guys favor the artist, or the label, or as you call them,the bookers or whom I know as presenters,in the performing arts industry? Do you have a sweet spot, like you favor one of those in terms of experience? Julien: I think it’s something we’re moving towards, but it hasn’t always been this way. Like I told you, we used to be a startup or grow us to make a product that could work for as many people as possible. What is funny is we used to have an entity on Next Big Sound called Next Big Book where we used to provide the same type of service for the book industry. If anything, it’s been great to join Pandora because then we could really refocus on creators and it really allowed us to, I believe, create much better and more targeted analytics tools to really fulfill needs for specific people like artists and labels. Brian: I would assume individual artists are your biggest audience or is it really heavily used by the labels or who tends to… Julien: I think it’s pretty much the same honestly. I think the great thing today is that, compared to when Next Big Sound started in 2009, we don’t need to make a point for people to care about data. Everyone cares about data today. I think that everyone has reasons to look at their dashboards and especially for a platform like Pandora with millions of users every month. Our goal is really just telling them a story about what does it mean to be spinning on the platform and the opportunities it opens. Brian: You talked about opportunities, do you have any stories about a particular artist or a label that may have learned something from your data and maybe they wrote to you or you found out like in an interview how they reacted like, “Hey, we changed our tool routing,” or, “Hey, we decided to focus on this area instead of that area.” Do you know anything about how it’s been put into use in the wild? 
Julien: Yeah, it’s used for so many different reasons. For the people who don’t use Pandora, something I really like about the platform is it’s really about quality. As you use Pandora, you have the opportunity to thumb up or thumb down songs and as you do, you’re going to get recommended more songs like the ones you like. It’s really about making sure that you get the best songs at all times. The reality then is that for artists, their top songs on Pandora can be pretty different than their top songs on other platforms because sometimes their friends are going to be just reacting more to some part of their catalog than another one. I’ve heard many times of artists changing their playlists in looking at which songs where their fans thumbing up the most on Pandora. Brian: Could you go through that again? How would they adjust their playlist? Julien: Usually, people use Pandora as a radio service. While we already have internet today, most people are listening to the radio because they’re usually are very targeted and it just works really well. The way it works is you can thumb up or thumb down songs. If you thumb up a song, you’re giving us a signal that this is something that you like and something you want to listen to more. That’s data that we give back to artists. We tell them, “This are your most thumbed songs on Pandora. These are the songs that people engage with the most on the platform.” Looking at this data, you can actually inform them songs that they believe they should be playing more on the store. Brian: I see. A lot of it has to do with the favoriting aspect to give them idea what’s resonating with their audiences. Julien: Qualitative feedback, yes. Brian: Got it. Actually, it’s funny you mentioned the qualitative feedback. In preparation for this, I was reading an article that you guys put out back in March about a new feature called weekly performance insights, which is really cool and this actually reminds me of something that I talked about in the Designing for Analytics mailing list, which is the act of providing qualitative guides with your analytics. A lot of times they analyze for turnout quantitative data and whenever there’s an opportunity to put stuff into context or provide qualifiers, I think that’s a really good thing and you guys look like you’ve have done some really nice things here. I’ll paraphrase it and then you can jump in and maybe give us some backstory on it. One of the things that I think is really cool is there’re concepts of normalcy in here so that, if I’m an artist and I look at my numbers, I have an idea. For your Twitter mentions, for example, you say, “For artists with 26,000 followers, we expect you to get around 44 mentions.” When you show me that I have 146 mentions, I can tell that I’m substantially higher than what my social group would be. I think that’s a really fantastic concept that people not in music could try to apply as well which is, are there normalcy bans where you’d want to sit? Is there some other type of group, maybe, an industry, or apparent group, or another business unit, whatever it may be to provide some context for what these out of the blue numbers mean that don’t have any context? How did you guys come up with that and can you tell us a bit about the design process of going from maybe just showing, “You’re at 826 apples,” as compared to what? How did you move from just a number into this these kind of logical groupings where you provide the comparisons? 
Julien: I think what's really fascinating is, we really live in an age of data. As an artist, you need to be on social media for the most part. There are still a lot of artists I listen to who just decide not to be, and that's part of it, but at the same time, real big success in the music industry didn't change. It's still being on the Billboard chart, getting a Grammy, and all these things. But as we see this, we have millions of artists looking at their data every day who just are not able to understand: is it good or is it not good? Everyone starts at zero. We have a strong belief that data can only be useful when put in context. Looking at a number on its own can give you a sense of how things are doing, but it can also be misleading. An example is, a very common way to look at data is to look at a number and look at the percent change in comparison to the previous week. You've got a bunch of tables and you look at, am I growing or am I not growing. The reality is it's actually impossible to always have a positive percent change. There's no artist in the world that always does better week by week. Even Beyoncé, I can assure you that the week she released Lemonade, she had more engagement on Twitter than the week after. With that in mind, we really try to give artists a way to understand how they are doing for who they are and where they currently are in their career. Next Big Sound started in 2009. One of our goals was to basically track every artist in the world and give them access to these tools, and by tracking millions of artists, it allows us to do some very good and very specific benchmarks. For an artist, like the example you said, for instance an artist with a thousand Twitter mentions in a week: is it good or bad in comparison to their audience size? This feature came about because that's just the question we were asked. Artists want to know, is it any good? What does this number actually mean for me? That's why we really wanted to, in some ways, get out of being a content aggregator platform and really be a data analytics platform. How can we actually give information that can help artists make better decisions?

Brian: I remember the first time I got what I would call an anomaly detection email from your service, and it was about some spike in YouTube views or something like that. I thought it was fantastic for two reasons. First of all, you identified an anomalous change, and I think in this case it was a positive anomalous change. That tells me that I should log in to the tool. Secondly, you proactively delivered that to me. On the Designing for Analytics mailing list, we talk about how user experience does not necessarily live inside your web browser interface or your thick client or whatever you're using to show your analytics. Email and notifications are a big part of that. Can you tell me about how you guys arrived at when you push these things out, and maybe talk about this little anomaly detection service that you have?

Julien: It all started when we got acquired by Pandora. We decided to just invite a bunch of users and talk to them, understand how they use our product and what they think about it. We had artists, managers, and label people come over and we just talked to them, and basically they all said, "We love it." But then, by looking at their actual usage, they didn't use it that much. I guess one of their questions was, when should I be looking at my data? Everyone is very busy.
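Going back to the peer benchmark Julien describes a little earlier (an artist with 26,000 followers being told to expect around 44 mentions), here is a minimal Python sketch of that kind of comparison. The benchmark table, thresholds, and function names are illustrative assumptions for this article, not Next Big Sound's actual model.

```python
# A minimal sketch of a peer benchmark: compare an artist's weekly Twitter
# mentions to what is typical for artists with a similar follower count.
# The table and cutoffs below are invented for illustration.
from bisect import bisect_right

# Hypothetical benchmark table: (max_followers, typical_weekly_mentions),
# derived offline from many tracked artists.
BENCHMARKS = [
    (1_000, 2),
    (10_000, 15),
    (50_000, 44),
    (250_000, 180),
    (1_000_000, 900),
]

def expected_mentions(followers: int) -> int:
    """Return the typical weekly mentions for an artist of this audience size."""
    cutoffs = [max_f for max_f, _ in BENCHMARKS]
    idx = min(bisect_right(cutoffs, followers), len(BENCHMARKS) - 1)
    return BENCHMARKS[idx][1]

def benchmark_message(followers: int, actual_mentions: int) -> str:
    """Phrase the comparison the way the weekly performance insight does."""
    expected = expected_mentions(followers)
    ratio = actual_mentions / expected
    if ratio >= 1.5:
        verdict = "well above"
    elif ratio <= 0.5:
        verdict = "well below"
    else:
        verdict = "in line with"
    return (f"Artists with ~{followers:,} followers typically get ~{expected} "
            f"mentions a week; you got {actual_mentions}, which is {verdict} that.")

print(benchmark_message(26_000, 146))
```

The key design point is that the number is never shown alone: it is always paired with an expectation for the artist's own size class, which is what makes it readable to someone who is not a data analyst.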
As an artist, you need to perform, you need to write music, you need to engage with your fans, and the same goes for everyone. When should I look at data? The reality is, by being a data company, we do get all the data, we have all the numbers. We have ways to know when things are worth knowing, when artists should be acting on something. We just turned this into these email notifications. Anytime we notice that an artist is doing better than expected, we just let them know right away.

Brian: That's great. Do you do it on the opposite end too? If there's an unexpected drop, or maybe like, "Oh, you put a new track out and your socials dropped," or something like that, do you look at the negative side too, or do you tend to only promote the positive changes?

Julien: As far as pushes, we decided to only do pushes for positives. But as you mentioned weekly performance, weekly performance can give you some negative insights, like, "You're not doing as well as artists with the same size of audience as yours." The reason we didn't do it for our notifications is that anomalies are really hard to completely control. An example, for instance, is Twitter removing bots. Basically, every single artist would have had an email telling them, "You lost Twitter followers this week." It was a lot of work to really tune our anomaly detector to only send emails when something legitimate happens. That's the reason we've only decided so far to do it for positives, but we actually have been thinking about doing the same for negatives. That's another type of work.

Brian: Yeah, you're right. You have to mature these things over time. You don't want to be a noise generator.

Julien: Exactly.

Brian: Too many, and people start to ignore you. I've seen that with other data products I've worked on which just have really dumb alerting mechanisms that are very binary, or they're set at a hard threshold and just shoot out noise, and people tune it out.

Julien: I'm glad you mentioned this because this feature was in beta for a year for that specific reason.

Brian: Got it.

Julien: We had to learn the hard way. We had like a hundred beta users. They got way too many emails because anytime there was an anomaly anywhere, they would just get an email. For the most part, those weren't things that actually helped them. If a notification becomes noise, then that's absolutely against its purpose.

Brian: I don't know if everybody knows how the music business works, at least from the popular music side, but just to summarize. You have individual artists, who are actually performers. They may or may not have an artist manager, who takes care of their business affairs and represents them in negotiations with people that book shows. Then you have labels, which are sort of like an artist manager except they're really focused on the recording assets that the artist makes. They actually tend to own the recordings outright at the beginning, and then over time the artist may recoup through sales and eventually make it to ownership of the sound recordings they make. Of those three major groups, is there one that's particularly hungry, or the squeaky wheel, that is most interested in what you're doing?

Julien: I really think that within these three groups, we have a subset of users that are really into the data and into the actionability of it. I don't think it's one specific group of users. They could be all around the industry. We have the data-savvy; they really want to know.
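Returning to the notification-noise discussion above, here is a rough sketch of the kind of positive-only, tuned-threshold filter Julien describes: only email the artist when today's number is both statistically unusual and large enough in absolute terms to matter. The statistical test, thresholds, and data are illustrative assumptions, not the production detector.

```python
# A rough sketch of "only notify on meaningful positive anomalies":
# require both a high z-score against recent history and a minimum absolute
# lift, so tiny accounts and ordinary wobble don't trigger noisy emails.
from statistics import mean, pstdev

def positive_anomaly(history: list[int], today: int,
                     min_sigma: float = 3.0, min_lift: int = 50) -> bool:
    """True if today's count is unusually high versus the trailing history."""
    if len(history) < 14:          # not enough history to judge
        return False
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return today - mu >= min_lift
    z = (today - mu) / sigma
    return z >= min_sigma and (today - mu) >= min_lift

daily_plays = [120, 130, 118, 140, 125, 122, 131, 128,
               135, 126, 129, 133, 124, 127]
print(positive_anomaly(daily_plays, today=260))  # True: worth a notification
print(positive_anomaly(daily_plays, today=150))  # False: lift too small to email
```

In practice, a year of beta tuning like the one described in the interview mostly means adjusting numbers like `min_sigma` and `min_lift` (and adding special cases, such as platform-wide events like a bot purge) until the emails that go out are ones people act on.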
We have some users that actually would rather get more notifications, even if they then need to figure out on their end what is right and what is wrong. But since we have such a wide user base of different types of people, we decided to go on the conservative side and make sure to only share things that we thoroughly validated through all of our filters.

Brian: I assume that your group reports into some division of Pandora, I'm not sure of that. Are you reporting into a technology group, like an IT, or a business unit, or marketing? Where do you guys fit in the Pandora world?

Julien: We're part of the creator tools. I don't really have a perfect answer to this.

Brian: Okay. I guess my main question being, when we talk about designing services, we talk about both user experience, which is the end user thing, and about business or organizational success. I'm curious, how does Pandora measure whether Next Big Sound is delivering value? I can understand, and I'm sure an artist can understand, how the artists value it: understanding how my music is moving my audiences, et cetera. Is there a way that Pandora looks at it? Are they interested in just time spent? The analytics on the analytics, so to speak, is what I'm asking about. How do you guys look at it, like, "Hey, this is really doing a good job," or whatever? Do you know how that's looked at?

Julien: To be honest, I think you said it right. Our goal is to help artists make their decisions through data, and having artists use the platform is currently the way Pandora sees us doing a good job. Actually, it hasn't changed that much since our acquisition. One of our main KPIs for the past couple of years is something I would call insights consumed. Just making sure that our users, artists, anyone using Next Big Sound are consuming data. That can be them logging into the website, or that can be them opening one of our notifications. So far that was our main KPI. We're trying to work on some more targeted KPIs, potentially like actions taken, which would be the North Star, but we're still working on how to do that right.

Brian: Do you guys facilitate actions, so to speak, directly in the tool, or do those actions really take place outside of the context of Next Big Sound?

Julien: There are actions that artists can take through the other creator tools provided by Pandora. For instance, artists have the ability to send audio messages to anyone listening to them. If they go on tour in the US, they can have targeted messages in every single city they're going to play. If anyone listens to them there, they can just click and buy a ticket. We're working to make sure that artists are aware of these tools because they are free and they generally help them grow their careers. But regarding external actions, so far we don't have any one-click way to tweet at the right time to the right people or with the right content or anything like this.

Brian: Sure, and that's understood. Not every analytics product is going to have a direct actionable insight that comes right out of it. You guys may be filling in a longer-term picture about trends, and maybe helping a certain artist who is releasing music fairly frequently get an idea of what stuff is working and resonating, and what stuff is not. I can understand that. There may not be a button to click as a result immediately.

Julien: That's the goal, though. Everything we do right now is going towards this objective.
Maybe I can tell you a little about the way we think about data, and that can make more sense of it. In order to work on any new feature, we follow this concept called the data pyramid. It's something that you can Google. There's a Wikipedia page for it. Let me explain to you how it works. The data pyramid is a pyramid formed of four layers, built upon each other, each representing an increasingly useful application of data. At the bottom of the pyramid we have the data layer: any sort of data that we may have. For our case, Pandora data, Twitter, Facebook, just getting the numbers, getting the raw data. On top of it, we have the information layer. The information layer is going to be the ways you have to visualize this data. I guess it's like the very broad sense of analytics. We're going to give you tables, graphs, pie charts, you name it. We're giving you ways to craft stories about this data, but it's on you to figure it out. Then on top of it we have what we call the knowledge layer. That's where things start to get interesting. The knowledge layer is the contextual part of it. It's like, "What does this number actually mean?" It takes industry expertise. For instance, the way we're going to go about it for musicians and their data may be different than any other industry. Something like weekly performance insights is a perfect example of the knowledge layer: what does it mean for me, as a musician with a hundred fans, to get two mentions this week? Same for notifications. It's telling you that you should be looking at your data right now because something is happening. That's how we get to the North Star and the last part of the data pyramid, which is intelligence. The goal of intelligence is actionability. Now that I understand what this number means in the specific context, what should I be doing? Following your question, everything we're trying to do here is to get to a point where we can just send an email to an artist and tell them, "Hey, you should be doing this right now because, with all the data that we have, we believe that this is going to have the highest impact for you."

Brian: It's really fascinating that you just outlined this data pyramid. I actually haven't heard of this before. It made me think of, it's not a joke, but in the music community, I'm also a composer, and when we write stuff, the kind of running joke is that nothing is new. Your idea for this new song or this new melody you're composing probably came before you; you heard it somewhere before. I wrote a post on my list that was pretty much exactly the same thing, except I was calling the knowledge layer "insight." Data being this raw format, and information being the first human-readable format; that's like, say, going from raw data to a chart, a histogram. Now I have a line on a chart, and then the insight layer being, I have a line on the chart and another line comparing it to, like you said, an average, or my social group, or a parent group, or some taxonomy, or an index. Then the action, or the prescription for what to do, or the prediction, those kinds of things that lead you to an action, which would be that fourth stage. You're like, "Oh, is this really a new concept?" It's like, "Nope. Someone else already thought of that." I totally want to go read about this data pyramid.

Julien: That's amazing.

Brian: I'll find that link to the data pyramid and I'll put that in the show notes for sure. I thought that was really funny.
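Here is a toy Python walk up the four layers Julien describes (data, information, knowledge, intelligence). The numbers, thresholds, and function names are invented purely to make the layering concrete; they are not the product's actual logic.

```python
# A toy walk up the data pyramid: data -> information -> knowledge -> intelligence.
weekly_mentions = [12, 9, 15, 40, 38, 41]          # DATA: raw counts per week

def information(series):
    """INFORMATION: shape the raw numbers so a human can read them."""
    return {"latest": series[-1], "trend": series[-1] - series[-2]}

def knowledge(series, peer_expected):
    """KNOWLEDGE: put the number in context for this kind of artist."""
    info = information(series)
    info["vs_peers"] = round(info["latest"] / peer_expected, 1)
    return info

def intelligence(series, peer_expected):
    """INTELLIGENCE: turn the context into a suggested action."""
    k = knowledge(series, peer_expected)
    if k["vs_peers"] >= 1.5 and k["trend"] >= 0:
        return ("Mentions are well above your peer group and rising; "
                "this could be a good week to announce shows.")
    return "Nothing unusual this week; no action needed."

print(intelligence(weekly_mentions, peer_expected=20))
```

The point of the exercise is that each layer only adds value on top of the one below it: the recommendation at the top is only as good as the raw counts and the peer context underneath.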
Julien: It's funny that you called it insight, because that's what we call a lot of the features we're working on. The way we define an insight is bite-size, noteworthy, shareable content. How can we cut through the noise of all of the data and only give you exactly what you should be looking at? That's how we got to notifications and weekly performance insights: this is the one thing you should be looking at.

Brian: I understand what you're getting at there. The insights are, like you said, bite-size chunks of interesting stats that someone can put some kind of context around. That's great. One of the things I liked, too, that you talked about was you said, "Oh, we got like a hundred users, like a beta group, and that kind of inspired some of this." Your product responds to: how do we help people know when to come and look at our service? I think this is really good because one of the problems that I see with clients and people on the list is low engagement. This is especially true for internal analytics teams. Low engagement can be a symptom of a difficult product: it doesn't provide the right information at the right time, it may not have a lot of utility, or there's resistance to change. People have done something the old way and they don't want to do it the new way. One of the recipes you can follow if you're trying to do a redesign or increase engagement is to involve the people that are going to use the service in the design process, both the stakeholders as well as the end customers. This is especially true, again, for the internal analytics people: your customers are other employees and your colleagues. By engaging them in the design process, they're much more likely to want to change whatever they're doing now. I loved how you guys did some research. Now I want to ask, do you frequently do either usability testing or interviews? Is that an ongoing thing at your company, or is it really just in front of a big feature release or something like that? How do you guys do this research? Can you tell me about that?

Julien: Of course. It's constant. We haven't released any major feature without doing some heavy user testing. I'm very lucky to be working with two designers, Justin and Anabelle, who are very user-focused. Honestly, if you come to our office, at least every week we're going to have some user interview going on: talking to users, showing them prototypes, and seeing how they play with them.

Brian: So you're doing a lot of testing, it sounds like. That's fantastic.

Julien: At the same time, it's always about finding the right balance, because you could be overtesting things too. We really focus user testing on new things and make sure that the feature we are working on actually answers the user story that we intended.

Brian: I don't know how involved you get participating in these, but do you have any interesting stories or anecdotes that you got from one of those that you could share?

Julien: Let me think. I do participate in a lot of them, but I'm not sure I have an example right now.

Brian: Are most of the people you interview current users of Next Big Sound, or do you tend to focus on maybe artists that haven't experienced the service yet, or do you mix it up?

Julien: We mix it up. We mostly engage with users that we already have, but then we can decide to go with users that haven't used the platform for a while, or more active users if we want to understand how we're useful in their day-to-day.
What I would say is that, surprisingly, it's very easy to get users to chat about their experience with the product. I didn't assume that we would get so many responses when we tried to have people come over or just hop on a Zoom call to check a new feature.

Brian: I'm glad you actually mentioned that, because I think in some places recruiting is perceived to be difficult, and it probably isn't. Maybe you haven't done it before, but as I tell a lot of my clients, a lot of people love to have someone listen to them talk, to tell them all about their life and what's wrong with it, and how it could be better with their tools. They love having someone listen to them, especially if they know that their feedback is going to influence a tool or a service that they're using. They tend to be pretty engaged with it. I find it's really rare that I do an interview with a client's customer and they don't want to be included in a future round, like, "Hey, when we redesign the service, can we come back to you and show you what we've done?" "Oh, I'd love to do that!" Everybody wants to get engaged with it. There are places where recruiting can be difficult when it's hard to access the users; in some of the enterprise software space that can be an issue sometimes. But generally, if you can get access to them, they tend to be pretty willing to participate. I'm glad you mentioned that.

Julien: I think the great part about testing with current users on the platform is to actually show them prototypes with real data, not just show them an abstract idea that we want to work on. As soon as they can see what we're working on applied to their own career as musicians, for instance, that can lead to fascinating discussions.

Brian: You made a really good point on the real data thing. I remember as far back as 10 years ago or whenever, I used to work at Fidelity Investments, and we would see this issue when we were working on the retail site for investors. When you show a portfolio that, for example, has Apple stock trading at $22 in it, you're not really there to test the price of Apple stock, you might be testing something entirely different, but the customer can't get past what is going on. They're so stuck on this thing. It's all fake seed data in the prototype. The story here, if you're a listener, is that when you test, it's important to have at least realistic data. You don't want to have noise in the test of whatever you're studying, or else you can end up on this tangent. Try to make the numbers look somewhat realistic if you're using quantitative data. In some cases, people can be taught to roleplay. Pretend you're Drake or pretend you're some big artist, and then they can get their head around why they have billions of streams instead of the thousands which they're used to.

Julien: Absolutely. That also helps us just build better products, because the reality is we have a lot of artists with maybe 10 plays in a month. As we build visualizations, something that we built while looking at Drake's data is sometimes not going to work as intended for a smaller artist. Having real data involved as soon as possible in the design process has been such a game changer for us. We really have a multidisciplinary team involved in the research and design of everything we do. I'm working with a data scientist, a data engineer, a web engineer, and a designer on a daily basis. Obviously, we all have our things to do.
But as we get into creating something new, we just make sure to have someone helping us get the real data, interview the right users, and create prototypes as soon as possible. Working with prototypes is essential to building useful data analytics tools.

Brian: Yes, you do learn a lot more with a working prototype. That's not to say you can't test with lower-fidelity goods, especially early on, but for a service like yours, where the range of possible users spans both personas and scale, you've got the Drakes of the world, big major-label artists, down to really small independents, it's really important to have an idea of how your charts are going to scale and what's going to happen with the data. Even just small stuff, like how many decimal places you should be showing on a mobile device; some of the numbers might cram up.

Julien: Exactly.

Brian: All this stuff that you never think about. If you only look at one version of everything, you can end up with a mess. I'm glad that you brought that up.

Julien: I couldn't say it better. Decimals are actually something that we had to discover through real data.

Brian: To all of the technical people out there, I will say this. If I've seen one trend with engineers, it's that they love precision, and there are a lot of times when very unnecessary precision is being added to numbers, such as on charts and histograms. Histograms are usually about the trend; they're not about identifying the precise value on this date at this time. It's about the change over time. Showing what my portfolio is worth down to three digits of micro-cents or something like that is just unnecessary detail. You can probably just round to the dollar, or even hundreds of dollars, or even thousands of dollars in some cases. It actually is worse. The reason it's worse is that it adds unnecessary noise to the interface: you're providing all this ink that someone has to mentally process, and it's not really meaningful ink because the change is what's important. Think about precision when you're printing values.

Julien: This concept of noise is so essential today for any data analytics tool. There is so much data today. There is data for everything. I think it's our responsibility as a data analytics company to ask what we are actually trying to help our users do with this data; it's not just about adding new metrics. Adding new metrics usually is just going to add noise and not be helpful, in comparison to figuring out what they need to make the right decision.

Brian: Right. Complexity obviously goes up. The single verb, "add": as soon as you do that, you're generally adding complexity. One of the design tools that is not used a lot, and this is something I try to help clients with, is: what can we take away? If we're not going to cut it out entirely, can we move this feature, maybe this comparison, to a different level of detail? Maybe it's hidden behind a button click, or it's not the default. But removing some stuff is a way to simplify as well, especially if you do need to add new things. Your only weapon is not the pencil; you've got the eraser as well in the battle, so to speak.

Julien: I couldn't agree more. On Next Big Sound we have this concept of artist stages. It's a way for us to put artists into buckets by looking at their social and streaming data. It goes from undiscovered to epic. We do that by looking at all of the data we have and looking at it in context.
I don't have the numbers right now because they update on a daily basis, but every artist starts undiscovered. For instance, as they get 1,000 Facebook likes, maybe they're going to get to a promising stage. We have all of these thresholds moving every day, looking at trends across social services. But what is interesting is that, for instance, for a booker, a booker doesn't need to look at the exact number of Twitter followers for an artist. He needs to know that he's booking for a midsized venue in the city he's in, and he's probably going to be looking for promising to established artists, not looking for the mainstream to epic artists. It's always about figuring out a way to use the numbers to tell the story.

Brian: I'm totally selfishly asking for myself here, but I was immediately curious. I live in Cambridge, which is in the Boston area, and I am curious who the big artists in our area are and what the concentration is. I'm in a niche. I'm more in the performing arts market, in jazz, world music, and classical music, but I'm just curious. Is there a way to look at it by city and know what your artist community looks like? Do you guys do anything like that?

Julien: We don't currently. But I think YouTube actually has a city-level chart available. It's not something we do, because I think the users it would benefit are not the users we specifically target when we work on new features. It's more something for bookers than artists, specifically, but it's exactly the type of thing that we need to think about when we prioritize new features.

Brian: I'm curious just because the topic's fairly hot. Everybody is trying to do machine learning projects these days. I don't like the term AI because it tends to be a little bit overloaded, but are you guys using machine learning to solve any particular problems or add any new value to your service right now? Is that on your horizon?

Julien: How do you think about machine learning?

Brian: A lot of times I associate it with predictive analytics, or understanding where you might be heading instead of just using descriptive statistics. I don't know what kind of data you might have for your learning that you can feed in, but maybe there are aspects about artists that you can predict. Especially, I would think, in the pop music world, where there tends to be more commercialization of the music, where it's like, we need a two-minute dance track at this tempo specifically because DJs are going to play it. It's a very commercial thing. It's very different than what I'm used to. So I'm curious if there's a way to predict how an artist may do or what kinds of tracks are performing well. Like these tempo songs: we predict over the next six months that tech house music at 160 beats per minute is going to do really well based on the trends. I don't know. I'm throwing stuff out there. The goal, obviously, is not to be like, "Oh, Home Depot has this new hammer, let's run out and get it. We don't even know what it's for, but everyone else is buying it." That's how I joke about machine learning. You need to have a problem that necessitates that particular tool. I'm not asking to suggest that there should be some. I'm more curious as to whether or not it's a tool that you guys are leveraging at this time.

Julien: The Next Big Sound team doesn't work on features around the musical aspects of things. We really are focused on the user data.

Brian: Engagement and social.

Julien: Engagement data mostly, yes.
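Going back to the artist stages Julien describes, here is a simplified Python sketch of threshold-based bucketing. The stage names come from the conversation, but the thresholds below are invented and static, whereas Julien notes the real ones are recalculated daily from trends across services.

```python
# A simplified sketch of threshold-based artist stages (undiscovered -> epic).
# Thresholds are illustrative; the real system recomputes them daily.
STAGE_THRESHOLDS = [
    ("undiscovered", 0),
    ("promising", 1_000),
    ("established", 25_000),
    ("mainstream", 250_000),
    ("epic", 2_000_000),
]

def artist_stage(audience_size: int) -> str:
    """Return the highest stage whose threshold the artist has crossed."""
    stage = STAGE_THRESHOLDS[0][0]
    for name, minimum in STAGE_THRESHOLDS:
        if audience_size >= minimum:
            stage = name
    return stage

# A booker for a mid-sized venue might filter on stage rather than scan
# raw follower counts, as described in the interview.
roster = {"Local Act": 850, "Club Favorite": 40_000, "Arena Headliner": 5_000_000}
shortlist = [name for name, size in roster.items()
             if artist_stage(size) in ("promising", "established")]
print(shortlist)  # ['Club Favorite']
```

The value of the bucket is the same as the value of the benchmark earlier: it translates a raw count into a label a non-analyst can act on.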
But at the same time, I'm sure teams have worked on this because of the way the Music Genome Project works. We have a lot of data about the way songs are made. Regarding machine learning, on the Next Big Sound team, we actually have something that is called the prediction chart. You said predictions. We have this chart that is available every week. Basically, it really goes back to having data for a long time. Because we've had data since 2009, we've been able to see artists actually go from starting out to charting on the Billboard 200. By having all of this data, we've been able to see some trends, some things that usually happen for artists at specific times in their career, up until they get onto the Billboard 200. We actually do have some algorithms that allow us to apply this learning to all of the artists on Next Big Sound right now and have a list every week of artists that we believe are most likely to appear on the Billboard 200 chart next year.

Brian: I see. Got it. Do you track your accuracy rate on that internally and change it over time? Do you adjust the model?

Julien: Yeah, we do.

Brian: Cool. That's really neat. Tell me, this chat has been super fun. I've selfishly gotten a little indulgent because, being a musician, it's fun to talk about these two worlds that I'm really passionate about, so I could go on forever with you about this. But I'm curious. Do you have any advice for other product managers or analytics practitioners about how to design good data products and services? How to make either your own organization happy or your customers happy? Do you have any advice for them?

Julien: Yeah, of course. I guess it's all about asking questions, honestly. What is very good about working at Next Big Sound is that it all started in 2009. Maybe I can go back and tell you the story about how it started and why it's so different today. It started in 2009. It was actually a project, a university project, by the three co-founders. Basically, they were wondering about one thing: how many plays does a major artist get on the biggest music platform in the world? At that time, it was MySpace. The artist they picked was Akon. Basically, they just built a crawler, went to bed, woke up, and discovered that an artist like Akon was getting 500,000 plays on MySpace in one night in 2009. The challenge in 2009 was to get the data. That's why Next Big Sound, as it started, was for the most part really a data aggregation tool. Our goal was to get as many sources as possible and just make them easily accessible in the same place. We were very much in the information layer here: we're giving you all the numbers, and you can compare Tumblr to Vimeo, to YouTube, to Twitter, to Facebook, to Vine, to you name it, in whatever table or graph you want. The reality is, today things have changed. We don't need to fight to get data anymore. We don't need to hack our way into getting the numbers. Now, data is accessible to everyone in a very easy way. It's kind of a contract. By being an artist, you know you're going to get access to your Spotify, YouTube, Pandora, Apple Music, or any other platform data very easily, just by signing up and authenticating as an artist. That's where our goal changed. Thankfully, we don't need to convince people to care about data; we know they do already. But now the challenge is different.
Now, the challenge is to make them understand what their data means and how they can turn it into even more data, even more engagement, and even more plays. I think that's something that is very interesting because it really resonates with the questions we've been asked in the past few years, like, "What does my data mean, and when should I be looking at my data?" If anything, these two things correlate pretty well. People don't just want to look at numbers anymore; they want to be able to use numbers to make decisions. That's the core of what we're trying to achieve today. We couldn't be there if we didn't have users asking us the right questions.

Brian: Cool, that's really insightful. Just to maybe tie it off at the end, and maybe you can't share this, but what's your home run? What does your holy grail look like? Is there a place you guys know you want to get to? Maybe it's a lack of data, or you don't have access to the data in order to provide that service. Do you guys have kind of a picture of where it is you want to take the service?

Julien: What is very noble about our goal at Next Big Sound specifically is that we're here to help artists. The North Star would be to make sure that any artist, at any time in their career, is doing everything they can do to play more shows, to reach more people, and to make sure their music is heard.

Brian: Nice. I guess it's like you're already there; it's maybe the level of quality and improving that experience over time that's your goal. It's not so much that there's some unobtainable thing at this moment. Is that kind of how you see it?

Julien: I think the more we don't feel like just a data analytics tool, the more we're getting to that goal. I really hope we get to a point where people don't need to be data analysts to look at data. We're always going to provide a very customizable tool for the data-savvy, because they know what they need better than we ever could. We want to make sure that, for everyone else, we can make it very easy, as simple as a click, for them to do something that's going to impact them positively.

Brian: Cool, man. This has been really exciting, to have you on the show. Julien, can you tell the listeners where they can find you on the interwebs? Are you on Twitter or LinkedIn? How do they find you?

Julien: For sure. @julienbenatar on Twitter, and nextbigsound.com is free for everyone. Actually, we made our data public recently, so if you ever want to learn more about what we do, please check it out. We try to post on our blog about what we learn through data science and through design, and share more about why we build what we build. I recommend just checking out the blog and digging in to learn more about what we do.

Brian: I definitely recommend people check out the site. The fun thing is, again, as you said, it's public. If there's a band you like or whatever, you can type in any group that you like to listen to and you can get access to those insights, just to get a flavor of what the service does. I'll put those links in the show notes, as well as the data pyramid. Julien, cool. Thanks for coming on. Is there anything else you'd like to add before we wrap it up?

Julien: No, thank you so much. I love reading your newsletters and I'm very happy to be here.

Brian: Cool. Thank you so much. Let's do it again.

Julien: Cool.

Brian: Cool. Thank you. We hope you enjoyed this episode of Experiencing Data with Brian O'Neill.
If you did enjoy it, please consider sharing it with #experiencingdata. To get future podcast updates or to subscribe to Brian’s mailing list where he shares his insights on designing valuable enterprise data products and applications, visit designingforanalytics.com/podcast.

Podcast over Fotografie
S02E10 - Evaluating exposure with the histogram

Podcast over Fotografie

Play Episode Listen Later Feb 9, 2019 15:32


In this episode I explain how you can evaluate exposure using the histogram. Below are a few links you can go through, with illustrations that make it all a bit easier to follow. https://zoom.nl/artikel/cursussen/24020-histogram-lezen-voor-beginners.html https://vinkacademy.nl/fotografietips/belichting-checken-check-het-histogram/ https://www.digitalefotografietips.nl/basiscursus/histogram/ https://www.cameranu.nl/nl/advies/foto/wat-is-het-histogram

The Learn Stage Lighting Podcast
Episode 35 – Lighting for Video (Part 2)

The Learn Stage Lighting Podcast

Play Episode Listen Later Sep 25, 2018 19:59


In this week's episode, we continue with our Lighting for Video Series. If you haven't already, check out Lighting for Video (Part 1) where I break down the meanings of Color Temperature, Color Quality, and Histograms. Lighting News! (1:05) This week in Lighting news there were a couple of announcements from the Trade Show, PLASA, happening in Europe. […]

The Learn Stage Lighting Podcast
Episode 33 – Lighting for Video (Part 1)

The Learn Stage Lighting Podcast

Play Episode Listen Later Sep 11, 2018 15:28


In this week's episode, I'm excited to dive in and discuss Lighting for Video. I'll break down the meanings of Color Temperature, Color Quality, and Histograms. Lighting News (1:15) This week in Lighting News, I wanted to share an article I read in PLSN Magazine. Brad Schiller writes the Feeding the Machine segment and […]

ptguru's podcast
The Histogram

ptguru's podcast

Play Episode Listen Later May 17, 2018 16:04


My favourite quality tool by far is the histogram. In this edition of the podcast, I talk about how a histogram works and why it is such a valuable tool.

Python for Everybody (Audio/PY4E)
9.2 Building Histograms

Python for Everybody (Audio/PY4E)

Play Episode Listen Later Sep 30, 2016 9:26


We look at how we can use dictionaries to count the frequencies of many things at the same time. We learn how the key and value are related in a dictionary and examine the get method to retrieve values from a Python dictionary.
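A minimal example of the counting pattern this episode describes: a dictionary plus get() to tally word frequencies, with a made-up sample sentence.

```python
# Count word frequencies with a dictionary; get() supplies a default of 0
# for words that have not been seen yet.
text = "the quick brown fox jumps over the lazy dog the end"

counts = dict()
for word in text.split():
    counts[word] = counts.get(word, 0) + 1

print(counts)                        # {'the': 3, 'quick': 1, ...}
print(max(counts, key=counts.get))   # most common word: 'the'
```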

Python for Everybody (Video/PY4E)
9.2 Building Histograms

Python for Everybody (Video/PY4E)

Play Episode Listen Later Sep 29, 2016 9:26


We look at how we can use dictionaries to count the frequencies of many things at the same time. We learn how the key and value are related in a dictionary and examine the get method to retrieve values from a Python dictionary.