Attend any conference on any topic and you will hear people say afterward that the best and most informative discussions happened in the bar after the show. Read any business magazine and you will find an article saying something along the lines of "Business Analytics is the hottest job category out there, and there is a significant lack of people, process, and best practice." In this case the conference was eMetrics, the bar was… multiple, and the attendees were Michael Helbling, Tim Wilson, and Jim Cain (Co-Host Emeritus). After a few pints and a few hours of discussion about the cutting edge of digital analytics, they realized they might have something to contribute back to the community. This podcast is one of those contributions. Each episode is a closed topic and an open forum - the goal is for listeners to enjoy listening to Michael, Tim, and Moe share their thoughts and experiences, and hopefully take away something to try at work the next day. We hope you enjoy listening to the Digital Analytics Power Hour.
Michael Helbling, Tim Wilson, and Moe Kiss
The Digital Analytics Power Hour podcast is an absolute gem for anyone interested in digital analytics. Hosted by Michael, Tim, and Moe, this podcast offers a unique blend of expert insights, relatable discussions, and hilarious banter. Whether you're new to the field or a seasoned professional, this podcast provides valuable knowledge and perspectives on various topics related to digital analytics.
One of the best aspects of this podcast is the depth and breadth of the topics covered. The hosts and their guests delve into a wide range of subjects, including reporting vs analytics, data visualization, user experience, marketing trends, and much more. They bring on industry experts who share their experiences and provide practical advice that listeners can apply directly to their own work. The hosts' ability to explain complex concepts in a clear and accessible manner makes it easy for both beginners and experts to follow along.
Another great aspect is the authenticity of the hosts. They don't pretend to know everything but instead approach each topic with humility and curiosity. They openly discuss the challenges they face in their own roles as analysts and offer practical solutions based on their own experiences. This honesty creates a refreshing atmosphere where listeners can relate to the struggles and triumphs discussed.
That being said, one potential downside is that occasionally there can be too much banter and inside jokes that may not be understood by all listeners. While it adds to the charm of the podcast, some listeners may prefer more focused discussions without as many tangents.
In conclusion, The Digital Analytics Power Hour podcast is an excellent resource for anyone interested in digital analytics. The hosts' expertise, combined with their entertaining style, makes for an engaging listening experience. Whether you're looking for practical advice or simply want to stay up-to-date with industry trends, this podcast has something for everyone.
No matter how simple a metric's name makes it sound, the details are often downright devilish. What is a website visit? What is revenue? What is a customer? Go one level deeper with a metric like customer acquisition cost (CAC) or customer lifetime value (CLV or LTV, depending on how you acronym), and things can get messy in a hurry. In some cases, there are multiple "right" definitions, depending on how the metric is being used. In some cases, there are incentive structures to thumb the definitional scale one way or another. In some cases, a hastily made choice becomes a well-established, yet misguided, norm. In some cases, public companies simply throw their hands up and stop reporting a key metric! Dan McCarthy, Associate Professor of Marketing at the Robert H. Smith School of Business at the University of Maryland, spends a lot of time and thought combing through public filings and the disclosures therein, trying to make sense of metric definitions, so he was a great guest to have on to dig into the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Data that tracks what users and customers do is behavioral data. But behavioral science is much more about why humans do things and what sorts of techniques can be employed to nudge them to do something specific. On this episode, behavioral scientist Dr. Lindsay Juarez from Irrational Labs joined us for a conversation on the topic. Nudge vs. sludge, getting uncomfortably specific about the behavior of interest, and even a prompting of our guest to recreate and explain a classic Seinfeld bit! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
We finally did it: devoted an entire episode to AI. And, of course, by devoting an episode entirely to AI, we mean we just had GPT-4o generate a script for the entire show, and we just each read our parts. It's pretty impressive how the result still sounds so natural and human and spontaneous. It picked up on Tim's tendency to get hot and bothered, on Moe's proclivity for dancing right up to the edge of oversharing specific work scenarios, on Michael's knack for bringing in personality tests, on Val's patience in getting the whole discussion to get back on track, and on Julie being a real (or artificial, as the case may be?) Gem. Even though it includes the word "proclivity," this show overview was entirely generated without the assistance of AI. And yet, it's got a whopper of a hallucination: the episode wasn't scripted at all! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
How is an outlier in the data like obscenity? A case could be made that they're both the sort of thing where we know it when we see it, but that can be awfully tricky to perfectly define and detect. Visualize many data sets, and some of the data points are obvious outliers, but just as many (or more) fall in a gray area—especially if they're sneaky inliers. z-score, MAD, modified z-score, interquartile range (IQR), time-series decomposition, smoothing, forecasting, and many other techniques are available to the analyst for detecting outliers. Depending on the data, though, the most appropriate method (or combination of methods) for identifying outliers can change! We sat down with Brett Kennedy, author of Outlier Detection in Python, to dig into the topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
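To make the menu of techniques above a little more concrete, here is a minimal sketch (with made-up numbers, not anything from the episode) of three of the methods listed: the classic z-score, Tukey's IQR fences, and the MAD-based modified z-score. It also shows a case where the plain z-score gets masked by the very outlier it is meant to find.

```python
import numpy as np

def zscore_outliers(x, threshold=3.0):
    """Classic z-score: flag points more than `threshold` std devs from the mean."""
    x = np.asarray(x, dtype=float)
    return np.abs((x - x.mean()) / x.std()) > threshold

def iqr_outliers(x, k=1.5):
    """Tukey's fences: flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def modified_zscore_outliers(x, threshold=3.5):
    """MAD-based modified z-score: robust, because the median and MAD
    are barely affected by the outlier itself."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return np.abs(0.6745 * (x - med) / mad) > threshold

data = [10, 12, 11, 13, 12, 11, 95]  # 95 is the obvious outlier

print(zscore_outliers(data))           # misses it: the outlier inflates the std dev
print(iqr_outliers(data))              # catches it
print(modified_zscore_outliers(data))  # catches it
```

Note how the plain z-score fails here: with only seven points, the single extreme value inflates the mean and standard deviation enough to hide itself, which is exactly the "the most appropriate method can change" point from the episode.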
Do you cringe at the mere mention of the word, "insights"? What about its fancier cousin, "actionable insights"? We do, too. As a matter of fact, on this episode, we discovered that Moe has developed an uncontrollable reflex: any time she utters the word, her hands shoot up uncontrollably to form air quotes. Alas! Our podcast is an audio medium! What about those poor souls who got hired into an "Insights & Analytics" team within their company? Egad! Nonetheless, inspired by an email exchange with a listener, we took a run at the subject with Chris Kocek, CEO of Gallant Branding, who both wrote a book and hosts a podcast on the topic of insights! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Why? Or… y? What is y? Why, it's mx + b! It's the formula for a line, which is just a hop, a skip, and an error term away from the formula for a linear regression! On the one hand, it couldn't be simpler. On the other hand, it's a broad and deep topic. You've got your parameters, your feature engineering, your regularization, the risks of flawed assumptions and multicollinearity and overfitting, the distinction between inference and prediction... and that's just a warm-up! What variables would you expect to be significant in a model aimed at predicting how engaging an episode will be? Presumably, guest quality would top your list! It topped ours, which is why we asked past guest Chelsea Parlett-Pelleriti from Recast to return for an exploration of the topic! Our model crushed it. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
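For anyone who wants to see y = mx + b become an actual fitted model, here is a minimal ordinary-least-squares sketch. The five data points and the single predictor are made up purely for illustration; they are not from the episode or from any model of episode engagement.

```python
import numpy as np

# Made-up data: five observations of a single predictor and an outcome.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Fit y = m*x + b by ordinary least squares: the design matrix is [x, 1],
# and lstsq finds the (m, b) minimizing the sum of squared residuals.
A = np.column_stack([x, np.ones_like(x)])
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = m * x + b
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"y = {m:.2f}x + {b:.2f}, R^2 = {r_squared:.3f}")
```

Everything the episode warns about (flawed assumptions, multicollinearity, overfitting) lives in the gap between this two-line fit and a model you would actually trust for inference or prediction.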
In celebration of International Women's Day, this episode of Analytics Power Hour features an all-female crew discussing the challenges and opportunities in AI projects. Moe Kiss, Julie Hoyer, and Val Kroll dive into this AI topic with guest expert Kathleen Walch, who co-developed the CPMAI methodology and the seven patterns of AI (super helpful for your AI use cases!). Kathleen has helpful frameworks and colorful examples to illustrate the importance of setting expectations upfront with all stakeholders and clearly defining what problem you are trying to solve. Her stories are born from the painful experiences of AI projects being run like application development projects instead of the data projects that they are! Tune in to hear her advice for getting your organization to adopt a data-centric methodology for running your AI projects—you'll be happier than a camera spotting wolves in the snow!
Every listener of this show is keenly aware that they are enabling the collection of various forms of hyper-specific data. Smartphones are movement and light biometric data collection machines. Many of us augment this data with a smartwatch, a smart ring, or both. A connected scale? Sure! Maybe even a continuous glucose monitor (CGM)! But… why? And what are the ramifications both for changing the ways we move through life for the better (Live healthier! Proactive wellness!) and for the worse (privacy risks and bad actors)? We had a wide-ranging discussion with Michael Tiffany, co-founder and CEO of Fulcra Dynamics, that took a run at these topics and more. Why, it's possible you'll get so excited by the content that one of your devices will record a temporary spike in your heart rate! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
We all know that data doesn't speak for itself, but what happens when the instruments of measurement themselves contain flaws or gaps that impede our ability to measure what matters? Turning to our intuition and triangulating with what's happening in the broader macro environment can often sharpen our understanding of our customers' ever-changing choices, opinions, and actions. Thankfully we had Erika Olson, co-founder of fwd. — which in our opinion is essentially the Freakonomics of marketing consultancies — join Tim, Moe, and Val for this discussion to dive into some real-world examples of things that are inherently hard to measure and ways to overcome those challenges. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Every so often, one of the co-hosts of this podcast co-authors a book. And by “every so often” we mean “it's happened once so far.” Tim, along with (multi-)past guest Dr. Joe Sutherland, just published Analytics the Right Way: A Business Leader's Guide to Putting Data to Productive Use, and we got to sit them down for a chat about it! From misconceptions about data to the potential outcomes framework to economists as the butt of a joke about the absolute objectivity of data (spoiler: data is not objective), we covered a lot of ground. Even accounting for our (understandable) bias on the matter, we thought the book was a great read, and we think this discussion about some of the highlights will have you agreeing! Order now before it sells out! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
The start of a new year is a great time for reflection as well as planning for the year ahead. Join us for this special bonus episode where we talk through some of our favorite learnings and takeaways from our 2024 listener survey and some of the ways we've already been able to put that feedback into practice! We also have some freebies and helpful nuggets to share with our listeners, so be sure to tune in to learn more. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Every year kicks off with an air of expectation. How much of our professional life in 2025 is going to look a lot like 2024? How much will look different, but in ways we can already pretty well anticipate? What will surprise us entirely—the unknown unknowns? By definition, that last one is unknowable. But we thought it would be fun to sit down with returning guest Barr Moses from Monte Carlo to see what we could nail down anyway. The result? A pretty wide-ranging discussion about data observability, data completeness vs. data connectedness, structured data vs. unstructured data, and where AI sits as an input, an output, and a processing engine. And more. Moe and Tim even briefly saw eye to eye on a thing or two (although maybe that was just a hallucination). For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Ten years ago, on a cold dark night, a podcast was started, 'neath the pale moonlight. There were few there to see (or listen), but they all agreed that the show that was started looked a lot like we. And here we are a decade later with a diverse group of backgrounds, perspectives, and musical tastes (see the lyrics for "Long Black Veil" if you missed the reference in the opening of this episode description) still nattering on about analytics topics of the day. It's our annual tradition of looking back on the year, albeit with a bit of a twist in the format for 2024: we took a few swings at identifying some of the best ideas, work, and content that we'd come across over the course of the year. Heated exchanges ensued, but so did some laughs! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Data storytelling is a perpetually hot topic in analytics and data science. It's easy to say, and it feels pretty easy to understand, but it's quite difficult to consistently do well. As our guest, Duncan Clark, co-founder and CEO of Flourish and Head of Europe for Canva, described it, there's a difference between "communicating" and "understanding" (or, as Moe put it, there's a difference between "explaining" and "exploring"). Data storytelling is all about the former, and it requires hard work and practice: being crystal clear as to why your audience should care about the information, being able to boil the story down to a single sentence (and then expand from there), and crafting a narrative that is much, much more than an accelerated journey through the path the analyst took with the data. Give it a listen and then live happily ever after! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
There's data, data everywhere, including in the media! Data often gets collected, analyzed, published in a study, covered by a journalist, and then distilled down to a headline. The opportunities for lost-in-translation (or lost-in-simplification? Lost-in-summarization?) misfires are many. We tried an experiment—each of the available co-hosts brought some headlines that made them raise an eyebrow, and we tested our own data literacy (data skepticism) with a real-time review. The parallels to the day-to-day work of an analyst were many! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
KPIs? Really? It's 2024. Can't we just ask Claude to generate those for us? We say… no. There are lots and lots of things that AI can take on or streamline, but getting meaningful, outcome-oriented alignment within a set of business partners as they plan a campaign, project, or initiative isn't one of them! Or, at least, we're pretty sure that's what our special guest for this episode would say. He's been thinking about (and ranting about) organizations' failure to take goal establishment, KPI identification, and target-setting seriously enough for years (we found a post he wrote in 2009 on the subject!). He also really helped us earn our explicit tag for this episode — scatologically and onanistically, we're afraid. But solid content nonetheless, so hopefully you can hear past that! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Judging by the number of inbound pitches we get from PR firms, AI is absolutely going to replace most of the work of the analyst some time in the next few weeks. It's just a matter of time until some startup gets enough market traction to make that happen (business tip: niche podcasts are likely not a productive path to market dominance, no matter what Claude from Marketing says). We're skeptical. But that doesn't mean we don't think there are a lot of useful applications of generative AI for the analyst. We do! As Moe posited in this episode, one useful analogy: getting an analyst to use generative AI effectively is like getting a marketer who has been living in an MTA world to use MMM effectively (it's more nuanced and complicated). Our guest (NOT from a PR firm solicitation!), Martin Broadhurst, agreed: it's dicey to fully embrace generative AI without some understanding of what it's actually doing. Things got a little spicy, but no humans or AI were harmed in the making of the episode. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
For the first time since they've been a party of five, all of the Analytics Power Hour co-hosts assembled in the same location. That location? The Windy City. The occasion? Chicago's first ever MeasureCamp! The crew was busy throughout the day inviting attendees to "hop on the mic" with them to answer various questions. We covered everything from favorite interview questions to tips and tricks, with some #hottake questions thrown in for fun. During the happy hour at the end of the day, we also recorded a brief live show, which highlighted some of the hosts' favorite moments from the day. Listen carefully and you'll catch an audio cameo from Tim's wife, Julie! And keep an eye on the MeasureCamp website to find the coolest way to spend a nerdy Saturday near you (Bratislava, Sydney, Dubai, Stockholm, Brussels, and Istanbul are all coming up before the end of the year!). For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
To data analyst, or to data science? To individually contribute, or to manage the individual contributions of others? To mid-career pivot into analytics, or to… oh, hell yes! That last one isn't really a choice, is it? At least, not for listeners who are drawn to this podcast. And this episode is a show that can be directly attributed to listeners. As we gathered feedback in our recent listener survey, we asked for topic suggestions, and a neat little set of those suggestions were all centered around career development. And thus, a show was born! All five co-hosts—Julie, Michael, Moe, Tim, and Val—hopped on the mic to collaborate on some answers in this episode. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
It's human nature to want to compare yourself or your organization against your competition, but how valuable are benchmarks to your business strategy? Benchmarks can be dangerous. You can rarely put your hands on all the background and context since, by definition, benchmark data is external to your organization. And you can also argue that benchmarks are a lazy way to evaluate performance, or at least some co-hosts on this episode feel that way! Eric Sandosham, founder and partner at Red & White Consulting Partners (and prolific writer), along with Moe, Tim, and Val break down the problems with benchmarking and offer some alternatives to consider when you get the itch to reach for one! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
While we don't often call it out explicitly, much of what data we collect, and how much of it, is driven by a "just in case" mentality: we don't know exactly HOW that next piece of data will be put to use, but we'd better collect it to minimize the potential for future regret about NOT collecting it. Data collection is an optionality play—we strive to capture "all the data" so that we have as many potential options as possible for how it gets crunched somewhere down the road. On this episode, we explored the many ways this deeply ingrained and longstanding mindset is problematic, and we were joined by the inimitable Matt Gershoff from Conductrics for the discussion! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Broadly writ, we're all in the business of data work in some form, right? It's almost like we're all swimming around in a big data lake, and our peers are swimming around in it, too, and so are our business partners. There might be some HiPPOs and some SLOTHs splashing around in the shallow end, and the contours of the lake keep changing. Is lifeguarding…or writing SQL…or prompt engineering to get AI to write SQL…or identifying business problems a job or a skill? Does it matter? Aren't we all just trying to get to the Insights Water Slide? Katie Bauer, Head of Data at Gloss Genius and thought-provoker at Wrong But Useful, joined Michael, Julie, and Val for a much less metaphorically tortured exploration of the ever-shifting landscape in which the modern data professional operates. Or swims. Or sinks? For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
We're seeing the title "Analytics Engineer" continue to rise, and it's in large part due to individuals realizing that there's a name for the type of work they've found themselves doing more and more. In today's landscape, there's truly a need for someone with some data engineering chops and an eye towards business use cases. We were fortunate to have one of the co-authors of The Fundamentals of Analytics Engineering, Dumky de Wilde, join us to discuss the ins and outs of this popular role! Listen in to hear more about the skills and responsibilities of the role, some fun analogies to help explain to your grandma what AEs do, and even tips for how individuals in this role can communicate the value and impact of their work to senior leadership! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
A claim: in the world of business analytics, the default/primary source of data is real world data collected through some form of observation or tracking. Occasionally, when the stakes are sufficiently high and we need stronger evidence, we'll run some form of controlled experiment, like an A/B test. Contrast that with the world of healthcare, where the default source of data for determining a treatment's safety and efficacy is a randomized controlled trial (RCT), and it's only been relatively recently that real world data (RWD), meaning data available outside of a rigorously controlled experiment, has begun to be seen as a useful complement. On this episode, medical statistician Lewis Carpenter, Director of Real World Evidence (there's an acronym for that, too: RWE!) at Arcturis, joined Tim, Julie, and Val for a fascinating compare and contrast and caveating of RWD vs. RCTs in a medical setting and, consequently, what horizons that could broaden for the analyst working in more of a business analytics role. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
How good are humans at distinguishing between human-generated thoughts and AI-generated…thoughts? Could doing an extremely unscientific exploration of the question also generate some useful discussion? We decided to dig in and find out with a show recorded in front of a live audience at Marketing Analytics Summit in Phoenix! With Michael in the role of Peter Sagal, Julie, Tim, and Val went head-to-GPU by answering a range of analytics-oriented questions. Two co-hosts delivered their own answers, and one co-host delivered ChatGPT's, and the audience had to figure out which was which. Plus, a bit of audience Q&A, which included Michael channeling his inner Charlie Day! This episode also features the walk-on music that was written and performed live by Josh Silverbauer (no relation to Josh Crowhurst, the producer of this very podcast who also wrote and recorded the show's standard intro music; what is it about guys named Josh?!). For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Application Programming Interfaces (APIs) are as pervasive as they are critical to the functioning of the modern world. That personalized and content-rich product page with a sub-second load time on Amazon? That's just a couple-hundred API calls working their magic. Every experience on your mobile device? Loaded with APIs. But, just because they're everywhere doesn't mean that they spring forth naturally from the keystrokes of a developer. There's a lot more going on that requires real thought and planning, and the boisterous arrival of AI to mainstream modernity has made the role of APIs and their underlying infrastructure even more critical. On this episode, Moe, Julie, and Tim dug into the fascinating world with API Maven Marco Palladino, the co-founder and CTO at Kong, Inc. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Professional development is a big topic—way more than just thinking about what job you want in five years and setting milestones along the way. Thankfully we had Helen Crossley, Senior Director of Marketing Science at Meta, join Michael, Moe, and Val to dive deep into this topic! We explored how to set really good, meaningful goals, the challenges across each stage from junior analyst to leader, and how to give great feedback. We also spent quite a bit of time discussing the new challenges that becoming a first-time manager presents and, hopefully, some helpful tips and thought exercises to help out our listeners who are or are about to be faced with this challenge. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
From running a controlled experiment to running a linear regression. From eyeballing a line chart to calculating the correlation of first differences. From performing a cluster analysis because that's what the business partner asked for to gently probing for details on the underlying business question before agreeing to an approach. There are countless analytical methodologies available to the analyst, but which one is best for any given situation? Simon Jackson from Hypergrowth Data joined Moe, Julie, and Tim on the latest episode to try to get some clarity on the topic. We haven't figured out which methodology to use to analyze whether we succeeded, so you'll just have to listen and judge for yourself. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
You know you've arrived as a broadcast presence when you open up the phone lines and get your first, "Long time listener, first time caller" person dialing in. Apparently, we have not yet arrived, because no one opened with that when they sent in their questions for this show. Our question is: why not?! Alas! That is a question not answered on this episode. Instead, we got the whole crew together and fielded questions from listeners that were actually worth attempting to answer, and we had a blast doing it! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
In order to produce a stellar analysis, have you ever asked a team to tear down a Tesla and count every last washer and battery cell? No? Well, our guest this week, Jason DeRise, joined Tim, Julie, and Val to share that story and others about how alternative data can be used to enrich analyses. Luckily, you don't have to have a Wall Street-sized budget to tap into the power of alternative data. Looking just outside your tried-and-true data sets and methodologies to see how you might add to your mosaic of understanding of a business question can be powerful! In this episode we talk about some of the considerations and approaches for when you put down that hammer and see that the world around you is more than just a bunch of nails. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
It happens occasionally. Someone in the business decides they need to just take the analysis into their own hands. That leaves the analyst conflicted — love the interest and enthusiasm, but cringe at the risk of misuse or misinterpretation. Occasionally (rarely!), though, such a person goes so deep that they come out the other side having internalized everything from Deming's obsession with variability all the way through the Amazon Weekly Business Review (WBR) process. And they've written extensively about it. Cedric Chin was such a person, and we had a blast digging into his exploration of statistical process control — including XmR charts — and mulling over the broader ramifications and lessons therein. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
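As a rough illustration of the XmR (individuals and moving range) charts mentioned above, here is a sketch of computing the natural process limits. The "weekly signups" data is made up, and this is a simplification: in practice you would typically establish the limits from a stable baseline period rather than from all points, including the one you're testing.

```python
import numpy as np

def xmr_limits(values):
    """Compute XmR chart control limits from a series of individual values.

    Uses the standard constant 2.66 (= 3 / d2, with d2 = 1.128 for
    subgroups of size 2) applied to the average moving range.
    """
    x = np.asarray(values, dtype=float)
    mr = np.abs(np.diff(x))             # moving ranges between consecutive points
    x_bar, mr_bar = x.mean(), mr.mean()
    return {
        "center": x_bar,
        "ucl": x_bar + 2.66 * mr_bar,   # upper natural process limit
        "lcl": x_bar - 2.66 * mr_bar,   # lower natural process limit
        "mr_ucl": 3.268 * mr_bar,       # upper limit for the moving range chart
    }

weekly_signups = [52, 48, 55, 50, 53, 49, 51, 74]  # hypothetical metric
limits = xmr_limits(weekly_signups)
print(limits)
```

The point of the limits is the Deming-flavored distinction the episode digs into: the first seven values bounce around inside the natural process limits (routine variation, don't react), while the final value lands above the upper limit, signaling something genuinely changed.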
Data communities have played a major role in the careers of many analysts, but times they are a-changin'. We're not sure if we're different, if the communities' purposes and missions have shifted, or both. One thing we are confident in, though, is that Pedram Navid was absolutely the right guest to invite on to the show to explore the topic alongside Michael, Moe, and Val. His blog post last year that discussed how "this used to be fun" was a great reflection on some of the environmental trends influencing the communities we've come to know and love. But don't worry, it's not all doom and gloom! The crew all agreed that there are still places and ways for data practitioners to connect and support each other, even if it doesn't look identical to the early aughts. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Long-time listeners to this show know that its origin and inspiration was the lobby bar of analytics conferences—the place where analysts casually gather to unwind after a day of slides interspersed with between-session conversations initiated awkwardly and then ended abruptly when the next session begins. Of the many conferences where this occurs, Marketing Analytics Summit (née eMetrics) is the one in which this show is most deeply rooted. And, we'll be recording an episode in front of a live audience with all of the North America-based co-hosts on Friday, June 7, 2024, in Phoenix, Arizona at the next one! To call that out, including announcing a promo code for any listeners interested in joining us for the event, Michael, Val, and Tim turned on the mics for a bonus episode with a little reminiscing about past experiences at the conference, including Val's mildly disturbing retention of dates and physical artifacts. Visit the show page for, well, not much more than you see here.
As a general rule, analysts are drawn to precision: let's understand the business problem and then go figure out how the data can be acquired and crunched to provide something specific and useful. Fair enough. Where, then, do pencil and paper and 10-second sketches fit in? Or hastily and collaboratively drawn flippy chart or whiteboard sketches? We could draw you a picture to explain, but podcasts are an audio medium, so, instead, we brought on the illustrious illustrator, consultant, and author, Dan White. From triangles, to rolling snowballs, to trees, to Venn diagrams, to the conjoined triangles of success, this episode paints a pretty clear picture of the power of the quick sketch! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
They say an analysis is only as good as the question that was asked, so for our 2024 International Women's Day Episode, Julie, Moe, and Val were joined by Taylor Buonocore Guthrie to discuss how to ask better questions. Every analyst is naturally curious, but the thoughtfulness that Taylor puts into what type of questions to ask, how to ask them, and when to ask them to get the optimal response is truly an art form. Instead of drilling the five-whys the next time you are gathering context with a business partner for an analysis or conducting discovery interviews, try prompting them with, "Can you walk me through your thinking?" or "What else is important for me to know?" to gather the right context and clarify your understanding. We can't wait for you to hear all of the practical advice and suggestions for things you might consider incorporating into your repertoire! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Is it just us, or does it seem like we're going to need to start plotting the pace of change in the world of analytics on a logarithmic scale? The evolution of the space is exciting, but it can also be a bit dizzying. And intimidating! There's so much to learn, and there are only so many hours in a day! Why did we choose that [insert totally unrelated field of study] degree program?! These questions and more—including a quick explanation of statistical bootstrapping for Tim's benefit, which is NOT the same as bootstrapping a startup or the Bootstrap framework—are the subject of the latest episode of the show, with Kirsten Lum, the CTO of storytellers.ai, joining us to discuss strategies and tactics for the technically-non-technical analyst to thrive in an increasingly technical analytics world. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
The data has problems. It ALWAYS has problems. Sometimes they're longstanding and well-documented issues that the analyst deeply understands but that regularly trip up business partners. Sometimes they're unexpected interruptions in the data flowing through a complex tech stack. Sometimes they're a dashboard that needs to have its logic tweaked when the calendar rolls into a new year. The analyst often finds herself on point with any and all data problems—identifying an issue when conducting an analysis, receiving an alert about a broken data feed, or simply getting sent a screen capture by a business partner calling out that something looks off in a chart. It takes situational skill and well-tuned judgment to figure out what to communicate, when, and to whom when any of these happen. And if you don't find some really useful perspectives from Julie, Michael, and Moe on this episode, then we might just have a problem with YOU! (Not really.) For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
The backlog of data requests keeps growing. The dashboards are looking like they might collapse under their own weight as they keep getting loaded with more and more data requested by the business. You're taking in requests from the business as efficiently as you can, but it just never ends, and it doesn't feel like you're delivering meaningful business impact. And then you see a Gartner report from a few years back that declares that only 20% of analytical insights deliver business outcomes! Why? WHY?!!! Moe, Julie, and Michael were joined by Kathleen Maley, VP of Analytics at Experian, to chat about the muscle memory of bad habits (analytically speaking), why she tells analysts to never say "Yes" when asked for data (but also why to never say "No," either), and much, much more! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Aptiv, Baidu, Cerebras, Dataiku… we could keep going… and going… and going. If you know what this list is composed of (nerd), then you probably have some appreciation for how complex and fast-moving the AI landscape is today. It would be impossible for a mere human to stay on top of it all, right? Wrong! Our guest on this episode, Matthew Lynley, does exactly that! In his Substack newsletter, Supervised, he covers all of the breaking news in a way that's accessible even if you aren't an MLE (that's a "machine learning engineer," but you knew that already, right?). We were thrilled he stopped by to chat with Julie, Tim, and Val about some of his recent observations and discuss what the implications are for analysts and organizations trying to make sense of it all. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
For those who celebrate or acknowledge it, Christmas is now in the rearview mirror. Father Time has a beard that reaches down to his toes, and he's ready to hand over the clock to an absolutely adorable little Baby Time when 2024 rolls in. That means it's time for our annual set of reflections on the analytics and data science industry. Somehow, the authoring of this description of the show was completely unaided by an LLM, although the show did include quite a bit of discussion around generative AI. It also included the announcement of a local LLM based on all of our podcast episodes to date (updated with each new episode going forward!), which you can try out here! The discussion was wide-ranging beyond AI: Google Analytics 4, Marketing Mix Modelling (MMM), the technical/engineering side of analytics versus the softer skills of creative analytical thought and engaging with stakeholders, and more, as well as a look ahead to 2024! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
It would be a fool's errand to try to list out every expectation for an analyst's role, but where should you draw the line? How specific do you need to be? And how can you document the unspoken expectations without stepping into micromanagement? Tim, Moe, and Julie took a run at hashing these questions out in our most recent episode so you don't have to rely solely on that generic role expectations grid you got from HR. Even though this topic is about setting expectations for other analysts, the conversation took quite a few introspective turns about how your internal standards are calibrated and what experiences along the way shaped them. As usual, you can expect some great stories about expectation setting gone wrong and what happens when you make Tim have a conversation about feelings, miss one of Moe's deadlines, or use the wrong font in one of Julie's deliverables! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
To mentor, or not to mentor, that is the question: whether 'tis more productive to hole up in a cubicle and toil away without counsel, or to hold close one's experience to the benefit of no one else. Perchance, the author of this show summary should have checked with one of his mentors before attempting a Shakespearian angle. But, he didn't, and the show title is pretty self-explanatory, so we'll just roll with it. On this episode, Michael, Val, and Tim chatted about mentorship: its many flavors, its many uses, and what has and has not worked for them both when being mentored as well as when being mentors. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
It's been said that, in this world, nothing is certain except death and taxes, so why is it so hard to communicate uncertainty to stakeholders when delivering an analysis? Many stakeholders think an analysis is intended to deliver an absolute truth; that if they have just enough data, a smart analyst, and some fancy techniques, the decision they should make will emerge! In this episode, Tim, Moe, and Val sat down with Michael Kaminsky, co-founder of Recast, to discuss strategies such as scenario planning and triangulation to help navigate these tricky conversations. Get comfortable with communicating the strengths and drawbacks of your different methodological approaches to empower decision making from your stakeholders! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Have you ever noticed that recipes that include estimates of how long it will take to prepare the dish seem to dramatically underestimate reality? We have! And that's for something that is extremely knowable and formulaic — measure, mix, and cook a fixed set of ingredients! When it comes to analytics projects, when you don't know the state of the data, what the data will reveal, and how the scope may shift along the way, answering the question, "How long will this take?" can be downright terrifying. Happy Halloween! Whether you are an in-house analyst or working in an agency setting, though, it's a common and reasonable question to be asked. In this episode, Michael, Moe, and Val dive into the topic, including sharing some stories of battle scars and lessons learned along the way. As a bonus, Sensei Michael explains how he uses Aikido on his clients to avoid scope creep! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Seemingly straightforward data sets are seldom as simple as they initially appear. And, many an analysis has been tripped up by erroneous assumptions about either the data itself or about the business context in which that data exists. On this episode, Michael, Val, and Tim sat down with Viyaleta Apgar, Senior Manager of Analytics Solutions at Indeed.com, to discuss some antidotes to this very problem! Her structured approach to data discovery asks the analyst to outline what they know and don't know, as well as how any biases or assumptions might impact their results before they dive into Exploratory Data Analysis (EDA). To Viyaleta, this isn't just theory! She also shared stories of how she's put this into practice with her business partners (NOT her stakeholders!) at Indeed.com. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Most of the time, we think of analytics as taking historical data for a business, munging it in various ways, and then using the results of that munging to make decisions. But, what if the business has no (or very little) historical data… because it's a startup? That's the situation venture capitalists — especially those focused on early-stage startups — face constantly. We were curious as to how and where data and analytics play a role in such a world, and Sam Wong, a partner at Blackbird Ventures, joined Michael, Val, and Tim to explore the subject. Hypotheses and KPIs came up a lot, so our hypothesis that there was a relevant tie-in to the traditional focus of this show was validated, and, as a result, the valuation of the podcast itself tripled and we are accepting term sheets. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
It's a lot of work to produce each episode of this show, so we were pretty sure that, by this time, we would have just turned the whole kit and caboodle over to AI. Alas! It seems like the critical thinking and curiosity and mixing of different personalities in a discussion are safely human tasks… for now. Dr. Brandeis Marshall joined Michael, Julie, and Moe for a discussion about AI that, not surprisingly, got a little bleak at times, but it also had a fair amount of hope and handy perspectives through which to think about this space. We recommend listening to it rather than running the transcript through an LLM for a summary! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
One of the biggest challenges for the analyst or data scientist is figuring out just how wide and just how deep to go with stakeholders when it comes to key (but, often, complicated) concepts that underpin the work that's being delivered to them. Tell them too little, and they may overinterpret or misinterpret what's been presented. Tell them too much, and they may tune out or fall asleep… and, as a result, overinterpret or misinterpret what's been presented. On this episode, Dr. Nicholas Cifuentes-Goodbody from WorldQuant University joined Julie, Val, and Tim to discuss how to effectively thread that particular needle. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
We were curious about… curiosity. We know it's a critical trait for analysts, but is it an innate characteristic, a teachable skill, or some combination of both? We were curious. How can the breadth and depth of a candidate's curiosity be assessed as part of the interview process? We were curious. Who could we kick these questions (and others) around with? We were NOT curious about that! MaryBeth Maskovas, founder and Principal Consultant at Insight Lime Analytics, joined Michael, Julie, and Tim to explore the topic. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
This topic was such a big deal that we managed to have no guests, and yet we had five people on the mic! Why? Because this episode doubles as a marker of a shift in the show itself. Beyond that, though, we had a lively discussion about how every business stakeholder professes to be committed to being data-driven. That should make every stakeholder super easy to work with, right? And, yet, analysts often find themselves struggling to get on the same page with their counterparts due to the realities of the data: what it can and can't do and how it is most effectively worked with. Not a small topic! There were even pop quizzes (feel free to let us know how you'd score the answers)! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
On the one hand, analysts generally know and accept that part of their responsibility is to not only conduct analyses, but to effectively communicate the results of those analyses to their stakeholders. On the other hand, "communication" can feel like a pretty squishy and nebulous skill. On this episode, Michael, Moe, and Tim tackled that nebulosity (side note: using obscure words is generally not an effective communication tactic). For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.