In this episode, we are delighted to have Arvind Balasundaram as our guest. He heads the Commercial Insights & Analytics (I&A) division at Regeneron, a pioneering biotechnology firm renowned for its groundbreaking medical innovations. Our conversation centers on the pivotal role of data, analytics, and artificial intelligence (AI) in the Life Sciences industry. Arvind shares his insights on the three significant shifts currently unfolding in data and analytics, and he delves into the crucial consideration of quantity versus quality when making data-driven decisions. Tune in to gain a deeper understanding of how the dynamics of this industry are transforming.

IN THIS EPISODE:
[4:06] Arvind discusses significant changes in data and analytics and focuses on the first major change.
[8:18] The second major change: the evolution from small data to the big data we can now store.
[10:43] The third change: how much data we have to parse before we get to knowledge, and how quickly we have to turn it around from input to output.
[12:41] Arvind shares his thoughts on quality versus quantity of insights and what insights and analytics are all about.
[17:56] Arvind discusses how to make choices about which data you select.
[22:34] Arvind explains why decision-makers need to drop their biases regarding data curation, and he gives an example.
[25:55] Arvind discusses the realm of possibility versus probability.
[33:44] Arvind explains the challenges he sees ahead and how he sees mindset and culture changing.

KEY TAKEAWAYS:
Having more data is good, but what matters more is getting the data you need, not simply a greater quantity.
The cost of data can be prohibitive, so the selection of the data used is critical.
Data that includes sentiment would be a more accurate basis for insights and analytics in healthcare.

BIOGRAPHY: Arvind Balasundaram
Arvind leads the Commercial Insights & Analytics (I&A) group at Regeneron, a leading biotechnology company that invents life-transforming medicines for people with serious diseases. In this capacity, he oversees the implementation of deep insight frameworks and analytical capabilities to help bring the power of science and new medicines to the patients who need them. A primary focus of the I&A group is to help realize Regeneron's mission to do well by doing good. Prior to Regeneron, Arvind spent time at Sanofi, Johnson & Johnson, Bristol-Myers Squibb, and Pfizer, mainly on the early pipeline and launch side of the business. During his tenure at these companies, he participated in several industry-leading pharma brand launches spanning diverse therapeutic areas. Arvind earned his MBA from the Owen Graduate School of Management at Vanderbilt University. He also recently attained the Award of Achievement in Digital Analytics from the University of British Columbia (in association with the Digital Analytics Association) and a Certification in Data Science from UC Irvine (in association with Predictive Analytics World). He is a former Associate in the Applied Analytics Capstone program at Columbia University and a member of the Design Thinking Advisory Board at Rutgers University. Arvind is a past President of the Pharmaceutical Management Science Association (PMSA).
His interests include exploring new capabilities to enrich the understanding of customer experience and engagement in omnichannel business ecosystems, staying engaged in the evolution of machine intelligence and AI, and identifying choice contexts and biases in noisy decision-making environments.

YOUR HOST: JASMEET SAWHNEY
Jasmeet Sawhney is a life sciences industry executive, marketing leader, and serial entrepreneur with deep roots in technology and data analytics. He is currently the global head of marketing at Axtria. Jasmeet has over 20 years of experience in the life sciences domain and has helped build and scale three successful companies. He has received several company and individual awards, including Inc 500, Deloitte Fast 500, Crain's NY Fast 50, NJBiz Fast 50, Business of the Year, SmartCEO Future 50, Top CMO, Forty Under 40, and many more.
Jasmeet Sawhney - LinkedIn | Axtria on LinkedIn | Arvind Balasundaram - LinkedIn
Eric Siegel is a leading consultant and former Columbia University professor. He is the founder of the popular Predictive Analytics World and Deep Learning World conference series. In this episode, Eric shares his decades of experience in predictive analytics. He discusses why ML is useful and how predictive analytics has been used in business. Eric shares his view on prescriptive analytics and AI, and also explains uplift-modeling concepts and why uplift modeling is hard yet so powerful.

Eric's Recommendations
Books:
Competing on Analytics: Updated with a New Introduction: The New Science of Winning by Thomas H. Davenport and Jeanne G. Harris, 2017
Applied Predictive Analytics: Principles and Techniques for the Professional Data Analyst by Dean Abbott
Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die by Eric Siegel
Papers:
Sculley, David, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, and Dan Dennison. "Hidden technical debt in machine learning systems." Advances in Neural Information Processing Systems 28 (2015).
Elder IV, John F. "The generalization paradox of ensembles." Journal of Computational and Graphical Statistics 12, no. 4 (2003): 853-864.
“Even if you don't drink one bit of the A.I. Kool-Aid and you're a senior expert practitioner in machine learning, when you're trying to start a new initiative at your corporation, you're still very liable to make this extremely prevalent mistake that leads to a lack of deployment. And the mistake is that we all, on some level, are fetishizing the technology.” - Eric Siegel

In this episode of Data Chats, Chris Richardson interviews Eric Siegel, Ph.D., a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is also the founder of the Predictive Analytics World and Deep Learning World conference series, which has served more than 17,000 attendees since 2009.

They discuss:
How to use A.I. to improve your organization's capabilities and performance
Common mistakes business leaders make when trying to understand machine learning
Why top data scientists fail to make successful models most of the time
How operational changes lead to improved data analysis
What separates a great organization from the rest when it comes to predictive analytics
The necessity of socialization and buy-in to successfully implement predictive analytics
Ethical implications and risks of machine learning

Continue Learning | Data Science for Business Leaders
Data Science for Business Leaders shows you how to partner with data professionals to uncover business value, make informed decisions, and solve problems. Learn More
IBF on Demand, sponsored by Arkieva. Learn about Swiftcast, machine-learning-enabled demand forecasting from Arkieva: http://bit.ly/swifcast
IBF's industry-leading S&OP Assessments assess your company's maturity level and provide tailor-made recommendations to improve specific areas, all designed to take you to the next level in your S&OP evolution. Click here for more information: https://bit.ly/3d58dMT

What is predictive analytics and how does it work? I bring in Eric Siegel to help answer that question because two Erics are better than one! He is the author of the bestseller 'Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die', a former Columbia University professor, founder of Predictive Analytics World, and a rapper (seriously!). Demand planning will become the home of predictive analytics, so let's get ready for it now.
05:00 What is predictive analytics?
06:00 The overlap between forecasting and predictive analytics.
07:13 Why is predictive analytics revolutionizing business?
09:20 Is predictive analytics as complicated as it sounds?
11:30 How machine learning works.
13:25 Logistic regression.
15:33 Big Data and training data.
15:58 Finding the right predictive model.
20:20 Ensemble models and the wisdom of averages.
23:10 Uplift modeling/persuasion modeling to create the future.
27:15 How to gain knowledge in predictive analytics.
33:00 My new book on predictive analytics (coming soon!).
34:35 Arkieva's Swiftcast.
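Since logistic regression and uplift modeling both appear in the topic list above, here is a minimal sketch of the common "two-model" approach to uplift (persuasion) modeling, assuming scikit-learn. This is one standard formulation shown for illustration, not necessarily the one Eric teaches, and all file and column names are hypothetical.

```python
# Two-model uplift sketch: score customers by how much an offer changes
# their purchase probability, not by purchase probability alone.
# Hypothetical columns: got_offer (1/0 from a past A/B test), bought (1/0).
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("campaign_history.csv")
features = ["recency_days", "frequency", "monetary"]

treated = df[df["got_offer"] == 1]
control = df[df["got_offer"] == 0]

# Fit one response model per group.
m_treated = LogisticRegression().fit(treated[features], treated["bought"])
m_control = LogisticRegression().fit(control[features], control["bought"])

# Uplift = P(buy | offer) - P(buy | no offer): target the "persuadables"
# rather than customers who would buy (or refuse) regardless.
prospects = pd.read_csv("prospects.csv")
prospects["uplift"] = (
    m_treated.predict_proba(prospects[features])[:, 1]
    - m_control.predict_proba(prospects[features])[:, 1]
)
print(prospects.sort_values("uplift", ascending=False).head())
```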
Martin Szugat is the owner and managing director of Datentreiber GmbH. Before Datentreiber, he founded and ran several other companies, including ventures in software development, social media marketing, and data analysis. Since 2014, Martin Szugat has been the program director of the Predictive Analytics World and Deep Learning World conferences in Germany, and he is a co-founder of 42AI, a marketplace for artificial intelligence.

All topics at a glance:
What does Datentreiber do? (from 08:40)
What is Design Thinking about? (from 14:34)
Design Thinking at Datentreiber. (from 17:46)
Where does a data strategy begin and where does it end? (from 25:41)
How long does it take for a company to have a data strategy in place? (from 29:29)
Should you start on a data strategy even if you have no in-house people to implement it? (from 32:44)
What makes a good data strategy, and what are the common pitfalls? (from 37:28)
Examples of successful data strategy implementations from client projects. (from 47:12)
Changes brought about by the corona crisis? (from 52:13)
Eric Siegel, Ph.D. is founder of the Predictive Analytics World and Deep Learning World conference series, executive editor of “The Predictive Analytics Times,” and author of “Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die.” A former Columbia University professor and host of the Dr. Data Show web series, Siegel is a renowned speaker and educator who has been commissioned for more than 100 keynote addresses across multiple industries. Eric is best known for making the “how” and “why” of predictive analytics (aka machine learning) understandable and captivating to his audiences.

In our chat, we covered:
The value of defining business outcomes and end users' needs prior to starting the technical work of predictive modeling, algorithms, or software design.
The idea of using data prototypes before engaging in data science to determine where models could potentially fail, saving time while improving your odds of success.
The first and most important step of Eric's five-step analytics deployment plan.
Getting multiple people aligned and coordinated about pragmatic considerations and practical constraints surrounding ML project deployment.
The score (1-10) Eric gave the data community on its ability to turn data into value.
The difference between decision support and decision automation, and what the Central Intelligence Agency's CDAO thinks about these two methods for using machine learning.
Understanding how human decisions are informed by quantitative predictions from predictive models, and what's required to deliver that information in a way that aligns with decision-makers' needs.
How Eric likes to bring agility to machine learning by deploying and scaling models incrementally to mitigate risk.
Where the analytics field currently stands in its overall ability to generate value in the last mile.

Resources and Links:
Machine Learning Week
#experiencingdata
PredictiveAnalyticsWorld.com
ThePredictionBook.com
Dr. Data Show
Twitter: @predictanalytic

Quotes from Today's Episode
“The greatest pitfall that hinders analytics is not to properly plan for its deployment.” — Brian, quoting Eric
“You don’t jump to number crunching. You start [by asking], ‘Hey, how is this thing going to actually improve business?’” — Eric
“You can do some preliminary number crunching, but don’t greenlight, trigger, and go ahead with the whole machine learning project until you’ve planned accordingly, and iterated. It’s a collaborative effort to design, target, define scope, and ultimately greenlight and execute on a full-scale machine learning project.” — Eric
“If you’re listening to this interview, it’s your responsibility.” — Eric, commenting on whose job it is to define the business objective of a project.
“Yeah, so in terms of if 10 were the highest potential [score], in the sort of ideal world where it was really being used to its fullest potential, I don’t know, I guess I would give us a score of [listen to find out!]. Is that what Tom [Davenport] gave!?” — Eric, when asked to rate the analytics community on its ability to deliver value with data
“We really need to get past our outputs, and the things that we make, the artifacts and those types of software, whatever it may be, and really
This podcast interview focuses on a key aspect of driving product innovation: mastering human-centered design. My guest is Brian T. O'Neill, founder and principal of Designing for Analytics.

Brian T. O'Neill is a designer, advisor, and founder of Designing for Analytics, an independent consultancy that helps organizations design innovative products powered by data science and analytics. For over 20 years, he has worked with companies including DellEMC, Global Strategy Group, Tripadvisor, Fidelity, JP Morgan Chase, ETrade, and several SAAS startups. He has spoken internationally, giving talks at O'Reilly Strata, Enterprise Data World, the International Institute for Analytics Symposium, Predictive Analytics World, and Boston College. Brian also hosts the 5-star podcast Experiencing Data, where he reveals the strategies and activities that product, data science, and analytics leaders are using to deliver valuable experiences around data. In addition to consulting, Brian is also a professional percussionist and has performed at Carnegie Hall and The Kennedy Center.

What triggered me to invite Brian to my podcast was one of his quotes about the fact that 85% of AI, analytics, and big data projects fail. That's why we explore why this is the case, and what needs to be done differently in order to be successful: creating software products that people find worth remarking on. Here are some of his quotes:

I started to see really, really bad survey results over 10-plus years. What I'm specifically talking about here is the success rate for delivering data projects. The theme here is that the success rate for launching data initiatives hovered around 10 to 25%. So that means failure rates are up in the 75%-plus range.

My general feeling was: there's a lack of focus on the human aspect of analytics and data science projects and products right now. We're trying to use the data science and analytics hammer, and we're looking for stuff to hit. But no one's really asking: why do we need holes? Who needs a hole? And where do they need the hole? Instead, it's just hit nails wherever we can and hope that someone maybe needs a hole there.

During this interview, you will learn three things:
That a first step in making data projects succeed is to stop forgetting about the value of fun and engaging with people.
Why it is key to define the owner of value creation in your team, i.e., someone who owns the problem and the accountability for analytics and data science solutions producing value.
That we have lost the humanity aspect in solution design, and that a way to fix that and get some real wins is to spend time developing soft skills.
EP 111: Piyanka Jain (President & CEO of Aryng) talks with us about ways to build a data culture at your company -- and how to tie data literacy programs to KPIs that matter to your business. Piyanka writes for Forbes, Harvard Business Review, InsideHR, and other publications. She has been a featured speaker at American Marketing Association conferences, Microsoft Modern Workplace, Predictive Analytics World, Growth Hacker TV, GigaOm, Google Analytics User Conference and more. Learn more about her work: https://aryng.com/
In episode ten, I was joined by David Stephenson, PhD. David is a published author (of the UK bestseller "Big Data Demystified") and MD of the strategic analytics consultancy DSI Analytics. David has unique insight into what it takes to build successful analytics and data science teams, and he continues to consult with household names and smaller firms alike in this capacity. He is also the Program Chair for Predictive Analytics World in London.
Brian T. O'Neill leads the acclaimed dual-ensemble, Mr. Ho's Orchestrotica and has performed at prestigious venues in the US including Carnegie Hall, the Kennedy Center, and the Montreal Jazz Festival. In addition to being a busy independent musician, Brian is also a product designer and founder of the consultancy, Designing for Analytics, which helps enterprise companies turn data into indispensable information products and services. For over 20 years, he has worked with companies including DELL/EMC, Tripadvisor, Fidelity, NetApp, MITRE, JP Morgan Chase, ETrade and numerous SAAS startups. Today Brian focuses on helping clients create more useful, usable, profitable, and engaging decision support software and information products. Brian is also an international speaker and podcast guest, having appeared at multiple O'Reilly Strata conferences, Predictive Analytics World in Berlin, and on the IBM Analytics podcast, Making Data Simple. He also authored the Designing for Analytics Self-Assessment Guide for Non-Designers, maintains an active mailing list, and hosts the new podcast, Experiencing Data. Earlier in 2018, Brian joined the International Institute for Analytics' Expert Network as an advisor on design and UX. Our conversation covers the intersection of music, data, and technology and I hope you find it as fascinating as I did! Episode Links: http://crashandboom.com http://orchestrotica.com/presskit | brian@orchestrotica.com | @orchestrotica https://www.designingforanalytics.com | brian@designingforanalytics.com | @rhythmspice https://www.designingforanalytics.com/podcast-subscribe/
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews Jeff Todd, Senior Account Executive at Wolfram Research. Find Jeff Online: Email: jtodd@wolfram.com LinkedIn Wolfram [00:02] This is Jamin, and you’re listening to the Happy Market Research Podcast live today at Predictive Analytics World. I have Jeff Todd, the Senior Technology Expert at Wolfram Research, Inc. Jeff, thank you very much for being on the podcast today. [00:14] Thanks for having me, Jamin. [00:18] What do you think about the show? [00:20] I think it’s great. There’s a collaborative spirit that I feel like a lot of the people here... As an industry, I think people are all trying to solve a lot of hard problems now. For a while, I think they were trying to get the data all into one place. Now that they’ve got it there, they kind of have the real problem ahead of them. So I think everyone is up against the same wall. And so, rather than everyone trying to get their own competitive edge and find a way to outsmart the other people, they’re all kind of just here to learn and figure out the same problems, which is really exciting. [00:50] It definitely feels like the rising-tide principle’s applying here, where it’s a lot less cut-throat and a lot more collaborative, recognizing the fact that we’re at the very early stages of massive growth inside of major and, I call it minor (not in a negative way), but smaller organizations. At the end of the day, whether it’s optimizing your production line or creating outstanding customer experiences, data plays a key part in the entire ecosystem for success. And a large proportion of the decisions that are happening in organizations are still uninformed. In that way, I’m saying we’ve got an outsized opportunity in front of us for growth as it relates to insights inside of companies. [01:40] Yeah, I think that’s exactly right. I think, too, we see many organizations that are trying to achieve automation for a variety of tasks that have traditionally been manual tasks. And that’ll divert a lot of human resources. And you certainly hear about machine learning, neural nets, and AI as a feared disruption to folks like cab drivers and truck drivers, replacing them, and the question of what these people will do. Obviously, there’s going to be a place for those people to go. It always evolves and expands, but I think it’s very exciting that you have the type of AI and machine learning outcomes where we’ll be able to get more insights out of the data to be able to make new decisions, make new discoveries and new innovations, and kind of push things forward and have that growth that you’re describing. At the same time, I think we’ll see a lot of our world start to change a little bit as things that we’ve traditionally been used to handling by interacting with humans become less and less so, which we already see at the shopping markets and grocery stores, where we interact less and less every day with people on the way out. [02:39] Yeah, yeah. Let’s back up a little bit. You’ve been in the industry for a while. What do you see as one of the megatrends? How are we going to be different in three to five years? [02:53] Well, I think autonomous driving is going to be probably one of the first major things where most of us see a huge change in our regular day-to-day world. I think that it’s already happening.
Cars are already on the road as we can tell. I was just riding along with a friend of mine, who had a Tesla, marveling at the ingenuity of speeding down the highway [03:09] The autopilot is crazy! [03:12] It’s amazing. I was actually just transfixed on the screen watching it interpret all the cars around, and, in fact,
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews James Taylor, CEO of Decision Management Solutions. Find James Online: Email: james@decisionmanagementsolutions.com LinkedIn Decision Management Solutions [00:02] My guest today is James Taylor. We are live at Predictive Analytics World, Marketing Analytics World. His business is Decision... [00:11] Management Solutions. [00:14] Got it. You are chairing a track today. [00:18] Yeah, I’m moderating the business track. I’m kicking it off right after the keynotes, and then I’m moderating it for the next couple of days. [00:24] That’s great. Obviously, you know who’s on the panel. [00:28] Yeah, there’s a panel; there’s a whole bunch of presenters. It tends to be the ones who are not so much talking about how to build a predictive analytics model as how to use one: how to put it into production, how to get people to adopt it, what the challenges are, getting people to understand what a model does, and those non-modeling kinds of things. [00:49] Which actually, the better that goes, the bigger the lever of the model, right? I mean... [00:55] Yeah, I’d go further. I would say that if you... There’s no difference between business value and analytic value. If your analytic, however good it is, isn’t actually being used, then it has no value. [01:05] That’s right, that’s right. The ROI on that’s really easy to calculate. Now, this is not your first show. [01:11] No, I’ve been, I think, to every Predictive Analytics World. I’ve been coming here a long time. I know Eric and the Rising Media folks well. And it’s been fun to watch it grow and watch it change from really a very niche kind of show, where there’s people from financial services and credit cards, to one where all sorts of people are here. It’s great. [01:31] Now, your business, tell us a little bit about what it is that you guys do. [01:34] So, Decision Management Solutions focuses on companies that are really trying to automate decision-making. So, there’s some high-volume transaction where it’s not obvious what to do. So, you have to decide what to do for each transaction. And, obviously, that’s driving a lot of analytics. A lot of people, what they want to do is use machine learning, analytics, and AI to make a better decision about these transactions, but they’re high enough volume that you can’t just show stuff to people and hope the people can handle the transaction; you’ve got to automate it. So we help build those automated systems, into which you can embed these kinds of analytic models. [02:08] Do you have like a favorite project or ideal customer type? [02:12] Uh, my favorite project probably is around “next best offer, next best action” kinds of things. If you’re a multi-line company, you’ve got lots of different products, or your products have complex eligibility like insurance products, then it’s easy to say, “Oh, you should make the next best offer to this customer at this moment in the customer journey.” But actually figuring out what that offer is – given what they already own, what they’re allowed to buy, which products go with which other products, what the rules are – is a non-trivial problem for most companies. And those systems tend to be more fun ‘cause they’re not as heavily regulated as some other decisions.
So you don’t have to worry about the law quite as much as you otherwise would; you have to worry a little, but not a lot. The key concern is privacy, but the kinds of systems we build don’t need to know who you are; they just need to know things about you. [02:57] So, you’re not dealing with PII,
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews Tony Ayaz, CEO and Co-founder of Gemini Data Inc. Find Tony Online: Email: tony.ayaz@geminidata.com LinkedIn Gemini Data Inc. [00:02] Tony, when did you start Gemini? [00:03] We started the company in 2015. [00:06] 2015. We’re on the Happy Market Research Podcast right now. We’re at Predictive Analytics World and Marketing Analytics World, and there’s lots of worlds in this particular conference. I think there’s like twelve. Have you guys been to this conference before? [00:19] This is actually our first time at this conference. And for us, I think it’s a win because what we’re looking for is real users with real pain, a little bit beyond the typical IT folks. [00:32] Got it. So, you’re based out of San Francisco. You started the business in 2015; so, you’ve had some success obviously. Tell me a little bit about what you guys do. [00:41] Sure. At Gemini Data, we help our customers with digital transformation initiatives. What I mean by that is we help customers achieve data availability. And data availability is a necessary requirement today if you’re really looking to do something significant on digital transformation, AI, or ML initiatives. And what we mean by that is that you have to have access to data and you have to make that data available. And there’s a lot of talk about out-of-the-box machine learning solutions and things that are out there in the market. But the reality is that if you’re doing complex things and trying to run your business, you need data diversity, and you only get that through data availability. And so, what we do is we leverage the customer’s existing investments in various different data platforms: it could be in a CSV; it could be in a data lake. It doesn’t really matter to us. We apply a method called Zero Copy Data Virtualization that lets you use your data where it’s sourced, without moving or copying it or doing the complex ETL processes that we’ve all been used to for the past two to three decades, which simply don’t scale for AI. [01:47] Data diversity is a term I’ve never heard before, but it is my favorite one in this conference. Diversity is something that we’ve... we’re becoming more and more aware of, especially in the Bay Area, like Silicon Valley... I’d say globally you’re seeing... The math is that if you have more diversity in your senior leadership team, then you have a better world view, which gives you an improved advantage in the marketplace, right? [02:12] Exactly. [02:13] And what’s interesting is how you’re connecting that in with data. It isn’t about a single… right? It’s about different types of data. You mentioned CSV versus data lake, which are vastly different, like profoundly different. Your system is able to ingest both of those? [02:32] I wouldn’t say ingest; we access those systems. [02:35] Got it. [02:36] We don’t want you to move or copy the data, but we allow you to access it in a unified way. [02:40] OK, cool. So that bypasses some PII? [02:42] Yes, it bypasses it in the sense that you’re giving access to people that should have access to it. So we follow the same protocols of data access they have, as their role or authority would provide them.
But we take it a step further, looking at five years from now: Zero Trust Networks are going to be deployed, which is a new, let’s call it, security protocol or methodology, which basically changes things versus where we’re at today. Today it’s perimeter defense: I’m going to put firewalls around things; I’m going to give you access to things you should have; and then when ...
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews Satish Pala, Senior Vice President, Digital at Indium Software. Find Satish Online: Email: satish.pala@indiumsoft.com LinkedIn Indium Software [00:02] You’re listening to the Happy Market Research Podcast. I’ve already screwed up this intro a couple of times. I apologize for that to my wonderful guests. Right now, I’ve got Satish Pala, Indium Software. He is a market veteran, been in the industry for 20 years. Welcome to the podcast. [00:22] Thanks. Thank you so much. [00:23] We are live today at Predictive Analytics World and Marketing Insights World and all kinds of worlds. I think there’s 12 shows. It’s a lot of shows. That’s right, yeah. From health care... Everybody’s using insights. What do you see as one of the big trends in the industry? [00:42] So, one of the things that I see is people having a lot of data, people thinking a data analytics solution will help them, but having no direction or strategy for addressing these concerns. So they are trying to do what I call prototype-based analytics. So, they would want to spend some money, figure out what insights could come out of the data they have in a shorter cycle, maybe a month or two. And then they figure out how it is going to impact their business. And based on the business impact, they’re trying to spend more, invest more in the scheme of data analytics where they can get a positive impact on their business. So, for example, if a company that has consumer products, for example electronics, sees that the trend in electronics buying has declined, they would want to analyze this challenge. They would want to analyze why the trend is what it is in terms of consumer behavior. So they would want to take the data that they have, analyze the patterns of consumer behavior, and identify how they could fix it or, I would say, how they could work around these challenges that they’re facing. Analysis of the data, and insights from the data, can give them a picture of what they need to do in the near future. [02:04] So, Indium Software, you guys have been around awhile? [02:08] Yeah, we’ve been around for almost 20 years. And, as you said, we are veterans in IT services. We deal in quality solutions and services. Primarily, I would say, the portfolio that we serve is analytics: we have Big Data analytics, wherein we help customers do data engineering, where we bring all the data into a warehouse or data lake. We give them business intelligence solutions; we give them visualization solutions; we also offer descriptive analytics solutions; we also offer advanced analytics, predictive-modeling-type solutions. We also help them integrate these solutions into their web portals. So, one of the key challenges we have noticed is that, while they have their analytics model on a dashboard or a visualization layer, they are trying to figure out, “How do I bring this into my day-to-day life or day-to-day operations?” So that is something we really focus on and help the customers with. [03:07] You know, it used to be the case that data was far away from the person that needed it, and now it’s moving really close, even integrating insights into the decisions and workflows of the practitioners inside the brands.
[03:25] I don’t want to interfere in your questioning, but the thought you have is right. People have identified analytical models; people have insights. But how do they operationalize these insights? How do they put in the business...? [03:37] I love that term “operationalize.” [03:39] So,
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews Mike Galvin, Executive Director of Data Science Corporate Training at Metis. Find Mike Online: Email: michael@thisismetis.com LinkedIn Metis [00:02] Hi, I’m Jamin, and you’re listening to the Happy Market Research Podcast. We are live today at Predictive Analytics World. My last guest at the show is Mike at Metis. Tell me a little bit about the company. [00:17] Sure. So, as you know, in analytics and data science talent... there’s a huge gap, and there’s a large demand for it. So, that’s where we come into play. We’re a data science and analytics training company. We’re part of Kaplan. So, if you’ve heard of Kaplan, the global education company... We’re about six years old, launched organically, and we work with companies to upskill their staff, both technically and non-technically, in kind of all things data science and analytics (data literacy, tools, machine learning). [00:45] That’s awesome. I actually think data science is the No. 1 job right now, nationally; I’m not sure if it’s global, but certainly in the U.S. And there’s a big gap in terms of the need, the desire to hire from big companies and small companies, for that matter, and the available workforce. Sounds like you guys are playing a big part in, after people are hired, the subsequent improvement and ongoing skills training. [01:15] That’s part of what we do. There’s a little bit more. So, we have an accredited bootcamp that’s twelve weeks long. We have it in New York, San Francisco, Seattle, and Chicago. That is a retail consumer product for people who want to shift into or pivot their careers into data science roles. [01:30] How is that helping? [01:32] So, that helps with the talent acquisition pipeline at the entry level. Then, there’s the corporate training business, which is where I work. Within the corporate training business, we work with companies to not only upskill their existing tech talent in data science and in new areas and new tools and things like that, but also their broader workforce (sometimes even non-technical staff, from the C-suite all the way down to individual contributors) to build that literacy and fluency so that they can interact and collaborate with the data science teams more. [02:01] Oh, that’s very cool, very cool. Do you uh... On the engagement side of things, do you guys also have placement, help companies with placement or job candidates as you’re doing...? It seems like there’s that middle piece between people who want to pivot their careers, right; so, you’re training at the data camps, etc. And then, all of a sudden, there’s the need, which you’re training people internally for, right; so, the space in the middle is, “I want to hire.” [02:32] So, not directly but indirectly. So on the bootcamp side, part of that is getting people jobs. So we have an entire career support team. [02:40] Oh, you do then. [02:41] Yeah. [02:41] OK, got it. [02:41] To get people into actual data science jobs. And over the past five-and-a-half, six years, we developed a huge network of hiring partners that we work with, and this ranges from Apple and Facebook to IBM and Ooze and all the way down to smaller companies as well, depending on who it is.
We started with the bootcamp, but that hiring network is really how we kind of started getting to the data science corporate training space ‘cause we started talking to them and realized, “Hey, there’s not only this entry-level hiring partner...” [03:12] See, I think that’s really important because you’re offering really the whole product for the corporation,
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews Lawrence Cowan, Senior Partner and COO of Cicero Group. Find Lawrence Online: Email: lcowan@cicerogroup.com LinkedIn Cicero Group [00:02] Hi, I’m Jamin. You’re listening to the Happy Market Research Podcast. My guest today is Lawrence. He is a partner at Cicero Group. We are live at Predictive Analytics World, Marketing Analytics World. There’s eight worlds. I forget all the worlds, but we’re covering a gamut of health care, etc., etc. Welcome to the show, Lawrence. [00:21] Thanks, appreciate it. Thanks for having me. [00:22] Yeah, of course. You guys are exhibiting here. [00:25] We are. [00:25] What do you think about the show? [00:27] It’s been a great experience so far. I’ve been to three or four of these, and they get larger every year. I think the organizers do a great job putting on the event. And the topics continue to expand as well. So far, our interactions with the audience have been very engaging: great questions, hard questions. But we’ve enjoyed it so far. [00:50] That’s great. So, tell me a little bit about Cicero Group. What do you guys do? [00:53] You bet. So, Cicero Group is a full-service management consulting company. We emphasize data analytics and data strategy. Our roots were actually in market research. And so, we identified the value of data early on in the strategic projects we were working on. And so, we leveraged primary market research in a lot of our work and have continued to expand that into more transformation strategy and advanced analytics work as well. [01:24] That’s great. One of the terms I’ve been hearing recently at this conference (I’ve never heard it before) is data diversity. And I think that you’re hitting on that exact point, right? It’s about the primary data that’s collected and then really trying to triangulate truth, even though sometimes it’s a lot more than three points. [01:42] It really is. Yeah, so bringing new data to the table is critical in all the projects we work on. Primary research happens to be a big one, but, again, a lot of the topics people are talking about at the conference today are other sources of data: secondary data, partner data. There are so many opportunities with leveraging data these days. [02:04] Got it. Talk to me about who an ideal customer is. [02:07] That’s a great question. So, we serve a broad spectrum of industries and business functions. I think our sweet spot, I would say, is more in the sales and marketing and insights and analytics functions within organizations. And if you look at the services we’re offering, I think there’s a lot of value being provided to organizations that have recurring-subscription-type businesses, whether that’s SaaS or even B-to-C subscription-type businesses, because our expertise is really in understanding the customer journey and finding those opportunities across the customer journey to improve that experience. [02:46] Do you have a favorite customer story? [02:48] That’s a great question. There’s a lot, but one that comes to mind... This one is actually from a few years back. One of our large clients was Groupon. This was pre-IPO days. [03:00] I actually worked with them pre-IPO days. [03:03] Oh, you did. Maybe we crossed paths. You never know, yeah. So, at that time, Groupon didn’t have an insights function.
And so, we were brought in through a partnership to help them set up their customer and merchant feedback system globally. I’m going to forget the number of countries and languages, but it was in the 20s and 30s of countries and languages that we designed their ...
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews Krishna Kallakuri, Founder and CEO of diwo. Find Krishna Online: Email: kkallakuri@getdiwo.com LinkedIn diwo [00:02] You’re listening to the Happy Market Research Podcast. I’m Jamin Brazil, your host. We are live at Predictive Analytics World, Marketing Analytics World. There’s lots of worlds. I think there’s nine. Anyway, there’s health care. [00:14] Quite a few. [00:15] Yeah, quite a few. My guest today is Krishna. He is the CEO of diwo. Tell me a little bit about what diwo does. [00:24] Diwo is a cognitive decision-making platform. The word “cognitive” refers to the whole idea of taking the knowledge aspect of your data, teaching it to a machine, and having the machine help humans with their decision-making process; can it augment it? When the human is confused, can the machine help him: “This is a better choice for you today,” “This is how you should react in your business,” and “This is the possible outcome.” So, that is the whole motion around diwo. That’s exactly why we call it “data in, wisdom out.” We want to help the business community with the decision-making process, where we are able to reduce the cognitive load on the human brain, reduce cognitive confusion, and, most importantly, augment the decision-making process. [01:10] That is really interesting. I like the way that you’re framing that out: basically, making decisions a little bit easier while still using insights as the premise. So, when did you guys start the business? [01:24] So, about four-and-a-half years ago. [01:26] Fairly recent. You have a favorite customer story? [01:29] Yes, I do. We actually started our first engagement with a fast fashion retailer. When you look at the process they had for managing quite a few of their stores across the country, one of their biggest challenges was which product needs to be on the shelf at what location and at what time. When you look at these simple questions, you may think the retailers have figured it out; they have so much experience in business. But the real problem that this customer was facing is they can’t put the right product at the right time at the right location, because either the customers are not reacting to it or the product is not on the shelf at the right time, right? So, when you look at the whole business model, it was directly impacting revenues because, if a product leaves a warehouse, it’s never coming back. Either you sell it, or you promote it, or you throw it away. So the challenge that we were asked to solve for them is how to allocate the right product at the right location at the right time for the upcoming season.
So, diwo, as a platform, was able to really consolidate the customer behavior, the product behavior, and the location behavior and, most importantly, it was able to embed the complete business process around it, which means now all the merchandisers know: “OK, 20 weeks ahead, this is where I have demand”; “This is how I need to create a buy plan, which means, OK, I need 1,000 units for these 10 locations.” Now, they have enough time to procure the material; they have enough time to manufacture the product; and then, when they manufacture the product, diwo, as a platform, is able to tell the merchandisers, “OK, this is how you need to deploy,” which means, “Now, OK, you put 100 units in store A and 100 units in store B,” or whatever the case is. So, we are really closing that whole loop. This is what we mean by taking insights to a whole new level, right? How can you help that business community start consuming analytics instead of producing analytics?
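To make the allocation step above concrete, here is a toy, demand-proportional split of a buy plan across stores. This is purely illustrative (not diwo's actual method), and all store names and numbers are made up.

```python
# Toy allocation: split a 1,000-unit buy plan across stores in proportion
# to each store's forecast demand (all figures hypothetical).
forecast = {"store_A": 840, "store_B": 620, "store_C": 340, "store_D": 200}
buy_plan_units = 1_000

total_demand = sum(forecast.values())
allocation = {
    store: round(buy_plan_units * demand / total_demand)
    for store, demand in forecast.items()
}
print(allocation)
# {'store_A': 420, 'store_B': 310, 'store_C': 170, 'store_D': 100}
```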
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews Allison Swihart, CEO and Co-founder of Syndetic. Find Allison Online: Email: allison@syndetic.co LinkedIn Syndetic [00:02] Hi, everyone. I’m Jamin Brazil, and you’re listening to the Happy Market Research Podcast. We are live today at Predictive Analytics World. I am standing with Allison, the CEO of an up-and-coming company, Syndetic. You can find information on them at Syndetic.co. Allison, thanks for being on the Happy Market Research Podcast. [00:24] Thanks for having me. [00:26] Well, let’s start with: what do you think? I mean, this is Day 2 or 3. [00:31] Day 2. [00:32] Day 2 of the event. What do you think about the event so far? [00:34] I think it’s been wonderful. It feels like there’s a really good energy here compared to some of the other conferences that I’ve been to recently. The audience is slightly more technical, which is great for us. And I am getting a lot of good feedback on the talks. And I think that the decision to kind of combine the different PAWs together into one Mega PAW... [01:01] Eight PAWs. [01:01] Were there eight? There have to be four PAWs. What kind of animal is an eight-pawed animal? I mean... There should be four PAWs. Anyway, yeah, I think it was a good decision because everyone who came for the marketing side versus the kind of deep machine learning side is mingling together, and you’re getting lots of great conversation. [01:28] It is actually interesting seeing that convergence of... One of the terms that a previous CEO I had on the podcast used is “diversified data” or “data diversification,” meaning... You see this in executive teams. So executive teams that have diverse gender, ethnicity, whatever, they outperform non-diverse groups. It was interesting having him create that connection with respect to different data. And I hadn’t actually thought of it in the way that you just articulated it, but that’s actually pretty relevant also, where you’ve got broad disciplines that are now being combined into a single event, which is kind of cool. [02:15] Yeah, yeah. And having people of different titles and at different levels in the organizations, I think, is really important for conferences. I know there’s a lot of summits where people kind of focus on just the CIO level, but I think at this event, in particular, we’ve talked to a bunch of people who are analysts, who are doing the actual work, which is really important for bringing in the perspective of the individual contributor, not just the manager. The manager might be making the decision to purchase one piece of software over another, but you need the perspective of the kind of “boots-on-the-ground” in order to, I think, make the best decision. So, I think they’ve done really well. [02:56] So, you have a relatively new company. [02:58] Brand-new. We just launched this year. [3:00] Congratulations. What month? [03:02] March. [03:03] Awesome. So literally brand new. [03:05] Literally brand new. [03:06] Tell us what your company does. [03:07] So, we are a virtual data warehouse, which means that we allow you to leave all of the data where it lives: in databases, spreadsheets, third-party services that you access via API, cloud-hosted services (say you use Salesforce). You have an HR system; you have an accounting system; plus you have an on-premise Oracle database.
Leave all that data where it is; we virtualize it into a data warehouse as if it were in the same physical location. So you can query against it, using plain SQL, and define the right slice of data for every project you want to do,
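The zero-copy idea Allison describes can be illustrated with the open-source DuckDB engine, which can run plain SQL directly over files and external sources without first loading them into a central warehouse. This is a generic sketch of the concept, not Syndetic's product, and the file and column names are hypothetical.

```python
# Query data "where it lives": join a CSV and a Parquet file in place with
# plain SQL, with no ETL step that copies data into a warehouse first.
import duckdb

con = duckdb.connect()
revenue_by_region = con.execute("""
    SELECT a.region, SUM(o.amount) AS revenue
    FROM 'crm_accounts.csv' AS a    -- file read in place
    JOIN 'orders.parquet'   AS o    -- file read in place
      ON o.account_id = a.account_id
    GROUP BY a.region
    ORDER BY revenue DESC
""").fetchdf()
print(revenue_by_region)
```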
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews Bob Selfridge, CEO and CTO of TMMData. Find Bob Online: Email: bob.selfridge@tmmdata.com LinkedIn TMMData [00:02] Hi, you’re listening to the Happy Market Research Podcast. I’m Jamin, your host. Today I’ve got Bob with TMM Data. We are live at Predictive Analytics World, Marketing Analytics World, Health Care. [00:16] There’s a giant list… [00:17] There’s a giant list. [0:18] of events. Email is in there somewhere. Deep Analytics, there’s lots of good stuff here. [00:23] The email one to me is kind of interesting. I’m like, "Gosh, email’s dying." [00:28] You think? You think so? [00:30] Yeah, open rates are decreasing. There’s actually divisions of companies now that aren’t even responding to emails. They only use Slack and other... anyway. So, Bob, thanks for being on the show. [00:42] Thank you for having me. I appreciate it. [00:44] Let’s start out. Talk a little bit about TMM Data. What do you guys do? [00:48] Sure. We’re about a 13-year-old company. I started it back in 2008 on my front porch, closed in the building, brought people in: one of those good, old-fashioned, kind of bootstrap things. TMM was originally Track My Marketing. So we were all about channel marketing and, ah... Well, at that point, it was multi-channel; now it’s omni-channel ‘cause we have to have cool new words. But being able to set this up so that you could do reporting. [01:14] That was like early, pre-social-media-focused marketing too. [01:18] Right, right. 2008: I mean, it was around, but nobody was really diving in. It was more of a fun thing that us old folks were starting to play with, more so than the day-to-day operational thing that it is now. But we started measuring that around 2012. My phrase, as I like to say, is "It’s about the data, dummy," not specifically about marketing data or analytics. And we really refocused the company to become a data-integration company. While campaign management and marketing analytics are still very much the core of what we do, we find that a lot of folks are still struggling with simple things. They’ve got spreadsheets coming in email; they’ve got, of course, 7,000 marketing technologies floating out there in the atmosphere that they need to pull data in from and merge it and meld it and marry it and all the cool phrases we use now. So, our goal is to make it easier for analysts, whether they be predictive analysts, marketing analysts, financial analysts, just analysts who are fighting this fight. We just did a survey with the Digital Analytics Association here last year: people are spending 40% to 60% of their day just copying and pasting and cleaning data before they can start doing their work. And our ultimate mission at the company, our official mission statement, is "Meet data needs painlessly," very short and sweet. [02:32] Oh, I love that. [02:33] But we want analysts to be able to come in at 8 in the morning and start doing their job instead of spending four to six hours cleaning up the data to then move to the next step to start doing their job. So, that’s kind of ultimately our goal in life. [02:47] The ROI on that is really easy to get to. That’s the nice part. [02:50] Well, it is.
Unfortunately for us, and unfortunately for some of the analysts, the really good analysts end up doing the extra work and working into the wee hours. And senior management still gets all their reports because those analysts just put in extra hours. So the ROI to the individuals on the ground... certainly the feet on the ground really know that there’s a great ROI to it ‘cause it saves them a...
Welcome to the 2019 Predictive Analytics World (PAW) Conference Series. Recorded live in Las Vegas, this series is bringing interviews straight to you from exhibitors and speakers at this year’s event. In this interview, host Jamin Brazil interviews Brian Shindurling, VP of Marketing at Big Squid. Find Brian Online: Email: bshindurling@bigsquid.com LinkedIn Big Squid [00:02] Hi, I’m Jamin, and you are listening to the Happy Market Research Podcast. We are live at Predictive Analytics World, Marketing Analytics World, and many other worlds. My guest right now is Brian, VP of Marketing at Big Squid. You guys have the coolest booth, I think, on the show floor, by the way. [00:20] Oh, thank you. [00:20] The hot pink's great. I have a kick-ass sticker that I’ve added to my bag. Thanks very much for the tchotchke stuff. [00:26] Absolutely. [00:27] Tell me a little bit about the company. [00:29] Cool. So, we’re Big Squid. We’re based in Salt Lake City, Utah. We’ve been around for ten years, actually. The company kind of evolved out of the marketing and analytics world. We’ve done a lot of business intelligence consulting in years past. And, really, the genesis of where we are today was that we were consistently hearing a lot of the same challenges from our customers: leveraging business intelligence in an analytics environment, you’re building really interesting and cool dashboards all the time, but you’re always looking at data in a historical context, which led our customers to the next questions: “What’s going to happen?” “Why is it happening?” and “What can I do about it?” [01:07] I mean, that’s the Holy Grail. [01:09] Absolutely. That’s right. So, a couple of years ago, we launched our product, which we call Kracken. It’s an automated machine-learning platform. [01:16] Bad-ass name. [01:17] Thank you, thank you. And the approach that we’ve taken is to integrate with the analytics infrastructure that BI analysts are using every day: most of the enterprise data warehouses on the market today, most of the major business intelligence platforms... We’re able to round-trip data in and out of those environments, where we’re basically just enhancing it with predictive metrics to give analysts a little bit better idea of what’s going to happen. [01:39] That’s really cool. What kind of data are you dealing with? [01:42] It depends. We deal with all kinds of different data. We’re kind of a horizontal play. We work with companies across basically any vertical that you can think of. The data that we play with is always structured, again coming out of that business intelligence environment. [01:57] So it’s been cleaned. [01:58] It’s been pretty well curated, pretty well cleaned. This is data that has been used or is being used for reporting on a day-to-day basis. So we’re lucky in that sense; there’s already been some thought behind the business questions that we’re trying to support with analytics. Yeah, structured data, set up in a way that it’s being used in reporting environments. [02:18] Who is an ideal customer? [02:20] Good question. So, our ideal customer is the BI analyst or data engineer: those who are leveraging platforms like Snowflake and/or Tableau, Looker, or Qlik, places where they’re leveraging data on a day-to-day basis to derive insights, reporting on them, and telling stories to their executive stakeholders about what’s happening in the business, what we think is going to happen, and how we should be thinking about making things better.
But they haven’t really been classically trained on data science and machine learning in the past. So what we’ve done is we’ve created a platform that enables them to very easily navigate towards that concept that Gartner calls a “citizen data scientist.” So,
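Here is what the "round-trip" pattern Brian describes might look like in its simplest form: pull a curated reporting table, append a predictive column, and write it back for the BI layer to read. This is a generic scikit-learn sketch, not Big Squid's Kracken, and all table and column names are hypothetical.

```python
# Enrich a curated BI extract with a predictive metric and round-trip it.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

history = pd.read_csv("customer_report_history.csv")  # labeled past data
features = ["tenure_months", "monthly_spend", "support_tickets"]

model = GradientBoostingClassifier().fit(history[features], history["churned"])

current = pd.read_csv("customer_report_current.csv")  # today's BI table
current["churn_probability"] = model.predict_proba(current[features])[:, 1]

# Write the enhanced table back where dashboards already read from.
current.to_csv("customer_report_scored.csv", index=False)
```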
My guest today is Menaka Gopinath, President of Ipsos Social Media Exchange North America. Founded in 1975, Ipsos is one of the largest global market research and consulting firms, with worldwide headquarters in Paris, France. Menaka has held senior positions at Fuel Cycle and was a creator and producer at Wilcox Sessions, an online video series where musicians played intimate performances in their living rooms. Find Menaka Online: LinkedIn Website: https://www.ipsos.com/en-us Find Us Online: Social Media: @happymrxp LinkedIn This Episode's Sponsor: This episode is brought to you by Clearworks. Clearworks is an insights, innovation, and customer-experience company. They help clients understand their customers better, identify opportunities for innovation, and create products, services, and experiences that matter. Their clients are diverse in size and industry but share one important thing: a passion to drive more business by driving more meaningful human connection. For more information, please visit them at www.clearworks.net. [00:00] On Episode 2022, I’m interviewing Menaka, the President of Ipsos Social Media Exchange – North America, but first a word from our sponsor. [00:11] This episode is brought to you by Clearworks. So, we have a couple of sponsors on our show. I just want to underscore how much I appreciate those of you who have sponsored the Happy Market Research Podcast. It adds a ton of value to the ecosystem that is actually transcending market research right now. I say “transcending”; that’s probably the wrong framework, but exceeding, moving beyond into user experience research as well as data analytics and insights. In fact, recently we’ve been picking up shows like Predictive Analytics World and Marketing Insights World. These are two great examples of where the Happy Market Research Podcast has a presence and, subsequently, an audience well outside of the normal market research vein. So, Clearworks, thank you so much for your sponsorship. For those of you who don’t know, they are an insights, innovation, and customer experience company. They help their clients understand their customers better, identify opportunities for innovation, and create products, services, and experiences that actually matter. Their clients are diverse, both in size and industry, probably like all of ours, but they do share one important thing, which is a passion to drive more business by driving more meaningful human connections. You can find them online at www.clearworks.net. Again, it’s www.clearworks.net. And again, thank you so much for your time. [01:43] Hi, I’m Jamin Brazil, and you’re listening to the Happy Market Research Podcast. My guest today is Menaka Gopinath, President of Ipsos Social Media Exchange – North America. Founded in 1975, Ipsos is one of the largest market research firms globally and is also a consulting firm with worldwide headquarters in Paris, France. Menaka has held senior positions at Fuel Cycle and was a creator and producer at Wilcox Sessions. This is an interesting little side hustle she’s got going on. I might wind up cutting that piece. I know you talked to me about it. I actually found it really interesting, but we’ll see. Menaka, thanks so much for joining me on the Happy Market Research Podcast today. [02:26] Thanks for having me. I’m glad to be here. [02:29] Let’s start out with a little bit of context. Tell us about your early years and how you wound up in market research. [02:33] Sure.
Well, I never really thought I'd end up in market research, so that was a surprise. But I have always been in the space of connecting with consumers. When I first entered the workforce out of college, during the original startup boom at the end of the '90s, I was working at a company that was creating online communities,
Episode 39 of the Mavens Do It Better Podcast features CEO of Jivoo and Microsoft Azure MVP, Steven Fowler. Steven is a Microsoft Azure MVP, entrepreneur, and Board Member of the Atlanta Chapter of the IAMCP. Steven and Heather caught up virtually from Atlanta, GA and Marina del Rey, CA. Listen in as Steven and Heather talk about:
· His start in technology with BASIC programming on the Atari 400, his years as an entrepreneur and system integrator in the SharePoint world, his new startup, Jivoo, which will launch a new product for cloud governance, and becoming a Microsoft Azure MVP.
· His work in IoT, including winning an award for work with a small lettuce farm in Georgia: modeling data brought efficiencies, new revenue streams, and product lines that were not possible before launching their IoT solution.
· What it is like to be an Azure MVP, and thoughts on how the 152 or so Azure services and workloads are brought to clients with the best use of cloud: shifting CAPEX to OPEX, virtualization (lift and shift), and Docker containers and serverless for enterprise and medium-sized businesses.
· How to love what you do and have a positive impact on the world in everything you do, with a look into his humanitarian work in Puerto Rico and Honduras with his family, working with orphanages and helping college students get the support they need in their local communities.
To follow Steven: LinkedIn | Twitter | MVP Listing
Steven Fowler is a Founding Partner of Sevenbrook, a DX company focused on IoT, Cloud, and Data Analytics. Steven is a top 50 IoT influencer, award-winning IoT leader, and Microsoft Azure MVP. Steven presents at prominent conferences such as IoT Evolution EXPO, Predictive Analytics World, and IoT Solutions World Congress.
Dr. Andrey Sharapov is a senior data scientist and machine learning engineer at Lidl. He is currently working on various projects related to machine learning and data product development, including analytical planning tools that help with business issues such as stocking and purchasing. Previously, he spent two years at Xaxis, where he led data science initiatives, and he developed tools for customer analytics at TeamViewer. Andrey and I met at a Predictive Analytics World conference we were both speaking at, and I found out he is very interested in "explainable AI," an aspect of user experience that I think is worth talking about, and so that's what today's episode will focus on. In our chat, we covered: Lidl's planning tool for their operational teams and what it predicts; the lessons learned from Andrey's first attempt to build an explainable AI tool, and other human factors related to designing data products; what explainable AI is, and why it is critical in certain situations; how explainable AI is useful for debugging other data models; why explainable AI isn't always used; and Andrey's thoughts on the importance of including your end user in the data product creation process from the very beginning. Also, here's a little post-episode thought from a design perspective: I know there are countervailing opinions that state that explainability of models is "over-hyped." One popular rationalization uses examples such as how certain professions (e.g., medical practitioners) make decisions all the time that cannot be fully explained, yet people trust the decision making without necessarily expecting it to be fully explained. The reality is that while not every model or end UX necessarily needs explainability, I think there are human factors that can be satisfied by providing explainability, such as building customer trust more rapidly, or helping convince customers/users why/how a new technology solution may be better than "the old way" of doing things. This is not a blanket recommendation to "always include explainability" in your service/app/UI; many factors come into play, and as with any design choice, I think you should let your customer/user feedback help you decide whether your service needs explainability to be valuable, useful, and engaging. Resources and Links: Andrey Sharapov on LinkedIn Explainable AI-XAI Group (LinkedIn) Quotes from Today's Episode "I hear frequently there can be a tendency in the data science community to want to do excellent data science work and not necessarily do excellent business work. I also hear how some data scientists may think, 'explainable AI is not going to improve the model' or 'help me get published,' so maybe that's responsible for why [explainable AI] is not as widely in use." – Brian O'Neill "When you go and talk to an operational person, who has in mind a certain number of basic rules, say three, five, or six rules [they use] when doing planning, and then you come to him with a machine learning model, something that is, let's say, a 'black box,' and then you tell him, 'okay, just trust my prediction,' then in most of the cases, it simply doesn't work. They don't trust it.
But the moment when you come with an explanation for every single prediction your model does, you are increasing your chances of a mutual conversation between this responsible person and the model…" – Andrey Sharapov "We actually do a lot of traveling these days, going to Bulgaria, going to Poland, Hungary; in every country, we try to talk to these people [our users] directly. [We] try to get the requirements directly from them and then show the results back to them…" – Andrey Sharapov "The sole purpose of the tool we built was to make their work more efficient, in the sense that they could not only produce better results in terms of accuracy, but they could also learn about the market themselves, because we created a plot for elasticity curves. They could play with the price and see if they made the price too high or too low, and how much the order quantity would change." – Andrey Sharapov Episode Transcript Brian: I'm really excited to share my chat with Dr. Andrey Sharapov today from Lidl, the large grocery store chain from Europe. Andrey is a data scientist. I met him at Predictive Analytics World while we were speaking there. He told me that he was quite interested in removing the black from the black box concept that goes with predictive models. This is called explainable AI, or XAI. I think this has a lot of relevancy to designing good decision support tools and good analytics that people can believe in and will engage with. There's obviously been a lot of talk about this area, whether it's from a compliance perspective, or an ethics perspective, or just a customer, end-user experience perspective. Being able to tell what models are doing and how they're deriving their predictions has value on multiple different levels. Without getting super into the technical side of this, we're going to talk about how explainability within machine learning and predictive models has relevance to the design and user experience of your software application. Here's my chat with Andrey. Hey, everyone. Welcome back to Experiencing Data. This is Brian, and I'm happy to have Dr. Andrey Sharapov from Lidl. Did I say that right? Lidl, obviously, not obviously, maybe to our American listeners, people that don't live in Europe. But is it Lee-del or Lie-del, the grocery store chain in Europe? Andrey: It's Lee-del. Hi, everyone! Brian: Welcome to the show, Andrey. You're a senior data scientist at Lidl. Tell us about your background. What are you doing with grocery data? Andrey: Hi, Brian. Thanks for having me on the podcast. As you said, Lidl is one of the largest retailers in Europe. We have more than 10,000 stores. We obviously have quite a lot of data that we're trying to put to work at the moment while building these data products. For instance, we try to create decision support tools in order to help our action planners or promotion planners make better decisions. On the other side, we're automating various processes, like order disposition, which means ordering goods automatically for all the stores. We have a lot of other use cases related to marketing and everything that has to do with the business at Lidl, more or less. Brian: I'm excited to talk to you about a particular area of interest that you have, which is explainable AI. But before we get into that, I'm curious, it sounds like you've touched several different aspects of the business at Lidl with some of the data products that you're creating. What's hard about getting that right?
Not so much from the technical standpoint, the engineering, and the data science piece, but in terms of someone's doing the purchasing of the carrots and someone's doing the planning of the promotions, tell us about that experience and how you go about getting those people the information they need such that they're willing to use your analytics and your decision support tools to actually make decisions. Can you talk to us a little bit about that? Andrey: Yeah, sure. As you've pointed out correctly, these days it's not really much about technology or data crunching, but more about weaving together the relationship between data scientists and the business in order to get buy-in from the actual users. Let me maybe just say a little bit about how we worked on our first data product, the one that I created along with the team. We went through a lot of struggles trying to figure out what the data scientists and the data engineers should do, and then at some point, we got product owners on the team, the people who actually talk to the business in the language that the business understands. We also got businesspeople onboard as advisors for the project. The product that we built is called a planning tool, so to say. Every week, the operational people plan a promotion for a certain date in the future. They have to make a certain number of decisions, take into account conditions of the market, weather, time of the season, and a lot of other things, then come up with a number to order. The sole purpose of the tool that we built was to make their work more efficient, in the sense that they could not only produce better results in terms of accuracy, but they could also learn about the market themselves, because we created a plot for elasticity curves, and they could play with the price and see, if they make the price too high or too low, how much the order quantity would change. That's the main idea behind the product. I guess most companies have the same problem of trying to onboard the business users. The main way of thinking is, "Okay, we have AI in it, the users will just say, 'Let's just do it.'" But most likely, it's not that easy. We went through this experience of learning that, "Okay, although we have the coolest algorithms in our system and the coolest people working for us as data scientists and engineers, it totally doesn't mean that the final user will use the product." The way that we tried to convince them was by building a fancy user interface, in terms of making it more beautiful, so to say, but nonetheless, they were not very convinced. As far as I know, the operational people are really hard to convince, because the majority of operational people try to use such tools in order to execute certain tasks very fast. They don't have a lot of time to try to learn what's going on; rather, they would like to do a few clicks, the job is done, and they move on to something else. In our case, since there was a lot of machine learning, a lot of predictions, there was this problem of trusting the system, because although they could use it in the way that I just described, just doing some clicking and completing the task within, I don't know, 15 minutes maybe, they were nonetheless hesitant to use it, because there was this lack of trust in the system. They would question why the prediction is maybe a little bit higher than they expect, or a little lower, and there was no way to explain it to them.
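(An aside on the elasticity plot Andrey mentions: Lidl's actual tool isn't public, but as a rough sketch of the idea, a constant-elasticity demand curve is enough to reproduce the "play with the price, watch the order quantity change" behavior he describes. All numbers and names below are invented for illustration.)

import numpy as np
import matplotlib.pyplot as plt

# Constant-elasticity demand: Q(p) = Q0 * (p / p0) ** (-e).
# Baseline price, baseline order quantity, and elasticity are invented values.
p0, q0, elasticity = 1.99, 10_000, 1.8

prices = np.linspace(0.5 * p0, 1.5 * p0, 100)
quantities = q0 * (prices / p0) ** (-elasticity)

plt.plot(prices, quantities)
plt.axvline(p0, linestyle="--", label="baseline price")
plt.xlabel("promotion price (EUR)")
plt.ylabel("predicted order quantity")
plt.title("Illustrative price-elasticity curve")
plt.legend()
plt.show()

Sliding the price along such a curve immediately shows a planner how sharply orders would move, which is the learning effect Andrey describes.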
You basically say, "Okay, it's an algorithm." This aspect was, I guess, quite crucial, because in their mind they also have a certain number of rules that they follow when they do the planning. This is basically an invite for the next question, so to say, about explainable AI: we tried to show the end-users various types of explanations later in order to gain more trust from their side. I guess, as I said at the beginning, there was this idea that if we have AI in the data product, then they will use it: "Let's just build it." Well, in the end, we've learned that it's of course not the case. People at first were very skeptical, but later we've tried to really work hand-in-hand with them, trying to polish the features that they had in their mind. In this way, we've brought in more people who've tried the product. Brian: You hit the nail on the head there with these technologies. Whatever is new, and as we talk right now AI is definitely high on the hype cycle, it's not magic sauce. You don't just stick it into the cake and then all of a sudden everything is solved. You still need to map these technologies to fit the tool into the way the customer wants to use it. In this case, the way the tool does your modeling, is it based on how, for example, a purchaser worked, all the factors that they were using, whether it was a calculator or some kind of manual process in Excel? I imagine that they have some kind of a recipe that they would follow to do this prior to you doing any type of AI or machine learning to help with that decision support. Is that how you help get adoption to be higher? Is it modeled on that, or were you looking at other data? I'm curious especially around experience or maybe, I don't know if you'd call it bias, but there might be decision points that a human would be using in the traditional or the old way that they used to do those tasks that you can't perhaps integrate into the model. Does that make them not trust it as much, even if you're actually factoring in more variables than they ever used? Like, "Oh well, you never had weather when you forecasted crop and prices to figure out how much to purchase or whatever. You never had that. We actually provide that, but we don't have last month's purchase," so you don't know what the price was last month. That's a bad example, but are you following what I'm saying? Can you talk a little bit to that in terms of adoption? Andrey: Before we even started, people were able to plan different promotions for the last, I don't know how many, years, and the main tool that they used was Excel. People with a great number of years of experience put together all these datasets for themselves and developed a certain tactic for how to approach the planning. Don't forget, Lidl is operating in about 30 countries, and each country has its own secret sauce for how to do the planning. As I said at the beginning, we had one person from one of the countries, one real planner, who showed us how she does it for real, what factors they consider, which logic, and from there we tried to mimic the whole thing using machine learning algorithms. Of course, machine learning can take into account a lot more different factors than human planners. But nonetheless, the results that we got were of course slightly different, because with human planners, I mean, it's impossible to get the same number from a human and a machine learning algorithm. They will always say, "Okay, why is it lower than I expect?
I need to understand why." The only answer we had at the time was, "Because the model said so." That's certainly not enough for them. Just recently, we've developed quite a good relationship with the Polish planners. They have similar concerns, so they tried the tool, they like the tool, but again, for them it's somewhat hard to start trusting the system immediately. They tried to plan some promotions and it was fine for them, but some of them got unexpected results. You hit the wall; they start asking you for all sorts of contrastive explanations: "What if it was different, then what would have happened?" Without explainable AI, or any kind of explanation of the machine learning, you just cannot go forward with them. This is how I would put it. Brian: I do want to get into the explainable AI piece. I'm curious though, you mentioned Lidl is in 30 countries. For example, the purchasing department here, are there really 30 different recipes that are valid, and/or are there cultural distinctions in the way stock is purchased for stores in Germany versus Poland, or is it that these heuristic models were kind of organically born in different ways in each place? Are you guys trying to centralize that, or are you creating unique models? Are you trying to map it on, like, "In order to get the Polish buyer to trust us, we need to show that our model is based on the way they were doing it, even if it's a different model than we use in Germany or Italy"? Can you talk a little bit about how you make those decisions and how you keep that trust so that people are still going to use these decision support tools? Andrey: As I said at the beginning, each country has its own small features when they do the planning of promotions. Lidl is currently developing various tools in order to standardize this process, but it is an ongoing thing. Like with any person, be it the promotion planner, or a bank teller, or whatever, people tend to oppose changes; they just don't buy it in many cases. It takes quite a lot of time to convince them that, "Okay, this is the right way to do things. We're suggesting a new way that is not really new; it's just kind of more generic," so to say. Nonetheless, they would say, "Okay, we have this one data point that must be in there, otherwise we just don't take it." It's really a matter of time, a matter of interaction with the client, trying to convince them that if it's so important then we can build it into the tool, and then they will go along and they will buy it. Or, on the other hand, try to convince them that it is not important, and then maybe, at some point, they will get convinced. These are all the different types of things that you have to discuss with the customer one-on-one. We actually do a lot of traveling these days, going to Bulgaria, going to Poland, Hungary. In every country, we try to talk to these people directly, try to get the requirements directly from them, and then show the results back to them and say, "Okay, we did it for you specifically, so let's work together." Brian: I think it's great that you're going out and doing that one-on-one research with your customers, because that's another way to build support: when people feel they're being included in the process and you're not imposing a tool but actually modeling the tool based on them, that's another way to increase engagement.
I'm curious, do you find that in the individual countries, the managers, whoever ultimately makes the decision on what tools or what model is going to be used to make the final decisions, are they interested in, like, "Oh, look. Italy factors in the weather. They factor in this thing that we never thought to do. We never really gave it that much weight. Maybe we should do it that way?" Is it 30 independent recipes, and then you guys have been generalizing that based on the variables that you find have the most impact on the quality of the predictions? Is that shared, and the countries are aware of what each other are doing, or is it more like, "Yeah, yeah. That's nice, but Poland is different. We want to do it this way and we know it's right"? Andrey: It depends. Sometimes there are certain legal things that we have to take into account that are not quite transferable across different countries, so those things we just cannot take into account, because then the tool becomes too specific for each country. But on the other side, the things that we are able to generalize, we just do it, simply by trying to get more data from the countries, or something like that. Brian: Talk to me about explainable AI, this technology, for people who don't know what it is. Effectively, what we're talking about when you're doing things like showing a prediction from a model is actually showing what some of the criteria were, and perhaps how they were weighted and how they had an impact on the conclusion that was derived by the system. Is that a fair summary of what explainable AI is? Why don't you tell us, instead of me trying to do it, since this is a space that you're interested in. Andrey: Absolutely. The research area of explainable AI is all about trying to understand the reasoning of a black box model. This is not a new idea. It was quite popular back in the '90s, but then it was somehow forgotten, and it resurfaced back in 2016, 2017, when DARPA announced a lot of funding for this area of explainable AI. What it actually does is, for instance, let's say you have a data scientist working on a sophisticated model, whether that be a neural network or anything else, and then it produces a prediction, which is just a single number, or it's a binary decision, yes or no. In many cases, these black box models are really hard to explain to a non-expert. Even data scientists, in many cases, don't know why it predicted yes versus no. There is no clear, human-readable explanation that can be delivered in this case. So the whole area of research of explainable AI is trying to, first of all, come up with the whole philosophy of what an explanation really is, and this is not a done deal, I would say. People are still trying to understand what it really means. The second part is, "Okay, how do we generate something that a human being can understand?" Whether it be, I don't know, some factors: "Okay, for this prediction, Factor A played the biggest role and then Factor B played a somewhat lesser role," and so on. Or even generate a sequence of if-then rules, such that, "If air temperature is higher than 30 degrees and it's the middle of the day, then the prediction for the sales of ice cream would be high." What I'm trying to say here is that we use this technique in order to interact with our customers.
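(To make the if-then style of explanation concrete, here is a minimal sketch that uses scikit-learn's export_text to turn a small decision tree into human-readable rules of exactly the "temperature above 30, middle of the day, high ice-cream sales" form. The data and feature names are synthetic, invented for illustration; this is not Lidl's model.)

import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic data mimicking the ice-cream example: sales depend on
# air temperature and hour of day (all values invented).
rng = np.random.default_rng(0)
temperature = rng.uniform(5, 40, 500)
hour = rng.uniform(0, 24, 500)
sales = 50 + 10 * (temperature > 30) * (np.abs(hour - 13) < 3) * temperature + rng.normal(0, 5, 500)

X = np.column_stack([temperature, hour])
tree = DecisionTreeRegressor(max_depth=3).fit(X, sales)

# Print the learned tree as nested if-then rules a planner can read.
print(export_text(tree, feature_names=["air_temperature", "hour_of_day"]))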
For instance, when you go and talk to an operational person, a person who works in operations, who has in mind a certain number of basic rules, three, five, six rules, when doing planning, and then you come to him with a machine learning model, something that is, let's say, a black box, and you tell him, "Okay, just trust my prediction," in most of the cases it simply doesn't work. They just don't trust it. But the moment you come with an explanation for every single prediction your model makes, you are increasing your chances of a mutual conversation between this responsible person and the model, in this case. For instance, if the model predicts sales of one, two, three, four, five for May-June, and then he asks, "Why is it one, two, three, four, five?" and you say, "Okay, it's because regular sales in June are greater than two, three, four, it's May or June, and something else, and something else," then this person can relate these statements to something that he has in mind when doing the planning himself. This is where the eureka moment happens, so to say, because they see that the model is reasoning in a similar way as they do. This way, the level of trust certainly goes up, and then they're willing to try it even more. I'm aware of similar stories. For instance, Yandex has been in the business of building similar tools for their customers, and they also have explanation modules. It's not a one-shot thing that we do at Lidl; it's gaining quite a lot of momentum, I guess. Brian: I think it's natural that trust and engagement are likely to go up if you have this in place, because, as you said, people can see that the tool is modeled on the work and the tasks that they want to do, and it's not imposing a magic answer. Otherwise it's kind of like saying, "Hey, none of your experience from the last 10 years of running promotions at Lidl matters anymore. Here's what you should do. Here's the product. Here's the sale price and how long you should run it for." I think it's just human nature, there's a natural tendency to not want to trust that, "Well, my whole job and these activities I do are completely replaceable by a magic box." But when people start to see how it's actually decision support, I think it's natural that the trust goes up. Although, having said that, I'm curious, would you say this is a regular ingredient in the data products that you bake up at Lidl, or is it an occasional thing? Why or why not would it be included in everything, if it's possible? Andrey: Well, there are different cases. Certainly, explainable AI is not something that you should or must use in every situation, but I'm a great believer in decision support tools and human-in-the-loop applications, not necessarily just in retail but in general. Every time people have to look at certain predictions, we try to come up with explanations, or at least some sort of strategy for how we can come up with these explanations. On the other side, these techniques are very useful when you do debugging of machine learning models. Even if you are not planning to show these explanations to anybody in the business, you still benefit quite a lot when you're actually developing a machine learning model by using these tools. You can avoid all sorts of overfitting in the model, or remove plenty of features that actually make the model unstable, and so on.
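(An illustration of the explanation-driven debugging Andrey describes: he doesn't name the tools Lidl uses here, but the open-source shap package, which comes up later in the conversation, can rank features by their average contribution to predictions; a feature that towers over all the others is a common sign of leakage or instability. The model and dataset below are placeholders, a sketch rather than a recipe.)

import shap
import xgboost

# Placeholder data: a demo dataset bundled with shap (assumes `pip install shap xgboost`).
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# The Explainer interface needs only the model and background data;
# for arbitrary models, shap falls back to a model-agnostic algorithm.
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:500])

# Global view for debugging: audit any feature whose mean |SHAP| value
# dominates the rest before trusting or shipping the model.
shap.plots.bar(shap_values)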
I think the main point here is that if we build things, models, where we don't really understand what is going on inside when they make predictions, it would be really sad. That's the main idea. Brian: I hadn't thought of it that way, but I could see how, even as a debugging tool, it could be useful as you're trying to improve the quality of the decision, the advice, that the tool is generating. I don't know if you have data, or even if it's just qualitative in nature, but having included this on any of the products at Lidl, do you find people trust it more? Like, once they've seen that the model has some explainability behind the predictions that are being made, do they tend to still pay attention to all of that going forward, or is it more like, "Oh, I can see that Andrey and his team factored in last month's purchase data plus your competitor data, and that's what I always do. Now that I know that he always does that, I don't really need to see that every time. I'm not going to second-guess as much as I used to. I'm going to trust that now going forward." Or do you find that the explainable AI portion of the UI is actually an integral part of using the tool every time? Are you following what I'm saying? Do people start to ignore it over time, or do customers see it as an ongoing, useful aspect of the interface? Andrey: We don't have the explainability built into the user interface at the moment. We have it more as a PoC; we show it on demand, more or less, but we don't have it as a default feature. Certainly, this should work in the way that you described. I actually read a few papers recently about the effect of explainability. People were tested, I don't remember exactly the test set-up, but the point that the researchers were making was that the accuracy of predictions within this realm of human-in-the-loop applications does not go up. Whenever people are using a machine learning model that makes a yes-versus-no prediction, the performance of this blend of human and machine altogether does not go higher, or at least not significantly, as far as they could prove. But the trust in the system can be as much as 20% higher than without any explanation. I guess what you said is exactly the point of building explainable AI into any tool: to make it transparent, and then at some point, once people trust it, they don't really have to check these explanations every single time, because they know, "Okay, we are on the same page, the machine and me." Unless they're trying to explore some unusual situation where they really want to test the system or learn something from it, because, well, this is also a possibility for a human-in-the-loop application, when humans actually learn something new from the system. I think this is another case when they would use it occasionally after that. Brian: Without getting too technical here, obviously Experiencing Data, this podcast, is more about customer experience with data products and that type of thing, but I'm curious, there may be listeners wondering, "Hey, I have this decision support tool or this analytics deployment that's starting to use some machine learning, and it doesn't have any type of explainability.
We are seeing the same symptoms you talked about, like low engagement, people don't trust it, we put a significant data science investment in place; is it easy to retrofit a technology investment you may have made to include some of this, such that you might be able to start to improve the trust factor, or is this something that really needs to be implemented from the start and is much more difficult to put in after the fact?" Andrey: It all depends on the system itself. How many models do you have there? What is the complexity of the whole thing? Technically speaking, there are already a number of libraries available for doing these things. Everything is open source. You can just Google for words like LIME, or SHAP, or Anchors, or contact me on LinkedIn, for instance; I could point you to various sources. But the point of explainable AI is that these tools are, sort of, model-agnostic, in the sense that they just need a model and the predictions that model produces, and that's pretty much all. Then you can write a few lines of Python and it's there. You can get your explanations. The short answer is, you don't have to invest too much into getting explanations if you already have a working system. Brian: Wow. That's kind of a little bit mind-boggling. In terms of, "Why isn't this being used more?", is there a perception that it's costly, or that it's more difficult than it is, or is the quality not there, or do you think the business and the leaders and the people doing this don't think it's necessary? It fascinates me, if it's that simple and of that quality, if this is such an easy way to build trust, why is this not happening more? Did something happen between the '90s and now that this kind of fell out of trend? Can you talk to that a little bit? Andrey: In my personal opinion, I think over the recent 10 years, data scientists put too much attention into getting high model performance in terms of accuracy or lower error. There is this whole trend of Kaggle competitions, where you try to build a super accurate model, and, say, the opponent who gets the first prize is probably a hundredth of a percent more precise than you, but the main question is, "Does it actually make sense? Is it what the business really wants?" In my opinion, it's not the case. For a scientific breakthrough, maybe yes, it is useful. But certainly, in order to gain trust, you cannot just build more and more sophisticated high-precision models. It just leads you nowhere. The other thing is that over the last four or five years, the deep learning hype took place, and a lot of attention was in the deep learning area, where people were all in doing neural networks and nobody really cared about explainability. It was more of, "Okay, let's just predict this to 99.9% accuracy." At some point, some executives realized, "Okay, we have a lot of these models and we have no idea what is really going on inside." As I mentioned, the DARPA program, and also the GDPR regulation that came to light in May 2018 and put the spotlight on the right to explanations: all of these factors together propelled the explainable AI topic forward, and it's now gaining a lot more attention than before. Brian: When you get into neural nets and some of those technologies, is this explainability available to products that are leveraging neural networks and some of these more complicated artificial intelligence technologies? Is it widely available to add the explainability portion?
Andrey: Well, it depends on what kind of data you're working with. If you're working with regular tabular data, the data that is in tables, no text, no images, then it doesn't really matter. You can take a neural net or any other model. But once you go into the more sophisticated realm of neural nets working with text data, then it is slightly more complicated to get it to work, but it's still possible; you could just use LIME or SHAP. It's very interesting what they do, actually. For instance, if you try to classify, say, legal documents, or medical documents, or do fake-news classification, yes versus no, then these explainability tools can highlight the actual words in the sentence that play the biggest role. If it's fake news, it will underline certain words that a human being can look at and say, "Okay, this content is maybe more full of feelings or calling for more action," or something like that, versus a prediction of not fake, where it's mostly facts, facts, facts. Basically, the explainability tools highlight words in a sentence. In terms of images, it's even more complicated. There are a lot of things that are model-agnostic, but even more things that are not. In that case, you really have to be an expert in whatever neural net you're using and try to get it to work: get the code from GitHub and try to reproduce the results of a research paper. For these more complicated cases, it's not that easy, but it's possible. Brian: You brought up something too. I've heard this trend repeated, that there can be a tendency in the data science community to want to do excellent data science work and not necessarily do excellent business work, building tools and solutions that help the business. I could see how some data scientists may see that adding the explainability is not going to improve the model. "I can't write a paper on that and build my credentials with that type of information," so maybe that's responsible for why it's not as widely in use. Do you think that's going to change? Do you think this will become more of an expectation going forward, that we won't be talking about black boxes as much a year or two from now? Do you think that'll start to go away and expectations will change? Any thoughts on that? Andrey: The whole trend is going in the direction of explainable AI anyway, for one simple reason. In the last years, AI was mostly used in the labs, and probably to automate certain processes where no humans are really involved; it's more like robotics or something like that. But these days, AI is going into various fields like healthcare, or legal domains, where you deal with things that affect humans directly. For instance, how would you explain why a certain person didn't get a loan at the bank and another one who looks very similar got it, right? There are a lot of questions that are coming up these days, because AI is touching upon points where humans are personally involved. Because, yeah, we don't really care how some robots are moving goods in an Alibaba warehouse. I mean, doing explainability for that, yeah, maybe it's a really sophisticated model, but I don't care. I order my goods; I get them, and that's it. But whenever things go in the direction of social interaction, or things that affect people directly, or these high-stakes decisions, then interpretability and explainability are a must.
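(For the word-highlighting behavior Andrey describes on text, the open-source lime package exposes exactly this kind of explanation. The toy corpus, labels, and class names below are invented for illustration; a real fake-news classifier would be trained on far more data.)

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Tiny invented corpus: 1 = "fake", 0 = "real".
texts = ["shocking miracle cure doctors hate",
         "officials publish quarterly statistics report",
         "you won't believe this one weird trick",
         "central bank confirms interest rate decision"]
labels = [1, 0, 1, 0]
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

explainer = LimeTextExplainer(class_names=["real", "fake"])
explanation = explainer.explain_instance(
    "you won't believe this shocking report",  # instance to explain
    model.predict_proba,                       # LIME needs only a prediction function
    num_features=5,
)
# Each (word, weight) pair shows how strongly that word pushed the prediction;
# these weights are what the underlined words in a UI would be rendered from.
print(explanation.as_list())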
I think many people would probably choose a model that is maybe not as exact and accurate but explainable, versus something that is extremely accurate but, okay, sometimes it can kill you. This is kind of my logic here. Brian: That's kind of the dividing line between the business and human side of it and the pure data science side: you might have a super accurate model, but if you find out that they're still buying carrots the old way, does it really matter that you have an excellent prediction on how many carrots to buy, or at what price, if they don't ever take advantage of it? All of that investment is kind of thrown out the door. From a business standpoint, an acceptable model quality with a highly trusted interface and user experience might be the better business decision, even if it's not the best model quality from that perspective. I think that's important stuff to consider in all of this as we build these tools. This is awesome. It's actually exciting from a design perspective to hear that this is available as a tool that we can implement. Obviously, context matters, and you have to look at particular domains and particular types of data and all of that, but it sounds like something that, as part of our toolbox, we should be leveraging regularly when possible, especially if we're talking about human-in-the-loop tools and decision support tools, as opposed to, as you said, the Alibaba robots… Andrey: Yeah. [inaudible] Brian: …that get my book or my shoes or whatever I ordered. Just on this topic as we wrap up here, any broad-level advice for people that are looking to jump into this and make effective data products? It doesn't have to be just about explainable AI, but from your experience at Lidl: maybe a mistake that you've realized and you changed, and how you're approaching your projects and building these tools. Any advice for people? Andrey: I guess the only advice I would give to anyone who wants to build a product that is actually used is to go to the people and ask for the actual requirements; try to involve the end-user at the earliest stage possible. I think this is the only way to succeed in the end. This is how startups fail or succeed. You have to really understand what you're doing. At Lidl, we've kind of cleared the way from zero to hero over the last two years, and we learned it the hard way. The more you interact, the better. I mean, it doesn't really matter to the people that a data product is a piece of software that has machine learning or algorithms inside it. Nobody really cares about how sophisticated these algorithms are. People just want to make sure that they can get the job done efficiently, have a nice experience, no bugs, and stuff like that. This is all the same story as 10 years ago, probably, when we didn't build data products, just regular software. But again, the main advice is: don't try to build this 'moon-shotty' product within a few months; try to reiterate and onboard your users as early as possible. This is the main advice that I could give. Of course, use explainable AI in order to convince and gain trust. This is a must, in my view. Except, of course, in the cases where someone is doing something and no one can see him. If it's an interactive tool, interacting with users, they have to be sure that it is doing the right thing. Brian: Great advice.
People that have been on my designing products mailing list and the Experiencing Data podcast have definitely heard this advice beaten into the ground many times: get out there and talk to people early in the process to inform what you're doing, and don't work in isolation, because that's almost a sure way to produce something that people aren't going to use, because it's full of your own bias about how things should be done and it's not informed by what the customer wants to do. Good words, good parting advice. Where can people find out more about you? Are you on Twitter? Do you have a website, LinkedIn, anything like that? Andrey: Well, I'm posting quite a lot on LinkedIn. I have a group dedicated to explainable AI; it's called Explainable AI-XAI. Everyone who's interested in learning more, feel free to join or contact me through LinkedIn. I don't tweet much on Twitter. My presence is mostly on LinkedIn at the moment. Brian: Great. I'll definitely put links to your profile and to the Explainable AI group on LinkedIn in the show notes. This has been really great, Andrey. It's been great to talk to you. Thanks for coming on Experiencing Data. Andrey: Thank you, Brian. Thanks for inviting me.
Work 2.0 | Discussing Future of Work, Next at Job and Success in Future
Synopsis: Many innovative businesses and IT organizations appreciate the competitive advantage analytics capabilities can provide and have ambitions to reach increasing levels of analytics maturity. However, the well-documented shortage of analytics talent leaves many firms without a strong analytics talent bench and little knowledge about how and where to find the analytics professionals needed to get there. In this presentation, Greta Roberts will discuss results of a major quantitative Study of the "raw talent" of analytics professionals. This Study crossed industries, experience, and skills. Practical insights shared will include the raw talent characteristics businesses are looking for in their analytics professionals, and trends and correlations that lend unexpected insight into how organizations are building a strong and scalable analytics talent bench. Attendees will be provided with the ability to compare themselves to the Analytics Professional benchmark at no fee. About the Speaker: Greta Roberts, CEO, Talent Analytics, Corp. [http://www.talentanalytics.com/]. Greta Roberts is the CEO of Talent Analytics, Corp. and a faculty member at the International Institute for Analytics. She has 20+ years of experience working for world-class technology innovators like Lotus, Netscape, WebLine, Cisco, and Open Ratings. Under her direction, Talent Analytics has grown to be a leader in predicting employee behavior, the next logical step beyond predicting customer behavior. In 2012, she led a Research Team with the International Institute for Analytics that resulted in the world's only Benchmark for hiring Data Scientists / Analytics Professionals. Greta is a sought-after thought leader, presenter, and author. In 2013, she spoke at the Predictive Analytics World events around North America, SAS Day at Kennesaw State, SAP Sapphire NOW, IIA's Chief Analytics Officer Summit, SAS's Analytics 2013, and other major analytics and business events. She is also a frequent guest on SAP's Game-Changers Radio Show. Greta has recently been quoted in MIT Sloan Management Review, the Harvard Business Review blog network, Forbes, VentureBeat, Information Management, Computerworld, Data Informed, Tech Target, and many other major influential publications. Follow Greta on Twitter @GretaRoberts [https://twitter.com/GretaRoberts].
Predictive Analytics World founder Eric Siegel on how data science could determine the 2016 election. Doug Wiens at Washington University shares his work discovering a giant rift underneath the American Midwest. Scott M. Peterson of Peterson Wealth Advisors describes how to make good financial decisions during retirement. Male Survivor board member Chris Anderson on why abuse survivors keep quiet so long. Urologist at Michigan State University David Wartinger on how roller coasters help pass kidney stones.
Predictive workforce analytics isn't just an HR issue. In this podcast, APQC's human capital management research program manager, Elissa Tucker, interviews Greta Roberts, co-founder and CEO of Talent Analytics, Corp. Greta will be a keynote speaker at the Predictive Analytics World for Workforce conference (April 3-6, 2016, San Francisco). In this podcast, Greta discusses the many business problems that predictive workforce analytics can help companies solve. Hear her provide examples of how brand-name companies today are using predictive workforce analytics. Find out what she thinks about reports that there is a shortage of data scientist talent (her answer might surprise you). Greta will explain what business leaders need and don't need to know about predictive workforce analytics. And she shares her predictions for new uses of predictive workforce analytics that are on the horizon. Predictive Analytics World for Workforce, April 3-6, 2016, in San Francisco, is the premier workforce analytics event for HR professionals, business leaders, line-of-business managers, and analytics practitioners. This global, cross-industry event covers predictive solutions to today's greatest workforce challenges. Join Greta Roberts and APQC when you register today with 15% off code APQC15. Remember to follow us on Twitter @apqc!
Join Dr. Carlos as he explores whether analytics can predict who lies, what they buy, and even who dies. What are companies predicting about me as a customer? Here are just a few examples: Microsoft helped develop technology that, based on GPS data, accurately predicts one's location up to multiple years beforehand. Target predicts customer pregnancy from shopping behavior, thus identifying prospects to contact with offers related to the needs of a newborn's parents. Tesco (UK) annually issues 100 million personalized coupons at grocery cash registers across 13 countries; predictive analytics increased redemption rates by a factor of 3.6. Netflix sponsored a $1 million competition to predict which movies you will like in order to improve movie recommendations. One top-five U.S. health insurance company predicts the likelihood that an elderly insurance policy holder will die within 18 months in order to trigger end-of-life counseling. Con Edison predicts energy distribution cable failure, updating risk levels that are displayed on operators' screens three times an hour in New York City. ERIC SIEGEL, PhD, founder of Predictive Analytics World and Executive Editor of the Predictive Analytics Times, makes the how and why of predictive analytics understandable and captivating. Eric is a former Columbia University professor, who used to sing educational songs to his students, and a renowned speaker, educator, and leader in the field.
The TalentCulture #TChat Show is back live on Wednesday, April 22, 2015, at its new time from 1-2 pm ET (10-11 am PT). Last week we talked about how to look people in the eye digitally, and this week we're going to talk about how to turn horrible bosses into happier relationships. Today we're faced with a difficult and complex economic landscape never before seen in the modern world. Despite recent job growth and plummeting unemployment, wages are still pretty flat, and employers and workers are under a great deal of strain to produce. Unfortunately, a bad boss can undermine our ability to work effectively and efficiently. Poor communication skills, lack of direction, micromanaging, bullying: any of these traits can make a boss difficult to work with, leading to stress, anxiety, frustration, and anger. It's time to learn powerful techniques for reinventing your relationship with your boss and turning horrible into happier. Join TalentCulture #TChat Show co-founders and co-hosts Meghan M. Biro and Kevin W. Grossman as we talk about how to turn horrible bosses into happier relationships with this week's guest: Tony Deblauwe, Founder of consulting firm HR4Change. Thank you to all our TalentCulture sponsors and partners: Dice, Jibe, TalentWise, Hootsuite, IBM, CareerBuilder, PeopleFluent, Jobvite, Predictive Analytics World for Workforce and HRmarketer Insight. Plus, we're big CandE supporters!
The TalentCulture #TChat Show is back live on Wednesday, April 15, 2015, from 7-8 pm ET (4-5 pm PT). Last week we talked about the adoption of social software for workforce collaboration and communication, and this week we're going to talk about how to look people in the eye digitally. Building and sustaining authentic relationships in person or online are no easy tasks. It takes an investment of being “present” when you’re talking to someone. This week's guest is a big proponent of "Looking People in the Eye Digitally" (as well as personally). Something we know the TalentCulture #TChat Show community believes in as well. Introductions and ongoing relationships in social platforms require the same personal attention as the human touch and eye contact in a physical relationship. Join TalentCulture #TChat Show co-founders and co-hosts Meghan M. Biro and Kevin W. Grossman as we talk about how to look people in the eye digitally with this week’s guest: Ted Rubin, a leading Social Marketing Strategist, Keynote Speaker, Brand Evangelist, and Acting CMO of Brand Innovators. Thank you to all our TalentCulture sponsors and partners: Dice, Jibe, TalentWise, Hootsuite, IBM, CareerBuilder, PeopleFluent, Jobvite, Predictive Analytics World for Workforce and HRmarketer Insight. Plus, we're big CandE supporters!
The TalentCulture #TChat Show is back live on Wednesday, April 8, 2015, from 7-8 pm ET (4-5 pm PT). Last week we talked about the predictive power HR can bring, and this week we're going to talk about the adoption of social software for workforce collaboration and communication. Social software today enables workforce collaboration and communication. The McKinsey Global Institute estimates productivity improves by 20-25% in organizations with connected employees, and the potential for revenue amounts to $1.3 trillion per year. But the adoption of internal social media will require a strategic change management initiative to move away from email, which still dominates the enterprise today. Join TalentCulture #TChat Show co-founders and co-hosts Meghan M. Biro and Kevin W. Grossman as we talk about the adoption of social software for workforce collaboration and communication with this week's guest: Shel Holtz, Principal of Holtz Communication + Technology and a prolific blogger and co-host of the first and longest-running communications podcast, For Immediate Release. Thank you to all our TalentCulture sponsors and partners: Dice, TalentWise, Hootsuite, IBM, CareerBuilder, PeopleFluent, Jobvite, Predictive Analytics World for Workforce and HRmarketer Insight. And we're big CandE supporters!
The TalentCulture #TChat Show is back live on Wednesday, April 1, 2015, from 7-8 pm ET (4-5 pm PT). Last week we talked about the realities of the ideal HR-vendor relationship, and this week we're going to talk about the predictive power HR can bring (we're proud sponsors of Predictive Analytics World for Workforce). In a global market where recruiters spend seconds reviewing resumes, it's no wonder the "gut feel in hiring" is less accurate than a coin flip. Enter predictive analytics: the ability to take what happened in the past, find common relationships and factors (leveraging human behavior and neural networks), model and predict the future, and report back with analytics and recommendations. Departments like Finance, Sales, and Marketing are already using predictive analytics. Now it's HR's turn. Join TalentCulture #TChat Show co-founders and co-hosts Meghan M. Biro and Kevin W. Grossman as we talk about the predictive power HR can bring with this week's guests: Chad W. Harness, VP and Lead Human Capital Analytics Consultant at Fifth Third Bank; and Jen Phillips Kirkwood, ADP Analytics and Innovation Ambassador. Thank you to all our TalentCulture sponsors and partners: Dice, TalentWise, Hootsuite, IBM, CareerBuilder, PeopleFluent, Jobvite, Predictive Analytics World for Workforce and HRmarketer Insight. And we're big CandE supporters!
The TalentCulture #TChat Show is back live on Wednesday, March 25, 2015, from 7-8 pm ET (4-5 pm PT). Last week we talked about how research and relationship building win in tech recruiting, and this week we're going to talk about the realities of the ideal HR-vendor relationship. According to new KeyInterval Research, a practitioner-centric market research firm founded by William Tincup and John Sumser, the HR department increasingly looks like a purchasing department with a specific set of subject matter expertise. HR is also getting increasingly good at managing vendors and other relationships that cause work to be done by people who are not direct members of the HR team (employees). However, per KeyInterval Research, only a tiny fraction of vendor-practitioner relationships produce extraordinary levels of excellence and return. Join TalentCulture #TChat Show co-founders and co-hosts Meghan M. Biro and Kevin W. Grossman as we talk about the realities of the ideal HR-vendor relationship with this week's guests: William Tincup and John Sumser, long-time HR and recruiting industry luminaries and the founders of KeyInterval Research, a practitioner-centric market research firm. Thank you to all our TalentCulture sponsors and partners: Dice, TalentWise, Hootsuite, IBM, CareerBuilder, PeopleFluent, Jobvite, Predictive Analytics World for Workforce and HRmarketer Insight.
The TalentCulture #TChat Show is back live on Wednesday, March 18, 2015, from 7-8 pm ET (4-5 pm PT). Last week, we talked about email productivity and a new way to work, and this week we're going to talk about how research and relationship building win in tech recruiting. The competition for top tech talent is fierce, and employers must still find creative ways to entice people with in-demand STEM skills to join their company; getting to know who they're targeting is critical prior to, and especially during, outreach. According to Dice's recent Tech Candidate Sentiment Survey, at least 50% of candidate respondents said that they wish recruiters would do more research on them and their background before calling, though this is significantly down from 2013. With the right online research tool, combined with continuous relationship building and a repository holding a ready pool of tech candidates, companies can competitively source the most qualified people when the time is right. Join TalentCulture #TChat Show co-founders and co-hosts Meghan M. Biro and Kevin W. Grossman as we learn about how research and relationship-building win in tech recruiting with this week's guests: Ashley Fox, Program Associate at Partnership for Public Service; and Pete Radloff, Lead Technical Recruiter with comScore. Thank you to all our TalentCulture sponsors and partners: Dice, TalentWise, Hootsuite, IBM, CareerBuilder, PeopleFluent, Jobvite, Predictive Analytics World for Workforce and HRmarketer Insight.
Looking for a crystal ball to help you predict what 2014 may bring for your business, your industry, your marketplace? We've got the next best thing: dozens of expert insights into the technologies, strategies, and trends that can help you grow and compete better this year and beyond. These thought leaders will appear on SAP Game-Changers Radio 2014 Predictions – Part 3 LIVE: Todd Wilms, SAP; Michael Denis, Flatirons Solutions; Neal Schact, CommuniTech; Nicolette Van Exel, SAP; Kathy-Ann Hutson, IBM; Padman Ramankutty, Intrigo; Steve Hilton, MachNation; Bill Newman, Newport Consulting; China Gorman, Great Place to Work; Steve Player, Beyond Budgeting Round Table; Jorge Garcia, TEC; Dennis Goodhart, IP Network Consulting; Marcus Baur, Sailing Team Germany; Eric Siegel, Predictive Analytics World; Tim Minahan, SAP. To hear the first two parts of our 2014 Predictions special, visit: http://bit.ly/Predictions2014A and http://bit.ly/Predictions2014B. Wishing you a positively game-changing New Year!
33voices interviews Eric Siegel, Ph.D., founder of Predictive Analytics World.