What's up folks, today I'm pumped to be joined by Michael Katz, CEO and co-founder at mParticle, the leading independent customer data platform.

Summary: In the contentious debate over Packaged and Composable CDPs, Michael delivers a clear-eyed perspective that cuts through the hype. Rejecting the idea that Packaged CDPs are becoming obsolete, he emphasizes the continued importance of data quality, integrity, and privacy, and he warns against becoming entangled in marketing illusions. He also highlights the need for adaptability, dismissing some of the more pervasive myths in the martech landscape, such as the magic of zero copy data. With strategic acquisitions, mParticle is focusing on intelligence and automation, aiming to be more than just "simple pipes" in data management. Michael's insights provide a grounded roadmap, focusing on genuine value creation and thoughtful navigation of the complex industry that is Customer Data Platforms.

About Michael
- Michael got his start as an analyst at Accenture and later focused on customer acquisition and marketing strategy for a mobile content company
- He entered the entrepreneurial world founding interclick in 2005, a data-valuation platform for advertisers
- He ran the company as President, took the company public in 2009, and sold it to Yahoo in 2011 for $270M
- He's been on the Board of Directors for several companies including Adaptly and BrightLine
- He's a volunteer at Southampton Animal Shelter
- He's also a Mentor at Techstars
- After a year as VP of Optimization and Analytics at Yahoo following his company's acquisition, Michael took on his second venture, co-founding mParticle in 2013

mParticle is a global, remote-first company that provides a real-time AI customer data platform. They help get the highest quality customer data to any system that marketers or product managers use, ultimately improving customer experiences. They work with big players and small, fueling the customer success of brands like Paypal, Seatgeek, Venmo, Headspace, Lyft, McDonalds, and Airbnb.

Unpacking the 8 Components of Customer Data Platforms

When asked about Arpit Choudhury's enumeration of the eight essential components of Customer Data Platforms (CDPs), Michael's response was swift and assertive. With an appreciative shoutout to Arpit for articulating the complex aspects of CDPs, he aligned himself with the eight facets laid out in the question.

These eight components, according to Michael, indeed compose an end-to-end solution for the first generation of CDPs. They include:
- CDI (customer data infrastructure): collect first-party event data from customers across websites and apps
- ETL (data ingestion): extract data from other tools and load it into the DWH
- Data storage/warehousing: store a copy of the data collected
- Identity resolution: a solution for tying together a customer's various interactions with you across multiple platforms and devices
- Audience segmentation: drag and drop UI
- Reverse ETL: extract/activate data from the DWH to other tools
- Data quality: validity, accuracy, consistency, freshness, completeness…
- Data governance and privacy compliance: user consent, HIPAA compliance

Emphasizing the integrated nature of these components, Michael asserts that the value of the whole system is greater than the sum of the individual parts. He proudly reflects on mParticle's reputation as a complete CDP and emphasizes that many existing CDPs lack strong stories around data quality and governance.
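To make one of the components above concrete, here's a minimal, hypothetical sketch of deterministic identity resolution: merging event profiles that share any identifier. The events, identifier names, and matching rule are invented for illustration; a production CDP layers probabilistic matching, consent checks, and governance on top of a merge like this.

```python
# Sketch: deterministic identity resolution via union-find.
# Any two identifiers that appear on the same event get merged into one profile.
from collections import defaultdict

events = [
    {"email": "ana@example.com", "device_id": "ios-123"},  # mobile app event
    {"device_id": "ios-123", "anon_id": "web-789"},        # same phone, web view
    {"anon_id": "web-789", "email": "ana@example.com"},    # web login
    {"email": "bob@example.com"},                          # unrelated user
]

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Link every identifier that co-occurs on the same event
for event in events:
    ids = [f"{k}:{v}" for k, v in event.items()]
    for other in ids[1:]:
        union(ids[0], other)

# Group identifiers by their root to form unified customer profiles
profiles = defaultdict(set)
for event in events:
    for k, v in event.items():
        profiles[find(f"{k}:{v}")].add(f"{k}:{v}")

for ids in profiles.values():
    print(sorted(ids))  # two profiles: Ana's merged identifiers, and Bob's
```

Running this prints one merged profile for Ana (email, device ID, and anonymous web ID) and a separate one for Bob, which is the core of what "tying together a customer's various interactions across platforms and devices" means.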
The conversation with Michael reveals his confidence in the synergy that arises when these parts function together. He cautions against skipping any of these steps, underscoring that a weak foundation will undermine the entire system. Speed in data processing should not compromise quality and privacy protection, and mParticle's holistic approach ensures this balance is maintained.

Takeaway: Michael's insights into the eight essential components of CDPs not only align with industry experts but also highlight the importance of a unified approach. By valuing integration, quality, and consumer privacy, mParticle positions itself as a leading player in the CDP landscape. The wisdom shared by Michael emphasizes that genuine value is derived not merely from the individual elements but from the careful orchestration of all parts into a coherent and resilient system.

Debunking the Myths Around Reverse ETL and Composable CDPs

Reverse ETL and composable CDP proponents assert that the traditional CDP is becoming obsolete and that the future lies in Composable CDPs that leverage modern data warehouses and processes like Reverse ETL. They claim that existing CDP vendors will have to adapt to this shift or risk becoming irrelevant.

Michael's written extensively about this debate over the years. He argued that product marketing around the composable CDP is just modern day sleight of hand tricks… designed to dupe the buyer. To be fair, mParticle has adapted to the rise of the modern data stack by offering services like data warehouse sync and value-based pricing.

Michael highlighted the rise of the Cloud Data Warehouse as an essential system within organizations, but he was quick to emphasize that the real challenges lie in maintaining data quality, integrity, and privacy. As he elaborated, legacy CDP vendors like mParticle deliver value not in the storage of data, but in the movement and activation of it. Michael stressed the importance of going beyond mere data collection to understanding the context and the "why" behind customer behavior.

According to Michael, the true value in the CDP space has shifted towards enhancing context, improving understanding, and introducing an insights layer. For mParticle, this has translated into a focus on finding truth and meaning in their data, creating an infinitely optimizing loop. He vehemently argued against reverse ETL, characterizing it as "garbage in, garbage out," and took aim at what he described as "sleight of hand" tricks in product marketing designed to distract from the real issues.

Michael challenged several narratives in the debate, dismissing the importance of zero data copy, the vulnerability of CDPs to security threats, and the notion of faster deployment times leading to sustained value. He warned against getting enticed by aggressive product marketing, stressing that what might appear easy to implement could be hard to maintain.

Takeaway: The transformation of CDPs isn't just about new technologies or marketing tactics but lies in understanding the true needs of customers. With a focus on integrity, context, and sustained value, Michael exposes the fallacies in current debates, emphasizing that real success comes from creating genuine value, not just noise.

The Realities of Replacing Traditional CDPs with Reverse ETL Tools

When asked about the growing trend where some reverse ETL customers have found ways to replace their traditional Customer Data Platforms (CDP) with reverse ETL tools, Michael acknowledged that this represents only a very narrow subsegment of the market.
He expressed a concern that the fragmented "Do It Yourself" approach isn't always a practical solution, particularly for most businesses within the enterprise sector.

Michael pointed out that during the pandemic, certain habits had developed, often driven by data engineers working with limited perspectives and without a comprehensive understanding of the complexities of running successful digital marketing campaigns. This lack of integration and understanding has led to an increasing need for a return of the decision-making power to the marketers.

Highlighting the importance of usability, Michael described how mParticle is designed to make it easy for marketers to contextualize and activate data in a low code, no code manner. This approach stands in contrast to other CDPs and modern data stack tools that require intricate knowledge of SQL scripts and schema. A significant portion of his argument revolved around the practical challenges of troubleshooting across multiple different systems. He explained that when a business relies on eight or more different systems to serve the purpose of an end-to-end CDP, it introduces a unique set of complexities. If something goes wrong, troubleshooting becomes an intricate web of challenges involving different account managers. In Michael's words, "the whole thing becomes a bit of a mess."

Takeaway: Michael's insight sheds light on the realities of replacing a traditional CDP with reverse ETL tools. The fragmented approach may work for some but presents complexities and challenges that might be impractical for the broader market. Usability, integration, and streamlined workflows are highlighted as essential elements for optimizing business value, suggesting that while there are different paths to success, a straight line is often the fastest and most efficient route. The emphasis on integration over "hobbyist" solutions presents a compelling argument for businesses looking to evolve in the ever-changing landscape of martech.

Debunking the Myth of Zero Copy Data in Martech

When Michael was asked about the notion of zero copy data, he didn't mince words, immediately cutting through the hype to lay bare the underlying realities. He expressed skepticism about the idea that zero copy data is a magical solution, pointing to the assumption that copying data creates inefficiency and additional access cost.

Michael argued that the cost of storage isn't the main driver of expenses; it's the cost of compute. He believes that creating duplicate copies of data doesn't drastically change costs and, moreover, that there's considerable efficiency to be gained by replicating data for different uses and use cases.

He also emphasized the importance of focusing on the value side of the equation. Minimizing costs is essential to maximizing investable resources for growth, but it shouldn't overshadow the primary goal of driving customer value. Michael expressed concern that focusing on zero copy data might lead businesses down the wrong path, solving for a non-existent problem.

His perspective on the issue extended to a critique of some reverse ETL companies. He noted that they often face a churn problem, luring customers in with the promise of an "easy button" only to disappoint when reality doesn't meet expectations.

Takeaway: Michael's dismantling of the zero copy data concept offers a vital reminder that not all that glitters is gold in the world of martech.
By focusing on the practicalities of costs and the importance of efficiency and value, he encourages businesses to ask the right questions and prioritize what truly matters. His argument against zero copy data serves as a caution against getting swept up in appealing but potentially misguided solutions, emphasizing instead a thoughtful approach to data management that delivers real value.

Examining the Warehouse Native Approach to Martech

When Michael was asked about the increasing trend of warehouse native approaches in martech and its potential impact on companies with large volumes of non-revenue-generating users, his response was insightful. He broke down the question into specific elements, focusing on both the technological and practical aspects of this approach.

He acknowledged the structure of a typical marketing tech stack, with various components like analytics, customer engagement platforms, experimentation tools, and customer support services. However, he questioned the real beneficiaries of having all these tools built natively on the Cloud Data Warehouse. He emphasized that the benefit might lie more with the data warehouse provider than with the customer.

Michael also pointed out that as different vendors leverage multiple datasets and run their own compute cycles on the data warehouse, it's not necessarily clear if that would result in cost savings. He challenged the assumption that avoiding multiple copies of data would inherently save money, stating that there hasn't been enough side-by-side comparison to substantiate this belief.

He concluded that whether it's through a company like Snowflake or mParticle, they are, in essence, reselling cloud compute in different forms. Simply assuming cost savings because of a lack of data duplication might not hold true in practical terms.

Takeaway: Michael's analysis of the warehouse native approach in martech opens a nuanced conversation about the real-world implications of this trend. By examining who benefits from this strategy and challenging the common assumption that it leads to cost savings, he encourages a more critical evaluation. The discussion underscores that what might appear as an intuitive solution needs more robust evidence and careful consideration to understand its true value and impact.

The Insights Layer of mParticle's Approach to Customer Data

It's getting harder and harder to track the packaged vs composable battle these days, there's a ton of overlap with so many tools:
- ETL tools adding rETL features while rETL tools and CDIs become composable CDPs
- CDPs adding product analytics and AI features while product analytics tools add CDP and AI features
- CDPs adding marketing automation features while MAPs add CDP features
- CDPs also adding "warehouse connectors" or "warehouse sync"

Adding an interesting layer to the debate here is extending the capabilities of the CDP into new areas. mParticle made some interesting acquisitions over the last few years:
- Aug 2022: Vidora, an AI personalization platform for customer data
- Jan 2022: Indicative, a customer journey analytics platform to address data entropy

With these capabilities, mParticle is adding an intelligence layer that not many CDPs have.
Not only are they capturing and helping customers move data around, they're helping them make sense of the data, look back to see what happened and also make predictions on what will happen.

Initially, mParticle's efforts were directed at solving mobile data collection challenges, aiming to set up organizations on a durable and scalable API-based system. By addressing unique mobile data challenges that no one else was confronting, they sought to position themselves at the center of mass for many consumer brands.

According to Michael, the solution to these challenges led to mParticle's focus on multi-channel data challenges, revolving around vital components like data quality, governance, and identity resolution. Identity resolution, Michael believes, remains one of the most misunderstood aspects of the whole process.

But the vision didn't stop there. The evolution went beyond these challenges, aiming at what would come next: intelligence and automation. The acquisitions of Vidora and Indicative, as Michael revealed, probably accelerated mParticle's roadmap by four or five years.

Michael brought to light mParticle's ambitious strategy to move beyond mere segmentation tools and "simple pipes." As Michael argued, many existing tools are like "simple pipes" that do exactly what you tell them to do. However, mParticle's approach aims to be an intelligent force that moves the industry forward.

Michael's discourse paints a picture of a company that's not just satisfied with optimizing first-generation capabilities. It's a story of looking ahead, focusing on intelligent pipes and striving to put customers in the best possible position to extract value from their first-party customer data.

Takeaway: By focusing on next-generation capabilities and accelerating their roadmap through strategic acquisitions, mParticle is positioning itself as a leading force in the evolving landscape of martech. The compelling insight is their move towards intelligent pipes that can make sense of the data, not just move it around, guiding the industry into a new era of customer data understanding and utilization.

The Vidora Acquisition: Empowering Marketers with Machine Learning

When asked about the acquisition of Vidora and its integration into mParticle's CDP offering, Michael dove into the compelling dynamics behind this strategic move. The conversation revolved around AI tools like IBM's Watson Studio, Amazon SageMaker, and Google's AutoML, which are generally built for data scientists. What set Vidora apart, however, was its design to be accessible to knowledge workers and marketers, aligning with the founders' vision to democratize machine learning.

Michael was keen to clarify that many tools in the market offer a single type of machine learning, often centered around propensity scores. But Vidora went beyond, impressing him with the building of diverse ML pipelines. The suite enabled regression testing, propensity scoring, uplift analysis, and more, without constraining the types of intelligence or automation that customers could access.

According to Michael, the uniqueness of customer data demands tailored solutions, as no two customers' data look, shape, or behave the same way. With Vidora, now branded as Cortex, mParticle has extended a full suite to users that align with various channels.
The seamless integration of models within mParticle allows marketers to create, sync, and activate models effortlessly, accommodating different channels from paid advertising to customer support.

But what really resonated with Michael's view was how this acquisition tackled a common industry problem: the gap between the creation of cool models and their actual implementation into production. Most in-house models never see the light of day, and those that do are often channel-specific, failing to transcend their original context. Cortex, on the other hand, offers flexibility without channel dependency, backed by mParticle's robust and diverse set of connectors.

Takeaway: mParticle's acquisition of Vidora, rebranded as Cortex, has redefined the machine learning landscape for marketers. It provides a versatile and accessible set of tools that break down conventional barriers and facilitate the practical application of models across diverse channels. By doing so, it empowers marketers to extract greater value from data and paves the way for a more intelligent and integrated approach to customer engagement.

Innovating Martech Pricing: A Fresh Approach to Value-Based Pricing

When asked about the recent shift in mParticle's pricing structure, Michael delves into the exciting philosophy behind this change. He emphatically expresses that the change isn't merely superficial, but rather a product of innovation, something that's more than just a re-packaging of their pricing model. Michael explains the need for de-averaging or de-aggregating pricing, acknowledging that the traditional charging based on users or events is fairly straightforward, but it doesn't capture the full picture. According to Michael, not all events, users, or use cases hold equal value, and treating them as such creates a logjam through the system. This one-size-fits-all approach undermines the ability to provide marketers with appropriate solutions.

The heart of the problem is that this logjam prevents Customer Data Platforms (CDP) from having access to all necessary data, typically due to how they are priced. Michael highlights that when they analyzed how customers were using mParticle, they discovered three distinct use cases: real-time event federation, data maintenance for historical lookup and redundancy, and targeting and personalization.

With this fresh approach, mParticle managed to "unclog the pipes" of data, allowing it to flow where needed and at the right pace. This shift allowed for acceleration in audience calculation and refresh, and extended the look-back window on real-time audiences from a mere 90 days to perpetuity without sacrificing performance.

Takeaway: Michael's insights into mParticle's new pricing structure reveal an innovative and necessary departure from traditional user or event-based pricing. By recognizing the unique value in different data points and use cases, mParticle has managed to not only create a more effective pricing model but also to enhance the functionality and efficiency of their platform. It's a lesson in understanding the complex dynamics of the martech space and the importance of aligning pricing models with actual value and functionality.

Empowering Black Founders with Technology

One of the coolest discoveries when digging through Michael's socials is that he actually created Tech for Black Founders.
He got together with a list of data vendors to provide free software to early-stage startups led by Black founders, as part of an initiative to support Black technologists and entrepreneurs, who currently make up only 1% of founders backed by venture capital in the US. In the midst of 2020, during a peak of social unrest, he found himself pondering how his company, mParticle, could serve the community better. It was more than a fleeting thought; it was a shower epiphany that would soon spark a wave of empowerment for Black technologists and entrepreneurs.

Michael's initiative, which might seem simple, was profound. Recognizing that Black founders made up less than 1% of those backed by venture capital in the U.S., he set out to make a difference. The idea was to provide free software from leading tech companies to early-stage, Black-led start-ups. The aim was to bridge the equity gap, offering services usually costing six to seven figures to those underrepresented.

He texted friends and fellow founders from Braze, Amplitude, Branch, and more. His proposal was met with instant approval, and a simple application page was launched. What happened next was nothing short of extraordinary. The initiative went viral, with 50 to 100 companies reaching out, eager to contribute, and the movement continues to grow, now encompassing hundreds of companies offering their services to Black and other minority tech founders.

Takeaway: Michael's leadership in rallying tech companies to offer free software to Black and minority tech founders is a powerful example of how one person's idea can ignite a movement. It underscores the importance of community and collaboration, and showcases a tangible effort to close the equity gap in the tech industry. Simple, immediate, and impactful, it's a testament to what can be achieved when passion meets purpose.

Finding Balance and Joy in a Multifaceted Life

When asked about how he remains happy and successful amidst his diverse roles as a founder, writer, sports fanatic, dad, animal shelter volunteer, mentor, and board member, Michael's response is a reflection of self-awareness, clarity, and wisdom. His take on balancing a life filled with various passions and responsibilities is both refreshing and deeply inspiring.

First and foremost, Michael's priority is being a dad, a role he deems his most important job. Everything else, whether it's being a CEO or a board member, follows in sequence. He admits that although he doesn't always follow his own advice, the goal isn't merely about becoming proficient at navigating the ups and downs of company building and the entrepreneurial journey. Instead, it's about transcending these fluctuations and reaching a state of equanimity.

Michael stresses that happiness isn't the destination; the pursuit itself is the happiness. Finding joy, meaning, and growth in whatever he's doing is what keeps him motivated and content. He measures his alignment with his work by his excitement every Monday morning and his anxiety every Friday for not getting enough done. If those feelings begin to reverse, that's his cue to reassess his path.

Takeaway: Michael's philosophy on balance and happiness is a profound lesson in understanding one's priorities and embracing the journey itself as the source of joy. His words are a reminder to find contentment in the pursuit, to align passions with purpose, and to recognize the importance of self-awareness in living a fulfilling life.
His perspective turns the conventional wisdom of "work-life balance" on its head, offering a unique insight into living a life filled with meaning and happiness.

Michael Teases Exciting Announcements from mParticle

When asked if there was anything he wanted to share with the audience or any exciting things launching soon, Michael's response was filled with enthusiasm and intrigue. He hinted at some compelling announcements coming from mParticle in September. Without divulging specific details, he provided a glimpse into what the company is focusing on.

Michael mentioned that these new developments would continue to expand on their mission of creating value. They are looking to transpose their services and add value not just in their own data store but across any data store, including the data warehouse ecosystem. Though he kept the specifics under wraps, the anticipation in his voice was clear. The audience was left eagerly awaiting the "cool stuff" that mParticle has in store.

Episode Recap

The martech industry is no stranger to bold claims and sweeping predictions, and the recent debate around Reverse ETL and Composable CDPs is no exception. The air is thick with assertions that traditional CDPs are going the way of the dinosaur, set to be replaced by sleek, modern solutions. Michael, however, has a more grounded take.

For starters, he considers the buzz around Composable CDPs to be a well-executed marketing illusion, a sleight of hand rather than a genuine revolution. Sure, modern data warehouses and Reverse ETL processes are capturing attention, but at the core, the need for data quality, integrity, and privacy still reigns supreme. Michael doesn't view this shift as a death blow to existing CDP vendors like mParticle, but rather a call to adapt, focusing on the movement and activation of data.

Adaptation is a theme that resonates throughout Michael's insights. While acknowledging that some Reverse ETL customers are indeed replacing traditional Customer Data Platforms, he emphasizes that this trend represents a narrow slice of the market. The fragmented "Do It Yourself" approach has its limitations, especially when applied to the complex landscape of enterprise-level marketing. Here, mParticle's approach stands out, prioritizing usability and enabling marketers to contextualize and activate data without becoming entangled in intricate coding.

Michael doesn't shy away from debunking popular narratives in the debate, including the myth of zero copy data. Cutting through the hype, he directs attention to the real drivers of expenses and underscores the importance of focusing on customer value over cost-cutting.

Perhaps the most intriguing aspect of Michael's perspective lies in the strategic evolution of mParticle. The company's recent acquisitions, including Vidora, an AI personalization platform, signal a commitment to intelligence and automation. Moving beyond simple data collection and segmentation, mParticle aims to become an intelligent force that drives the industry forward. Their tools aren't mere "simple pipes"; they're designed to meet the unique needs of customers and provide tailored solutions that enhance understanding and value extraction.

All in all, Michael offers a refreshingly realistic and actionable perspective on the current CDP landscape. Rather than getting caught up in marketing tricks or chasing after the latest shiny object, he encourages a return to core principles and a commitment to intelligent, adaptable solutions.
It's an approach that recognizes the complexity of the industry while providing clear pathways for growth, innovation, and genuine value creation.

Whether you're a marketer, data engineer, or business leader, listen below for insights that offer a solid foundation for navigating the ever-complex world of martech and data platforms, without falling prey to illusions or unnecessary complexity.

✌️
--
Intro music by Wowa via Unminus
Cover art created with Midjourney
What's up everyone,

If you haven't checked out our previous 3 episodes in our AI series you might want to before this episode, we give you a lot of context around some of the events that have happened and will shape the conversation today. So basically:
- How fast could AI change or replace marketing jobs?
- How marketers can stay informed and become AI fluent
- Exploring new paths to future-proof your marketing career in the age of AI

Today we're diving into specific tools… there's a lot of noise out there right now.

What tools you should play around with

In TMW #107 | ChatGPT and the artificial marketer, Juan Mendoza explains that "...generative AI tools are already everywhere. From text generation to video and audio production, to image creation, there's a thriving industry of technologies taking small slices out of our creative talents, packaging them up, and selling them as a SaaS product on a recurring revenue model. If you're wanting to stay relevant five years from now in the marketing technology industry, you're probably going to have to learn some of these platforms. In 2010 we used to say: "there's an app for that". In 2023, we will be saying: "there's an AI for that.""

Outline

Here are some of the topics for today's AI tools episode:
- Key AI technology definitions and how to differentiate real AI tools vs all the noise out there
- Deep dive into tools:
  - Content marketing tools
  - Email and marketing automation tools
  - Predictive analytics tools
  - Text to presentation and pitch deck tools
  - 3D animation tools for product marketers
  - Sales and outreach tools
  - Text to website creator tools
  - Ad and social creative tools
  - AutoGPT and AI agents
- And a bunch of other tools like conversational search engines, 1-1 convos with celebrities and an even longer list of honorable mentions

Here's today's main takeaway: The key to future proofing your marketing career with the ever changing AI landscape is to stay curious, get your hands dirty and experiment fearlessly: fill out some forms, spin up free trials, get on wait lists, and give new AI tools a chance. It's only by actually getting your hands dirty that you'll discover which tools truly work for you and which are just part of the ever growing sea of gimmicky AI tools.

Definition of tech terms

I'll be using some of these terms throughout my analysis of some of these tools so here's a primer explaining the three most common AI technologies used for marketing applications:
- ML (Machine Learning): ML is a way to teach computers to learn by themselves, without having to be programmed for every task. They learn from examples and data patterns to make predictions or decisions. Applications include segmentation, predictive analytics and propensity models.
- NLP (Natural Language Processing): NLP is a subset of ML and focuses on enabling computers to understand, interpret, and generate human language. It includes sentiment analysis, machine translation, named entity recognition, text summarization, and more. NLP techniques usually help computers understand and communicate with humans using everyday language.
- GNN (Graph Neural Network): A GNN, also a subset of ML, is a type of neural network that aims to handle graph-structured data, meaning data organized like a network or web of connected points. Applications include analyzing relationships between different things, like users in a social network or users in your database, or recommending additional products based on past purchase history.
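To ground the ML bucket, here's a minimal sketch of the kind of propensity model mentioned above, using scikit-learn on synthetic data. The feature names, label rule, and numbers are invented for illustration and aren't taken from any specific vendor.

```python
# Sketch: a toy purchase-propensity model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical behavioral features: sessions (30d), emails opened (30d), days since signup
X = np.column_stack([
    rng.poisson(5, n),
    rng.poisson(3, n),
    rng.integers(1, 365, n),
])

# Synthetic label: more engagement makes a purchase more likely
y = (0.3 * X[:, 0] + 0.4 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 1, n)) > 2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Propensity scores: probability each user purchases in the next period
scores = model.predict_proba(X_test)[:, 1]
print("Score cutoff for the top decile:", round(float(np.quantile(scores, 0.9)), 3))
```

The no-code tools covered later in this episode wrap pipelines like this in a UI, but the underlying idea is the same: score every user, then target the highest-propensity segment.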
Real AI vs noise

Part of the reason AI gets a really bad rep, especially in martech, is that anything that's built on if statements or simple Javascript logic gets called AI. There's still plenty of AI startups that shout about their proprietary AI when it's probably just a few decision trees and a few interns running spreadsheets.

Now though, you have an even bigger bucket of noise that's essentially "slight tweak on ChatGPT". Developing AI that was comparable to human performance was a challenging feat prior to GPT's arrival. To achieve this level of sophistication, a company would have had to:
- make a substantial investment, amounting to millions of dollars
- develop its own algorithms
- perform extensive data cleanup

But it's so easy now because GPT is so good out of the box. Allen Cheng puts it simply. Starting a new AI venture can be achieved by simply assembling a few elements: a product developed on GPT-4's user-friendly API, a website, and a marketing campaign. This is why we're seeing hundreds of AI tools pop up every week (there's a sketch of just how thin that wrapper can be after the Copy.ai and Jasper section below).

A lot of these GPT-based products are pretty much indistinguishable from one another. Maybe a handful have a significant advantage over others but most are gimmicky. And over the next few months, every tool is going to be integrating ChatGPT features inside their products in the hopes of making it stickier.

The threat of GPT-n

The part that I find trickiest and the most discouraging about building anything on top of GPT is that any progress you make on fine tuning GPT-4 will totally be wiped out by GPT-5 or GPT-n… Kind of like we talked about in a previous episode with all the tools GPT's plugins killed. So let's cut through the noise and dive into legit AI tools, the ones you should be playing with and experimenting.

Content marketing tools

Copy.ai and Jasper
https://copy.ai/
https://jasper.ai/

AI text generators are very common these days, and the two most popular tools, especially for marketers, are Copy.ai and Jasper. Both allow you to bypass the initial stage of writing where you face a blank page. The promise of these tools is that they help you in generating ideas, saving time on brainstorming and drafting, and ensuring a consistent production flow, freeing you to focus on higher-level strategic tasks, original research, and connecting with your audience.

I've played around with both Jasper and Copy.ai before ChatGPT came out… and they were super unique. But both Copy.ai and Jasper are built on top of GPT, they essentially rent usage of the platform. So they built a pretty nice UI on top of GPT… but now that ChatGPT came out, I'm sure they've seen a drop in usage. Plus GPT-4 is 3 times more expensive.

They still offer marketing specific value though and can get you up to speed faster than using CGPT in the form of templates, prompts and workflows. Both are super powerful, and you could make a case that Jasper outshines its counterpart a bit, especially on the longer content format, but it's also way more expensive. Miguel Rebelo from Zapier has a solid breakdown comparison here https://zapier.com/blog/jasper-vs-copy-ai/
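To show how thin many of these "built on GPT" products can be, here's a minimal sketch of a copy generator: a prompt template plus one API call. It assumes the OpenAI Python SDK's pre-1.0 interface and a placeholder API key; the template and parameters are invented for illustration, not taken from any of the tools above.

```python
# Sketch: the skeleton of a GPT-wrapper copywriting tool.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

TEMPLATE = (
    "You are a marketing copywriter. Write {n} {asset} for {product}, "
    "aimed at {audience}, in a {tone} tone."
)

def generate_copy(asset, product, audience, tone="friendly", n=3):
    prompt = TEMPLATE.format(n=n, asset=asset, product=product,
                             audience=audience, tone=tone)
    # Pre-1.0 OpenAI SDK call; newer SDK versions use a different interface
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
    )
    return resp["choices"][0]["message"]["content"]

print(generate_copy("email subject lines", "a meditation app", "busy parents"))
```

Most of the differentiation in these products lives in the templates, workflows, and UI wrapped around a call like this, which is why a handful stand out and the rest blur together.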
GrammarlyGO
https://www.grammarly.com/grammarlygo

Grammarly, the popular spelling and grammar checker which has been using AI for over a decade already, also entered the generative AI niche last month unveiling GrammarlyGO. You guessed it, built on GPT. It has a particular advantage because Grammarly is already widely used and this is just an extra feature so to speak. Instead of just checking your grammar it can now also help you with drafting documents, composing and responding to emails, editing writing for tone, clarity, and length, and brainstorming ideas or outlines for projects within the apps you're already working in. Lots of tools are going the Grammarly route in adding GPT features to their product, like Notion and more recently Superhuman.

Other AI writing tools

Some of these specialize in SEO, some in long form content, some in short form… they all do similar things:
- Copysmith https://copysmith.ai/
- Anyword https://anyword.com/
- Writesonic https://writesonic.com/
- Copymatic https://copymatic.ai/
- Yaara https://www.yaara.ai/
- Rytr https://rytr.me/
- Frase https://frase.io/

Email

Email is just a channel of potential generative content tools so it's not totally distinguishable from the tools we chatted about in the content category. Chances are that the marketing automation platform or customer engagement platform you're already using has a suite of features they're throwing AI next to. Most of these though are just ML.
- Some tools like Iterable and Braze have propensity models to find users that are likely to perform a purchase event; that's ML, and it's only based on your data set.
- Some tools like Seventh Sense throw AI in front of Send Time Optimization features; these have been around for a long time and are only based on your recipients. This is also ML.
- Some tools throw AI in front of Sentiment Analysis features allowing you to analyze and classify the emotional tone of text into useful data. This is a subset of NLP that uses ML.
- Some tools like 6sense throw AI in front of conversational email assistants that are supposed to be a step up from static drip email sequences. This is also a form of NLP and generative AI.

You're likely to start seeing copy assistants and additional AI features powered on GPT inside of these tools. I wanted to chat about one product that I feel like stands out from others in terms of being built around AI rather than it simply being a feature on the side.

Persado
https://www.persado.com/

One example I've seen (but haven't used myself) of a CEP using more advanced AI is a tool called Persado. The two co-founders are former founders of Upstream in the mobile marketing space. Similar to 6sense's conversational email platform, they've been around for a decade and they claim to leverage NLP + ML to create, optimize, and personalize engagement messaging. So they essentially analyze a bunch of data and identify the most effective emotions, CTAs, language, and phrases to drive engagement. It's worth calling it out because it's more than just predicting user behavior and optimizing the best time to send a message; it takes it a step further by also incorporating NLP techniques, understanding the nuances of human language, and generating custom marketing messages that resonate with a unique user.

One thing that makes Persado unique is it's not built on GPT, it has its own model that's trained on more than 100 billion digital business language impressions across industries. Potentially fewer data points than GPT but arguably better and more relevant impressions. As Lisa Spira, VP of Content Intelligence at Persado explains in an interview with Martechseries, "models like OpenAI's ChatGPT are trained on the entire "Wild West" of the Internet, so their results, while delivered confidently by the model, can be riddled with inaccuracies, or even offensive content".
She adds that "Generative AI tools might be capable of writing anything, but we're able to cut through that noise, and train our generative AI to write in a way that motivates recipients to take actions: to open an email, convert on a shopping cart webpage, or stop scrolling and click on a social media ad."

And not just generative AI. Persado is pushing a term called Motivation AI and they have a really cool example of it. Let's say you're in D2C and you're selling sweaters. Generative AI gives you level 1 and 2 content, which is Intelligibility and Relevance, so it might spit out content like:
- Sweaters are a knitted garment that usually have long sleeves and are designed to be worn over other clothing
- In the fall, sweaters provide an extra layer of warmth in the cooler temperatures. They are a stylish choice for fall fashion because they come in a variety of colors and styles

But the next stage of this is including Emotion to get an output like this: You're going to love these cozy sweaters for fall

And the following stage, where Persado claims to play, is Motivation, where you would get an output like this: A cozy invitation: indulge in our luxuriously soft sweaters this fall

Now this might all be semantics. I'd argue that with great prompting you can get to generative content that includes motivation and emotion.

Optimove
https://www.optimove.com/

This tool could actually go in the CRM, the CDP or even the email category with the acquisitions they've made in the last few years, but another name that I've heard a few times is Optimove. They play in the enterprise arena and primarily serve retailers and gaming operators, but they have an interesting suite of AI tools for marketers. I've personally not used the tool but they claim to provide personalized recommendation campaign orchestration with their AI-based marketing bot (Optibot). It provides what they call self-optimizing campaigns that are essentially advanced A/B/n campaign testing that automatically serves the best treatment to individual customers through the use of ML. Source: https://www.optimove.com/pdf/Optimove-Product-Features.pdf

Predictive Analytics

Predictive analytics is a super cool niche of AI application. It essentially enables us to anticipate trends and patterns of consumer behavior based on a bunch of user data. Done right, you can do cool things like uncover users ready to buy amongst a sea of tire kickers, find free users primed for upsells and anticipate customers about to churn.

Vidora Cortex (mParticle Intelligent Attributes)
https://www.vidora.com/platform/

I wanted to start by calling out a few no-code / low-code predictive analytics / ML pipeline platforms. There's not a ton in this space but it has been growing over the past few years. Many of the big analytics players, like Qlik's AutoML, IBM's Watson Studio, Amazon SageMaker, and Google's AutoML, have an ML-specific tool that does this but it's built for data scientists. Vidora is worth calling out here because it was built more for knowledge workers.

Vidora was founded by 3 academics who, while pursuing their Ph.D.s at Caltech and Berkeley, built systems that utilized machine learning to convert raw data into informed decisions. Their initial vision was to democratize machine learning capabilities. They are a small 20 person startup in SF but their product offers huge potential, so much so that they were acquired by mParticle, a popular CDP.
Vidora's platform, Cortex, provides you with the ability to build distinct pipelines for your customer data, so you can then produce various predictive attributes tailored for particular applications. For example, if your goal is to identify the customers with the highest probability of purchasing your product in the next 30 days, the pipeline allows you to enhance your retargeting ROI by focusing on this user segment.

Similar:
https://www.pecan.ai/
https://www.akkio.com/

6sense
https://6sense.com/

It's worth highlighting 6sense here as well. Not specifically an ML tool but it has a wider set of use cases that are a mix of intent data and predictive analytics and a slice of lead scoring… but they've been around for a decade. They use real-time data about your buyers to predict their buying intent and what their current purchasing stage might be. They uncover some of the magic in a blog post about their predictive engine here. They claim to be using:
- Intent data: web visits, content engagement (1st party) and 3rd party user behavior data from all over the Internet
- Historical data: events that led up to sales in the past
- ICP: firmographic and technographic data

So they layer intent data on top of historical data and filter it through your ICP, and the output is a target list of accounts that show intent to buy and are a good fit for your solution.

The secret sauce here is really the 3rd party intent data. This is hidden data that they are able to assign to leads and accounts. Part of the source here is 6sense's proprietary intent network and other B2B review sites like G2, TrustRadius, and PeerSpot.

Kumo
https://kumo.ai/

Founded by Jure Leskovec, former Stanford University computer science professor who's known for his bold assertion that AI would eventually be capable of predicting the future. He partnered with the former CTO of Pinterest and Airbnb as well as the former Head of Growth AI at LinkedIn to build Kumo.

Using Kumo, companies can not only analyze past events but also predict future opportunities. Kumo uses GNNs (Graph Neural Networks) to identify patterns and relationships in complex data sets that cannot be easily analyzed using traditional statistical or machine learning techniques. This essentially allows marketers to anticipate customer behavior (how much they will spend, which new products they will be more interested in, things that would make them leave for a competitor) and offer personalized product recommendations, promotions, and communication.

Predictive analytics isn't a new concept though. We talked a bit about this in our first episode when we mentioned propensity models, which tons of larger companies employ today. But this is operationalizing it a step further and not just on your company's datasets. And imagine unlocking this capability for startups. So the idea is that marketing operations teams would change their focus to future customer behaviors.

Presentation

Tome
https://beta.tome.app/

There are countless AI productivity tools that are all super similar. But one that's been buzzing lately is Tome. They are founded by product leads from Facebook and Instagram, and recently got $43 million in their Series B funding round. They launched an innovative document-to-presentation AI tool, which utilizes GPT-4 to generate text and images and transform a document into compelling presentations, narratives, and stories. Tome's tile system sets it apart from other generative PowerPoint tools on the market and gives Microsoft a run for its money.
Not only does it offer robust functionality, but it also boasts a sleek and impressive design.

https://gamma.app/

3D Animation

Spline AI
https://spline.design/ai

Spline AI was created by Alejandro León. It's a YC21 startup that's building a 3D design platform comparable to Figma; it basically allows you to convert text to 3D. For decades, creating in 3D has been super hard. Spline is changing this.

You can easily create objects and scenes, edit objects, colors, and properties, add physics and randomness, create animations and events, generate style alternatives, collaborate with others in real-time, and much more. The applications for videos and product marketing are endless here, see a demo here.

Sales

AI-powered sales tools are popping up every week, especially in the niche of email outreach. Personalization, subject line optimization, send-time optimization, sales rep coaching, auto suggest follow-up cadences… just a few of the areas where AI can enhance your email campaigns. NLP can analyze email responses and suggest appropriate next steps, helping your sales team respond efficiently. There's tons of players in this space like Cresta, Lyne, and Regie.

Cresta was funded out of the AI Grant program organized by some of the brightest tech minds in AI. Their founder, S. Zayd Enam, chose to leave his PhD program at Stanford to establish the startup. They provide AI guidance software and support that elevates each representative: behavioral mentoring of agents to enhance performance, locate solutions, and find areas to streamline monotonous tasks.

Lavender, for its part, is an AI sales email coach that assists you in real-time: get more positive replies and write better emails faster.

https://www.lavender.ai/
https://cresta.com/
https://lyne.ai/cold-email-ai/
https://www.regie.ai/

Web creators

Butternut AI
https://butternut.ai/

Tagline says it all: Create a website for your business in 20 seconds with AI. Butternut.ai uses generative AI technology that allows users to create websites by simply answering text prompts. The AI technology is designed to function like a personal developer, providing users with an efficient and cost-effective solution for creating and editing websites.

Butternut.ai is continuously improving its website builder and is currently working on its v2 version that will offer more design and editing functionalities. Users can even command the AI to perform specific tasks like creating a pricing page with a CTA to sign up.

http://Stunning.so
https://10web.io

Ad and social creative

Albert.ai
https://albert.ai/

Albert is an autonomous and self-learning digital marketing tool that uses ML + NLP as well as analytics to automate and optimize your paid ad campaigns, specifically Google's search and programmatic channels, as well as Facebook, Instagram, YouTube and Bing. It can automate bidding and budget allocation, keyword and audience targeting, as well as creative testing and optimization. So obviously the part that stands out here is that unlike many other campaign AI tools that just spit out recommendations and then a marketer takes the action, Albert claims to be one of the first tools that's an autonomous AI: it does the action also. Not only that, it's also making adjustments and improvements constantly. You seem to be able to set guardrails of course. They also claim the ability to identify trends, uncover new audience segments, and optimize ad placements. In their docs they say they are more useful in large data sets and B2C environments.
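As a rough illustration of what "taking the action within guardrails" can mean, here's a minimal, hypothetical sketch of one closed-loop step: nudge budget toward better-performing campaigns, capped by guardrails. This is only meant to show the concept; it is not Albert's actual algorithm, and the campaigns and numbers are invented.

```python
# Sketch: one budget-rebalancing cycle with guardrails.
MAX_SHIFT = 0.15   # guardrail: move at most 15% of a campaign's budget per cycle
MIN_BUDGET = 50.0  # guardrail: never starve a campaign below this floor

campaigns = {
    "search_brand":     {"budget": 1000.0, "roas": 4.2},
    "social_retarget":  {"budget": 800.0,  "roas": 2.1},
    "display_prospect": {"budget": 600.0,  "roas": 0.9},
}

def rebalance(campaigns):
    avg_roas = sum(c["roas"] for c in campaigns.values()) / len(campaigns)
    for c in campaigns.values():
        # Nudge budget up or down based on performance relative to the average
        shift = (c["roas"] - avg_roas) / avg_roas * 0.5
        shift = max(-MAX_SHIFT, min(MAX_SHIFT, shift))  # clamp to the guardrail
        c["budget"] = max(MIN_BUDGET, round(c["budget"] * (1 + shift), 2))
    return campaigns

print(rebalance(campaigns))
```

A production system would also renormalize against the total budget, account for statistical noise in the performance data, and log every change for review.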
AdCreative
https://adcreative.ai/

AutoGPT and AI agents

I don't think we can go a full episode about AI tools without talking about AutoGPT and AI agents. Essentially you can assign an objective to an AI agent and it works on tasks that lead to accomplishing this objective. It makes prompting a bit easier: instead of giving full instructions, the AI identifies the necessary steps to achieve your goal, and some of the more advanced ones generate additional AI agents to assist. You may have seen this on Twitter, I think the first true demo was Yohei Nakajima's impressive demonstration of babyAGI.

Things started blowing up with AutoGPT, released by SigGravitas just last week. Described as a self-running AI agent that can write its own code, heal itself from errors, and access the internet via Google search. It does sound really cool for several reasons: Internet access, long-term and short-term memory management, text generation, and integration with ElevenLabs for AI text-to-speech generation. It's arguable that all of this will be possible with GPT plugins but this is doable today and has different applications, so let's unpack a few examples:

Social media is primed for automation, lots of the work here can already be automated. But imagine setting an AI agent loose with the objective of creating content, scheduling posts (not just posting them) and replying to comments and questions.

Replying to questions is a huge part of customer service, and you could argue this is another area that's already being automated with chat bots, especially the better ones connected to help docs. But imagine setting an AI agent loose with the objective of really understanding customer questions (not just picking up a keyword and recommending a help article); it could speak any language and you could train it on your brand tone.

We're still early but there's already been a ton of innovation here. The sketch below shows the basic loop these agents run.
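Here's a stripped-down sketch of that loop: keep a task queue, have the model execute the next task, then let it propose follow-up tasks until it decides it's done or hits an iteration cap. It assumes the pre-1.0 OpenAI Python SDK and a placeholder API key; real agents like AutoGPT and babyAGI add memory, tool use, and web access on top of this.

```python
# Sketch: a minimal objective-driven agent loop.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(prompt):
    # Pre-1.0 OpenAI SDK call; newer SDK versions use a different interface
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

objective = "Build a list of AI tools with a marketing application released recently."
tasks = ["Draft an initial plan to achieve the objective."]
results = []

for _ in range(5):  # iteration cap instead of a real completion check
    if not tasks:
        break
    task = tasks.pop(0)
    result = ask(f"Objective: {objective}\nTask: {task}\nComplete the task.")
    results.append((task, result))

    new_tasks = ask(
        f"Objective: {objective}\nLast result: {result}\n"
        "List up to 2 short follow-up tasks, one per line, or reply DONE."
    )
    if "DONE" not in new_tasks:
        # crude cleanup of list markers before queueing the new tasks
        tasks.extend(t.lstrip("0123456789.-*• ").strip()
                     for t in new_tasks.splitlines() if t.strip())

for task, result in results:
    print("*", task, "->", result[:80], "...")
```

That add-new-tasks step is exactly the behavior described in the Cognosys example that follows: the agent keeps appending tasks until it thinks the objective is satisfied.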
Here are some of the tools you can play around with today without reading a GitHub repo:

Cognosys
https://www.cognosys.ai/

Cognosys allows you to operate your personal AutoGPT agent in-browser, which is currently available and free to utilize. I've personally played around with it quite a bit and it's pretty wild, it's not perfect yet but it's really cool seeing how it will add new tasks to get to the objective. The beta is still on GPT-3.5. Here's my favorite example:

Main objective was to: Build a list of AI tools that have a marketing application, that were released in the last 3 months.

Here were the follow-up tasks it added to get to the objective:
1. Retrieve a list of AI tools that were released in the last 3 months.
2. Filter the list to only include tools that have a marketing application.
3. Compile the filtered list into a final list of AI tools with marketing applications released in the last 3 months.

The final list of AI tools with marketing applications released in the last 3 months is as follows:
1. Receptiviti: A tool that uses AI to analyze consumer emotions and sentiment in order to optimize marketing strategies.
2. Cortex: A platform that uses AI to help marketers personalize content and improve customer engagement.
3. Adext AI: A tool that uses AI to optimize ad campaigns and improve ad performance.
4. Acoustic: An AI-powered marketing platform that helps businesses automate and optimize their marketing strategies.
5. Pathmatics: A tool that uses AI to provide insights into ad spending and campaign performance for competitive analysis.

It then added a few extra tasks because it didn't seem satisfied with the list.

Research and compile a list of AI tools that have a marketing application and were released in the last 6 months. The result was 6 tools, but not all 5 from the first task were there.

Identify the unique features and functionalities of each AI tool on the list and create a comparison chart to evaluate which tools are most suitable for specific marketing applications. Conduct a survey of marketing professionals to gather feedback and opinions on the identified AI tools and their effectiveness in improving marketing strategies. But then it apologized and said it cannot perform that task…

Research and compile a list of case studies showcasing the successful implementation of AI tools in marketing campaigns. It spun up a case study about Coke using "Albert" to optimize their social media advertising campaigns and Amazon using "Rekognition" to analyze customer behavior and personalize their shopping experience.

Create a step-by-step guide on how to integrate an AI tool into a marketing strategy, including best practices and potential challenges. The step by step wasn't too bad. 1: Define the problem or opportunity, 2: Identify the right AI tool by researching and comparing different tools, 3: Understand the data requirements like type of data, format, frequency of updates, 4: Prepare the data for integration like cleaning and formatting, 5: Integrate the AI tool.

Research and compile a list of AI tools that have a marketing application and were released in the last 2 months. Not sure why it did this for 2 months, but this time it gave me tools with release dates, which was pretty cool. It was also obvious that I wasn't getting tools released in the last 2 months; I was only getting tools released in the 2 months leading up to GPT's latest batch of training data, which was mid 2021.

Create a comparison chart to evaluate the unique features and functionalities of each AI tool on the list and determine which tools are most suitable for specific marketing applications.

Also try:
https://agentgpt.reworkd.ai/
https://app.hyperwriteai.com/earlyAccess
https://godmode.space/

Other

Rewind.ai
https://twitter.com/dsiroker/status/1646895452317700097?s=20
https://www.rewind.ai/

You may have seen their public investor pitch on Twitter. The founder is the former creator of Optimizely, and his team built a way for you to record and store everything you've said or heard or seen, and they make it searchable. Obviously there's huge privacy considerations with something like this. But people don't seem to care haha, they went from 0 to 700k in ARR in 3 months.

Perplexity AI
https://www.perplexity.ai/

Perplexity is a startup that's changing the way we search the web. With their conversational search engine, you can ask questions in plain English and get accurate answers from various sources. And with $26 million in Series A funding (including investments from Elad Gil), they're looking to revolutionize the search engine game.

Their AI technology sets it apart from traditional search engines like Google, and their chatbot-like interface is user-friendly and intuitive; it is built on top of GPT.

Perplexity's focus on accuracy is a breath of fresh air in an era where search engines can be manipulated by advertisers and search engine optimization. The Series A funds will be used to optimize their knowledge database and expand their reach.
All in all, Perplexity AI is definitely one to watch in the coming years!

Character AI
https://beta.character.ai/

Imagine being able to have a one-on-one conversation with your favorite celebrity or fictional character - well, that's now a possibility with Character.ai, an innovative AI website created by two former Google engineers. The platform has been growing in popularity since its launch last September, offering users the opportunity to chat with a wide range of characters for free. What's more, the interactions are so seamless that some users have reported spending hours chatting with their chosen personality, almost forgetting that they're talking to a machine.

However, there's a catch - the interactions are not real, and the AI's responses may not always be accurate or reliable. Despite this, Character.ai has been embraced by fans who are looking for new ways to engage with their favorite personalities, particularly online. This is especially true for fan fiction lovers who are excited about the tool's potential for creating new experiences and making the barrier to entry for writing fan fiction much lower.

But as with any new technology, there are concerns about its impact on real-world relationships, especially if users spend more time on it than with the people they care about. Furthermore, the AI technology could be used by stans to go after a perceived enemy of their favorite star, which could be a problem if it leads to harmful interactions or behavior towards other users.

Despite these concerns, Character.ai represents a new frontier for fan culture, offering a new way for fans to engage with their favorite personalities and characters. The platform's growth in popularity is a testament to the human need for connection, and while it may not substitute actual interactions, it provides a unique and exciting way for fans to engage with their favorite personalities.

Magic
https://magic.dev/

Magic, a startup developing an AI-driven tool to help software engineers write, review, debug, and plan code changes, just raised $23 million in a funding round led by Alphabet's CapitalG. The platform's CEO, Eric Steinberger, was inspired by AI at a young age and is using his experience as an AI researcher to create a tool that will communicate with users in natural language, helping to improve the speed and cost of developing software.

Steinberger claims that Magic can understand legacy code and collaborate with users on code changes, operating like a pair programmer that learns more about the context of both coding projects and developers. The AI colleague that Magic offers will be able to understand code and can help developers navigate it, allowing companies to scale the impact of their current employees and train new employees with less personal coaching.

What sets it apart is that it allows developers to describe what they want in English, and the AI will understand it and collaborate with them on the changes. This means that developers can work with Magic like a colleague and send messages telling it what changes they want to be made to the code. This one isn't built on GPT; the company has developed its own neural network architecture that can rapidly read code and is capable of detecting, warning about and overcoming potential bugs in the code.
Honorable mentions

Galileo AI - Create editable UI designs from text descriptions
https://usegalileo.ai/

Notocat - Write your newsletters in Notion and send them to your subscribers
https://twitter.com/razvanilin/status/1643565644061999104

Brain FM - Music made by AI that's scientifically proven to increase focus
https://www.brain.fm/

Meeting notes and transcription apps
https://tldv.io/ https://vowel.com/

Image Vectorizer - Turn small images into vectors
https://vectorizer.io/

Speech-to-text generator for podcasts that creates notes, timestamps, and summary content
https://castmagic.io/ https://podsqueeze.com/

Text-to-speech AI voice generators
https://murf.ai/ https://beta.elevenlabs.io/

Text-to-music AI music generator
https://mubert.com/

Text-to-SQL query - connect your database, ask a question, get an answer
https://usechannel.com/

Teachable's AI course curriculum generator
https://teachable.com/ai-curriculum-generator https://twitter.com/teachable/status/1641154548419969048?s=20

The opportunities are endless, and AI tools and applications go far beyond marketing. I see too many people on Twitter dunking on using it to help you generate text or as a search engine or fact checker… and they're missing the big picture. People are using AI to:
Help them build custom meal plans
Build custom exercise plans
Help them build sleeping plans
Help them build routines and schedules with their newborns
Plan road trips
Plan date ideas
Act as a therapist
Get movie and book recommendations
Plan a party or event
Design personal budgets
Assist with resume and cover letter writing
Summarize long articles and YouTube videos
Write SQL queries
Explain Python and CSS code in plain English

Conclusion
It's not like our marketing jobs are gonna vanish overnight, but the shift is happening faster than many of us realize. AI's no longer just a loosely backed buzzword; it's doing things today that we used to think were impossible. So, as marketers, we've gotta take this tech seriously. There are 4 main things marketers should be doing to future-proof their careers:
Instead of asking if AI's gonna replace our roles in marketing, we should be talking about how quickly it could happen and what it'll look like if it does.
Staying informed and learning from some of the best podcasts and newsletters about AI. Ben's Bites, No Priors Podcast, A Guide for Thinking Humans, and the AI Exchange are great resources.
Now is the time to figure out if you need to make changes to your current area of speciality. Ask yourself if you should double down on additional areas like data and API services, getting closer to product and customers, or starting to learn about ethics and data privacy.
Stay curious, get your hands dirty, and experiment fearlessly: fill out some forms, spin up free trials, get on wait lists, and give new AI tools a chance. It's only by actually getting your hands dirty that you'll discover which tools truly work for you and which are just part of the ever-growing sea of gimmicky AI tools. ✌️

--
Intro music by Wowa via Unminus
Cover art created with Midjourney
What are the practical applications of AI in smart cities today that can actually benefit citizens? In our latest Smart City Podcast, we interviewed Seth Dobrin, chief AI officer for IBM, and Graham MacDonald, CIO and VP of technology and data sciences at the Urban Institute, about their collaborative AI-based approach to mitigating the adverse effects of gentrification in American cities and communities. The theme of this podcast was "AI for Good." There's a lot of hype surrounding AI, and it can be difficult to find truly practical and actionable use cases, but this is a great use case with direct benefits for people being affected by increasing gentrification in many US cities and communities. It's also a great example of a public/private partnership that is producing results. The focus of this project was to put the tools and technologies offered by companies like IBM into the hands of nonprofit research organizations like the Urban Institute so they can do research and execute projects that provide the greatest benefit to the people who need it most. Seth describes the technology side of the solution and Graham describes the outcomes. Here's what Seth had to say about the project:

"How do we help the Urban Institute better manage gentrification in the communities? How do we ensure that it's done in a trustworthy manner? How do we ensure that it's fair and free of specific types of biases, or in this case, how do we help them identify specific types of biases? How do we make the models transparent? How do we make them robust? And how do we ensure that we're preserving the privacy of people that are being impacted by these models? What we did was we brought our tools, our Cloud Pak for Data with Watson Studio, to the Urban Institute, along with some folks from the data science and AI elite team, to work on this problem."

Armed with these tools, Graham MacDonald and the Urban Institute team were able to develop a solution to measure and mitigate the adverse effects of gentrification, where affluent residents move into long-established neighborhoods composed of middle-class and working-class citizens who in many cases find themselves priced out because they can no longer afford to live there. According to Graham:

"We initially worked with HUD to define neighborhoods that were gentrifying, which were declining, and which were inclusively growing. The vast majority of neighborhoods happened to be in the unchanging or inclusively growing category. And there are a small number of neighborhoods that are in the declining and gentrifying category. And what we were trying to do was to find those neighborhoods, and then use machine learning, taking the data that we have to project or predict, in real-time, what is happening right now."

You'll have to listen to the rest to get the details, but it's an excellent real-world example of a practical application of AI that provides true economic and social benefits to citizens and communities.
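The episode doesn't go into implementation detail beyond Cloud Pak for Data and Watson Studio, but the kind of neighborhood classification Graham describes can be sketched in a few lines. This is a hypothetical illustration, not the Urban Institute's actual pipeline: the feature names, the tracts.csv file, and the use of scikit-learn are all assumptions made for the example.

```python
# Hypothetical sketch: classify census tracts as gentrifying, declining, or
# inclusively growing from tabular features. Column names, the input file, and
# the model choice are illustrative assumptions, not the Urban Institute's work.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

tracts = pd.read_csv("tracts.csv")  # hypothetical file of tract-level features
features = ["median_rent_change", "median_income_change",
            "pct_college_educated_change", "building_permits_per_capita"]
X, y = tracts[features], tracts["hud_category"]  # labels defined with HUD

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=0)

# Most tracts are unchanging or inclusively growing, so weight the rare
# gentrifying/declining classes rather than letting them be drowned out.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                               random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The interesting part in practice is less the model than keeping its predictions transparent and auditable, which is exactly the trustworthiness concern Seth raises.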
Daniel Krook is the Chief Technology Officer of IBM Code and Response and Call for Code, a multi-year competition that inspires developers to create sustainable software solutions to the world's most pressing problems. As CTO of the IBM Code and Response initiative, he ensures that those ideas are delivered where they can make the greatest impact. As seen most recently with Hurricane Dorian, the impact of devastating wildfires, landslides, hurricanes, and other natural disasters continues to grow every year. But with the help of the global developer community and open source technologies, we have the potential to significantly improve preparedness for natural disasters and accelerate relief when they hit. This is where "Call for Code" comes in, right, Daniel?

Enter Call for Code, a $30 million, five-year global challenge that unites developers to create applications powered by open-source technology that can tackle some of the world's biggest challenges. Call for Code was launched by David Clark Cause alongside founding partner IBM in 2018. Now in its second year, the Call for Code program has engaged 24 million developers around the world. This year, more than 180,000 independent and enterprise developers, data scientists, activists, and students from 165 nations participated to create technologies that can help local communities in need. These teams used data and open source-powered technology – including cloud, artificial intelligence, and blockchain – to create more than 5,000 applications.

The five finalists for this year's Call for Code were chosen from an elite group of top technology solutions around the world.

AsTeR (Europe) – During natural disasters, emergency call centers are overwhelmed and lack the human resources to deal with the sudden uptick in calls. Project AsTeR helps prioritize emergency call center calls during natural disasters by transcribing and analyzing them to extract key information, such as the number of victims, type of emergency, and location, and ranking them by level of emergency (see the sketch after the finalist list below). AsTeR then provides first responders with a map identifying areas with high levels of emergency based on the number of people involved and the type of injuries.

Healios (North America) – Healios provides victims of natural disasters with high-quality mental healthcare by streamlining the process for caseworkers to connect with survivors who may be struggling after a traumatic experience. By leveraging the IBM Watson platform, Healios can provide high-quality mental healthcare at scale by way of a mobile application compatible with both iOS and Android.

Prometeo (Europe) – Prometeo is a cognitive platform that collects data from IoT sensors worn by firefighters and sends it to Watson Studio and a Watson Machine Learning service so professionals can monitor their health in real time, detect trends, and recommend interventions.

Rove (North America) – Project Rove is an emergency response solution that connects responders to the victims deemed most in need. It first uses natural language processing and crowdsourced SMS messaging to organize victim health, food, and hygiene needs by priority and location. Rove then groups and highlights at-risk victims on an interactive web dashboard and uses real-time satellite imagery analysis to enable rescue workers to quickly reach victims by generating routes that avoid damaged buildings and roads.
Sparrow Platform (Asia Pacific) – Sparrow Platform is an open-source ecosystem that can ensure medical and psychological preparedness, well-being, and recovery. By leveraging AI, IoT, mesh networks, and cloud, Sparrow enables ubiquitous access to medical help, medical records, information, and alerts during and after disasters.

The global initiative culminates on Saturday, October 12th, when the 2019 Call for Code winner will be announced during an awards ceremony at the United Nations Delegates Dining Room in New York City. The winning team will receive a cash prize of USD $200,000 and support from IBM, The Linux Foundation, and other partners interested in turning the winning idea into a real-world, open-sourced deployment to benefit communities in need. The second-, third-, fourth-, and fifth-place winners also receive cash prizes. The winning solution will be further developed and deployed via IBM Code and Response, an initiative in collaboration with some of the world's leading disaster, technology, and human rights organizations, including: United Nations Office for Disaster Risk Reduction, Clinton Foundation and Clinton Global Initiative University, The Linux Foundation, AT&T, FirstNet Authority, Consumer Technology Association Foundation, and Partnership for Inclusive Disaster Strategies. Code and Response is currently piloting Project Owl, the winning solution from Call for Code 2018, in regions recently affected by natural disasters, including Puerto Rico and Houston, Texas. For more information please visit: callforcode.org
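AsTeR's actual models aren't described here, but the triage idea it's built on (transcribe a call, pull out victim count, emergency type, and location, then rank) can be illustrated with a deliberately simple sketch. The keyword rules, severity weights, and sample calls below are invented for illustration and are not AsTeR's implementation.

```python
# Illustrative sketch of call-triage scoring in the spirit of AsTeR: extract a
# victim count and an emergency type from a transcribed call, then rank calls.
# The keyword rules and weights below are invented for illustration only.
import re

SEVERITY = {"fire": 3, "collapse": 3, "flood": 2, "injury": 2, "power outage": 1}

def triage(transcript: str) -> dict:
    victims_match = re.search(r"(\d+)\s+(?:people|persons|victims)", transcript)
    victims = int(victims_match.group(1)) if victims_match else 1
    emergency = next((kind for kind in SEVERITY if kind in transcript.lower()),
                     "unknown")
    score = victims * SEVERITY.get(emergency, 1)
    return {"victims": victims, "emergency": emergency, "priority": score}

calls = [
    "There are 12 people trapped, the building collapse blocked the exit",
    "Power outage on Main Street, one elderly person needs assistance",
]
for call in sorted(calls, key=lambda c: triage(c)["priority"], reverse=True):
    print(triage(call), "--", call[:50])
```

A real system would use speech-to-text and trained NLP models rather than regexes and keywords, but the ranking step at the end is the part that lets an overwhelmed call center surface the most urgent calls first.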
After more than 20 years delivering timeless music for many of the world's best electronic music labels, Scottish producer Vince Watson returns to Suara PodCats with a great 60-minute set. Make sure to tune in!
This week host Al Martin invites Shadi Copty, who is the Executive Director for AI Tools & Runtimes at IBM. In this episode he talks about automation, tools, and multiple IBM AI technologies.
Check us out on: YouTube, Apple Podcasts, Google Play Music, Spotify, TuneIn, Stitcher
Show notes:
00:00 - Check out "Making Data Simple" on YouTube and SoundCloud.
00:10 - Connect with Producer Steve Moore on LinkedIn and Twitter.
00:15 - Connect with Producer Liam Seston on LinkedIn and Twitter.
00:20 - Connect with Producer Rachit Sharma on LinkedIn.
00:25 - Connect with Host Al Martin on LinkedIn and Twitter.
00:40 - Connect with Shadi Copty on LinkedIn.
04:20 - What is Data Science?
08:00 - What is Watson Studio?
21:45 - What is OpenScale?
The #AI Eye: IBM (NYSE: $IBM) Adds AutoAI for Watson Studio, Collaborates with Wimbledon Using AI and Cloud and NICE (Nasdaq: $NICE) Unveils AI Engine for New Version of Satmetrix
This week's guest is Jorge Castanon, a Sr. Data Scientist for Watson Studio at IBM. Host Al Martin and Jorge discuss some typical data problems currently plaguing the industry and how Watson Studio makes dealing with those problems that much easier. Get ready for an in-depth, technical conversation with two industry experts.
Show notes:
00:10 - Connect with Producer Steve Moore on LinkedIn and Twitter.
00:15 - Connect with Producer Liam Seston on LinkedIn and Twitter.
00:20 - Connect with Producer Rachit Sharma on LinkedIn.
00:25 - Connect with Host Al Martin on LinkedIn and Twitter.
00:41 - Connect with Jorge Castanon on LinkedIn and Twitter.
05:42 - Check out the machine learning hub here.
09:53 - Unsure what customer churn is? Find out in this article.
20:00 - AI is not magic. Read an article discussing the topic here.
24:34 - Learn about SPSS Modeler here.
35:46 - Check out Coursera here.
Nancy Hensley is the Chief Digital Officer for IBM Analytics, a multi-billion dollar IBM software business focused on helping customers transform their companies with data science and analytics. Nancy has over 20 years of experience working in the data business in many facets from development, product management, sales, and marketing. Today’s episode is probably going to appeal to those of you in product management or working on SAAS/cloud analytics tools. It is a bit different than our previous episodes in that we focused a lot on what “big blue” is doing to simplify its analytics suite as well as facilitating access to those tools. IBM has many different analytics-related products and they rely on good design to make sure that there is a consistent feel and experience across the suite, whether it’s Watson, statistics, or modeling tools. She also talked about how central user experience is to making IBM’s tools more cloud-like (try/buy online) vs. forcing customers to go through a traditional enterprise salesperson. If you’ve got a “dated” analytics product or service that is hard to use or feels very “enterprisey” (in that not-so-good way), then I think you’ll enjoy the “modernization” theme of this episode. We covered: How Nancy is taking a 50-year old product such as SPSS and making it relevant and accessible for an audience that is 60% under 25 years of age The two components Nancy’s team looks at when designing an analytics product What “Metrics Monday” is all about at IBM Analytics How IBM follows-up with customers, communicates with legacy users, and how the digital market has changed consumption models Nancy’s thoughts on growth hacking and the role of simplification Why you should always consider product-market fit first and Nancy’s ideas on MVPs The role design plays in successful onboarding customers into IBM Analytics’ tools and what Brian refers to as the “honeymoon” experience Resources and Links: Nancy Hensley on Twitter Nancy Hensley on LinkedIn Quotes: “It’s really never about whether it’s a great product. It’s about whether the client thinks it’s great when they start using it.” –Nancy “Every time we add to the tool, we’re effectively reducing the simplicity of everything else around it.”–Brian “The design part of it for us is so eye-opening, because again, we’ve built a lot of best in class enterprise products for years and as we shift into this digital go-to-market, it is all about the experience…”–Nancy “Filling in that “why” piece is really important if you’re going to start changing design because you may not really understand the reasons someone’s abandoning.”–Brian “Because a lot of our products weren’t born in the cloud originally, they weren’t born to be digitally originally, doesn’t mean they can’t be digitally consumed. We just have to really focus on the experience and one of those things is onboarding.” –Nancy “If they [users] can’t figure out how to jump in and use the product, we’re not nailing it. It doesn’t matter how great the product is, if they can’t figure out how to effectively interact with it. –Nancy Episode Transcript Brian: Today on Experiencing Data, I [talked] to Nancy Hensley, the Chief Digital Officer of IBM Analytics. Nancy brings a lot of experience and has a lot to say about how user experience and design have become very integral to IBM’s success especially as they move their applications into the cloud space. 
They really try to bring the price point down and make their services and applications much more low touch in order to access a new base of subscribers and users. I really enjoyed this talk with her about what the designers and people focused on the product experience have been doing at IBM to keep their company relevant and keep them pushing forward in terms of delivering really good experiences to their customers. I hope you enjoy this episode with Nancy Hensley. Hello everybody. I’m super stoked to have Nancy Hensley, the Chief Digital Officer of IBM Analytics. How’s it going, Nancy? Nancy: Good. I’m happy to be here. Happy Friday. Brian: Yeah. It’s getting cold here in Cambridge, Mass ; [ you’re] in Chicago area, if I remember correctly. Nancy: Yeah, it’s a little bit chilly here as well. Brian: Nice. So it begins. You’ve done quite a bit of stuff at IBM when we had our little pre-planning call. You talked a lot about growth that’s been happening over at IBM. I wanted to talk to you specifically about the role that design and experience has played, how you guys have changed some of your products, and how you’re talking to new customers and that type of thing. Can you tell people, first of all, just a little bit about your background, what you’re currently doing, and then we could maybe […] some of those things. Nancy: Sure, happy to. Thank you for having me again. I think I’m one of those people that doesn’t fit nicely into a box of, “Are you product? Are you marketing?” I am a little bit of both. Most of my IBM career, I have moved in between product marketing and product management. That’s why I love digital so much because it really is a nice mixture. And in particular, growth hacking because it combines all the things I love, including data as a part of what we do. What I’m doing right now as a Chief Digital Officer in the Analytics Division and Hypercloud is how do we transform our products to make them more consumable, more accessible? We have best in class products in data science, in unified governments and integration, in hyper data management products, but our products and our business is built on a traditional face-to-face model. There is even a perception that we’re not as accessible to them and that’s what we’re looking to change. Creating those lower entry points, making it easier for people who didn’t have access to us before, to start small and grow through a digital channel, through a lower entry point product, and then scale up from there. That’s really what we’re trying to do and as part of a bigger mission to really democratize data science—I kind of cringe when I say that word—I think it’s really important for more clients to be able to be more data-driven, have tools that are easy to use, and leverage data science to optimize their business. Part of the way we’re doing that is to develop a digital route to market. We’re pretty excited about it. Brian: I think a lot of our listeners probably come from internal roles of companies. They might be someone that’s purchasing vendor software as opposed to a SaaS company where they may have a closer role to marketing and all that. Can you tell me what you guys are doing there? Part of the thing with my experience is that some of the legacy companies, the older companies that are out there tend to get associated with big giant enterprise installations, really crappy user experience. It’s just so powerful, you have to put up with all these stuff. 
People’s tendency these days to accept that poor experience as just status quo is changing. What have you guys done? Not that you’re to blame but I’m sure that opinion exist. How do you guys adapt to that and wonder if upstart analytics companies coming out with other things, what do you guys to to address the experience? Nancy: There’s certainly a perception that IBM is that big, complicated, enterprise-focused product out there. We see the data. There’s a lot of articles, there’s a lot of feedback, there’s endless report that all validate that clients are trading off complexity or features and functions for consumability, because they got to get things done, they have less people to do it. We fully recognize that. Where we started to look for that was how we first started to make things much more accessible, not just our cloud products because that’s pretty easy if you have stuff in the cloud—it’s pretty accessible—but our on-prime products as well. So, for clients that are running analysis behind the private cloud, whether it’s a statistical product, or a predictive analytics product, or data science project, or even what they’re doing on their data catalog, all of that was not something people would go to the cog to look for it. There are some things they need especially financial and health care, and there’s large and small companies on both sides. One of the things we set out to do is how do we create that cloud-like experience for clients that are running things behind their firewall. We started a project about a year ago to look at some of our on-prime products and create that experience where literally you could, within a couple of clicks, download, try, and be using a product within 15 minutes. That was our goal. As opposed to before where you would have to contact and IBM salesperson, get them to come out and meet with you, and then set-up a trial. That’s what we started to change was that at least make it accessible. As we progressed that capability, we started changing our pricing and packaging to be appropriate, to create that entry-level point, to create a shift to subscription. You want to buy everything on subscription these days, I think. The last part of that shift for us has been to really focus on the experience because a lot of these products were not born digital. We really need to make sure that when clients were coming through that channel, that it was a great experience. That’s really where design experience came into play for us. Brian: How did you know of what’s wrong beyond broad surveys or just that general feeling that like, “Oh, it’s the big giant bloated software…” the stereotype, right? How do you guys get into the meat and potatoes of like you said, sounds like there’s a benchmark there, 15 minutes on that first onboarding experience, but can you tell us a little bit of maybe if you have a specific example about how you figured it out? What do we need to change about this software application to make it easier to get value out of the analytics of the data that’s there? Nancy: I’ve got lots of examples. We’ll opt with one that clients actually are very familiar with, which is SPSS Statistics that a lot of us used back in college. That was a product that actually turns 50 years old this year. It’s been out a while, a lot of people still using it a lot, and most of our base of users for statistics, I think if you look at the demographics of it, over 60% are under the age of 25. 
So, their buying preferences were very different than they were when the product started out in 1968. We looked at the verbatims from our NPS feedback and it was clear that clients really wanted a much more simplified and flexible experience around buying SPSS Statistics and having access to it. A lot of times, students have to get it really quickly for a project, because they might have waited until the last minute, and they wanted a much more flexible subscription-based program. They might only use it for a few months and then come back to it. One of the first things we implemented was to change the buying experience and the consumption model. We didn't actually change the product at that point. We just changed the consumption model to see if that in fact would help us get some growth on that product, and it absolutely did. Since then, we've actually gone back and changed the product as well. It's got a whole new UI for its 50th anniversary. We joke around that it got a facelift for its 50th anniversary. Brian: Does it have a green screen mode? Nancy: It is a completely different experience, not just from a buying perspective, but from a UI perspective as well. We have other products, too, that have been around maybe not 50 years but have been very popular products, like our DB2 Warehouse on Cloud and our DB2 database that clients have been buying for years to run their enterprises. We wanted to make sure, again, as we created SaaS alternatives of these products, that they were extremely consumable. So, we've been looking specifically at: is it easy to figure out which version to buy? How much to buy? What it's going to do for you? Like I said, which version? How do I calculate things? We've been really looking at the experience as if there were no salesperson at all: how do we help clients through that buying experience? Brian: I'm curious. When you decided to help them through the buying experience, does any of that thinking or that strategy around hand-holding someone through that experience happen in the product itself? I'm guessing you're downloading a package at some point, you're running an installer, and at that point, did you continue that hand-holding process to get them out of the weeds of the installation and onboarding, again to the actual, "Is this tool right for what I needed to do?" Everything else being friction up until that point where you're actually working with your data, did you guys carry that through? Can you talk about that? Nancy: You're hitting one of my favorite topics, which is onboarding. Just because a lot of our products weren't born in the cloud originally, weren't born to be digital originally, doesn't mean they can't be digitally consumed. We just have to really focus on the experience, and one of those things is onboarding. Take DB2 in particular, where we went through the process of creating an onboarding experience for DB2 Warehouse on Cloud. For anybody who's used DB2, we do have an updated UI for that. They can jump in and start using it. But that's not everyone; there are people that haven't used it before. So, we just started working with a couple of different onboarding tools to create these experiences.
Our goal was to be able—at least on the offering management side, alongside our partners in design—to create these experiences in a very agile way and make them measurable—my second favorite topic, which is instrumentation—but not put a burden on development, because the fact is, in almost any organization, development wants to build features and functions. Whenever we talk about this, these get prioritized lower because they want to build new capabilities. They're less enthusiastic about building in things like onboarding experiences. What some of the tools like [.DB2..] give us is a way to make it codeless for us. We can create these experiences, then pass the code snippet, and then measure whether those are effective or not, because we actually see those flowing through Segment into our Amplitude as part of that flow. We've got some great feedback as to whether they're working or where they're falling down. We can create checklists of things that we want the clients to do that we know make the product sticky, and see if they actually complete that checklist. It's giving us a much better view, because before, what we would see with a client is: they registered for a trial, they downloaded the trial, they created their instance, and then boom, they fall off the cliff. What happened? Now we're getting a much better view into what's actually going on for the products that have been instrumented, as well as the view we're getting from the onboarding experiences. Brian: For every one of these applications that you're trying to move into a cloud model or simplify, whether it's cloud or not, to me the deployment model doesn't matter. It's really about removing the friction points, whether it's on-premise software or not. I think we all tend to use the word 'cloud' to kind of mean, "Oh, is this a browser-based thing? There's no hard clients? There's no running scripts at the terminal and all that kind of stuff?" Do you guys have a set of benchmarks or something that you establish for every one of these products that are going to go through a redesign? Nancy: We do. We've got a set of criteria, and it's really broken down into two pieces. Whether it's going to be a cloud product or an on-premise product—I actually have a mix of both—there is what we call the MVP side, which might be something that's not born in the cloud, it's not a new product, and we're looking to create a lower entry point, a really good trial experience, a very optimized journey. We're even doing things like taking some of the capabilities that we used to deliver from a technical perspective and making those more digitally available. Online proofs of concept, hands-on labs that you do online instead of waiting for a technical salesperson to come out to see you, ways to get your questions answered faster even before you talk to a sales rep. All of that is included in what we call the MVP portion of the criteria that we look at. Pricing and packaging have got to be right for the product, for the marketplace. You've got to have that right product-market fit: a good, valuable product but a low-enough entry point where somebody can start small and scale up. The second part of the criteria is where the growth magic happens, where we're doubling down a lot more on the experimentation, where we're making sure that we've got the onboarding and instrumentation we want done. In the MVP phase, we don't always get it, but our development partners really understand the value of that now, which is great.
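Nancy doesn't spell out the event schema, so the following is only a minimal sketch of what "onboarding steps flowing through Segment into Amplitude" typically looks like; the event name, properties, product identifier, and write key are assumptions, not IBM's actual instrumentation.

```python
# Minimal sketch of instrumenting an onboarding checklist with Segment's
# Python library; event names and properties are assumptions, not IBM's schema.
import analytics  # pip install analytics-python

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder, not a real key

ONBOARDING_STEPS = ["trial_registered", "trial_downloaded",
                    "instance_created", "first_query_run"]

def track_step(user_id: str, step: str) -> None:
    """Record one onboarding step; downstream tools such as Amplitude can
    then chart where in the checklist users drop off."""
    analytics.track(user_id, "Onboarding Step Completed",
                    {"step": step, "product": "db2-warehouse-on-cloud"})

for step in ONBOARDING_STEPS[:3]:   # e.g. a user who stops after creating an instance
    track_step("user-123", step)
analytics.flush()                   # send queued events before exiting
```

Once events like these land in Amplitude, the checklist Nancy mentions becomes a funnel chart, and the "boom, they fall off the cliff" moment shows up as the step where completion drops.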
Though more often, we’re getting into the second phase of where we’re more doing the transformation. Through that, then we’re getting a lot more feedback, where we can create the onboarding experience. We can do even more on the optimized journey. We’re doing a lot of growth hacking that’s based on terms of optimizing. Things like how clear is information on the pricing page? Is it easy for the customer to figure out what they need to buy? What the pricing is for that? Can they get their questions answered quickly? Can we create a deeper technical experience for them, even outside of the trial itself? Like I mentioned, things we’re doing with our digital technical engagement, thinking that what used to be our tech sales modeling and making it more digital. Brian: That’s cool. When you guys go through this process of testing, are you primarily looking at quantitative metrics then that are coming back from the software that you guys are building, or you’re doing any type of qualitative research to go validate like, “Hey, is the onboarding working well?” Obviously, the quantitative can tell you what. It doesn’t tell you why someone might have abandoned at this point. You guys do any research there? Nancy: We do. It happens in a couple of places. We run squads that are cross-functional across marketing, product, development, and design, each product. Then every Monday we have this thing called Metrics Monday where we get the cross-functional routines together, we share the insights around the metrics. If we had a big spike or we had a big decrease, or if we had a change in engagement, or if we did some experimentation that came out with a very interesting result, we actually share that across teams. We really focus on why did things happen. We have a dashboard. Everybody is religious in using on a daily basis that tracks all of our key metrics, whether it’s visits, engage visits, trials, trial-to-win conversions, number of orders, things like that, but we also want to dive deeper into the ebbs and flows of the business itself, why things are happening, and if the experimentation we’re doing is helping or not helping. We’ve got a lot of focus on that on a daily and a weekly basis. Brian: Do you have any way to access the trial users and do one-on-one usability study or a follow-up with them that’s not so much quantitative? Nancy: Our research team and design will do that and they’ll take a very thorough approach to both recording users using the product, getting their feedback. It’s pretty thorough and also gives us some feedback. We usually don’t do that until the product’s been in the market for a little bit longer. We’ve got some hypothesis of how we think it’s doing, and then the research team will spend a couple of weeks diving a lot deeper into it. We get some great feedback from that. Honestly, as a product person, as much as I’d to think I’m focused on a beautiful experience, my lens versus our designers’ lens is completely different and they just see things we don’t. Brian: Yeah, the friction points and filling in the why’s, it takes time to go and do that, but it can tell you things, it helps you qualify the data, and makes sense especially when you’re collecting. I’m sure at the level that you guys are collecting that, you have a lot of inbound analytics coming back on what’s happening. But it’s really filling in that “why” piece that is really important if you’re going to start changing design because you may not really understand the reasons someone’s abandoning. 
Maybe it’s like, “I couldn’t find the installer. I don’t know where the URL is. I ended up locking the server on my thing and I don’t know how to localhost, but I forgot the port number,” and the whole product is not getting accessed because they don’t know the port number for the server they installed or whatever the heck it is, and it’s like, “Oh, they dropped off. They couldn’t figure it out how to turn it on, like load the browser…” Nancy: Right, and even behavioral things that we don’t always think of, like putting a really cool graphic in the lead space that actually takes the attention away from the callback-ends. We’re all proud of, “Hey look at this cool graphic we built.” One of our designers uses a tool that tracks eye movements and [wait a second] “We’re losing the focus here.” But again, you don’t always see from that lens. The design part of it, for us has been so eye-opening because again, we’ve built a lot of best in class enterprise products for years. As we shift into this digital go-to market, it is all about the experience. It’s all about how good the experience is, how easy the experience is, how frictionless it is, and it’s also about how consumable and accessible the product is in the marketplace. Brian: You mentioned earlier, it sounded like engineering doesn’t want to go back and necessarily add onboarding on all of this. This gets into the company culture of who’s running the ship, so to speak. Is it engineering-driven in your area? How do you guys get aligned around those objectives? I’ve seen this before with larger enterprise clients where engineering is the most dominant force and sprints are often set up around developing a feature and all the plumbing and functionality required to get that feature done, but there’s not necessarily a collective understanding of, “Hey, if someone can’t get from step A to step G, horizontally across time, then all that stuff’s a failure. Step F which you guys went in deep on is great, but no one can get from E to F, so definitely they can’t get to G.” So, that’s you’re qualifier of success. How do you guys balance that? Who’s running the ship? Does your product management oversee the engineering? Can you talk a little bit about that structure? Nancy: We call operating management aside from product management for a reason, because we really do want the operating managers to feel like they’re the CEO of their business and run the ship. Of course, development has a big say at the table, but they have a natural tendency to want to build capabilities. It’s never going to go away. It’s been that way for ages. We just don’t want to fight that tendency. We want them to focus on building, not take six months to build an onboarding experience when they could build in really valuable functionality in that six months instead. So, we really run it as a squad, just like many other companies. Operating management does leave a lot of the strategy with our products and development, but I would say that design is also a really, really chief at the table, for sure, absolutely. Brian: Tell us a little bit about your squads and is this primarily a designer or a UX professional up in your offering manager? Are they a team and then you pull in the engineering representatives as you strategize? Nancy: My team is a digital offering management. We’re a subset of offering management better known as product management. 
We will run the squads, and the squads will be a cross-function of our product marketing team and our performance marketing team, which is demand-gen-type marketing; they run the campaigns. Then there's design, development, and the core product managers, because we're the digital product managers and such. The core product managers have all routes to market; we're just focused on the digital ones. That is the cross-functional squad that gets together on a weekly basis, and they run as a team. From a digital perspective, it's led by the Digital OM for our route to market there. Brian: That's interesting. How do you ensure that there is some kind of IBMness to all these offerings? Your UX practice and offering managers sound like they are part of one organization, but I imagine with some of these tools, you might be crossing boundaries as you go from tool X to tool Y. Maybe you need to send data over, like, "Oh, I have this package of stuff and I need to deploy this model," and then we have a different tool for putting the model into production, and there's some cross user experience there. Can you talk about that? Nancy: That's really why design's been key, because their job is to keep us honest, making sure that the experience is somewhat consistent across the tools so they seem familiar, especially within a segment like data science. Someone might be using our Watson Studio tool and then move to our Statistics or our Modeler tool. There should be a very familiar experience across those. That's why design is really the lead in the experience part of it. On pricing and packaging, we try to maintain consistency as much as possible across all the products again. Whatever level of familiarity you have in how we price and package things should be consistent across the entire segment. So we strive for that as well. On the digital side, in terms of the experience on the actual web, we partner with a team called the Digital Business Group. They are basically the host of our digital platform, and they maintain a level of consistency worldwide across all the products in terms of the digital journey itself. Brian: That's cool that you guys are keeping these checkpoints, so to speak, as stuff goes out the door. You've got all these lenses on it, looking at it from different quality perspectives, I guess you could say. Earlier, you mentioned democratizing data science, and we hear this a lot. Are we talking about democratizing the results of the data science, so at some point there's maybe a decision support tool or there's some kind of outcome coming from the data science? Is that what you're talking about democratizing? Or are you saying it's for data scientists of all levels of ability, so it's more for the toolers as opposed to the [consumers..]? Nancy: It's about the capability. The ability to put more of these products in the hands of people that thought they might have been out of their reach, or that they were too enterprisey, or that they were for big companies. That's one of the key things that we want to do. When you look at some of our products, they start really, really low. Cognos Analytics is another great example, where people might have had a perception that it's really expensive, but we just introduced a new version of it, and it's less than $100 a month. You can get these powerful tools for analysis for a lot less than you think.
Statistics in $99 a month, one of our pay products are significantly less, and it allows these companies that might not have considered doing business with us, to smart small and build up. That’s one of the key things we noticed as we shifted to a subscription model. With that, we started to see double digit increases in the number of clients that were new on products. Just because opened up this new route to market, doesn’t mean that we still didn’t maintain our enterprise face-to-face relationships because, of course, we did, but this allowed us to open up relationships with clients might have not gotten to before. Brian: How are the changes affecting the legacy users that you have? I imagine you probably do have some people that are used to, “Don’t change my toolset,” like, “I’ve been using DB2 for 25 years.” How are they reacting to some of the changes? I imagine at some point maybe you have some fat clients that turn to browser-based interfaces. They undergo some redesign at that point. Do you have a friction between the legacy experience and maybe do you employ the slow change mentality? Or do you say, “Nope, we’re going to cut it here. We’re jumping to the new one and we’re not going to let the legacy drag us back”? You talk about how you guys make those changes? Nancy: We’re shifting towards the subscription model. Our clients are, too. We have clients that are demanding that this is the only way that they actually want to buy software is through a subscription model. So it’s changing for them as well. I think in many ways, it’s a welcome change across the board. I can’t think of any negativity that we’ve had in both the change for the consumption models on a subscription side, as well as the new UI changes and things that we’re doing to the product that really update them and give them a modern feel. I know a lot of the onboarding is a welcome change, even for clients that are familiar with us. It helps them because they have to do less training internally to help people use the tool because now we’re building it into the product. Brian: How do you measure that they’re accepting that? Do you wait for that inbound feedback? Do you see if there’s attrition and then go talk to them? I imagine there’s some attrition that happens when you make a large tooling change. Is there a way to validate that or why that happened? Was it a result of changing too quickly? Any comment on that? Nancy: I think it’s a couple of things. We’re constantly monitoring the flow of MRR and the contraction of revenue where the attrition that we get through some of our subscription, to see if there is any anomalies there. But also we’re always were very in-tune with NPS. A lot of our product managers live and die in the verbatims and with the integration of FLAX, they get a lot of it. They’re coming right at them constantly, that they respond to. We are very, very in-tune with NPS and the feedback we’re getting there. We’re also getting a lot of reviews now on our software using tools like G2 Crowd where we keep an eye on that. I think the feedback doesn’t just come from one place. We’ll look at things like the flow through Amplitude. Our clients, when they’re coming in and during the trial, are they getting stuck someplace? Are they falling off someplace? Are they falling off either at a specific page like the pricing page? Or are they falling off as soon as they get the trial because they don’t know what to do with it? We look at things like that. 
We look at NPS in particular after we’ve introduced new capabilities. Did our NPS go up? What’s the feedback? Are our clients truly embracing this? I think it’s a combination of things. There is a lot of information, a lot of data that we just need to stay in-tune with. We’ve got a couple of dashboards that I know my team wakes up with everyday and takes a look at, and the product team. The core product manager stayed very focused on NPS. Brian: Do you have a way of collecting end-user feedback directly? I would imagine maybe in your newer tools, it’s easier to tool some of that in, but is there any way to provide customer feedback or something to chat or any type of interactivity that’s directly in the tools that you’re creating these days? Nancy: Sure. We are rolling out more end-product nurture capability than we ever had before. That gives them the ability to chat directly within the product, as well as schedule a time with an expert. We’re working in making that even easier through a chat bot. So if you do get stuck and you’re chatting with that bot, you can schedule the appointment with an expert right there. I think there’s lots of ways to do that. I think sometimes I worry that there’s too much data coming at us but we [didn’t have enough..] before, so I’m not going to do that. Brian: Right. It’s not about data, right? It’s, do we have information? Nancy: Do we have information? Exactly. I would say my team spends a lot of time going through that, looking at Amplitude, analyzing the flows, looking in the patterns, in the orders, in the data, and the revenue. With the NPS feedback, it’s a combination of all of that stuff that really gives us a good view. As well as looking at the chat data, and analyzing some of the keywords that’s coming across on the chat, the Watson robots are constantly learning, which is great. We’re using machine learning to get smarter about what do people ask about, and that’s giving us also some good insight into the questions they ask, the patterns of information they’re searching for by product. Brian: In terms of the net promoter score that you talked about, tell me about the fact that how do you interpret that information when not everybody is going to provide a net promoter score? You have nulls, right? Nancy: Right. Brian: How do you factor that in? That’s the argument against NPS as the leading indicator. Sometimes, it’s not having any information. So you may not be collecting positive or potentially negative stuff because people don’t even want to take the time to respond. Do you have comments on how you guys interpret that? Nancy: I think you also have to look at the NPS is going to go up and down. If you have a client who has particularly a bad experience, it’s the week of thanksgiving, there was only X amount of surveys, and one of them had a bad experience that could make your NPS score looks like it drops like a rock. [right] you’ve got to look at it like the stock market. It’s more of the patterns over the long haul, what’s coming across within those patterns of information and feedback the clients are giving you. We react but you have to look at the data set, you have to look at the environmental things that are happening, and take that all into consideration from an NPS perspective. We’re very driven by that and that comes down from our CEO. She’s very cognizant, making sure that the product teams and the development teams are getting that feedback directly from the clients. 
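The volatility Nancy describes follows directly from how NPS is calculated: the percentage of promoters (scores 9-10) minus the percentage of detractors (0-6). A small worked example (the survey scores are made up) shows why a single response in a thin week can move the score dramatically.

```python
# Worked example of why small weekly samples make NPS swing.
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100.0 * (promoters - detractors) / len(scores)

quiet_week = [9, 10, 9, 10, 9, 9, 10, 9, 10, 9]   # only 10 surveys came back
print(nps(quiet_week))        # 100.0

quiet_week[0] = 3             # one bad experience that week
print(nps(quiet_week))        # 80.0 -- a single response moved the score 20 points
```

With thousands of responses the same lone detractor barely registers, which is why she treats NPS like the stock market and reads the trend rather than any one week.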
As an organization—we’re a few years old—the way we used to do that is we would have these client advisory boards. It was a small number of clients that would give us feedback on our products, roadmap, and usability of that. The reality is just that then you end up building the product for 10% of your clients. Now it’s been eye-opening for us as we really open that up. Obviously, we’re still getting feedback from a larger community and client advisory board still, but NPS comments and feedback has really widen the aperture of the feedback we’ve gotten from a broader scope of clients. Brian: You brought up a good point. I had a client who luckily was cognizant of this and they did the same things where they fly their clients, they do two-day workshops, and they gather feedback from them. I was doing some consulting there and he said, “Brian, I’d like you to just go walk around, drop in some of the conversations and just listen, but take it with a grain of salt because I hate these freaking things. All we do is invite people that are willing to come for 2–3 days and tell us how much they love our stuff, it’s a free trip, we’re not getting to the people that don’t like our stuff…” Nancy: Or don’t use it. Brian: Or don’t use it at all. I love the concept of design partners, which is new, where you might have a stable of customers who are highly engaged, but that the good ones are the ones that are engaged who will pummel you when you’re stuff is not happening. They will come down on you and they will let you know. So it’s really about finding highly communicative and people who are willing to tell it like it is. It’s not, we’ll go out and find people that rah-rah, cheerleading crowd for you. Did that inspire the changes? Nancy: Even in the client advisory councils that we had—I ran a couple of them for products like Netezza for a while—we started to change the way we even ran those. I remember the biggest aha moment was, we had a client advisory board for Netezza one year and not too long ago. We decided to run a design thinking camp as part of the agenda, so that they would actually drive what they wanted from our requirements prospectus, going through the design thinking process through that. What came out of it was truly eye-opening. You know how a design thinking process progresses. I think even they were surprised at what they ended up prioritizing across the group of requirement. I think we’re really starting using differently about that feedback from clients. I do remember that day when we were looking at those things and that was not where we thought we would end up. Brian: Do you have a specific memory about something that was surprising to the group that really stuck? Something you guys learned in particular that stuck with you? Nancy: I think we focused a lot more at that point. At the time there was a lot of issues around security and what was one of our leading things going into the next version. What clients actually were not necessarily as verbal about was that, as they were using these appliances and they were becoming more mission-critical, they were doing more mixed workloads. Yes, security was still incredibly important, but what was emerging beyond that for them was workload management because they had this mixed workload that was emerging. So many different groups were jumping in with different types of workload. They have not anticipated on their [day route?] 
appliance. So it was something that came out in the design thinking process that was important to them, but that they actually hadn’t been able to verbalize to us. That was really, really interesting to us: we were on track with the requirements that we had, but beyond that there were requirements that we just hadn’t thought of and, quite honestly, that they hadn’t verbalized.

Brian: You make a good point there. Part of the job of design is to get really good clarity on what the problems are, and they’re not always going to be voiced to you in words or in direct statements. It’s your job to uncover the latent problems that are already there and crystallize them, so that ideally whoever your project manager is in the organization, and your leadership, can understand them and make them concrete, because then you can go and solve them. When they’re vague and not concrete, like “We need better security,” what does that mean specifically? If you start there, and really the problem had to do with the mixed workloads and managing all of that, you can go down a completely different path. You can still write a lot of code, you can build a lot of stuff, and you can do a lot of releases, but if you don’t really know what the problem is that you’re solving, then you’re just going through activity and you’re actually building debt. You’re building more technical debt, you’re wasting money and time for everybody, and you’re not really making the experience better for the customer. I think you made a good point about design thinking helping to uncover the reality of what’s there when it’s not being explicitly stated. Support requests are not going to get you that type of information; they tend to be much more tactical. You’re not going to get a, “Hey, strategically I think the project needs to go this direction.”

Nancy: Right, and if you had asked us an open-ended question, you would have gotten an answer that could have been interpreted slightly differently. I think this is when I became the biggest fan of design: there was this magical person running this design camp for me who got to information that I didn’t think we could get to, and she knew nothing about the product. It was pretty amazing.

Brian: That can happen when you get that fresh lens on things, even when they may not be a domain expert. You get used to seeing the friction points that people have, and you can ask questions in a way that extracts information without bias. You’re not biased by the legacy that might come along with that product, or even that domain space. Sometimes it’s having that almost first-grade, “Tell it to me like I’m your grandfather,” or, “Explain that to me this way,” and then you can start to see where some of those friction points are and make them real. I always enjoyed that process when you’re really fresh. Maybe this happens for other people, but especially as a designer and consultant coming into a product and a new domain, just having that first-grader lens on it: “Hey, could you unpack that for me? What is the workload in there like?” They look at you like, “What?” and you make them unpack it, but you give them that full honesty to really get them to extract it out of their head into words that you and everyone else can understand. That’s where some of those magical things happen, like, “Oh my gosh, we had no idea that this was a problem,” because he or she thought it was so obvious, like, “Of course they know this,” and it’s like, “No.
No one’s ever said that.”

Nancy: Right. We’re experiencing that now. We have a designer embedded in our team who is focused on our growth products. Again, she’s coming in with a complete fresh set of eyes, and the perspective she brings on the experience is just so different; not completely different, but there are things she flushes out that we would never have seen. It’s really helping, because a lot of the time we’re focused on the features-and-functions analysis, and she comes at it from the experience. I don’t want to go to development and tell them this, because it’s like calling their baby ugly, but at the end of the day the client needs to have a great experience. They need to see the value. If, when they’re just trying the product out, they don’t get to that aha experience of “I know how this will help me” within 15 minutes, we’re just not nailing it. If they can’t figure out how to jump in and use the product, we’re not nailing it. It doesn’t matter how great the product is if they can’t figure out how to effectively interact with it.

Brian: Effectively, none of that stuff really exists in their world. It just doesn’t exist because they can’t get to it, so effectively it’s totally worthless. Whatever you have on that island, if there’s no bridge to get there, it doesn’t matter, because it’s just totally inaccessible.

Nancy: Right, and sometimes it’s harder for even the product managers to see it. I was sitting in a demo of a product that we’re going to be releasing; the guy was cruising through the demo, my eyes were glazing over, and I thought, “Boy, we’re going to need some onboarding with that.” Great product, amazing capabilities, but very complex and dense in its capability. It’s never really about whether it’s a great product. It’s about whether the client understands that it’s great when they start using it.

Brian: Yeah, and I think especially for analytics tools, highly technical tools used by engineers and other people who are used to working in this kind of domain, sometimes we gloss over stuff that seems like it would be totally easy or just not important. I have a specific example. I was working on a storage application, a tool for migrating storage between an old appliance and a new appliance. At some point during that workload migration you need something as simple as, “Oh, I need a list of these host names and these IP addresses,” and some other information that’s basically setup-related stuff, and all the tool needed to do was offer a CSV download of a bunch of values to be piped into another thing so the two could talk to each other. It’s not sexy. It’s literally a CSV. It was the only technical lift required, but it was not seen as engineering: it’s not part of the product, it has to do with some other product that you have to go type it into. It’s like, yes, but that bridge is never going to happen otherwise. It takes them ten years to figure out where all these IP addresses and domain names are listed, and all that kind of stuff. It’s not sexy, but if you look at the big picture, the full end-to-end arc, and we all align around what that A-to-G workflow is, there are six steps that have to happen there. This is not sexy and it’s not a new feature, but it’s the blocker to getting from B to E. They’re never going to get there from A, which is where the product begins.
Nancy: We definitely had those discussions in the early days about making it more consumable instead of giving it more features and functions, and can’t we really hack growth that way? That is a mind shift. If you are a design-led organization you get it, and we believe with every part of our being that we are. Sometimes we still have that natural resistance of “we really need to add more features and functions to make this product grow,” but I think we’ve really turned the corner on that. Digital has really been what’s pushed us to do that, because we build the experience into the products as if there were no IBM sales team that’s going to surround you to help make you a success. That’s very different from the way we’ve done things for so many years, and the only way you can do it is by focusing on experience.

Brian: You bring up a good point, and I think it’s worth reiterating to listeners. You can add these features, but they come at a cost. The cognitive load goes up. Every time we add to the tool, we’re effectively reducing the simplicity of everything else around it. As a general rule, removing choice simplifies, because you’re reducing the number of things someone has to think about. So those features don’t come for free. It’s almost like you take on a debt as soon as you add the feature, and then you hope you recoup it: “Oh, there’s high engagement, people are really using that,” so it was a win. If there’s low engagement with it, you’ve just added debt. It’s like Microsoft Word ten years ago. You just added another menu bar and another thing that no one’s going to use, and now it’s even worse. The pile continues to grow, and it’s so hard to take stuff out of software once it’s in there, because you’re going to find, “You know what? IBM’s our client, and they’re using it. IBM brings in $3 million a year. We’re not taking that button out of the tool, end of story.” Now you have that short-term thinking: “We can’t take that out because Nancy’s group uses it.”

Nancy: That’s right, exactly. I think my favorite story when it comes to that is the Instagram story people talk about, where it launched as a product called Burbn. It had all of these great capabilities and it was going nowhere. So they dug into the usability side of things and asked, “Well, what are people actually using?”, which is what we do as well from an instrumentation perspective, and found that people were really only using a couple of things. They wanted to post a picture, they wanted to comment on a picture, and they wanted to like a picture. So they said, “Let’s just do three or four things, do them really great, and relaunch the product,” and of course the rest is history. I think that’s a great illustration that more features and functions, if they’re not important, relevant, and consumable, all three of those things, are not going to give you growth. It comes down to: is it easy to use? Can I get value out of it? Do I immediately see that I can get value out of it? That’s all product market fit. That’s where we shifted our focus, and digital has helped us, too. That’s why my job is so cool.

Brian: Cool. This has been super fun. Can you leave us with maybe an anecdote?
Do you have a big lesson learned or something you might recommend to people who are building internal tools, internal enterprise software, or even SaaS products? Something like, “Hey, if I was starting fresh today, I might do this instead of that.” Anything from your experience you could share?

Nancy: For me, the biggest thing is really focusing on product market fit, because sometimes we build something to be competitively great, but not necessarily competitively great and competitively different. You have to understand that you not only have something that solves somebody’s problem, but that it solves it in a way that’s unique and so valuable that they’ll pay an appropriate price for it. You’ve got to start thinking about that upfront, because oftentimes we’ll build something we see a market opportunity for, but we may not truly understand product market fit, where we know who the target is, what they’ll pay for it, what the value is, and how to get to them. You’ve got to start with that upfront; you really have to understand product market fit or you’ll never be able to grow the product. I’ve got a lot of religion around that, and we try very, very hard to create pricing and packaging around making sure we hit it, but the product has to have that value. It can’t be too overwhelming, it can’t be too underwhelming; it’s got to hit that great value spot.

Brian: Fully agree on getting that fit upfront. You save a lot of time and you avoid a lot of technical debt, instead of jumping into projects that you’re going to have to change immediately because you find out after the fact, and now you’re starting it like…

Nancy: Be an Instagram, not a Burbn, right?

Brian: Exactly. Tell us, where can people find you on the interwebs out there?

Nancy: I probably spend a lot of time on Twitter. Maybe not so much lately, it’s been a little bit crazy, but you can find me on Twitter @nancykoppdw […] or you can find me on LinkedIn. I’m going to try and do better. I’m on Medium; I haven’t been as good about blogging, but getting back to it is one of my goals. I’m usually out there on Medium or Twitter talking about growth hacking and digital transformation. I do podcasts as well.

Brian: Cool. I will put those links in the show notes. Thanks for coming to talk with us, Nancy. It’s been fun. This has been Nancy Hensley, Chief Digital Officer of IBM Analytics. Thanks again for coming on the show, and I hope we get the chance to catch up again.

Nancy: Thank you.
Guest: Jim Miller, QueBIT Master Consultant. Jim Miller has an unusual hobby: he researches and writes books about technology! Jim has written successful books on a variety of topics, but today we are going to talk about his series of books on IBM Watson, the most recent of which is called “Machine Learning with IBM Watson Studio”. In our conversation with Jim, we ask him to explain the various tools behind the IBM Watson brand, and then go on to talk about real-world business use cases for Watson Studio. We wrap up by talking about how ANYONE can get started with the free version!
In this Marketing Over Coffee episode, “THINK, Bias, and Subliminal Messages!”, learn about AI Project Debater, LinkedIn Live Video, and more. Brought to you by our sponsors: ahrefs and LinkedIn. Topics: the IBM THINK wrap-up (Watson Studio, Watson Machine Learning, Watson OpenScale), how do you do AI, bias in AI, and ahrefs Keyword Explorer 3.0. 7:41 Matthew […] From the Marketing Over Coffee Marketing Podcast.
Paul Zikopoulos, VP of big data cognitive systems at IBM, joins us to discuss tactics for both career and personal growth. Paul is also an established author and public speaker, and leverages experiences gained through those pursuits in the advice he gives. Have a pen and paper ready, as there is a lot to take away from this enlightening conversation.
Show notes:
00:00 - Check us out on YouTube.
00:00 - We are now on SoundCloud.
00:10 - Add producer Liam Seston on LinkedIn and Twitter.
00:15 - Add producer Steve Moore on LinkedIn and Twitter.
00:25 - Add host Al Martin on LinkedIn and Twitter.
01:43 - Connect with Paul Zikopoulos on LinkedIn and Twitter.
07:02 - Get up to speed with Watson Studio.
10:16 - Develop a continuous learning lifestyle.
14:27 - How to figure out what you want out of a job.
20:55 - How to succeed with failure.
24:50 - "Get comfortable feeling uncomfortable."
30:54 - Here are some tips to make time for the gym.
38:28 - "Don't let other people define you."
In this Marketing Over Coffee episode, “Nobody Understands Quantum Computing”, learn about Watson Studio, what’s up with Facebook, the Martech Salary Survey, and more. Brought to you by our sponsors: LinkedIn and Optimove. Topics: IBM Think 2018 and the challenge of getting easy explanations for quantum and probable bit computing, and programming quantum computing with virtual circuits. How […] From the Marketing Over Coffee Marketing Podcast.