What's Next With Data. The Big Themes: Business and IT convergence: IT and business operations are increasingly converging. CEOs and executives are recognizing the critical role of technology in achieving strategic business objectives. Juergen is seeing more and more "fusion teams" where IT and business come together. Generative AI impact: Generative AI is a transformative technology with significant implications across industries. Businesses are exploring how generative AI can enhance operations and decision-making and create entirely new opportunities. Businesses that are still on-prem may see the advent of generative AI as a pressing reason to move onto the cloud. Data management and data access: Effective data management is necessary for leveraging generative AI and other advanced technologies. SAP's Datasphere is an answer for remote data access and federation. The Big Quote: "No customer I'm aware of is successful with the central data lake. The efforts of bringing data together in their heterogeneous application landscape are just too big. And therefore, we firmly believe that data federation will be the prevailing method of accessing data."
F1 headed to the Land of the Rising Sun this week as the Japanese GP took place at the Suzuka Circuit. The story of the week for some conspiracy theorists out there (and our very own Sap) was that the technical directive had brought Red Bull back into the pack, which had seen Sainz grab the win last time out. However, right from the first practice, Max was on a mission to remind everyone that it was just a blip. The guys pull apart the week's news as well as doing a breakdown of all the main events over the race weekend. There is also a big discussion around the future of Logan Sargeant. Episode running order: News & Social - a review of the latest F1 news that caught our eye on the internet and social media channels. Brian's Video Vault - https://www.youtube.com/watch?v=-kXjDN_YCrY (Hotel room rates drop ahead of Formula 1 Las Vegas Grand Prix, from the KTNV channel in Vegas) and https://www.youtube.com/watch?v=PsNyag6JGv4 (F1 Drivers Vs WILD Japanese Game Show!)
What do today's employees want in a job? Dr. Steven T. Hunt, Ph.D., SAP's Chief Expert of Technology & Work, joins JD Dillon to share his perspective on what it takes to create a desirable workplace. Steve digs into trendy topics, like generational differences and remote work, and explains why they're distractions from the important ideas organizations should focus on: job design and human psychology. Steve also talks about his new book, Talent Tectonics, and how it can help companies design work experiences that attract, enable and retain exceptional employees. Watch the full video of this episode on the Axonify YouTube Channel. Subscribe for ITK updates and show announcements at axonify.com/itk. Grab a copy of Steve's book at talenttectonics.com. Get your ticket for AxoniCom 2023 in Nashville this October at axonify.com/axonicom. Grab a copy of JD's book - The Modern Learning Ecosystem - at jdwroteabook.com. In The Know is brought to you by Axonify, the proven frontline enablement solution that gives employees everything they need to learn, connect and get things done. With an industry-leading 83% engagement rate, Axonify is used by companies to deliver next-level CX, higher sales, improved workplace safety and lower turnover. To learn more about how Axonify enables over 3.5 million frontline workers in 160-plus countries, in over 250 companies including Lowe's, Kroger, Walmart and Citizens Bank, visit axonify.com.
“We have taught one another terribly when it comes to what leadership/management means.” - Dan Pontefract. In this episode you will learn: how data can shift your perspective when thinking about balance - at work, at home, and with self; a new way to integrate our personal and professional lives; and the work and life factors that contribute to our blooming, budding, stunted, or renewal state. Dan Pontefract: With a thriving curiosity and a heart bursting with empathy, Dan Pontefract is the epitome of a benevolent nerd. Dan's knack for understanding the dynamics of leadership in high-stress environments has made him a respected figure in this field. Seamlessly blending business strategy with empathy, Dan's approach was honed during his time at SAP, where he was part of the incredible team that improved the company culture. His insights on transparent communication and self-awareness are lauded for their practicality and foresight, and his ability to balance family life and work resonates with leaders. Episode Resources: www.worklifebloom.com; Work-Life Bloom Assessment. Catch up on our last 3 episodes: Adversity, Control and Compartmentalizing; Podcasting, Entrepreneurship, Masterminds, Authenticity and Attraction with Lou Mongello; Origin Story. Connect With Us! Instagram, YouTube, Twitter, LinkedIn, www.findmycatalyst.com. Please share this podcast episode with friends and colleagues and leave us a review and rating via your favorite podcast platform.
Welcome to the "Secrets of #Fail," a new pod storm series hosted by Matt Brown. In this series of 2023, Matt dives deep into the world of failures and lessons learned along the way from high-net-worth individuals. Join Matt as he dives into the world of failures and lessons.Series: Secret of #FailApproyo provides full SAP service technology with extensive capabilities in hosting and managed services, upgrades, and migrations for our customers, running any SAP supported core functionality. With over a thousand SAP environments under management around the globe, we support businesses from production landscapes to migrations onto SAP S/4 HANAGet an interview on the Matt Brown Show: www.mattbrownshow.comSupport the show
Taken from a special live event, Tig is joined by her good friends, comedians Fortune Feimster and Mae Martin, in this hilarious episode of Don't Ask Tig. The trio give advice on talking to eight-year-olds, understanding the North Carolina accent, gay dating for the first time, and deciding when it's the right time to move in with someone. Fortune Feimster's credits include the hit Netflix show “FUBAR” and “The Mindy Project,” “Chelsea Lately,” NBC's “Kenan,” and “Champions.” Mae Martin is the star of their own award-winning Netflix series “Feel Good,” and the Netflix special, “SAP,” as well as HBO's “The Flight Attendant.” This episode is sponsored by BetterHelp (go to Betterhelp.com/TIG for 10% off the first month of online therapy) and Indeed (visit indeed.com/TIG for a $75 sponsored job credit). Don't Ask Tig is supported by listeners like you. Donate today: https://support.americanpublicmedia.org/dontasktig-podcast Need advice? Submit your question for Tig at dontasktig.org/contact.
In this episode, Ritu Bhargava, Chief Product Officer, CX/CRM @ SAP, joins us to discuss collaboration, relationship building, and navigating conflicts in large-scale organizations with competing priorities. We cover Ritu's philosophy regarding building bridges between people, how to gain buy-in toward your priorities, unlocking support from fellow exec leaders, and how to address conflicts & competing interests across a massive org. Ritu also shares her strategies for minimizing ego & generating curiosity as an eng leader, her most valuable prioritization tool & how it works for SAP, and identifying / managing conflicts before they become an issue.
ABOUT RITU BHARGAVA: Ritu Bhargava (@ritubhargava) is the Chief Product Officer of SAP Customer Experience (CX). In her role, she heads product, engineering, user experience, strategy, and operations for the entire CX portfolio and recently has been appointed to the Qualtrics Board of Directors. Before joining SAP at the end of 2021, Ritu held various technology leadership positions and most recently came from Salesforce as the Senior Vice President of Engineering for Sales Cloud, Salesforce's flagship product suite. Having started her career as an SAP developer, Ritu went on to work at Oracle for ten years and was responsible for financial applications in various roles. With extensive experience in enterprise applications and the CX space, Ritu brings a strong market focus, both from a business and engineering perspective. Ritu holds a bachelor's degree in Economics and Psychology from Lady Shri Ram College, Delhi University, and an M.B.A. in Finance and IT from the University of Lincolnshire, U.K. She recently joined the Qualtrics Board of Directors and co-chairs the West Coast Advisory Board for Asian University for Women. AUW is a Bangladesh-based nonprofit dedicated to women's education and leadership development. She also enjoys supporting cricketing initiatives in America, having played on the U.S.A. Women's Cricket team.
"If we were to rely purely on just having to re-org for every business requirement that we need to deliver to or a customer need that we need to execute to, we would endlessly be re-orging and it's just not possible, which means that we have to and we must operate in matrix words, which also then further means that we have to be okay with working with each other in a way that is not just, 'Hey, if I don't report to you or you're not on my team is only when I will make you successful.'" - Ritu Bhargava
Check out Jellyfish's Scenario Planner to help you accelerate your development! With Jellyfish's Scenario Planner, you can analyze tradeoffs, and optimize resources - to ensure your highest priority initiatives meet your delivery goals and deadlines! To learn more about how Scenario Planner can help you better accelerate, predict & plan your software delivery
In today's episode of "Alles auf Aktien," financial journalists Daniel Eckert and Holger Zschäpitz talk about the family-friendly side of SAP, the coming flood of IPOs, and interest-rate disappointments among market participants. The episode also covers Adobe, BASF, Brenntag, Porsche AG, Nvidia, C3.AI, Birkenstock, Instacart, Klaviyo, Tesla, Alphabet, Amazon.com, Meta Platforms, Microsoft, Apple, Nvidia, Morgan Stanley, the Der Aktionär Magnificent 7 index certificate (WKN: DA0AC0), iShares Edge MSCI World Size Factor ETF (WKN: A12ATH), Mynaric, 2G Energy, Abo Wind, Union Pacific Railway, CSX, Canadian National Railway, Canadian Pacific Railway, Norfolk Southern, iShares Global Infrastructure ETF distributing (WKN: A0LEW9), Rize Global Sustainable Infrastructure ETF (WKN: A3ENM8), and the BIT Global Internet Leaders 30 R - I fund (WKN: A2N812). We welcome feedback at aaa@welt.de. Disclaimer: The stocks and funds discussed in the podcast are not specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses that arise from acting on these thoughts or ideas. Listening tips: For anyone who wants to know even more, you can hear Holger Zschäpitz every week on the finance and business podcast "Deffner&Zschäpitz." Also from WELT: In the weekday podcast "Kick-off Politik - Das bringt der Tag," we talk with WELT experts to give you the most important background on one top political topic of the day. More at welt.de/kickoff and wherever you get your podcasts. +++ Advertising +++ Want to learn more about our advertising partners? [**All the info & discounts are here!**](https://linktr.ee/alles_auf_aktien) Imprint: https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html
Are you curious about the world of IT consulting and how it has evolved over the years? Join us as we explore Ralph's remarkable journey in the world of IT consulting, from his early days as a programmer to his current role as a sales leader at Navigator Business Solutions. In this episode, we'll gain insights into SAP implementations for life sciences clients, including different project models, the keys to success, and effective partnership management with SAP. If you're curious about IT consulting and SAP, you won't want to miss this episode!
Get in touch with Ralph:
➡️ LinkedIn: https://www.linkedin.com/in/ralph-hess
➡️ Website: Navigator Business Solutions, https://www.nbs-us.com
To get in touch with Diogene:
➡️ LinkedIn: https://www.linkedin.com/in/diogenentirandekura
➡️ Instagram: https://www.instagram.com/diogenentirandekura
➡️ Consulting Lifestyle Community: https://consultinglifestyle.fm/community
➡️ Coaching with Diogene: https://consultinglifestyle.fm/coaching
Support the show. To get in touch with Diogene Ntirandekura, the host of the show: LinkedIn profile, community page for coaching, Instagram profile
SAP and Enterprise Trends Podcasts from Jon Reed (@jonerp) of diginomica.com
On July 20, 2023, during SAP's Q2 earnings call, SAP CEO Christian Klein made bold statements regarding the future of SAP innovation - in particular RISE, AI, and why SAP's close relationship with customers, including opt-in customer data, gives SAP an AI advantage (and premium AI pricing). It is our view that these statements constitute a notable change/evolution in innovation strategy that warrants debate, user group dialogue - and discussion on how customers should track these issues. Since Klein's earnings call statements, other executives inside of SAP have reiterated and provided more context to these statements. These points are not necessarily firm policy yet, but are definitely messaging to look hard at. Since that time, along with my podcast guests Geoff Scott, CEO of ASUG, and analyst Josh Greenbaum, we've had a chance to clarify these points and press questions. We don't have all the answers, and this is still unfolding, but we now have enough information to have an informed debate with our own views. Some of the hottest issues, such as AI pricing, reflect a surging market that has not yet gelled around the value of AI and what that will look like for customers. These AI pricing/value issues are not unique to SAP, but warrant scrutiny nonetheless. We taped this podcast as a way of letting listeners know what we've learned so far, and to lay out the questions we can pursue across SAP events this fall, culminating in SAP TechEd India, virtual SAP TechEd, and ASUG TechConnect New Orleans, which occur around the same time in November. Strong opinions are declared in this podcast, but we do want to point out that SAP may yet shift and further clarify the policies we are debating. Those customers with specific questions should not treat this podcast as definitive, but contact SAP with open questions. We would also like to thank DSAG for providing their views, some of which are incorporated here. Also a thank you to our various contacts at SAP, who worked hard to get us the most current information that is publicly available. They may not agree with our opinions, in fact we are pretty certain they will not, but we are much better informed due to SAP's efforts, and SAP's willingness to engage in an important community dialogue. Any inaccuracies will be corrected in the podcast description, but these topics are too important to wait for perfect information, some of which is likely to be tied to event announcements this fall, yet to be made public. We expect to regroup with a shorter podcast later this fall to discuss what we learned at those events. Finally, we did cover, in brief, our initial takes on SAP's LeanIX acquisition, though we are out of time for a thorough review. This discussion relies almost entirely on published information in our prior blog posts, and in other news stories and transcripts: Geoff: https://www.asug.com/insights/asug-ceo-we-will-clarify-and-communicate-saps-cloud-strategy-for-our-community Josh: https://www.linkedin.com/posts/joshuagreenbaum_read-the-full-story-now-activity-7092925003704266754-eN4b/ Jon: https://diginomica.com/generative-ai-disruptions-rise-and-grow-thomas-saueressig-reveals-next-steps-saps-ai-strategy DSAG: https://dsag.de/presse/on-premise-customers-cut-off-from-innovations/
This week Dr. Doug talks: Cyberdog, Pegasus, Webex, Peach Sandstorm, SAP, Caesar, Penn State, Aaran Leyland, and More News on this edition of the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Follow us on Twitter: https://www.twitter.com/securityweekly Like us on Facebook: https://www.facebook.com/secweekly Show Notes: https://securityweekly.com/swn-325
Welcome to episode 227 of the Cloud Pod podcast - where the forecast is always cloudy! This week your hosts are Justin, Jonathan, Matthew and Ryan - and they're REALLY excited to tell you all about the 161 things announced at Google Next. Literally, all the things. We're also saying farewell to EC2 Classic, Amazon SES, and Azure's Explicit Proxy - which probably isn't what you think it is. Titles we almost went with this week:
In today's episode of "Alles auf Aktien," financial journalists Nando Sommerfeldt and Philipp Vetter talk about Oracle's plunge, a surprising change of CEO at BP, and a rare one-day DAX winner. The episode also covers Commerzbank, Beiersdorf, Henkel, MTU, SAP, Chevron, Birkenstock, Tesla, Twitter, SpaceX, PayPal, Apple, and Huawei. We welcome feedback at aaa@welt.de. Disclaimer: The stocks and funds discussed in the podcast are not specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses that arise from acting on these thoughts or ideas. Listening tips: For anyone who wants to know even more, you can hear Holger Zschäpitz every week on the finance and business podcast "Deffner&Zschäpitz." Also from WELT: In the weekday podcast "Kick-off Politik - Das bringt der Tag," we talk with WELT experts to give you the most important background on one top political topic of the day. More at welt.de/kickoff and wherever you get your podcasts. +++ Advertising +++ Want to learn more about our advertising partners? [**All the info & discounts are here!**](https://linktr.ee/alles_auf_aktien) Imprint: https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html
Looking back, Jeff Coulter is not exactly certain how he landed a spot on a team tasked with designing and implementing the first-ever budgeting and reporting processes responsible for tracking Procter & Gamble's marketing dollars on a single worldwide system. “P&G had hundreds of disparate setups that we had to bring into one system globally,” explains Coulter, recalling the effort behind the information systems upgrade with SAP software that many at the time (the year 2000) deemed to be a historic milestone not only for the packaged goods company but also for industry at large. Coulter had been plucked out of Procter & Gamble's Iowa City office, where he had been working as a cost analyst for such products as Pantene and Scope. The new assignment required Coulter to relocate to Cincinnati, where for the next 2 years he became involved in multiple aspects of the implementation, including the rollout of SAP end-user training across P&G globally. “At the time, any career management at Procter & Gamble was essentially the result of a benevolent dictatorship—you were basically told where you were going to go next,” remembers Coulter, who adds that the experience and training that he gleaned along his P&G way made his time there a very worthy investment. Still, Coulter was eager to return west. Living close to family had always been a priority for the young finance executive, and Cincinnati turned out to be not so short a stint. Consequently, while geography is perhaps not the first reason that people give for having joined Intel Corporation, for Coulter—who would first join the chip maker's Portland, Oregon, complex—it was certainly among his top three impetuses. To move from a consumer products company to a technology company may seem unconventional, but Coulter tells us that his love for learning and his growth mindset helped him to adapt quickly at Intel, where he would remain for the next 6 years. He emphasizes the versatility of finance, which allows professionals to work across various industries. Says Coulter: “I love learning business models and figuring out how they're making money and how to optimize that.” –Jack Sweeney
Case Interview Preparation & Management Consulting | Strategy | Critical Thinking
Welcome to an interview with the author of The Experience Mindset: Changing the Way You Think About Growth, Tiffani Bova. This book details exactly how your company can adopt an Experience Mindset, at scale. It's not enough to know that happy employees equal happy customers. You must have an intentional, balanced approach to company strategy that involves all stakeholders – IT, Marketing, Sales, Operations, and HR – with KPIs and ownership over outcomes. Tiffani Bova is the global customer growth and innovation evangelist at Salesforce, and the Wall Street Journal bestselling author of Growth IQ. Over the past two decades, she has led large revenue-producing divisions at businesses ranging from start-ups to the Fortune 500. As a Research Fellow at Gartner, her cutting-edge insights helped Microsoft, Cisco, Salesforce, Hewlett-Packard, IBM, Oracle, SAP, AT&T, Dell, Amazon-AWS, and other prominent companies expand their market share and grow their revenues. She has been named one of the Top 50 business thinkers in the world by Thinkers50 twice. She is also the host of the podcast What's Next! with Tiffani Bova. Get Tiffani's book, The Experience Mindset: Changing the Way You Think About Growth, here: https://rb.gy/bcal2. Here are some free gifts for you: Overall Approach Used in Well-Managed Strategy Studies free download: www.firmsconsulting.com/OverallApproach; McKinsey & BCG winning resume free download: www.firmsconsulting.com/resumepdf. Enjoying this episode? Get access to sample advanced training episodes here: www.firmsconsulting.com/promo
SAP kicked off the week by announcing it has entered into an agreement to acquire LeanIX, an Enterprise Architecture Management software provider. Salesforce announced the general availability of Bring Your Own Lake (BYOL) Data Sharing with the Snowflake Data Cloud from Salesforce Data Cloud. Salesforce and Databricks announced an expanded strategic partnership that delivers zero-ETL (Extract, Transform, Load) data sharing in Salesforce Data Cloud. IFS announced the appointment of Andre Robberts as the new regional President for Southern and Western Europe and LATAM. Aptean announced the appointment of Miguel Gernaey as the company's new Chief Marketing Officer.
Connect with us!
https://www.erpadvisorsgroup.com
866-499-8550
LinkedIn: https://www.linkedin.com/company/erp-advisors-group
Twitter: https://twitter.com/erpadvisorsgrp
Facebook: https://www.facebook.com/erpadvisors
Instagram: https://www.instagram.com/erpadvisorsgroup
Pinterest: https://www.pinterest.com/erpadvisorsgroup
Medium: https://medium.com/@erpadvisorsgroup
My SAP Experience: In this episode, celebrating the series' first anniversary, guests Fernanda Saraiva, Human Resources Director at SAP Brasil, and Nayla Santos, Partner Success Director, moderated by Daniela Lima, Business Development Manager, cover a range of topics about their professional journeys, the importance of diversity and inclusion at SAP, and how the programs SAP promotes help make the company an increasingly welcoming environment. Both Nayla and Fernanda have had incredible journeys, with valuable contributions to building the history of BWN and of the diversity and inclusion initiatives within SAP. Come check it out!
Banking's move to the cloud: The banking industry is shifting from building and maintaining on-premise infrastructure to adopting cloud services to improve efficiency and agility. The shift is gradual due to the responsibility of safeguarding data and complying with stringent regulatory requirements. Global regulations: Navigating diverse global privacy laws and regulatory requirements remains one of the industry's biggest challenges. There are few common positions, although regional agreements are a step in the right direction. There's an example of a customer who had to talk to more than 50 different regulators while implementing an SAP suite. AI and machine learning: Artificial intelligence (AI) and machine learning are being applied in banking processes to improve efficiency, predict contract consumption, and provide guided buying procedures. The Big Quote: ". . .cloud is not like the old managed services where you took your mess, and you made it someone else's problem to execute. It really does require you to understand what you have today [and] where you want to go. And you have to be prepared for that. Because otherwise, it really could be a very unpleasant deployment . . . we avoided a lot of that by being prepared and taking our time and not rushing."
Tony Baer, Principal at dbInsight, joins Corey on Screaming in the Cloud to discuss his definition of what is and isn't a database, and the trends he's seeing in the industry. Tony explains why it's important to try and have an outsider's perspective when evaluating new ideas, and the growing awareness of the impact data has on our daily lives. Corey and Tony discuss the importance of working towards true operational simplicity in the cloud, and Tony also shares why explainability in generative AI is so crucial as the technology advances.
About Tony: Tony Baer, the founder and CEO of dbInsight, is a recognized industry expert in extending data management practices, governance, and advanced analytics to address the desire of enterprises to generate meaningful value from data-driven transformation. His combined expertise in both legacy database technologies and emerging cloud and analytics technologies shapes how clients go to market in an industry undergoing significant transformation. During his 10 years as a principal analyst at Ovum, he established successful research practices in the firm's fastest growing categories, including big data, cloud data management, and product lifecycle management. He advised Ovum clients regarding product roadmap, positioning, and messaging and helped them understand how to evolve data management and analytic strategies as the cloud, big data, and AI moved the goal posts. Baer was one of Ovum's most heavily-billed analysts and provided strategic counsel to enterprises spanning the Fortune 100 to fast-growing privately held companies.
With the cloud transforming the competitive landscape for database and analytics providers, Baer led deep dive research on the data platform portfolios of AWS, Microsoft Azure, and Google Cloud, and on how cloud transformation changed the roadmaps for incumbents such as Oracle, IBM, SAP, and Teradata. While at Ovum, he originated the term "Fast Data," which has since become synonymous with real-time streaming analytics.
Baer's thought leadership and broad market influence in big data and analytics have been formally recognized on numerous occasions. Analytics Insight named him one of the 2019 Top 100 Artificial Intelligence and Big Data Influencers. Previous citations include Onalytica, which named Baer as one of the world's Top 20 thought leaders and influencers on Data Science; Analytics Week, which named him as one of 200 top thought leaders in Big Data and Analytics; and KDnuggets, which listed Baer as one of the Top 12 top data analytics thought leaders on Twitter. While at Ovum, Baer was Ovum's most visible and publicly quoted IT analyst, and was cited by Ovum's parent company Informa as Brand Ambassador in 2017. In raw numbers, Baer has 14,000 followers on Twitter, and his ZDnet "Big on Data" posts are read 20,000 – 30,000 times monthly. He is also a frequent speaker at industry conferences such as Strata Data and Spark Summit.
Links Referenced: dbInsight: https://dbinsight.io/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is brought to us in part by our friends at RedHat. As your organization grows, so does the complexity of your IT resources.
You need a flexible solution that lets you deploy, manage, and scale workloads throughout your entire ecosystem. The Red Hat Ansible Automation Platform simplifies the management of applications and services across your hybrid infrastructure with one platform. Look for it on the AWS Marketplace.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Back in my early formative years, I was an SRE sysadmin type, and one of the areas I always avoided was databases, or frankly, anything stateful because I am clumsy and unlucky and that's a bad combination to bring within spitting distance of anything that, you know, can't be spun back up intact, like databases. So, as a result, I tend not to spend a lot of time historically living in that world. It's time to expand horizons and think about this a little bit differently. My guest today is Tony Baer, principal at dbInsight. Tony, thank you for joining me.
Tony: Oh, Corey, thanks for having me. And by the way, we'll try and basically knock down your primal fear of databases today. That's my mission.
Corey: We're going to instill new fears in you. Because I was looking through a lot of your work over the years, and the criticism I have—and always the best place to deliver criticism is massively in public—is that you take a very conservative, stodgy approach to defining a database, whereas I'm on the opposite side of the world. I contain information. You can ask me about it, which we'll call querying. That's right. I'm a database. But I've never yet found myself listed in any of your analyses around various database options. So, what is your definition of databases these days? Where do they start and stop?
Tony: Oh, gosh.
Corey: Because anything can be a database if you hold it wrong.
Tony: [laugh]. I think one of the last things I've ever been called is conservative and stodgy, so this is certainly a way to basically put the thumbtack on my chair.
Corey: Exactly. I'm trying to normalize my own brand of lunacy, so we'll see how it goes.
Tony: Exactly because that's the role I normally play with my clients. So, now the shoe is on the other foot. What I view a database is, is basically a managed collection of data, and it's managed to the point where essentially, a database should be transactional—in other words, when I basically put some data in, I should have some positive information, I should hopefully, depending on the type of database, have some sort of guidelines or schema or model for how I structure the data. So, I mean, database, you know, even though you keep hearing about unstructured data, the fact is—
Corey: Schemaless databases and data stores. Yeah, it was all the rage for a few years.
Tony: Yeah, except that they all have schemas, just that those schemaless databases just have very variable schema. They're still schema.
Corey: A question that I have is you obviously think deeply about these things, which should not come as a surprise to anyone. It's like, "Well, this is where I spend my entire career. Imagine that. I might think about the problem space a little bit." But you have, to my understanding, never worked with databases in anger yourself. You don't have a history as a DBA or as an engineer—
Tony: No.
Corey: —but what I find very odd is that unlike a whole bunch of other analysts that I'm not going to name, but people know who I'm talking about regardless, you bring actual insights into this that I find useful and compelling, instead of reverting to the mean of well, I don't actually understand how any of these things work in reality, so I'm just going to believe whoever sounds the most confident when I ask a bunch of people about these things. Are you just asking the right people who also happen to sound confident? But how do you get away from that very common analyst trap?
Tony: Well, a couple of things. One is I purposely play the role of outside observer. In other words, like, the idea is that if basically an idea is supposed to stand on its own legs, it has to make sense. If I've been working inside the industry, I might take too many things for granted. And a good example of this goes back, actually, to my early days—actually this goes back to my freshman year in college where I was taking an organic chem course for non-majors, and it was taught as a logic course not as a memorization course. And we were given the option at the end of the term to either, basically, take a final or do a paper. So, of course, me being a writer I thought, I can BS my way through this. But what I found—and this is what fascinated me—is that as long as certain technical terms were defined for me, I found a logic to the way things work. And so, that really informs how I approach databases, how I approach technology today is I look at the logic on how things work. That being said, in order for me to understand that, I need to know twice as much as the next guy in order to be able to speak that because I just don't do this in my sleep.
Corey: That goes a big step toward, I guess, addressing a lot of these things, but it also feels like—and maybe this is just me paying closer attention—that the world of databases and data and analytics have really coalesced or emerged in a very different way over the past decade-ish. It used to be, at least from my perspective, that oh, that the actual, all the data we store, that's a storage admin problem. And that was about managing NetApps and SANs and the rest. And then you had the database side of it, which functionally from the storage side of the world was just a big file or series of files that are the backing store for the database. And okay, there's not a lot of cross-communication going on there. Then with the rise of object store, it started being a little bit different. And even the way that everyone is talking about getting meaning from data has really seemed to be evolving at an incredibly intense clip lately. Is that an accurate perception, or have I just been asleep at the wheel for a while and finally woke up?
Tony: No, I think you're onto something there. And the reason is that, one, data is touching us all around ourselves, and the fact is, I mean, you can see it in the same way that all of a sudden that people know how to spell AI. They may not know what it means, but the thing is, there is an awareness the data that we work with, the data that is about us, it follows us, and with the cloud, this data has—well, I should say not just with the cloud but with smart mobile devices—we'll blame that—we are all each founts of data, and rich founts of data.
And people in all walks of life, not just in the industry, are now becoming aware of it and there's a lot of concern about can we have any control, any ownership over the data that should be ours? So, I think that phenomenon has also happened in the enterprise, where essentially where we used to think that the data was the DBAs' issue, it's become the app developers' issue, it's become the business analysts' issue. Because the answers that we get, we're ultimately accountable for. It all comes from the data.
Corey: It also feels like there's this idea of databases themselves becoming more contextually aware of the data contained within them. Originally, this used to be in the realm of, "Oh, we know what's been accessed recently and we can tier out where it lives for storage optimization purposes." Okay, great, but what I'm seeing now almost seems to be a sense of, people like to talk about pouring ML into their database offerings. And I'm not able to tell whether that is something that adds actual value, or if it's marketing-ware.
Tony: Okay. First off, let me kind of spill a couple of things. First of all, it's not a question of the database becoming aware. A database is not sentient.
Corey: Neither are some engineers, but that's neither here nor there.
Tony: That would be true, but then again, I don't want anyone with shotguns lining up at my door after this—
Corey: [laugh].
Tony: —after this interview is published. But [laugh] more of the point, though, is that I can see a couple roles for machine learning in databases. One is a database itself, the logs, are an incredible font of data, of operational data. And you can look at trends in terms of when this—when the pattern of these logs goes this way, that is likely to happen. So, the thing is that I could very easily say we're already seeing it: machine learning being used to help optimize the operation of databases, if you're Oracle, and say, "Hey, we can have a database that runs itself." The other side of the coin is being able to run your own machine-learning models in database as opposed to having to go out into a separate cluster and move the data, and that's becoming more and more of a checkbox feature. However, that's going to be for essentially, probably, like, the low-hanging fruit, like the 80/20 rule. It'll be like the 20% of an ana—of relatively rudimentary, you know, let's say, predictive analyses that we can do inside the database. If you're going to be doing something more ambitious, such as a, you know, a large language model, you probably do not want to run that in database itself. So, there's a difference there.
Corey: One would hope. I mean, one of the inappropriate uses of technology that I go for all the time is finding ways to—as directed or otherwise—in off-label uses find ways of tricking different services into running containers for me. It's kind of a problem; this is probably why everyone is very grateful I no longer write production code for anyone. But it does seem that there's been an awful lot of noise lately. I'm lazy. I take shortcuts very often, and one of those is that whenever AWS talks about something extensively through multiple marketing cycles, it becomes usually a pretty good indicator that they're on their back foot on that area. And for a long time, they were doing that about data and how it's very important to gather data, it unlocks the key to your business, but it always felt a little hollow-slash-hypocritical to me because you're going to some of the same events that I have that AWS throws on. You notice how you have to fill out the exact same form with a whole bunch of mandatory fields every single time, but there never seems to be anything that gets spat back out to you that demonstrates that any human or system has ever read—
Tony: Right.
Corey: Any of that? It's basically a, "Do what we say, not what we do," style of story. And I always found that to be a little bit disingenuous.
Tony: I don't want to just harp on AWS here. Of course, we can always talk about the two-pizza box rule and the fact that you have lots of small teams there, but I'd rather generalize this. And I think you really—what you're just describing has been my trip through the healthcare system. I had some sports-related injuries this summer, so I've been through a couple of surgeries to repair sports injuries. And it's amazing that every time you go to the doctor's office, you're filling the same HIPAA information over and over again, even with healthcare systems that use the same electronic health records software. So, it's more a function of that it's not just that the technologies are siloed, it's that the organizations are siloed. That's what you're saying.
Corey: That is fair. And I think at some level—I don't know if this is a weird extension of Conway's Law or whatnot—but these things all have different backing stores as far as data goes. And there's a—the hard part, it seems, in a lot of companies once they hit a certain point of maturity is not just getting the data in—because they've already done that to some extent—but it's also then making it actionable and helping various data stores internal to the company reconcile with one another and start surfacing things that are useful. It increasingly feels like it's less of a technology problem and more of a people problem.
Tony: It is. I mean, put it this way, I spent a lot of time last year, I burned a lot of brain cells working on data fabrics, which is an idea that's in the eye of the beholder. But the ideal of a data fabric is that it's not the tool that necessarily governs your data or secures your data or moves your data or transforms your data, but it's supposed to be the master orchestrator that brings all that stuff together. And maybe sometime 50 years in the future, we might see that. I think the problem here is both technical and organizational. [unintelligible 00:11:58] a promise, you have all these what we used to call island silos. We still call them silos or islands of information. And actually, ironically, even though in the cloud we have technologies where we can integrate this, the cloud has actually exacerbated this issue because there's so many islands of information, you know, coming up, and there's so many different little parts of the organization that have their hands on that. That's also a large part of why there's such a big discussion about, for instance, data mesh last year: everybody is concerned about owning their own little piece of the pie, and there's a lot of question in terms of how do we get some consistency there? How do we all read from the same sheet of music? That's going to be an ongoing problem. You and I are going to get very old before that ever gets solved.
Corey: Yeah, there are certain things that I am content to die knowing that they will not get solved.
If they ever get solved, I will not live to see it, and there's a certain comfort in that, on some level.
Tony: Yeah.
Corey: But it feels like this stuff is also getting more and more complicated than it used to be, and terms aren't being used in quite the same way as they once were. Something that a number of companies have been saying for a while now has been that customers overwhelmingly are preferring open-source. Open source is important to them when it comes to their database selection. And I feel like that's a conflation of a couple of things. I've never yet found an ideological, purity-driven customer decision around that sort of thing. What they care about is, are there multiple vendors who can provide this thing so I'm not going to be using a commercially licensed database that can arbitrarily start playing games with seat licenses and wind up distorting my cost structure massively with very little notice. Does that align with your—
Tony: Yeah.
Corey: Understanding of what people are talking about when they say that, or am I missing something fundamental? Which is again, always possible?
Tony: No, I think you're onto something there. Open-source is a whole other can of worms, and I've burned many, many brain cells over this one as well. And today, you're seeing a lot of pieces about the, you know, the—that are basically giving eulogies for open-source. It's—you know, like HashiCorp just finally changed its license and a bunch of others have in the database world. What open-source has meant is been—and I think for practitioners, for DBAs and developers—here's a platform that's been implemented by many different vendors, which means my skills are portable. And so, I think that's really been the key to why, for instance, like, you know, MySQL and especially PostgreSQL have really exploded, you know, in popularity. Especially Postgres, you know, of late. And it's like, you look at Postgres, it's a very unglamorous database. If you're talking about stodgy, it was born to be stodgy because they wanted to be an adult database from the start. They weren't the LAMP stack like MySQL. And the secret of success with Postgres was that it had a very permissive open-source license, which meant that as long as you don't hold University of California at Berkeley liable, have at it, kids. And so, you see, like, a lot of different flavors of Postgres out there, which means that a lot of customers are attracted to that because if I get up to speed on this Postgres—on one Postgres database, my skills should be transferable, should be portable to another. So, I think that's a lot of what's happening there.
Corey: Well, I do want to call that out in particular because when I was coming up in the naughts, the mid-2000s decade, the lingua franca on everything I used was MySQL, or as I insist on mispronouncing it, my-squeal. And lately, on same vein, Postgres-squeal seems to have taken over the entire universe, when it comes to the de facto database of choice. And I'm old and grumpy and learning new things is always challenging, so I don't understand a lot of the ways that thing gets managed from the context coming from where I did before, but what has driven the massive growth of mindshare among the Postgres-squeal set?
Tony: Well, I think it's a matter of it's 30 years old and it's—number one, Postgres always positioned itself as an Oracle alternative. And the early years, you know, this is a new database, how are you going to be able to match, at that point, Oracle had about a 15-year headstart on it. And so, it was a gradual climb to respectability. And I have huge respect for Oracle, don't get me wrong on that, but you take a look at Postgres today and they have basically filled in a lot of the blanks. And so, it now is a very cre—in many cases, it's a credible alternative to Oracle. Can it do all the things Oracle can do? No. But for a lot of organizations, it's the 80/20 rule. And so, I think it's more just a matter of, like, Postgres coming of age. And the fact is, as a result of it coming of age, there's a huge marketplace out there and so much choice, and so much opportunity for skills portability. So, it's really one of those things where its time has come.
Corey: I think that a lot of my own biases are simply a product of the era in which I learned how a lot of these things work on. I am terrible at Node, for example, but I would be hard-pressed not to suggest JavaScript as the default language that people should pick up if they're just entering tech today. It does front-end, it does back-end—
Tony: Sure.
Corey: —it even makes fries, apparently. There's a—that is the lingua franca of the modern internet in a bunch of different ways. That doesn't mean I'm any good at it, and it doesn't mean at this stage, I'm likely to improve massively at it, but it is the right move, even if it is inconvenient for me personally.
Tony: Right. Right. Put it this way, we've seen—and as I said, I'm not an expert in programming languages, but we've seen a huge profusion of programming languages and frameworks. But the fact is that there's always been a draw towards critical mass. At the turn of the millennium, we thought it was between Java and .NET. Little did we know that basically JavaScript—which at that point was just a web scripting language—[laugh] we didn't know that it could work on the server; we thought it was just a client. Who knew?
Corey: That's like using something inappropriately as a database. I mean, good heavens.
Tony: [laugh]. That would be true. I mean, when I could have, you know, easily just use a spreadsheet or something like that. But so, I mean, who knew? I mean, just like for instance, Java itself was originally conceived for a set-top box. You never know how this stuff is going to turn out. It's the same thing happen with Python. Python was also a web scripting language. Oh, by the way, it happens to be really powerful and flexible for data science. And whoa, you know, now Python is—in terms of data science languages—has become the new SaaS.
Corey: It really took over in a bunch of different ways. Before that, Perl was great, and I go, "Why would I use—why write in Python when Perl is available?" It's like, "Okay, you know, how to write Perl, right?" "Yeah." "Have you ever read anything a month later?" "Oh…" it's very much a write-only language. It is inscrutable after the fact. And Python at least makes that a lot more approachable, which is never a bad thing.
Tony: Yeah.
Corey: Speaking of what you touched on toward the beginning of this episode, the idea of databases not being sentient, which I equate to being self-aware, you just came out very recently with a report on generative AI and a trip that you wound up taking on this. Which I've read; I love it. In fact, we've both been independently using the phrase [unintelligible 00:19:09] to, "English is the new most common programming language once a lot of this stuff takes off." But what have you seen?
What have you witnessed as far as both the ground truth reality as well as the grandiose statements that companies are making as they trip over themselves trying to position as the forefront leader and all of this thing that didn't really exist five months ago?
Tony: Well, what's funny is—and that's a perfect question because if on January 1st you asked "what's going to happen this year?" I don't think any of us would have thought about generative AI or large language models. And I will not identify the vendors, but I did some that had— was on some advanced briefing calls back around the January, February timeframe. They were talking about things like serverless, they were talking about in database machine learning and so on and so forth. They weren't saying anything about generative. And all of a sudden, April, it changed. And it's essentially just another case of the tail wagging the dog. Consumers were flocking to ChatGPT and enterprises had to take notice. And so, what I saw, in the spring was—and I was at a conference from SaaS, I'm [unintelligible 00:20:21] SAP, Oracle, IBM, Mongo, Snowflake, Databricks and others—that they all very quickly changed their tune to talk about generative AI. What we were seeing was for the most part, position statements, but we also saw, I think, the early emphasis was, as you say, it's basically English as the new default programming language or API, so basically, coding assistance, what I'll call conversational query. I don't want to call it natural language query because we had stuff like Tableau Ask Data, which was very robotic. So, we're seeing a lot of that. And we're also seeing a lot of attention towards foundation models because I mean, what organization is going to have the resources of a Google or an OpenAI to develop their own foundation model? Yes, some of the Wall Street houses might, but I think most of them are just going to say, "Look, let's just use this as a starting point."
I also saw a very big theme for your models with your data. And where I got a hint of that—it was a throwaway LinkedIn post. It was back in, I think like, February, Databricks had announced Dolly, which was kind of an experimental foundation model, just to use with your own data. And I just wrote three lines in a LinkedIn post, it was on Friday afternoon. By Monday, it had 65,000 hits. I've never seen anything—I mean, yes, I had a lot—I used to say ‘data mesh' last year, and it would—but didn't get anywhere near that. So, I mean, that really hit a nerve. And other things that I saw, was the, you know, the starting to look with vector storage and how that was going to be supported was it was going to be a new type of database, and hey, let's have AWS come up with, like, an, you know, an [ADF 00:21:41] database here or is this going to be a feature? I think for the most part, it's going to be a feature. And of course, under all this, everybody's just falling in love, falling all over themselves to get in the good graces of Nvidia. In capsule, that's kind of like what I saw.
Corey: That feels directionally accurate. And I think databases are a great area to point out one thing that's always been more than a little disconcerting for me. The way that I've always viewed databases has been, unless I'm calling a RAND function or something like it and I don't change the underlying data structure, I should be able to run a query twice in a row and receive the same result deterministically both times.
Tony: Mm-hm.
Corey: Generative AI is effectively non-deterministic for all realistic measures of that term. Yes, I'm sure there's a deterministic reason things are under the hood. I am not smart enough or learned enough to get there. But it just feels like sometimes we're going to give you the answer you think you're going to get, sometimes we're going to give you a different answer. And sometimes, in generative AI space, we're going to be supremely confident and also completely wrong. That feels dangerous to me.
Tony: [laugh]. Oh gosh, yes. I mean, I take a look at ChatGPT and to me, the responses are essentially, it's a high school senior coming out with an essay response without any footnotes. It's the exact opposite of an ACID database. The reason why we're very—in the database world, we're very strongly drawn towards ACID is because we want our data to be consistent and to get—if we ask the same query, we're going to get the same answer. And the problem is, is that with generative, you know, based on large language models, computers sounds sentient, but they're not. Large language models are basically just a series of probabilities, and so hopefully those probabilities will line up and you'll get something similar. That to me, kind of scares me quite a bit. And I think as we start to look at implementing this in an enterprise setting, we need to take a look at what kind of guardrails can we put on there. And the thing is, that what this led me to was that missing piece that I saw this spring with generative AI, at least in the data and analytics world, is nobody had a clue in terms of how to extend AI governance to this, how to make these models explainable. And I think that's still—that's a large problem. That's a huge nut that it's going to take the industry a while to crack.
Corey: Yeah, but it's incredibly important that it does get cracked.
Tony: Oh, gosh, yes.
Corey: One last topic that I want to get into. I know you said you don't want to over-index on AWS, which, fair enough. It is where I spend the bulk of my professional time and energy—
Tony: [laugh].
Corey: Focusing on, but I think this one's fair because it is a microcosm of a broader industry question. And that is, I don't know what the DBA job of the future is going to look like, but increasingly, it feels like it's going to primarily be picking which purpose-built AWS database—or larger [story 00:24:56] purpose database is appropriate for a given workload. Even without my inappropriate misuse of things that are not databases as databases, there are legitimately 15 or 16 different AWS services that they position as database offerings. And it really feels like you're spiraling down a well of analysis paralysis, trying to pick between all these things. Do you think the future looks more like general-purpose databases, or very purpose-built and each one is this beautiful, bespoke unicorn?
Tony: [laugh]. Well, this is basically a hit on a theme that I've been—you know, we've been all been thinking about for years. And the thing is, there are arguments to be made for multi-model databases, you know, versus a for-purpose database. That being said, okay, two things.
One is that what I've been saying, in general, is that—and I wrote about this way, way back; I actually did a talk at the [unintelligible 00:25:50]; it was a throwaway talk, or [unintelligible 00:25:52] one of those conferences—I threw it together and it's basically looking at the emergence of all these specialized databases.But how I saw, also, there's going to be kind of an overlapping. Not that we're going to come back to Pangea per se, but that, for instance, like, a relational database will be able to support JSON. And Oracle, for instance, does has some fairly brilliant ideas up the sleeve, what they call a JSON duality, which sounds kind of scary, which basically says, “We can store data relationally, but superimpose GraphQL on top of all of this and this is going to look really JSON-y.” So, I think on one hand, you are going to be seeing databases that do overlap. Would I use Oracle for a MongoDB use case? No, but would I use Oracle for a case where I might have some document data? I could certainly see that.The other point, though, and this is really one I want to hammer on here—it's kind of a major concern I've had—is I think the cloud vendors, for all their talk that we give you operational simplicity and agility are making things very complex with its expanding cornucopia of services. And what they need to do—I'm not saying, you know, let's close down the patent office—what I think we do is we need to provide some guided experiences that says, “Tell us the use case. We will now blend these particular services together and this is the package that we would suggest.” I think cloud vendors really need to go back to the drawing board from that standpoint and look at, how do we bring this all together? How would he really simplify the life of the customer?Corey: That is, honestly, I think the biggest challenge that the cloud providers have across the board. There are hundreds of services available at this point from every hyperscaler out there. And some of them are brand new and effectively feel like they're there for three or four different customers and that's about it and others are universal services that most people are probably going to use. And most things fall in between those two extremes, but it becomes such an analysis paralysis moment of trying to figure out what do I do here? What is the golden path?And what that means is that when you start talking to other people and asking their opinion and getting their guidance on how to do something when you get stuck, it's, “Oh, you're using that service? Don't do it. Use this other thing instead.” And if you listen to that, you get midway through every problem for them to start over again because, “Oh, I'm going to pick a different selection of underlying components.” It becomes confusing and complicated, and I think it does customers largely a disservice. What I think we really need, on some level, is a simplified golden path with easy on-ramps and easy off-ramps where, in the absence of a compelling reason, this is what you should be using.Tony: Believe it or not, I think this would be a golden case for machine learning.Corey: [laugh].Tony: No, but submit to us the characteristics of your workload, and here's a recipe that we would propose. Obviously, we can't trust AI to make our decisions for us, but it can provide some guardrails.Corey: “Yeah. Use a graph database. Trust me, it'll be fine.” That's your general purpose—Tony: [laugh].Corey: —approach. Yeah, that'll end well.Tony: [laugh]. 
I would hope that the AI would basically be trained on a better set of training data to not come out with that conclusion.Corey: One could sure hope.Tony: Yeah, exactly.Corey: I really want to thank you for taking the time to catch up with me around what you're doing. If people want to learn more, where's the best place for them to find you?Tony: My website is dbinsight.io. And on my homepage, I list my latest research. So, you just have to go to the homepage where you can basically click on the links to the latest and greatest. And I will, as I said, after Labor Day, I'll be publishing my take on my generative AI journey from the spring.Corey: And we will, of course, put links to this in the [show notes 00:29:39]. Thank you so much for your time. I appreciate it.Tony: Hey, it's been a pleasure, Corey. Good seeing you again.Corey: Tony Baer, principal at dbInsight. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that we will eventually stitch together with all those different platforms to create—that's right—a large-scale distributed database.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
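The “guided experience” Tony and Corey describe above (submit the characteristics of your workload, get back a suggested recipe of services) can be made concrete with a small, purely hypothetical sketch. The Python below uses made-up workload attributes, thresholds, and service names; it is not any vendor's actual recommendation engine, just an illustration of the kind of rule-based guardrails being discussed.

# Hypothetical sketch of a "tell us the use case, we'll suggest a package
# of services" guided experience. All attribute names, thresholds, and
# suggestions are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Workload:
    data_model: str              # e.g. "relational", "document", "graph", "key-value"
    reads_per_write: float       # rough read/write ratio
    needs_strong_consistency: bool

def suggest_stack(w: Workload) -> list[str]:
    """Return a hypothetical bundle of services for the described workload."""
    stack = []
    if w.data_model == "relational" or w.needs_strong_consistency:
        stack.append("managed relational database (ACID)")
    elif w.data_model == "document":
        stack.append("managed document database")
    elif w.data_model == "graph":
        stack.append("managed graph database")
    else:
        stack.append("managed key-value store")
    if w.reads_per_write > 10:
        stack.append("read-through cache in front of the primary store")
    return stack

print(suggest_stack(Workload("relational", reads_per_write=25, needs_strong_consistency=True)))
# ['managed relational database (ACID)', 'read-through cache in front of the primary store']

A real guided experience would presumably weigh far more signals (latency targets, compliance needs, team skills) and, as Tony suggests, might use a trained model rather than hand-written rules, but the point is the same: the customer describes the use case and the platform proposes the package.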
Oscar Trimboli is an award-winning author, host of the Apple award-winning podcast Deep Listening and a sought-after keynote speaker.Along with the Deep Listening Ambassador Community, he is on a quest to create 100 million deep listeners in the workplace.Through his work with chairs, boards of directors, and executive teams, Oscar has experienced first-hand the transformational impact leaders can have when they listen beyond words. He believes that when leadership teams focus their attention and listening, they will build organizations that create powerful legacies for the people they serve – today and, more importantly, for future generations.Oscar is a marketing and technology industry veteran who has worked for Microsoft, PeopleSoft, Polycom, and Vodafone. He consults with organizations including American Express, AstraZeneca, Cisco, Google, HSBC, IAG, Montblanc, PwC, Salesforce, Sanofi, SAP, and Siemens.He is the author of how to listen – discover the hidden key to better communication – the most comprehensive book about listening in the workplace, Deep Listening – Impact beyond words and Breakthroughs: How to Confront Assumptions. Oscar loves his afternoon walks with his wife, Jennie, and their dog Kilimanjaro. On the weekends, you will find him playing Lego with one or all of his four grandchildren. Hosted on Acast. See acast.com/privacy for more information.
Bank Degroof Petercam shares how they're caring for their employees by alleviating commuting-associated stress and incentivizing sustainable transportation alternatives while also streamlining HR processes. Timo Elliott, VP and global innovation evangelist at SAP, talks with guests from Bank Degroof Petercam, Guy Spitaels, Head of HR Service Center, Payroll & Systems, and Kira Zouboff, HR Business Analyst, as well as Bert Van Bree, Solution Architect at Flexso Digital. Their conversation focuses on employee commutes as a way to bring employees back to the office, increase satisfaction, and win the war for talent, all with an eye on sustainability.
For Episode 155 of the Wealth On Any Income Podcast, Rennie is joined by Michael Brenner.Michael is a Top Chief Marketing Officer, Content Marketing and Digital Marketing Influencer, and an international keynote speaker. He is also the author of "Mean People Suck" and "The Content Formula" and the CEO and Founder of Marketing Insider Group, a leading Content Marketing Agency. Michael has worked in leadership positions in sales and marketing for global brands like SAP and Nielsen, as well as for many thriving startups. Today, Michael helps build successful content marketing programs for leading brands and startups alike.Have you heard about ‘The Content Formula'?In this episode, Rennie and Michael cover:01:55 How Michael's frustration as a salesperson with the support he was getting led him to a career switch to marketing.02:52 Michael shares the most important thing to focus on when it comes to marketing.03:38 The charity that is special to Michael, a local charity in his county called NorthStar.04:49 Michael shares who his target market is – people just like him!05:43 Michael shares a little about his biggest failure, which is the focus of his book ‘Mean People Suck' and what else he learned from that experience.08:16 Rennie and Michael discuss how it is “survival of the friendliest” not “survival of the fittest” when it comes to human (and wolf pack) success.10:02 Michael shares a little about the focus of his book ‘The Content Formula' and how you can get a copy by connecting with him on LinkedIn (https://www.linkedin.com/in/michaelbrenner/) and sending him a message that you heard him on our podcast.11:14 Michael shares his favorite tip to answer the question ‘What's the one piece of content, what's the one type of marketing story that every company should focus on?'.“In the book, The Content Formula, I talk about how every company should be sharing the things that they know, the things that are helpful to their target audience. Those messages are much better received - it almost sounds ridiculously obvious - than what we think we're supposed to be saying, ‘Hey, we're awesome, we're great, buy our stuff because it's better than the next guy's'. And no one wants to hear that. It's like the analogy of the guy that walks in the bar and says, ‘Will you marry me?' No one's going to say yes to that. But you know, the person who walks in the bar and says, ‘Hey, everybody, how you doing? What's going on'? The person that asks questions, the person that seems interested, that shares interesting stories, and tells people what might be helpful to them, that's how you can generate and attract the target audience that you want and grow your business.” – Michael BrennerIf you wish to contact Michael, visit his LinkedIn profile at https://www.linkedin.com/in/michaelbrenner/To get a free PDF of the book The Content Formula, send Michael a LinkedIn message saying that you heard him on the Wealth On Any Income Podcast.If you'd like to know how books, movies, and society programs you to be poor, and what the cure is, visit wealthonanyincome.com/tedx. You'll hear Rennie's TEDx talk and can request a free 27-page Roadmap to Complete Financial Choice® and receive a weekly email with tips, techniques, or inspiration around your business or money. AND if you'd like to see how you can increase your wealth and donate to the causes that touch your heart,
please check out our affordable program ‘Wealth with Purpose'.Rennie's Books and Programshttps://wealthonanyincome.com/books/Wealth with Purpose:https://wealthonanyincome.com/wealthwithpurposeRennie's 9 Days to Financial Freedom program:https://wealthonanyincome.com/programsConnect with Rennie Websites:WealthOnAnyIncome.comRennieGabriel.comEmail: Rennie@WealthOnAnyIncome.comLinkedIn: https://www.linkedin.com/in/renniegabriel/Facebook: https://www.facebook.com/WealthOnAnyIncome/Twitter: https://twitter.com/RennieGabrielYouTube: https://www.youtube.com/channel/UCdIkYMOuvzHQqVXe4e_L8PgInstagram: https://www.instagram.com/wealthonanyincome/
James and Paul discuss the upcoming events in the fall conference season, especially the Microsoft Power Platform Conference and SAP TechEd, and come up with their unique spin on a wish list for each event.
Account-Based Growth: Unlocking Sustainable Value Through Extraordinary Customer Focus by Bev Burgess and Tim Shercliff About the Book: Develop long-term relationships, deliver market-beating growth, and create sustainable value with this pragmatic guide to aligning marketing, sales, customer success, and your executives around your most important customers. Many B2B companies make half their profitable revenue from just three percent of their customers, yet don't recognize the significance of these accounts, nor invest appropriately in them. Account-Based Growth introduces a comprehensive framework for improving internal alignment and external engagement with these vital few. It contains bullet-pointed takeaways at the end of each chapter and a comprehensive checklist to help you improve your company's approach to its most important customers. Each framework element is brought to life through viewpoints from industry experts and case studies from leading organizations including Accenture, Fujitsu, Infosys, SAP, Salesforce, ServiceNow, and Telstra. About the Author: Bev Burgess is passionate about the critical role marketing can play in accelerating business growth. Her specialism is the marketing and selling of business services, built through a combination of postgraduate study and the privilege of working with 40 of the world's most influential firms, primarily in the technology and professional services sectors. Bev's background includes senior marketing roles at British Gas, Epson, and Fujitsu, and she was a Senior Vice President at ITSMA, where she led the global ABM Practice and ITSMA's European operations for many years. Bev first codified ABM as a marketing strategy while managing director of ITSMA Europe in 2003. Today Bev is a Founder and Managing Principal at Inflexion Group, delivering thought leadership, consulting, and training to companies around the world that are designing, developing, and implementing account-based growth programs. Bev holds an MBA in strategic marketing and a BSc Honours degree in business and ergonomics. She is a Fellow of the Chartered Institute of Marketing and has served as an international trustee. Her first book, Marketing Technology as a Service, was published by Wiley in 2010, exploring proven techniques to create value through services based on an infrastructure of technology. Her most recent, A Practitioner's Guide to Account-Based Marketing (with Dave Munn, Kogan Page 2021, 2017) explains how to use ABM to accelerate growth in strategic accounts. Both editions of that book were featured on The Marketing Book Podcast episodes 117 and 373 with Dave Munn. Executive Engagement Strategies, published by Kogan Page in 2020, explains how to have conversations that deepen executive relationships and build sustainable growth with key clients. And, interesting fact – she was a competitive ballroom dancer! Click here for this episode's website page with the links mentioned during the interview... https://www.salesartillery.com/marketing-book-podcast/account-based-growth-bev-burgess
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Doug Adamic is the CRO @ Brex and leads the company's revenue and growth strategy. Prior to Brex, Doug was most recently the Chief Revenue Officer at SAP Concur, a provider of travel spend management solutions and services. During his 16-year tenure there, he oversaw an organization of 600+ employees and was responsible for all aspects of revenue-generating go-to-market strategies and departments. Prior to SAP Concur, he had a five-year tenure as an Enterprise Sales Manager for Kronos, Inc. In Today's Episode with Doug Adamic We Discuss: 1. Entry into Sales: Does Doug believe that love of sales is innate or can be learned? When did he discover his love? What does Doug know now about sales that he wishes he had known when he started? What are 1-2 of his biggest takeaways from leading 600+ people at SAP? 2. Discovery, Pipeline and Qualification: What are the three core reasons why companies buy software today? How do the best sales teams use those needs to get deals done fast? What does great sales discovery mean today? Why do you have to make customers feel uncomfortable to understand their true needs? What are the biggest mistakes sales teams make when asking questions, determining customer pain, willingness to pay, etc.? Why does Doug believe that everyone in the company is responsible for demand creation? What are the core pillars to success in qualification? Where do so many go wrong? 3. Getting Deals Done: Why does Doug disagree that now is the hardest time to be selling? Are companies buying new software today? What is the secret to opening up organizations that say they are not open to buying new software? How can sales teams create multiple champions in a prospect? How can they determine who is really a buyer vs who is an influencer in a prospect? What are the biggest tactics that can be used to reduce sales cycles and create urgency in a sales process? 4. Discounting, Trust and Deal Reviews: What is a good reason to lose a deal? What is a bad reason to lose a deal? How do Doug and Brex conduct deal reviews? What makes a good vs a bad deal review? What is the fastest way to lose trust either with prospects or with customers? Why does Doug believe discounting is BS and should not be used?
SAP and Google Cloud announced an expanded partnership to help enterprises harness the power of data and generative AI. IFS announced it has signed a definitive agreement to acquire Falkonry, Inc. UKG began the week by providing a business update for the third quarter of fiscal 2023, ending June 30, 2023. Salesforce followed suit, announcing results for its second quarter of fiscal 2024, which ended July 31, 2023. Qlik announced it has successfully achieved the Google Cloud Ready – Cloud SQL designation for both Qlik Data Integration and Talend Data Fabric.Connect with us!https://www.erpadvisorsgroup.com866-499-8550LinkedIn:https://www.linkedin.com/company/erp-advisors-groupTwitter:https://twitter.com/erpadvisorsgrpFacebook:https://www.facebook.com/erpadvisorsInstagram:https://www.instagram.com/erpadvisorsgroupPinterest:https://www.pinterest.com/erpadvisorsgroupMedium:https://medium.com/@erpadvisorsgroup
Episode 31 | Keeping Customers Happy Today's customer success: Customer success is not just about selling products but ensuring that customers achieve value and satisfaction from their purchases. In terms of software implementation, the early stages are critical for customer success, with the first year being the defining one.Evolving sales teams: Sales teams are involved longer than they have been in the past. Having the same salesperson throughout a multiyear contract is ideal. The emphasis on customer success and customer satisfaction is important for sales teams because it's another selling point for them. Reviews and other measurements are helpful for sales teams.Clear communication and expectations: Misaligned expectations can lead to project failures and dissatisfaction. Clear communication between customers, partners, and vendors is necessary to avoid misunderstandings and to ensure that everyone is on the same page regarding goals, timelines, and outcomes.The Big Quote: “Customers don't want to be marketed to anymore. They want to hear realistic stories, they want to hear candor from clients who have gone through the experience . . . authenticity is so important.”Stream the audio version of this episode:
Hey folks, in today's episode of the Climate Confident podcast I dive into the world of corporate social responsibility with Gitte Winther Bruhn, the Global Head of Social Responsibility Solutions at SAP. We talk about SAP's ground-breaking projects, such as "Advance Shared Prosperity," aimed at tackling complex issues in global supply chains. If you're a business leader, this episode is a must-listen as it highlights the competitive advantage that comes from embracing social responsibility. Plus, the World Business Council for Sustainable Development is backing SAP, so you know this is the real deal!Ever wondered how technology can help ensure your suppliers uphold human rights? Or how to make your supply chain not just efficient but also equitable? Gitte has fascinating insights into all this and more, from self-assessment credentials for suppliers to implementing workplace safety measures in large industrial settings.We also touch upon the legal landscape, with new regulations putting the heat on corporations. But don't worry—Gitte breaks down how to not only comply but also thrive in this changing environment. She's adamant that taking action now will put your business on the right side of history and law, and she offers actionable steps to get there.We even dive into a few success stories, like WEConnect International, who are creating equitable supply chains connecting large buyers with women-owned small businesses. This isn't just feel-good chatter; it's about pragmatic solutions for the pressing challenges businesses face today.Gitte's links:Corporate Social Responsibility (CSR) Software | SAPSocial Responsibility | Sustainability for SAP | SAP CommunitySupport the showPodcast supportersI'd like to sincerely thank this podcast's amazing supporters: Lorcan Sheehan Hal Good Jerry Sweeney Christophe Kottelat Andreas Werner Richard Delevan Anton Chupilko Devaang Bhatt Stephen Carroll William Brent And remember you too can Support the Podcast - it is really easy and hugely important as it will enable me to continue to create more excellent Climate Confident episodes like this one.ContactIf you have any comments/suggestions or questions for the podcast - get in touch via direct message on Twitter/LinkedIn. If you liked this show, please don't forget to rate and/or review it. It makes a big difference to help new people discover the show. CreditsMusic credit - Intro and Outro music for this podcast was composed, played, and produced by my daughter Luna JuniperThanks for listening, and remember, stay healthy, stay safe, stay sane!
Today's guest is Jens Strandbygaard, Senior Director of Product Management at ServiceNow. Based in Denmark, Jens has more than 25 years of international experience as a senior executive or founder, helping enterprises digitally transform by leveraging cutting-edge technologies. He has built and maintained a global network of business partners ranging from software vendors to systems integrators and consulting companies to cloud providers, mainly solving SAP technology challenges for his partners and customers. In 2021, Jens managed the sale of Gekkobrain to ServiceNow, and he is part of the Creator Workflow Business Unit, helping SAP customers modernize and optimize their SAP environments by utilizing ServiceNow's digital workflow platform to build low-code apps. On the personal side, Jens is passionate about technology and people, and thrives in a multinational and multi-cultural environment, meeting and dealing with different nationalities and cultures to mutually solve customers' problems and make their IT systems run more efficiently. In this episode, Jens talks about: How he got into the world of ServiceNow, The scale and complexity of working with SAP, Examples of the problems they are solving for customers, How ServiceNow can help modernize SAP systems, Where he sees the platform evolving in the near future
Austin Parker, Community Maintainer at OpenTelemetry, joins Corey on Screaming in the Cloud to discuss OpenTelemetry's mission in the world of observability. Austin explains how the OpenTelemetry community was able to scale the OpenTelemetry project to a commercial offering, and the way OpenTelemetry is driving innovation in the data space. Corey and Austin also discuss why Austin decided to write a book on OpenTelemetry, and the book's focus on the evergreen applications of the tool. About AustinAustin Parker is the OpenTelemetry Community Maintainer, as well as an event organizer, public speaker, author, and general bon vivant. They've been a part of OpenTelemetry since its inception in 2019.Links Referenced: OpenTelemetry: https://opentelemetry.io/ Learning OpenTelemetry early release: https://www.oreilly.com/library/view/learning-opentelemetry/9781098147174/ Page with Austin's social links: https://social.ap2.io TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Look, I get it. Folks are being asked to do more and more. Most companies don't have a dedicated DBA because that person now has a full-time job figuring out which one of AWS's multiple managed database offerings is right for every workload. Instead, developers and engineers are being asked to support, and heck, if time allows, optimize their databases. That's where OtterTune comes in. Their AI is your database co-pilot for MySQL and PostgreSQL on Amazon RDS or Aurora. It helps improve performance by up to 4x or reduce costs by 50 percent – both of those are decent options. Go to ottertune dot com to learn more and start a free trial. That's O-T-T-E-R-T-U-N-E dot com.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. It's been a few hundred episodes since I had Austin Parker on to talk about the things that Austin cares about. But it's time to rectify that. Austin is the community maintainer for OpenTelemetry, which is a CNCF project. If you're unfamiliar with it, we're probably going to fix that in short order. Austin, welcome back, it's been a month of Sundays.Austin: It has been a month-and-a-half of Sundays. A whole pandemic-and-a-half.Corey: So, much has happened since then. I tried to instrument something with OpenTelemetry about a year-and-a-half ago, and in defense of the project, my use case is always very strange, but it felt like—a lot of things have sharp edges, but it felt like this had so many sharp edges that you just pivot to being a chainsaw, and I would have been at least a little bit more understanding of why it hurts so very much. But I have heard from people that I trust that the experience has gotten significantly better. Before we get into the nitty-gritty of me lobbing passive-aggressive bug reports at you for you to fix in a scenario in which you can't possibly refuse me, let's start with the beginning. What is OpenTelemetry?Austin: That's a great question. Thank you for asking it. So, OpenTelemetry is an observability framework.
It is run by the CNCF, you know, home of such wonderful award-winning technologies as Kubernetes, and you know, the second biggest source of YAML in the known universe [clear throat].Corey: On some level, it feels like that is right there with hydrogen as far as unlimited resources in our universe.Austin: It really is. And, you know, as we all know, there are two things that make, sort of, the DevOps and cloud world go around: one of them being, as you would probably know, AWS bills; and the second being YAML. But OpenTelemetry tries to kind of carve a path through this, right, because we're interested in observability. And observability, for those that don't know or have been living under a rock or not reading blogs, it's a lot of things. It's a—but we can generally sort of describe it as, like, this is how you understand what your system is doing.I like to describe it as, it's a way that we can model systems, especially complex, distributed, or decentralized software systems that are pretty commonly found in larg—you know, organizations of every shape and size, quite often running on Kubernetes, quite often running in public or private clouds. And the goal of observability is to help you, you know, model this system and understand what it's doing, which is something that I think we can all agree, a pretty important part of our job as software engineers. Where OpenTelemetry fits into this is as the framework that helps you get the telemetry data you need from those systems, put it into a universal format, and then ship it off to some observability back-end, you know, a Prometheus or a Datadog or whatever, in order to analyze that data and get answers to your questions you have.Corey: From where I sit, the value of OTel—or OpenTelemetry; people in software engineering love abbreviations that are impenetrable from the outside, so of course, we're going to lean into that—but what I found for my own use case is the shining value prop was that I could instrument an application with OTel—in theory—and then send whatever I wanted that was emitted in terms of telemetry, be it events, be it logs, be it metrics, et cetera, and send that to any or all of a curation of vendors on a case-by-case basis, which meant that suddenly it was the first step in, I guess, an observability pipeline, which increasingly is starting to feel like a milit—like an industrial-observability complex, where there's so many different companies out there, it seems like a good approach to use, to start, I guess, racing vendors in different areas to see which performs better. One of the challenges I've had with that when I started down that path is it felt like every vendor who was embracing OTel did it from a perspective of their implementation. Here's how to instrument it to—send it to us because we're the best, obviously. And you're a community maintainer, despite working at observability vendors yourself. You have always been one of those community-first types where you care more about the user experience than you do this quarter for any particular employer that you have, which to be very clear, is intended as a compliment, not a terrifying warning. It's why you have this authentic air to you and why you are one of those very few voices that I trust in a space where normally I need to approach it with significant skepticism. 
How do you see the relationship between vendors and OpenTelemetry?Austin: I think the hard thing is that I know who signs my paychecks at the end of the day, right, and you always have, you know, some level of, you know, let's say bias, right? Because it is a bias to look after, you know, them who brought you to the dance. But I think you can be responsible with balancing, sort of, the needs of your employer, and the needs of the community. You know, the way I've always described this is that if you think about observability as, like, a—you know, as a market, what's the total addressable market there? It's literally everyone that uses software; it's literally every software company.Which means there's plenty of room for people to make their numbers and to buy and sell and trade and do all this sort of stuff. And by taking that approach, by taking sort of the big picture approach and saying, “Well, look, you know, there's going to be—you know, of all these people, there are going to be some of them that are going to use our stuff and there are some of them that are going to use our competitor's stuff.” And that's fine. Let's figure out where we can invest… in an OpenTelemetry, in a way that makes sense for everyone and not just, you know, our people. So, let's build things like documentation, right?You know, one of the things I'm most impressed with, with OpenTelemetry over the past, like, two years is we went from being, as a project, like, if you searched for OpenTelemetry, you would go and you would get five or six or ten different vendor pages coming up trying to tell you, like, “This is how you use it, this is how you use it.” And what we've done as a community is we've said, you know, “If you go looking for documentation, you should find our website. You should find our resources.” And we've managed to get the OpenTelemetry website to basically rank above almost everything else when people are searching for help with OpenTelemetry. And that's been really good because, one, it means that now, rather than vendors or whoever coming in and saying, like, “Well, we can do this better than you,” we can be like, “Well, look, just, you know, put your effort here, right? It's already the top result. It's already where people are coming, and we can prove that.”And two, it means that as people come in, they're going to be put into this process of community feedback, where they can go in, they can look at the docs, and they can say, “Oh, well, I had a bad experience here,” or, “How do I do this?” And we get that feedback and then we can improve the docs for everyone else by acting on that feedback, and the net result of this is that more people are using OpenTelemetry, which means there are more people kind of going into the tippy-tippy top of the funnel, right, that are able to become a customer of one of these myriad observability back ends.Corey: You touched on something very important here, when I first was exploring this—you may have been looking over my shoulder as I went through this process—my impression initially was, oh, this is a ‘CNCF project' in quotes, where—this is not true universally, of course, but there are cases where it clearly—is where this is an, effectively, vendor-captured project, not necessarily by one vendor, but by an almost consortium of them. And that was my takeaway from OpenTelemetry. It was conversations with you, among others, that led me to believe no, no, this is not in that vein. This is clearly something that is a win. 
There are just a whole bunch of vendors more-or-less falling all over themselves, trying to stake out thought leadership and imply ownership, on some level, of where these things go. But I definitely left with a sense that this is bigger than any one vendor.Austin: I would agree. I think, to even step back further, right, there's almost two different ways that I think vendors—or anyone—can approach OpenTelemetry, you know, from a market perspective, and one is to say, like, “Oh, this is socializing, kind of, the maintenance burden of instrumentation.” Which is a huge cost for commercial players, right? Like, if you're a Datadog or a Splunk or whoever, you know, you have these agents that you go in and they rip telemetry out of your web servers, out of your gRPC libraries, whatever, and it costs a lot of money to pay engineers to maintain those instrumentation agents, right? And the cynical take is, oh, look at all these big companies that are kind of like pushing all that labor onto the open-source community, and you know, I'm not casting any aspersions here, like, I do think that there's an element of truth to it though because, yeah, that is a huge fixed cost.And if you look at the actual lived reality of people and you look back at when SignalFx was still a going concern, right, and they had their APM agents open-sourced, you could go into the SignalFx repo and diff, like, their [Node Express 00:10:15] instrumentation against the Datadog Node Express instrumentation, and it's almost a hundred percent the same, right? Because it's truly a commodity. There's no—there's nothing interesting about how you get that telemetry out. The interesting stuff all happens after you have the telemetry and you've sent it to some back-end, and then you can, you know, analyze it and find interesting things. So, yeah, like, it doesn't make sense for there to be five or six or eight different companies all competing to rebuild the same wheels over and over and over and over when they don't have to.I think the second thing that some people are starting to understand is that it's like, okay, let's take this a step beyond instrumentation, right? Because the goal of OpenTelemetry really is to make sure that this instrumentation is native so that you don't need a third-party agent, you don't need some other process or jar or whatever that you drop in and it instruments stuff for you. The JVM should provide this, your web framework should provide this, your RPC library should provide this, right? Like, this data should come from the code itself and be in a normalized fashion that can then be sent to any number of vendors or back ends or whatever. And that changes how—sort of, the competitive landscape a lot, I think, for observability vendors because rather than, kind of, what you have now, which is people competing on, like, well, how quickly can I throw this agent in and get set up and get a dashboard going, it really becomes more about, like, okay, how are you differentiating yourself against every other person that has access to the same data, right?
And you get more interesting use cases and how much more interesting analysis features, and that results in more innovation in, sort of, the industry than we've seen in a very long time.Corey: For me, just from the customer side of the world, one of the biggest problems I had with observability in my career as an SRE-type for years was you would wind up building your observability pipeline around whatever vendor you had selected and that meant emphasizing the things they were good at and de-emphasizing the things that they weren't. And sometimes it's worked to your benefit; usually not. But then you always had this question when it got things that touched on APM or whatnot—or Application Performance Monitoring—where oh, just embed our library into this. Okay, great. But a year-and-a-half ago, my exposure to this was on an application that I was running in distributed fashion on top of AWS Lambda.So great, you can either use an extension for this or you can build in the library yourself, but then there's always a question of precedence where when you have multiple things that are looking at this from different points of view, which one gets done first? Which one is going to see the others? Which one is going to enmesh the other—enclose the others in its own perspective of the world? And it just got incredibly frustrating. One of the—at least for me—bright lights of OTel was that it got away from that where all of the vendors receiving telemetry got the same view.Austin: Yeah. They all get the same view, they all get the same data, and you know, there's a pretty rich collection of tools that we're starting to develop to help you build those pipelines yourselves and really own everything from the point of generation to intermediate collection to actually outputting it to wherever you want to go. For example, a lot of really interesting work has come out of the OpenTelemetry collector recently; one of them is this feature called Connectors. And Connectors let you take the output of certain pipelines and route them as inputs to another pipeline. And as part of that connection, you can transform stuff.So, for example, let's say you have a bunch of [spans 00:14:05] or traces coming from your API endpoints, and you don't necessarily want to keep all those traces in their raw form because maybe they aren't interesting or maybe there's just too high of a volume. So, with Connectors, you can go and you can actually convert all of those spans into metrics and export them to a metrics database. You could continue to save that span data if you want, but you have options now, right? Like, you can take that span data and put it into cold storage or put it into, like, you know, some sort of slow blob storage thing where it's not actively indexed and it's slow lookups, and then keep a metric representation of it in your alerting pipeline, use metadata exemplars or whatever to kind of connect those things back. And so, when you do suddenly see it's like, “Oh, well, there's some interesting p99 behavior,” or we're hitting an alert or violating an SLO or whatever, then you can go back and say, like, “Okay, well, let's go dig through the slow da—you know, let's look at the cold data to figure out what actually happened.”And those are features that, historically, you would have needed to go to a big, important vendor and say, like, “Hey, here's a bunch of money,” right? 
Like, “Do this for me.” Now, you have the option to kind of do all that more interesting pipeline stuff yourself and then make choices about vendors based on, like, who is making a tool that can help me with the problem that I have? Because most of the time, I don't—I feel like we tend to treat observability tools as—it depends a lot on where you sit in the org—but you've certainly seen this movement towards, like, “Well, we don't want a tool; we want a platform. We want to go to Lowe's and we want to get the 48-in-one kit that has a bunch of things in it. And we're going to pay for the 48-in-one kit, even if we only need, like, two things or three things out of it.”OpenTelemetry lets you kind of step back and say, like, “Well, what if we just got, like, really high-quality tools for the two or three things we need, and then for the rest of the stuff, we can use other cheaper options?” Which is, I think, really attractive, especially in today's macroeconomic conditions, let's say.Corey: One thing I'm trying to wrap my head around because we all find when it comes to observability, in my experience, it's the parable of three blind people trying to describe an elephant by touch; depending on where you are on the elephant, you have a very different perspective. What I'm trying to wrap my head around is, what is the vision for OpenTelemetry? Is it specifically envisioned to be the agent that runs wherever the workload is, whether it's an agent on a host or a layer in a Lambda function, or a sidecar or whatnot in a Kubernetes cluster that winds up gathering and sending data out? Or is the vision something different? Because part of what you're saying aligns with my perspective on it, but other parts of it seem to—that there's a misunderstanding somewhere, and it's almost certainly on my part.Austin: I think the long-term vision is that you as a developer, you as an SRE, don't even have to think about OpenTelemetry, that when you are using your container orchestrator or you are using your API framework or you're using your Managed API Gateway, or any kind of software that you're building something with, that the telemetry data from that software is emitted in an OpenTelemetry format, right? And when you are writing your code, you know, and you're using gRPC, let's say, you could just natively expect that OpenTelemetry is kind of there in the background and it's integrated into the actual libraries themselves. And so, you can just call the OpenTelemetry API and it's part of the standard library almost, right? You add some additional metadata to a span and say, like, “Oh, this is the customer ID,” or, “This is some interesting attribute that I want to track for later on,” or, “I'm going to create a histogram here or counter,” whatever it is, and then all that data is just kind of there, right, invisible to you unless you need it. And then when you need it, it's there for you to kind of pick up and send off somewhere to any number of back-ends or databases or whatnot that you could then use to discover problems or better model your system.That's the long-term vision, right, that it's just there, everyone uses it. It is a de facto and du jour standard. I think in the medium term, it does look a little bit more like OpenTelemetry is kind of this Swiss army knife agent that's running on—sidecars in Kubernetes or it's running on your EC2 instance.
Until we get to the point of everyone just agrees that we're going to use OpenTelemetry protocol for the data and we're going to use all your stuff and we just natively emit it, then that's going to be how long we're in that midpoint. But that's sort of the medium and long-term vision I think. Does that track?Corey: It does. And I'm trying to equate this to—like, the evolution back in the Stone Age. Back when I was first getting started, Nagios was the gold standard. It was kind of the original Call of Duty. And it was awful. There were a bunch of problems with it, but it also worked.And I'm not trying to dunk on the people who built that. We all stand on the shoulders of giants. It was an open-source project that was awesome doing exactly what it did, but it was a product built for a very different time. It completely had the wheels fall off as soon as you got to things that were even slightly ephemeral because it required this idea that the server needed to know where all of the things it was monitoring lived on an individual host basis, so there was this constant joy of, “Oh, we're going to add things to a cluster.” Its perspective was, “What's a cluster?” Or you'd have these problems with a core switch going down and suddenly everything else would explode as well.And even setting up an on-call rotation for who got paged when was nightmarish. And a bunch of things have evolved since then, which is putting it mildly. Like, you could say that about fire, the invention of the wheel. Yeah, a lot of things have evolved since the invention of the wheel, and here we are tricking sand into thinking. But we find ourselves just—now it seems that the outcome of all of this has been instead of one option that's the de facto standard that's kind of terrible in its own ways, now, we have an entire universe of different products, many of which are best-of-breed at one very specific thing, but nothing's great at everything.It's the multifunction printer conundrum, where you find things that are great at one or two things at most, and then mediocre at best at the rest. I'm excited about the possibility for OpenTelemetry to really get to a point of best-of-breed for everything. But it also feels like the money folks are pushing for consolidation, if you believe a lot of the analyst reports around this of, “We already pay for seven different observability vendors. How about we knock it down to just one that does all of these things?” Because that would be terrible. Where do you land on that?Austin: Well, as I intu—or alluded to this earlier, I think the consolidation in the observability space, in general, is very much driven by that force you just pointed out, right? The buyers want to consolidate more and more things into single tools. And I think there's a lot of… there are reasons for that that—you know, there are good reasons for that, but I also feel like a lot of those reasons are driven by fundamentally telemetry-side concerns, right? So like, one example of this is if you were Large Business X, and you see—you are an engineering director and you get a report, that's like, “We have eight different metrics products.” And you're like, “That seems like a lot. Let's just use Brand X.”And Brand X will very, very happily tell you, like, “Oh, you just install our thing everywhere and you can get rid of all these other tools.” And usually, there's two reasons that people pick tools, right?
One reason is that they are forced to and then they are forced to do a bunch of integration work to get whatever the old stuff was working in the new way, but the other reason is because they tried a bunch of different things and they found the one tool that actually worked for them. And what happens invariably in these sort of consolidation stories is, you know, the new vendor comes in on a shining horse to consolidate, and you wind up instead of eight distinct metrics tools, now you have nine distinct metrics tools because there's never any bandwidth for people to go back and, you know—your Nagios example, right, Nag—people still use Nagios every day. What's the economic justification to take all those Nagios installs, if they're working, and put them into something else, right?What's the economic justification to go and take a bunch of old software that hasn't been touched for ten years that still runs and still does what it needs to do, like, where's the incentive to go and re-instrument that with OpenTelemetry or anything else? It doesn't necessarily exist, right? And that's a pretty, I think, fundamental decision point in everyone's observability journey, which is what do you do about all the old stuff? Because most of the stuff is the old stuff and the worst part is, most of the stuff that you make money off of is the old stuff as well. So, you can't ignore it, and if you're spending, you know, millions and millions of dollars on the new stuff—like, there was a story that went around a while ago, I think, Coinbase spent something like, what, $60 million on Datadog… I hope they asked for it in real money and not Bitcoin. But—Corey: Yeah, something I've noticed about all the vendors, and even Coinbase themselves, very few of them actually transact in cryptocurrency. It's always cash on the barrelhead, so to speak.Austin: Yeah, smart. But still, like, that's an absurd amount of money [laugh] for any product or service, I would argue, right? But that's just my perspective. I do think though, it goes to show you that you know, it's very easy to get into these sort of things where you're just spending over the barrel to, like, the newest vendor that's going to come in and solve all your problems for you. And just, it often doesn't work that way because most places aren't—especially large organizations—just aren't built in a way that's sort of like, “Oh, we can go through and we can just redo stuff,” right? “We can just roll out a new agent through… whatever.”We have mainframes [unintelligible 00:25:09], mainframes to think about, you have… in many cases, you have an awful lot of business systems that most, kind of, cloud people don't like, think about, right, like SAP or Salesforce or ServiceNow, or whatever. And those sort of business process systems are actually responsible for quite a few things that are interesting from an observability point of view. But you don't see—I mean, hell, you don't even see OpenTelemetry going out and saying, like, “Oh, well, here's the thing to let you, you know, observe Apex applications on Salesforce,” right? It's kind of an undiscovered country in a lot of ways and it's something that I think we will have to grapple with as we go forward. In the shorter term, there's a reason that OpenTelemetry mostly focuses on cloud-native applications because that's a little bit easier to actually do what we're trying to do on them and that's where the heat and light is.
But once we get done with that, then the sky is the limit.[midroll 00:26:11]Corey: It still feels like OpenTelemetry is evolving rapidly. It's certainly not, I don't want to say it's not feature complete, which, again, what—software is never done. But it does seem like even quarter-to-quarter or month-to-month, its capabilities expand massively. Because you apparently enjoy pain, you're in the process of writing a book. I think it's in early release or early access that comes out next year, 2024. Why would you do such a thing?Austin: That's a great question. And if I ever figure out the answer I will tell you.Corey: Remember, no one wants to write a book; they want to have written the book.Austin: And the worst part is, is I have written the book and for some reason, I went back for another round. I—Corey: It's like childbirth. No one remembers exactly how horrible it was.Austin: Yeah, my partner could probably attest to that. Although I was in the room, and I don't think I'd want to do it either. So, I think the real, you know, the real reason that I decided to go and kind of write this book—and it's Learning OpenTelemetry; it's in early release right now on the O'Reilly learning platform and it'll be out in print and digital next year, I believe, we're targeting right now, early next year.But the goal is, as you pointed out so eloquently, OpenTelemetry changes a lot. And it changes month to month sometimes. So, why would someone decide—say, “Hey, I'm going to write the book about learning this?” Well, there's a very good reason for that and it is that I've looked at a lot of the other books out there on OpenTelemetry, on observability in general, and they talk a lot about, like, here's how you use the API. Here's how you use the SDK. Here's how you make a trace or a span or a log statement or whatever. And it's very technical; it's very kind of in the weeds.What I was interested in is saying, like, “Okay, let's put all that stuff aside because you don't necessarily…” I'm not saying any of that stuff's going to change. And I'm not saying that how to make a span is going to change tomorrow; it's not, but learning how to actually use something like OpenTelemetry isn't just knowing how to create a measurement or how to create a trace. It's, how do I actually use this in a production system? To my point earlier, how do I use this to get data about, you know, these quote-unquote, “Legacy systems?” How do I use this to monitor a Kubernetes cluster? What's the important parts of building these observability pipelines? If I'm maintaining a library, how should I integrate OpenTelemetry into that library for my users? And so on, and so on, and so forth.And the answers to those questions actually probably aren't going to change a ton over the next four or five years. Which is good because that makes it the perfect thing to write a book about. So, the goal of Learning OpenTelemetry is to help you learn not just how to use OpenTelemetry at an API or SDK level, but it's how to build an observability pipeline with OpenTelemetry, it's how to roll it out to an organization, it's how to convince your boss that this is what you should use, both for new and maybe picking up some legacy development. It's really meant to give you that sort of 10,000-foot view of what are the benefits of this, how does it bring value and how can you use it to build value for an observability practice in an organization?Corey: I think that's fair. 
Looking at the more quote-unquote, “Evergreen,” style of content as opposed to—like, that's the reason, for example, I never wind up doing tutorials on how to use an AWS service because it's one console change away and suddenly I have to redo the entire thing. That's a treadmill I never had much interest in getting on. One last topic I want to get into before we wind up wrapping the episode—because I almost feel obligated to sprinkle this all over everything because the analysts told me I have to—what's your take on generative AI, specifically with an eye toward observability?Austin: [sigh], gosh, I've been thinking a lot about this. And—hot take alert—as a skeptic of many technological bubbles over the past five or so years, ten years, I'm actually pretty hot on AI—generative AI, large language models, things like that—but not for the reasons that people like to kind of hold them up, right? Not so that we can all make our perfect, funny [sigh], deep dream, meme characters or whatever through Stable Diffusion or whatever ChatGPT spits out at us when we ask for a joke. I think the real win here is that this to me is, like, the biggest advance in human-computer interaction since resistive touchscreens. Actually, probably since the mouse.Corey: I would agree with that.Austin: And I don't know if anyone has tried to get someone that is, you know, over the age of 70 to use a computer at any time in their life, but mapping human language to trying to do something on an operating system or do something on a computer on the web is honestly one of the most challenging things that faces interface design, faces OS designers, faces anyone. And I think this also applies for dev tools in general, right? Like, if you think about observability, if you think about, like, well, what are the actual tasks involved in observability? It's like, well, you're making—you're asking questions. You're saying, like, “Hey, for this metric named HTTPrequestsByCode,” and there's four or five dimensions, and you say, like, “Okay, well break this down for me.” You know, you have to kind of know the magic words, right? You have to know the magic PromQL sequence or whatever else to plug in and to get it to graph that for you.And you as an operator have to have this very, very well developed, like, depth of knowledge in math and statistics to really kind of get a lot of—Corey: You must be at least this smart to ride on this ride.Austin: Yeah. And I think that, like that, to me is the real—the short-term win for certainly generative AI around using, like, large language models, is the ability to create human language interfaces to observability tools, that—Corey: As opposed to learning your own custom SQL dialect, which I see a fair number of times.Austin: Right. And, you know, and it's actually very funny because there was a while for the—like, one of my kind of side projects for the past [sigh] a little bit [unintelligible 00:32:31] idea of, like, well, can we make, like, a universal query language or universal query layer that you could ship your dashboards or ship your alerts or whatever. And then it's like, generative AI kind of just, you know, completely leapfrogs that, right? It just says, like, well, why would you need a query language, if we can just—if you can just ask the computer and it works, right?Corey: The most common programming language is about to become English.Austin: Which I mean, there's an awful lot of externalities there—Corey: Which is great. I want to be clear. I'm not here to gatekeep.Austin: Yeah.
I mean, I think there's a lot of externalities there, and there's a lot—and the kind of hype to provable benefit ratio is very skewed right now towards hype. That said, one of the things that is concerning to me as sort of an observability practitioner is the amount of people that are just, like, whole-hog, throwing themselves into, like, oh, we need to integrate generative AI, right? Like, we need to put AI chatbots and we need to have ChatGPT built into our products and da-da-da-da-da. And now you kind of have this perfect storm of people that really don't ha—because they're just using these APIs to integrate gen AI stuff with, they really don't understand what it's doing because, you know, a lot of it is very complex, and I'll be the first to admit that I really don't understand what a lot of it is doing, you know, on the deep, on the foundational math side.But if we're going to have trust in, kind of, any kind of system, we have to understand what it's doing, right? And so, the only way that we can understand what it's doing is through observability, which means it's incredibly important for organizations and companies that are building products on generative AI to, like, drop what—you know, walk—don't walk, run towards something that is going to give you observability into these language models.Corey: Yeah. “The computer said so,” is strangely dissatisfying.Austin: Yeah. You need to have that base, you know, sort of, performance [goals and signals 00:34:31], obviously, but you also need to really understand what are the questions being asked. As an example, let's say you have something that is tokenizing questions. You really probably do want to have some sort of observability on the hot path there that lets you kind of break down common tokens, especially if you were using, like, custom dialects or, like, vectors or whatever to modify the, you know, neural network model, like, you really want to see, like, well, what's the frequency of the certain tokens that I'm getting that are hitting the vectors versus not, right? Like, where can I improve these sorts of things? Where am I getting, like, unexpected results?And maybe even have some sort of continuous feedback mechanism that could be either analyzing the tone and tenor of end-user responses or you can have the little, like, frowny and happy face, whatever it is, like, something that is giving you that kind of constant feedback about, like, hey, this is how people are actually like interacting with it. Because I think there's way too many stories right now of people just kind of, like, saying, like, “Oh, okay. Here's some AI-powered search,” and people just, like, hating it. Because people are already very primed to distrust AI, I think. And I can't blame anyone.Corey: Well, we've had an entire lifetime of movies telling us that's going to kill us all.Austin: Yeah.Corey: And now you have a bunch of, also, billionaire tech owners who are basically intent on making that reality. But that's neither here nor there.Austin: It isn't, but like I said, it's difficult. It's actually one of the first times I've been like—that I've found myself very conflicted.Corey: Yeah, I'm a booster of this stuff; I love it, but at the same time, you have some of the ridiculous hype around it and the complete lack of attention to safety and humanity aspects of it that it's—I like the technology and I think it has a lot of promise, but I don't want to get lumped in with that set.Austin: Exactly. Like, the technology is great.
The fan base is… ehh, maybe something a little different. But I do think that, for lack of a better—not to be an inevitable-ist or whatever, but I do think that there is a significant amount of, like, this is a genie you can't put back in the bottle and it is going to have, like, wide-ranging, transformative effects on the discipline of, like, software development, software engineering, and white collar work in general, right? Like, there's a lot of—if your job involves, like, putting numbers into Excel and making pretty spreadsheets, then ooh, that doesn't seem like something that's going to do too hot when I can just have Excel do that for me.And I think we do need to be aware of that, right? Like, we do need to have that sort of conversation about, like… what are we actually comfortable doing here in terms of displacing human labor? When we do displace human labor, are we doing it so that we can actually give people leisure time or so that we can just cram even more work down the throats of the humans that are left?Corey: And unfortunately, I think we might know what that answer is, at least on our current path.Austin: That's true. But you know, I'm an optimist.Corey: I… don't do well with disappointment. Which the show has certainly not been. I really want to thank you for taking the time to speak with me today. If people want to learn more, where's the best place for them to find you?Austin: Welp, I—you can find me on most social media. Many, many social medias. I used to be on Twitter a lot, and we all know what happened there. The best place to figure out what's going on is check out my bio, social.ap2.io will give you all the links to where I am. And yeah, been great talking with you.Corey: Likewise. Thank you so much for taking the time out of your day. Austin Parker, community maintainer for OpenTelemetry. I'm Cloud Economist Co