Tales at Scale cracks open the world of analytics projects. We'll be diving into Apache Druid but also hearing from folks in the data ecosystem tackling everything from architecture to open source, from scaling to streaming and everything in between, brought to you by Imply.
Held in October 2024, Druid Summit brought together Apache Druid® community contributors from companies including Netflix, Salesforce, Atlassian, Imply, Roblox, and more to discuss the latest trends, challenges, and best practices across the Druid community. The summit explored user experience design, operations and optimization techniques, and lakehouse and streaming analytics pipelines. And on this featured partner episode, #DataFemme host Danielle DiKayo interviews Larissa Klitzke, Senior Product Marketing Manager at Imply, about the highlights from Druid Summit. Want to dive deeper into Druid Summit content? Watch all the keynotes, panels and other sessions on-demand at https://imply.io/events/druid-summit-2024
On this episode, we explore how Kong, a cloud-native API gateway, leverages Apache Druid for real-time data processing and analytics in their platform, Kong Konnect. Hiroshi Fukada, Staff Software Engineer at Kong, shares his insights on managing customer data through Kong Gateway and transitioning to Imply's managed Druid services to simplify their infrastructure. Discover the benefits of Druid, like low latency and ease of use, and learn about Kong's contributions to open source Druid, including the DDSketch extension for improved handling of long-tail distributions. Want to learn more about Druid and meet the community behind it? Registration for Druid Summit is now open! Register at https://druidsummit.org/
On this episode, we are joined by special co-host Hugh Evans and returning guest Will Xu as we announce Druid Summit 2024 and dive into Druid 30.0's new features and enhancements. Improvements include better ingestion for Amazon Kinesis and Apache Kafka, enhanced support for Delta Lake, and advanced integrations with Google Cloud Storage and Azure Blob Storage. Come for the technical upgrades like GROUP BY and ORDER BY for complex columns and faster query processing with new IN and AND filters, stay for the stabilized concurrent append and replace API for late-arriving streaming data. We also explore experimental features like the centralized data source schema for better performance. Tune in to learn about the latest on arrays, the upcoming GA for window functions, and the benefits of upgrading Druid! To submit a talk or register for Druid Summit 2024, visit https://druidsummit.org/
On this episode, we're diving into digital ad spend and real-time data with Miguel Rodrigues, Head of Engineering at British media company Global. We'll discuss their use of Apache Druid to enhance real-time analytics for their digital advertising platform and get the details on their transition from traditional databases to Druid, which added the scalable streaming capabilities and fast query speeds they needed to improve critical data freshness. But that's not all! Miguel was also kind enough to share insights for newcomers to Druid on how to embrace its flexibility and complexity. Interested in telling your Apache Druid story? The call for speakers for Druid Summit 2024 is now open! Join the Druid community in the San Francisco Bay Area on October 22nd. That's right, we're in person this year! Full details are at https://druidsummit.org/
On this episode, we are joined by Ross Morrow, a Software Engineer at Finix, the payment processor working to create the most accessible financial services ecosystem in history. Finix's B2B payments platform is designed for flexibility and scalability, streamlining financial transactions for businesses and delivering a truly customer-centric experience. Faced with the need for a powerful database for real-time insights, Finix turned to Apache Druid. Listen to learn how they're able to access real-time data with sub-second query times, how they transformed their data operations, and how Imply Polaris is helping them get all the benefits of Druid without the burden of maintenance or overhead costs. If you'd like to learn more about Finix, please visit https://go.finix.com/ To learn more about Apache Druid or to download the latest version, please visit https://druid.apache.org/ Or to see what we're up to at Imply, please visit imply.io
On this episode, we're going all in on cybersecurity! Helping us with what critical aspects of security you need to focus on when building analytics applications is Carrell Jackson, CISO at Imply. We'll discuss the importance of protecting sensitive data by implementing role-based access control and encryption and hear about best practices for securing a Druid cluster. Listen to learn more about how Imply takes a security-first approach to their product development and stick around to hear where Certified Ethical Hacking fits into how Imply's security stays ahead of threats.
On this episode, we explore Apache Druid 29.0, focusing on three specific themes: performance, ecosystem, and SQL compliance. Discover new features such as EARLIEST / LATEST support for numerical columns, system fields ingestion, and enhanced array support like UNNEST and JSON_QUERY_ARRAY. In addition, get the full scoop on community-contributed extensions like Spectator Histogram and DDSketch for efficient quantile calculations and long-tailed distribution support. Learn about what's new with MSQ, what's up with PIVOT / UNPIVOT, and so much more!
In this special episode of Tales at Scale - this is our final episode of our first season! - Peter Marshall, Director of Developer Relations at Imply, joins the show to discuss the highlights of 2023 for Apache Druid. We dive into the significant feature releases and enhancements that have transformed Druid over the past year, including SQL standardization, query from deep storage, experimental window functions, and the growing Druid community. Come for the retrospective, stay for the peek into the future of what's to come for us and for Druid in 2024. See you all next year!
On this episode, we dive into Apache Druid 28. This latest Druid release includes improved ANSI SQL and Apache Calcite support, the addition of window functions as an experimental feature, async queries and query from deep storage going GA, array enhancements, multi-topic Apache Kafka ingestion, and so much more! Will Xu, program manager at Imply, returns to give us the full scoop.
On this episode, we debunk the myth that Druid can't do joins. Druid doesn't function as a traditional relational database because it was purpose-built for lightning-fast queries on large datasets. However, this doesn't mean Druid is entirely devoid of join capabilities – it simply approaches them differently. Our myth-busting team features returning guests Sergio Ferragut and Hellmar Becker from Imply ready to clarify how Druid handles joins in its own unique way and tackle what Druid is for in the first place.
On this episode, we explore how Atlassian leverages Apache Druid's capabilities to handle millions of daily events and empower users with intelligent data-driven features. We're joined by Gautam Jethwani and Kasirajan Selladurai Selvakumari from the Confluence Big Data Platform Team who will talk through how they use Druid to power intelligent features, sub-second query latency, and complex ingestion tasks.
When it comes to fraud detection, initial detection is key, but so is the ability to quickly dissect and address the problem to minimize losses. This means access to real-time data is paramount. The only way to combat fraud in the digital age is to fight fire with fire…automation with automation. In this episode, we're joined by Jaylyn Stoesz, Staff Data Engineer at Ibotta, a free cashback rewards platform, who walks us through Ibotta's multifaceted approach to fraud detection that includes Apache Druid and gives us the full scoop on their use of Imply Polaris.
We're back again with another Druid release! Here we are at Apache Druid 27.0, thanks to the dedication of the Druid community. This release was made possible by over 350 commits & 46 contributors. Will Xu, Product Manager at Imply, joins the show to discuss new features like Smart Segment Loading, a new mechanism for managing data files as the database scales, improvements to schema auto-discovery, and the long-awaited feature: querying from deep storage!
Real-time data has many applications but one place where it's extremely valuable is with usage tracking, billing, and generating reports. Ensuring the freshness and availability of this data is essential not only for financial success but also for establishing something more challenging: trust. That's precisely why Orb chose Apache Druid and Imply as the backbone of their advanced pricing platform, which encompasses invoicing, usage monitoring, and comprehensive reporting. On this episode, Kshitij Grover, co-founder and CTO at Orb, guides us through their innovative utilization of Druid, allowing their users to assess metrics across queries and beyond. He also gives some great advice for those just starting on their real-time data journey. Definitely worth a listen...or two!
Apache Kafka® is a streaming platform that can handle large-scale, real-time data streams reliably. It's used for real-time data pipelines, event sourcing, log aggregation, stream processing, and building analytics applications. Apache® Druid is a database designed to provide fast, interactive, and scalable analytics on time-series and event-based data, empowering organizations to derive insights, monitor real-time metrics, and build analytics applications. Naturally, these two things just go together and are often both key parts of a company's data architecture. Confluent is one of those companies. On this episode, Kai Waehner, Field CTO at Confluent, walks us through how they use Kafka and Druid together and where Apache Flink fits into the mix, and shares insights and trends from the world of data streaming.
Today's show is all about the world of big data and open source projects, and we've got a real gem to share with you—Voltron Data! They're on a mission to revolutionize the data analytics industry through open standards. To unleash the untapped potential in data, Voltron Data uses cutting-edge tech and provides top-notch support services, with a special focus on Apache Arrow. This open-source framework lets you process data in both flat and hierarchical formats, all packed into a super-efficient columnar memory setup. And that's not all! Meet Ibis—an amazing framework that gives data analysts, scientists, and engineers the power to access their data with a user-friendly and engine-agnostic Python library. Excited to learn more? We've got Josh Patterson, the CEO of Voltron Data, here to give us all the details.
Who better to talk about the real-world usage of Apache Druid than Digital Turbine, a leading mobile growth and monetization platform? The folks at DT go way back with Druid. On this episode Lioz Nudel, Engineering Group Manager at Digital Turbine and Alon Edelman, Data Architect at Digital Turbine discuss how Druid has significantly improved their analytics infrastructure in terms of performance and scalability. We cover their journey from using MySQL to Druid, highlighting the scalability, performance, and agility that Druid offers and delve into specific use cases, such as analyzing massive amounts of data and managing cloud computing costs.
Whether you're a data engineer, data scientist, technology enthusiast, or just a person on the Internet, you've heard about ChatGPT. But did you know there are some great use cases for it that work with Apache Druid? Druid and ChatGPT are two cutting-edge technologies that are revolutionizing the world of real-time analytics and natural language processing. In this episode, we're joined by Rick Jacobs, Senior Technical Evangelist at Imply, who will dive into the benefits of combining a trained NLP model with Apache Druid for sentiment analysis and how he did it with ChatGPT. We'll also get into the broader applications of AI and how to use it responsibly, especially when you're dealing with important datasets.
Deploying and configuring Apache Druid manually in a Kubernetes environment can be complex and time-consuming. But it doesn't have to be. Enter Druid Operator, a tool specifically designed for managing Apache Druid deployments in a Kubernetes environment. Adheip Singh, founder of DataInfra and contributor to Druid Operator, walks us through the benefits, including managing upgrades and rollbacks of Druid clusters, scaling Druid clusters, adding more storage to underlying persistent volume claims and more.
Breaking news! Apache Druid 26.0 is now available! Druid 26.0 has a few key features including schema auto-discovery and shuffle JOINs, but that's not all. On this episode, we're joined by Vadim Ogievetsky, Apache Druid PMC, co-founder of Imply and one of the very first Druid users, to talk through what's new and why it's cool. Special thanks to the Apache Druid community: 60+ contributors and nearly 400 commits made this possible!
Kubernetes, an open-source container orchestration platform, has been making waves in the Apache Druid community. It makes sense - using Druid with Kubernetes can help you build a more scalable, flexible, and resilient data analytics infrastructure. Yoav Nordmann, Tech Lead and Architect at Tikal Knowledge, shares why Kubernetes is so hot right now - along with some of his own Apache Druid stories.
Working on/with Apache Druid is one thing, but talking about it is another. On today's episode, we get tips and tricks for writing about your technical projects from Hellmar Becker, Apache Druid blogger and sales engineer at Imply. Spoiler alert: It doesn't have to be War and Peace. Learn how to get started with your own blog, the value of documenting your process as you go, and what to do when you hit challenges.
One of Apache Druid's top use cases is operational visibility, which involves monitoring, understanding, and optimizing systems in real time. If that sounds a little boring, you're in for a treat. We talk about what you need to get started and then dive into some really interesting use cases for operational visibility across industries. Listen to learn how operational visibility and IoT are making transportation safer and helping utility companies plan their energy needs, or how it's being used to stop money laundering at a major bank. Our guest, Will To, Sr. Product Evangelist at Imply, taps into his background as an educator to give us the full story and some bonus insights.
What is an analytics application? We state at the top of every show that they're different from BI tools but so far, we haven't said why. It's time we break it all down. Darin Briskman, Director of Technology at Imply and author of the O'Reilly report Building Real Time Analytics Applications: Operational Workflows with Apache Druid, joins us to talk about how real-time data, the growth of streaming, and more have created a new world of analytics applications and how to get started if you want to build your own.
If you're a fan of SQL, this episode is for you. The addition of the multi-stage query engine in Apache Druid has enabled SQL-based ingestion. While that's not something new to the database space, it makes Druid easier to use since more developers know SQL. Sergio Ferragut, Senior Developer Advocate at Imply, walks us through using SQL to define and run batch ingestions and how it's simpler and faster than traditional Druid batch ingestion.
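As a taste of what SQL-based ingestion looks like, here's a minimal sketch of a Druid ingestion query, following the EXTERN pattern from the Druid SQL-based ingestion docs. The datasource name, columns, and file URL are hypothetical placeholders:

```sql
-- Sketch of Druid SQL-based batch ingestion via the multi-stage query engine.
-- "events", the column names, and the URL below are hypothetical examples.
REPLACE INTO "events" OVERWRITE ALL
SELECT
  TIME_PARSE("timestamp") AS "__time",  -- parse the primary timestamp
  "page",
  "user"
FROM TABLE(
  EXTERN(
    '{"type":"http","uris":["https://example.com/events.json.gz"]}',  -- input source
    '{"type":"json"}',                                                -- input format
    '[{"name":"timestamp","type":"string"},{"name":"page","type":"string"},{"name":"user","type":"string"}]'
  )
)
PARTITIONED BY DAY
```

Instead of writing a JSON ingestion spec, you describe the external input and the transformation in one SQL statement, which the multi-stage query engine runs as a batch ingestion task.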
How do ads work on the “front page of the internet?” On today's episode, Lakshmi Ramasubramanian, staff software engineer at Reddit, discusses Reddit's ad platform, including how it handles ad pacing, real-time data, and more. We'll dive into the challenges they needed to solve and why Apache Druid was the right database for the job.
Apache Druid today isn't the Druid that you're used to. It's so much more. The addition of the multi-stage query engine didn't just change the way Druid handles queries; it enabled data transformation during ingestion and inside Druid, from one table to another, using SQL. This has made Druid about 40% faster. But why stop there? Get the inside scoop on what's coming to Druid this year, from cold tier storage to asynchronous queries and more.
The term “real time” gets thrown around a lot, especially when it comes to data. Gwen Shapira, co-founder and CPO of Nile joins us to help define real-time data, discuss who needs it (and who probably doesn't) and how to not build yourself into a corner with your architecture. When you're able to harness the power of real-time insights, you can do incredible things from simplifying daily decision making to building trust with customers. Gwen discusses her approach to modern SaaS platforms and how to reach the ultimate goal of bringing the right data to the right person at the right time.
On today's episode, we're joined by David Wang, VP of Product Marketing at Imply, and Muthu Lalapet, Director of Worldwide Sales Engineering at Imply, to dig into Apache Druid, a high-performance, real-time analytics database. Thousands of companies are already using Druid today, from Netflix to Salesforce. But what is Apache Druid best used for? What types of projects? What data sets are Druid users working with? What are companies doing with Druid? Listen to hear real-life examples of where Druid works best: operational visibility at scale, customer-facing analytics, rapid drill-down exploration, and real-time decisioning.
On today's episode, we're digging into the origins of Apache Druid, a high-performance, real-time analytics database, and what makes it unique with Eric Tschetter, Druid co-creator and Field CTO at Imply.
Tales at Scale cracks open the world of analytics projects. Hosted by Reena Leone from Imply, we'll be diving into Apache Druid but also hearing from folks in the data ecosystem tackling everything from architecture to open source, from scaling to streaming and everything in between.