Oil and gas companies generate enormous volumes of operational, geological, and production data. Despite this abundance, much of that data remains fragmented, inconsistent, and difficult to trust. Teams often spend a significant portion of their time preparing datasets rather than analyzing them. The result is delayed decision-making, inflated costs, and reduced operational agility. The core complication lies in data quality, data governance, and data readiness. Duplicate records, null values, drift, and structural inconsistencies make it difficult to move quickly from raw data to actionable insight. Asset teams frequently work semi-independently, each rebuilding transformation processes from scratch. Without reliable data foundations, scaling analytics, automation, or advanced modelling becomes difficult and costly. In this episode, I'm in conversation with Shravan Gunda, CEO of Kaarvi, to discuss how a structured approach to data ingestion, anomaly detection, ETL transformation, and data lineage can reduce time-to-insight from weeks to hours. He outlines how upstream teams can standardize workflows, support governance requirements such as SOC 2, and deploy platforms either on-premises or via SaaS. Clean, trusted data is a prerequisite for accelerating analytics and enabling more advanced digital capabilities.
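The kinds of checks discussed here, duplicate records, null values, and anomalous readings, can be sketched in a few lines. This is a minimal illustration, not Kaarvi's actual pipeline; the well-record layout, field names, and the low z-score threshold (chosen because the sample is tiny) are all assumptions.

```python
# Illustrative data-quality report: duplicates, nulls, and simple outliers.
# Record layout and thresholds are hypothetical, for demonstration only.
from collections import Counter
from statistics import mean, stdev

def quality_report(records, key_field, numeric_field, z_threshold=3.0):
    """Return duplicate keys, null count, and values flagged as outliers."""
    keys = [r.get(key_field) for r in records]
    duplicates = [k for k, n in Counter(keys).items() if n > 1]

    nulls = sum(1 for r in records if r.get(numeric_field) is None)

    values = [r[numeric_field] for r in records if r.get(numeric_field) is not None]
    mu, sigma = mean(values), stdev(values)
    outliers = [v for v in values if sigma and abs(v - mu) / sigma > z_threshold]

    return {"duplicates": duplicates, "nulls": nulls, "outliers": outliers}

wells = [
    {"well_id": "W-001", "oil_bbl": 120.0},
    {"well_id": "W-002", "oil_bbl": 118.5},
    {"well_id": "W-002", "oil_bbl": None},    # duplicate id and a null value
    {"well_id": "W-003", "oil_bbl": 121.2},
    {"well_id": "W-004", "oil_bbl": 9000.0},  # obvious anomaly
]
# A low z-threshold here because the sample is so small; production
# pipelines would use robust statistics over far more data.
report = quality_report(wells, "well_id", "oil_bbl", z_threshold=1.0)
print(report)
```

In practice checks like these run inside the ingestion step, so bad records are flagged before they reach analysts rather than weeks later.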
If you've ever wondered how Oracle Database really works inside AWS, this episode will finally turn the lights on. Join Senior Principal OCI Instructor Susan Jang as she explains the two database services available (Exadata Database Service and Autonomous Database), how Oracle and AWS share responsibilities behind the scenes, and which essential tasks still land on your plate after deployment. You'll discover how automation, scaling, and security actually work, and which model best fits your needs, whether you want hands-off simplicity or deeper control. Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! In our last episode, we began the discussion on Oracle Database@AWS. Today, we're diving deeper into the database services that are available in this environment. Susan Jang, our Senior Principal OCI Instructor, joins us once again. 00:56 Lois: Hi Susan! Thanks for being here today. In our last conversation, we compared Oracle Autonomous Database and Exadata Database Service. 
Can you elaborate on the fundamental differences between these two services? Susan: The primary difference between the two services is really the management model. Autonomous Database is fully managed by Oracle, while Exadata Database Service gives you the flexibility to customize your database environment while still having the infrastructure managed by Oracle. 01:30 Nikita: When it comes to running Oracle Database@AWS, how do Oracle and AWS each chip in? Could you break down what each provider is responsible for in this setup? Susan: Oracle Database@AWS is a collaboration between Oracle and AWS. It allows customers to deploy and run Oracle Database services, including Oracle Autonomous Database and Oracle Exadata Database Service, directly in AWS data centers. Oracle provides the Oracle Exadata Database Service on dedicated infrastructure. This service delivers the full capabilities of Oracle Exadata Database on Oracle Exadata hardware. It offers high performance and high security for demanding workloads, with cloud automation, resource scaling, and performance optimization to simplify management of the service. Oracle Autonomous Database on dedicated Exadata infrastructure provides a fully Autonomous Database on that dedicated infrastructure within AWS. It automates database management tasks, including patching, backups, and tuning, and has built-in AI capabilities for developing AI-powered applications and interacting with data using natural language. Oracle Database@AWS integrates those core database services with various AWS services for a comprehensive, unified experience. AWS provides cloud-based object storage through Amazon S3. You also have other services, such as Amazon CloudWatch, which monitors database metrics and performance. You also have Amazon Bedrock.
It provides a development environment for generative AI applications. And last but not least, among many other services, you also have SageMaker, a cloud-based platform for developing machine learning models, which integrates well with AI application development needs. 03:54 Lois: How has the work involved in setting up and managing databases changed over time? Susan: When we look at how our systems have evolved through the years, we see that responsibility has steadily migrated from the customer, from human interaction, to the service. As database technology evolved from the traditional on-premises system to the Exadata engineered system, and finally to Autonomous Database, tasks that previously required significant manual intervention have become increasingly automated and optimized. 04:34 Lois: How so? Susan: A traditional database environment requires manual configuration of the hardware, the operating system, and the database software, along with initial database creation. As we evolve into the Exadata environment, the Exadata Database, specifically the Exadata cloud service, simplifies provisioning through a web-based wizard, making it faster and easier to deploy Oracle Database on optimized hardware. When we move to an Autonomous environment, the entire provisioning process is automated, allowing users to rapidly deploy mission-critical databases without manual intervention or DBA involvement. So as customers move through Exadata toward Autonomous Database, there are fewer components the customer needs to manage in the database stack, which gives them more time to focus on the important parts of the business. The Exadata Database provides co-management of backup and restore, patches and upgrades, monitoring, and tuning.
And it gives the administrator the ability to customize the configuration to meet very specific business needs. With Autonomous Database, it's now fully automated, and a greater share of responsibility shifts toward the service. With Autonomous Database on dedicated infrastructure, Oracle performs that fine-grained tuning for you. 06:15 Nikita: If we narrow it down just to Oracle and AWS for a moment, which parts of the infrastructure or day-to-day ops are handled by each company behind the scenes? Susan: Oracle Database@AWS operates under a shared responsibility model, dividing the service responsibilities between AWS, Oracle, and you, the customer. AWS has the data center. Remember, this is where everything is running. The Oracle Database infrastructure may be managed by Oracle and operated through OCI, but it is physically located within AWS regions, availability zones, and AWS data centers. The AWS infrastructure is AWS's responsibility to secure, including the physical security of the data center, the network infrastructure, and foundational services like compute, storage, and networking within AWS. The next party in the shared responsibility model is Oracle, which provides the hardware. While the hardware physically resides in the AWS data center, Oracle Cloud Infrastructure's operations team manages this infrastructure, including software patching, infrastructure updates, and other operations, through a connection to OCI. This means Oracle handles the provisioning and maintenance of the underlying Exadata infrastructure hardware. Beyond the Exadata infrastructure itself, Oracle also manages that hardware's environment through the database control plane. So Oracle manages the administration and operations for the Oracle Database@AWS service, which resides in OCI. This includes the capabilities for management, upgrades, and operational features. 08:37 Nikita: And what are the key things that still remain on the customer's plate? Susan: Whether you are in an Exadata environment or an Autonomous environment, it is you, the customer, who is responsible for most database administration operations, as well as for managing users and the privileges they need to access the database. No one knows the database, and who should be accessing the data, better than you. You are responsible for securing the applications and the data in the database, which means you define who has access to it, control data encryption, and secure the applications that interact with Oracle Database@AWS. 09:29 Lois: Susan, we've talked about both Autonomous Database and Exadata Database Service being available on Oracle Database@AWS, but what's different about how each works in this environment, and why might someone pick one over the other? Susan: Both databases run on the same Exadata Cloud Infrastructure, and both can be deployed in the public cloud as well as in the customer's data center, which is Oracle Cloud@Customer. Autonomous Database is a fully managed, completely automated environment, providing a fully Autonomous Database Service running on dedicated Oracle Exadata Infrastructure within your AWS data center.
The Exadata service is provided and managed by Oracle and physically runs in the AWS data center. It is designed for mission-critical workloads and includes Real Application Clusters (RAC), offering high performance, high availability, and full-feature capability similar to other Exadata environments, such as those running in our customers' data centers. The primary difference really lies between the two services. With Autonomous, the customer only pays for the compute resources that are used. Autoscaling can be used for variable workloads, automatically scaling compute resources up or down when required. Autonomous Database also has automatic optimization for data warehousing, transaction processing, and JSON workloads. With the Exadata service, the customer again pays for the compute resources they allocate. But that's the key thing: the customer initiates the scaling, because it's very specific to the workload that is needed. So when you look at the two database services, one lets Oracle fully manage everything, including the scaling. The other, Exadata, gives you an environment whose infrastructure is managed by Oracle while you, as the database administrator, keep more granular control over not only how the database scales but how you customize the way it runs. 12:10 Nikita: Focusing on Autonomous Database for a moment, what should teams know about how it actually runs within AWS? Susan: Autonomous Database on Oracle Database@AWS brings the power of Oracle's self-managing, self-securing, and self-repairing database into your AWS environment.
It automates many of the traditional, complex, and time-consuming database management tasks, such as provisioning the database, patching, backups, scaling, and performance tuning, reducing the need for manual intervention by the database administrator. Running Autonomous Database in your AWS region enables low-latency access for the AWS applications and services deployed within AWS, improving performance and response time. It also supports integration with various AWS services, such as IAM, CloudFormation, CloudWatch for monitoring, and S3 for storage. You can easily migrate existing Exadata workloads, including those running on Oracle RAC, to AWS with minimal or no changes to your databases or applications. In addition, there's a really powerful capability of the database called Zero-ETL, that's zero extract, transform, and load. It's an integration capability with services like Amazon Redshift, enabling near-real-time analytics and machine learning on the transactional data stored within Autonomous Database in your AWS environment. So Autonomous Database checks off many of the boxes for automatically securing, tuning, and scaling the database. With Autonomous Database on Dedicated Exadata Infrastructure, the Exadata Cloud Infrastructure resource represents the physical system, which can be expanded with storage servers and compute hosts. This provides an isolated zone for the highest protection from other tenants. The data is stored on a dedicated server for only one customer, and that would be you. 14:56 Lois: Could you explain the role of the Autonomous VM?
What are its primary benefits? Susan: The virtual machines, or as we refer to them, the VM cluster, include the Grid Infrastructure and provide private network isolation. This gives you custom memory, core, and storage allocation. Oracle Grid Infrastructure includes Oracle Clusterware, which manages the cluster and its servers and ensures that the database can fail over to another server in case of any failure. 15:34 Be a part of something big by joining the Oracle University Learning Community! Connect with over 3 million members, including Oracle experts and fellow learners. Engage in topical forums, share your knowledge, and celebrate your achievements together. Discover the community today at mylearn.oracle.com. 15:55 Nikita: Welcome back! Susan, what is the Autonomous Container Database? Susan: The Autonomous Container Database, which you need if you're going to create an Autonomous Database, is provisioned within your Autonomous Exadata VM Cluster. It serves as a container to hold, or house, one or more Autonomous Databases. This allows multiple Autonomous Databases to coexist on the same infrastructure while still being logically separated, and it allows databases to be separated based on their intended use. Think of a database for production, a database for development, a database for testing. You may have different database versions within the same infrastructure. This isolation makes it easier for you to meet your SLAs, your Service Level Agreements, any long-term backup requirements, and very specific encryption key needs, and it prevents issues in one database from impacting another. So everything is isolated and secure while still grouped in a manner that meets your business needs. 17:08 Lois: Looking at Exadata Database Service specifically, what are some standout advantages for customers who deploy it on Oracle Database@AWS?
Is there anything in particular they should get excited about in terms of performance or integration with AWS? Susan: The Exadata Database Service runs on dedicated Exadata Infrastructure deployed within your AWS data center. It delivers the same Exadata service experience and cloud control plane as Oracle Cloud Infrastructure, allowing you to leverage existing skills and processes across your multi-cloud environment. It addresses data residency, a scenario many of our customers face: because of security compliance, you need the data local to you. With the Exadata Database in Oracle Database@AWS, it is running in your data center, so this addresses that very important need for data residency, to have it close to you. It also allows seamless integration with other AWS services and applications. So now you have a hybrid cloud architecture leveraging the benefits of both Oracle Exadata and your AWS systems. It has built-in high availability through Real Application Clusters (RAC), as well as Data Guard for disaster recovery. It also lets you scale your compute, storage, and I/O resources independently. So, as mentioned, with Exadata you have flexibility in how you want your database to run. Just like Autonomous, the Exadata Database checks off many of the boxes for running mission-critical workloads with high availability, highly redundant hardware and software features, and extreme performance, scalability, and reliability. This allows you to run your AI, online transaction processing, and analytics workloads at any scale on the Exadata Infrastructure running in Oracle Cloud, and in this case, running in your data center.
19:45 Nikita: If a business suddenly needs more capacity, how does scaling work with Exadata Database Service versus Autonomous Database on Oracle Database@AWS? Susan: With Exadata scaling, you can scale to meet expected demands. You know that at a certain point you will need more, so you ask it to scale at that point. To use an example: say I assign three compute cores all the time, but there are periods of higher demand, think of end-of-quarter or end-of-year processing, when I need more. You enable additional compute cores at the time you need them. And what's cool is that when they're no longer needed, it scales back down to the original three cores you assigned, so you only pay for the enabled cores. What's very cool about Autonomous is that it offers real-time scaling. Because Autonomous Database is self-tuning and self-monitoring, it watches the workload and scales to match demand. Once the minimum level of compute is defined and automatic scaling is enabled, Autonomous Database adjusts to consumption when needed and scales back down when it's not. So though Exadata is pretty cool, scaling up and down on workload demand, Autonomous is even more powerful: real-time scaling based on usage at that moment, with built-in automatic increases to meet workload spikes and automatic scale-back when the capacity is no longer needed. It's a very powerful capability across all of our Oracle databases, the ability, even with traditional databases, to define what you may need: Exadata scaling for peak demands, and Autonomous scaling for real-time consumption.
When you look at all of our options, one key phrase to bear in mind is: performance scales as more servers are added. What this really says is that Oracle's automated scaling ability lets the database maintain or improve its performance under increased workload by automatically adding computational resources when needed. This process is also known as horizontal scaling. It involves adding more servers, or compute instances, to a cluster to share the processing load, and it happens automatically. 22:53 Nikita: There's so much more we can discuss about Oracle Database@AWS, but let's pause here for today! Thank you so much, Susan, for joining us. Lois: Yeah, it's been really great to have you, Susan. If you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 23:23 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this episode of Exchanges with Hitachi Solutions, host Ginny Lebeck joins Matt Volke, Chris Satterly, and Mark Shoesmith to preview FabCon 2026 in Atlanta. The conversation explores why FabCon has become a must-attend event for data and AI professionals, how customer expectations around data have evolved, and what organizations need to succeed on their AI journey. Drawing from real-world client conversations and healthcare use cases, the panel discusses Microsoft Fabric as a unifying data platform, the importance of strong data foundations and governance, and how emerging capabilities, from low-code tools to advanced analytics, are enabling teams to move faster and unlock more value from their data. You can find Hitachi Solutions at FabCon Booth #545, March 18-20.

Highlights:
- FabCon brings together customers, partners, and Microsoft experts to share practical insights on data management, analytics, and AI, with a strong emphasis on real-world business challenges.
- Organizations are increasingly treating data as a strategic asset rather than a byproduct, focusing on real-time insights, advanced analytics, and AI readiness.
- Microsoft Fabric is emerging as a key enabler, simplifying data ingestion, modeling, and curation while supporting both self-service analytics and advanced data science workloads.
- Healthcare organizations are adopting Fabric to modernize legacy, on-premises data platforms, reduce manual ETL effort, and better handle diverse and high-volume data sources.
- Tools like pipelines, copy jobs, mirroring, and Copilot-assisted development help small data teams work more efficiently and respond faster to analytics demands.
- Strong data models, governance, and curated information are essential prerequisites for successful AI and advanced analytics initiatives.
- Hitachi Solutions' presence at FabCon focuses on connecting customers with experienced data engineers, data scientists, and industry experts to help translate technology into outcomes.

global.hitachi-solutions.com
We have all seen, at least once, that black-and-white photo in a magazine or on a wall: Jacques Brel, Léo Ferré, and Georges Brassens deep in what looks like a fascinating discussion. Brel and Ferré listen, cigarettes in hand, seated at a table covered with microphones, glasses, and empty beer bottles, while Brassens, holding his eternal pipe, tells a story we would love to be in on. The magic of a photograph capturing one sixtieth of a second of a meeting that is now historic. Because unlike so many who left their names in the great sweep of history, these men killed no one, spread no hatred, and harangued no crowds in pursuit of power. They are quite the opposite. They are neither perfect nor beyond reproach, no one is, but they are poets, musicians, singers. And the greatest of them all, on this 6th of January 1969.

Brassens and Brel found their first success in the mid-1950s after working the cabarets of Montmartre and the Left Bank, sometimes together, for a pittance per recital. And now here they are, idols of what is by now two generations. But whatever you do, don't tell them that: they hate this status, acquired somewhat in spite of themselves. And Léo Ferré? Even less so. He is by far the most libertarian, individualist, and anti-establishment of the three, which is saying something. He is also the oldest. While Brel will turn 40 in April and Brassens has just celebrated his 48th birthday, Léo Ferré has been 52 since last summer. He was born during the First World War and was already 31 when he came up to Paris in 1946 to make his debut in a cabaret.

So even though they harbor immense admiration for one another, getting all three of them around the same table borders on the impossible. For months, a 24-year-old journalist has been trying to pin down this famous meeting for an equally young monthly, Rock & Folk. What a puzzle! Brel is on stage in Brussels, at the Théâtre de la Monnaie, no less, performing his musical, L'homme de la Mancha. But now here he is in Paris to perform the work, which is meeting with great success. The opportunity is too good to miss! It is late 1968; there are only telephones and diaries, no answering machines, let alone electronic tools, and time is passing. But on this January 6, 1969, they arrive, one after the other, at the journalist's apartment, or rather his mother-in-law's, in the sixth arrondissement, in a neighborhood where a revolutionary atmosphere reigned the previous spring. In England, the Rolling Stones have been arrested by the police; 2001: A Space Odyssey has reached the screens. In six months, men will walk on the Moon and 500,000 others will gather at Woodstock; as for the Beatles, at the end of this very month of January they will play on the roof of THEIR own record company. The times are changing, this time for sure. What will these three plain-speaking men have to say, in the intimacy of this apartment, during what might be called the interview of the century? You wish you had been there, don't you?
France is worried about the consequences of predatory American policy in the Caribbean, and the question arises of whether Paris can count on its presence in French Guiana to carry weight on the continent. In early January 2026, the French Senate's Committee on Foreign Affairs, Defence and the Armed Forces made public a mission report. The American designs on Greenland and the abduction of Nicolás Maduro in Venezuela are among its sources of concern. The effects of the Monroe Doctrine, invoked to justify United States domination of this hemisphere, and the emergence of new oil states with immense reserves, Guyana and Suriname, raise fears of destabilization, compounded by drug trafficking and illegal gold mining. The French territories of the Caribbean, Guadeloupe, Martinique, Saint-Martin, Saint-Barthélemy, and French Guiana, now sit at the heart of a major geopolitical and security challenge. French sovereignty there is negotiated daily.

Guests: Fred Constant, professor of political science at the Université des Antilles, author of "Géopolitique des Outre-mer" (Le Cavalier Bleu) and "Atlas des Outre-mer" (Autrement). Yannick Chenevard, senior reserve officer, member of parliament for the Var, rapporteur for the Navy budget and the implementation of the military programming law, associate researcher at Lab'HOMERe. Patrick Roger, former journalist at Le Monde, author of "Nouvelle-Calédonie, la Tragédie", winner of the Prix des Députés 2025, and of "L'archipel de la discorde. Paris-Nouméa. Demain le Pacifique" (Éditions du Cerf).
The world's overseas territories are going through a difficult period, attentive as they are to the threats hanging over Greenland, a constituent country of the Kingdom of Denmark and a territory associated with the European Union. There are, in fact, many overseas territories today, the residual product of the colonial expansionism that once drove rival nations to extend their borders beyond the seas and oceans, these offshore territories often lying in zones vital to world trade. Launched in the seventeenth century, that movement was meant to serve those nations' economic interests and to promote their political and religious ideals. And now it is resurfacing, against the backdrop of Donald Trump's designs on Greenland. One can imagine this move being watched closely, and in a certain silence, by the United States' great rival, China, which is itself no less active in other zones, if in a less open manner... or is it? Géopolitique takes this opportunity to look at the French overseas territories, in the Caribbean, the Pacific, the Indian Ocean, and the South Atlantic, which account for a large part of France's global standing and which face two situations that are sometimes linked: protest movements within these territories themselves, and destabilization strategies pursued by various international actors. A look at the vulnerabilities of the French overseas territories, or how a space once marginal has become strategic.

Guests: Fred Constant, professor of political science at the Université des Antilles, author of "Géopolitique des Outre-mer" (Le Cavalier Bleu) and "Atlas des Outre-mer" (Autrement). Yannick Chenevard, senior reserve officer, member of parliament for the Var, rapporteur for the Navy budget and the implementation of the military programming law, associate researcher at Lab'HOMERe. Patrick Roger, former journalist at Le Monde, author of "Nouvelle-Calédonie, la Tragédie", winner of the Prix des Députés 2025, and of "L'archipel de la discorde. Paris-Nouméa. Demain le Pacifique" (Éditions du Cerf).
In the Pit with Cody Schneider | Marketing | Growth | Startups
Your "source of truth" for customer acquisition isn't GA4. It's what people tell you when they sign up, and right now that story is changing fast.

In this episode, we unpack a simple but brutally effective tactic: adding a required "How did you hear about us?" field to your signup form and using that data to understand where real discovery is happening. The surprise? More and more B2B customers are saying social media, even when analytics tools claim otherwise.

But here's the deeper shift: organic social is hard to measure unless you track the right trailing indicator. That indicator is branded search.

You'll learn how to use Google Search Console to track brand-name impressions over time, why it's becoming one of the most important KPIs for modern founder-led marketing, and how branded search creates a defensible moat competitors can't easily steal. If you're planning your marketing strategy for 2026, this is the measurement system you need.

What You'll Learn
- Why signup form attribution is often more reliable than your analytics dashboards
- The biggest B2B acquisition shift happening right now: from search to social
- Why organic social is nearly impossible to tie to ROI, and how to measure it anyway
- The "branded search" metric that acts as a trailing indicator for social discovery
- Why branded search is a marketing moat your competitors can't take from you
- How to build a branded-search chart using Google Search Console in minutes
- The exact prompt to pull branded impressions by query and track them over time

Timestamps
00:00:00 - Customer Discovery Starts at Signup
00:00:10 - The Shift: Search → Social
00:00:31 - Why Organic Social Now Matters Most
00:00:52 - The Measurement Problem (and the Fix)
00:01:12 - Branded Search = Your Trailing Indicator
00:01:33 - Why Branded Search Is a Moat
00:01:54 - Where to Invest Time, Money, and Energy
00:02:04 - The 2026 Strategy: Grow Brand Searches
00:02:15 - How to Track Branded Search in GSC
00:02:25 - Building the Branded Impressions Chart
00:02:46 - Live Demo: Google Search Console Setup
00:03:07 - Final Thoughts

Key Topics & Insights

1. Signup Attribution Beats Analytics (Almost Every Time)
One of the fastest ways to understand how customers actually found you is simple: add a required "How did you hear about us?" field to your signup form.
Why it works:
- It captures customer intent in their own words
- It reveals channels analytics often misattribute
- It shows the real discovery story (not the last-click story)
And the punchline: it often contradicts what GA4 says.

2. The B2B Discovery Shift: Search → Social
If you've been paying attention to the data, something big is happening: people aren't discovering new software products through search anymore. They're discovering them on social, then Googling them afterward. This shift has accelerated over the past 12–18 months, even in B2B, where trends typically lag behind DTC.
What this means:
- SEO is no longer the first touchpoint
- Social is becoming the top-of-funnel discovery engine
- Search is evolving into a validation channel

3. Organic Social Has a Measurement Problem
The hardest part about investing in organic social is that it's difficult to tie to ROI. Whether you're doing founder-led content, creator sponsorships, community distribution, or organic growth loops, it doesn't fit neatly into traditional attribution. So instead of forcing bad ROI models, track the trailing indicator that proves social discovery is working.

4. Branded Search Is the Trailing Indicator That Matters
Here's the key idea: when someone discovers your product on social, they don't click your link. They Google your name. That branded search becomes the measurable proof that:
- A discovery event happened
- People care enough to look you up
- Your brand is entering the market's memory
This is why branded search growth is one of the strongest indicators of momentum. If branded search is increasing month over month, your brand is winning.

5. Branded Search Creates a Defensible Moat
This is where it becomes more than measurement; it becomes strategy. Branded search is difficult for competitors to steal. Once people are searching your name, you own that demand.
The only ways competitors can interfere:
- They bid on your brand in Google Ads
- They try to outspend you
- They attempt to confuse the market
But that's expensive, obvious, and usually temporary. So branded search is not only a KPI; it's defensibility.

6. How to Track Branded Search in Google Search Console
This is the tactical part. To track branded search over time, you want a chart that shows:
- Impressions over time
- For queries containing your brand name
- Captured in every format your audience might type it
And this is surprisingly easy to pull from Google Search Console.

7. The Exact Chart & Prompt to Build It
The goal is to extract Search Console impressions where queries include your brand name.
Example prompt: "Build a chart showing total impressions over time for queries containing 'YOURBRAND'."
Then your job becomes simple: increase branded impressions month over month through social content, distribution, creator partnerships, podcast mentions, repeated brand exposure, and consistent visibility. This becomes the clearest signal that marketing is compounding.

Action Steps (Do This Today)
- Add a required "How did you hear about us?" field on signup
- Review responses weekly (and compare against analytics)
- Use Google Search Console to track branded query impressions
- Create a monthly KPI: branded impressions growth
- Use branded search growth as the scoreboard for your organic social efforts

Sponsor
Today's episode is brought to you by Graphed, an AI data analyst & BI platform. With Graphed you can:
- Connect data sources like GA4, Facebook Ads, HubSpot, Google Ads, Search Console, and Amplitude
- Build interactive dashboards just by chatting (no Looker Studio/Tableau learning curve)
- Use it as your ETL + data warehouse + BI layer in one place
Ask: "Build me a stacked bar chart of new users vs. all users over time from GA4" and Graphed just builds it for you.
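The branded-impressions chart described in the episode boils down to one aggregation: filter Search Console queries containing your brand name (in every spelling variant), then sum impressions per month. A minimal sketch of that aggregation, assuming rows already exported from the Search Console UI or API; the row shape, field order, and "graphed" brand variants here are illustrative, not a real export:

```python
from collections import defaultdict

def branded_impressions_by_month(rows, brand_variants):
    """Sum impressions per month for queries containing any brand variant.

    `rows` mimics a Search Console export: (month, query, impressions).
    """
    totals = defaultdict(int)
    variants = [v.lower() for v in brand_variants]
    for month, query, impressions in rows:
        q = query.lower()
        # Substring match catches "graphed", "graphed.com", "graphed ai", etc.
        if any(v in q for v in variants):
            totals[month] += impressions
    return dict(sorted(totals.items()))

# Illustrative data only; real numbers come from your GSC property.
rows = [
    ("2025-01", "graphed ai", 120),
    ("2025-01", "best bi tools", 500),
    ("2025-02", "graphed.com pricing", 260),
    ("2025-02", "graphed dashboard", 90),
]
print(branded_impressions_by_month(rows, ["graphed"]))
# {'2025-01': 120, '2025-02': 350}
```

Plotting those monthly totals gives exactly the "branded impressions over time" chart the episode treats as the scoreboard for organic social.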
In the Pit with Cody Schneider | Marketing | Growth | Startups
If you're not getting cited by ChatGPT, your "AI SEO" strategy isn't working, no matter what your dashboards say. Most of it is observability theater: dashboards, charts, synthetic prompts, and zero actual placement.

In this episode, we chat with Shawn Schneider, founder of Eldil AI, about what actually determines whether your company shows up in ChatGPT answers. The short answer: LLMs don't reward more content, clever prompts, or prettier dashboards. They reward a small set of trusted third-party sources, and most brands aren't mentioned in any of them.

Shawn breaks down why observability alone creates a false sense of progress, how to identify the specific citations that dominate your category, and how to turn that insight into real placements through outreach and negotiation. We also unpack why Google Search Console is still the best signal we have for AI-driven queries, how to prioritize the one citation that actually matters, and what the first 30–90 days can look like when you do this correctly.

Guest
Shawn Schneider, founder of Eldil AI, a GEO / AI SEO platform focused on identifying and securing the citations LLMs rely on most; helps brands and agencies win visibility in ChatGPT by targeting the power-law sources that shape AI answers.

Guest Links
LinkedIn: https://www.linkedin.com/in/shawn-schneider-61b2b5207/
Company Website: https://www.eldil.ai/

What You'll Learn
- Why most GEO / AI SEO observability tools are meaningless without actual placements
- The only thing that reliably improves AI search visibility: citation placements
- How to use Google Search Console to surface AI fan-out queries
- Why synthetic prompt data is still unreliable (and what to trust instead)
- The power law of citations: why only 1–3 sources actually matter
- How Eldil turns citation discovery into outreach and negotiated placements
- What 30–90 days can look like when you secure the right citation
- Which industries should invest heavily, and which should ignore this for now
- Why ChatGPT dominates referral traffic compared to other LLMs
- What happens when ads arrive inside AI search results

Timestamps
00:00 — GEO, AI SEO, AEO: noise vs. reality
00:21 — Why observability tools don't move the needle
03:55 — Where GEO tools get their data (and why it's messy)
07:16 — Using Google Search Console as a prompt proxy
09:40 — The three pillars: technical, content, authority
12:07 — Citations as the dominant ranking lever
13:07 — The power law: thousands of citations, one winner
19:07 — How fast results actually show up
20:39 — When building your own citation content makes sense
30:41 — Which business models win with GEO
37:11 — ChatGPT ads and the future of AI search
41:32 — Where to find Shawn and closing thoughts

Key Topics & Ideas

1. Why dashboards feel good but don't create outcomes
- Most tools are essentially "Google Analytics for LLMs"
- ChatGPT referrals rise naturally as usage increases
- Charts go up even if you do nothing
- Without placements, observability is just vanity

2. The three common approaches in the market today
- Guessing prompts with LLMs
- Clickstream data sourced from Chrome extensions and brokers
- Synthetic prompts without transparency
Eldil uses Google Search Console + Analytics as the best available proxy for real intent.

3. How to spot AI-generated fan-out queries
- 50+ character queries
- High impressions
- Low or zero clicks
These often represent LLMs expanding short prompts into long-form searches.

4. The three pillars: Technical, Content, Authority
- Technical: can an LLM crawl and understand your site?
- Content: does useful information exist?
- Authority: does anyone credible back it up?
Authority is the multiplier most teams ignore.

5. What actually shapes AI answers
- Citations are not backlinks; they are semantic explanations
- LLMs repeatedly return to the same trusted sources
- Third-party listicles and niche blogs dominate citation share

6. The Power Law of Citations
- 10k–15k citations may exist
- 200–300 matter
- 1–3 actually move the needle
If you're not in those, content volume won't save you.

7. The real workflow
- Identify high-value customer questions
- Extract dominant citations
- Rank them by weight
- Contact site owners
- Negotiate placement
- Monitor AI visibility and referral traffic
This is where most tools stop, and where Eldil focuses.

8. How many placements do you need?
Surprisingly few.
- You don't need 100 placements
- You need the right one
- Then expand into adjacent verticals
This is concentrated betting, not spray-and-pray SEO.

9. Why GEO feels different from traditional SEO
- You are inserting into sources that already rank
- Changes can show up in weeks, not years
- Meaningful referral growth often appears within ~60–90 days

10. Who Should (and Shouldn't) Do This
Best fit:
- High-ACV B2B SaaS
- Long buying cycles
- High-LTV e-commerce (supplements, skincare)
- ICPs that already live in ChatGPT
If your customers do not use LLMs yet, start elsewhere.

11. Why ChatGPT is the main event
Based on Eldil's data:
- ChatGPT referrals dwarf Perplexity and others
- For most companies, this is where focus belongs
- Smaller channels still matter for high-ticket sales

12. What's coming next
- Paid placements inside LLMs
- Organic plus paid becoming a one-two punch
- Citation inventory getting expensive fast
The window for cheap dominance will not last.

Sponsor
Today's episode is brought to you by Graphed, an AI data analyst & BI platform. With Graphed you can:
- Connect data sources like GA4, Facebook Ads, HubSpot, Google Ads, Search Console, and Amplitude
- Build interactive dashboards just by chatting (no Looker Studio/Tableau learning curve)
- Use it as your ETL + data warehouse + BI layer in one place
Ask: "Build me a stacked bar chart of new users vs. all users over time from GA4" and Graphed just builds it for you.
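The fan-out heuristic from the episode (50+ character queries with high impressions but few clicks) is mechanical enough to script against a Search Console export. A minimal sketch, where the function name, row shape, and thresholds beyond the 50-character rule are illustrative assumptions rather than Eldil's actual implementation:

```python
def likely_fanout_queries(rows, min_len=50, min_impressions=20, max_clicks=1):
    """Flag Search Console queries that look like LLM fan-out searches.

    Heuristic from the episode: long (50+ chars), visible (impressions),
    but rarely clicked. `rows` mimics an export: (query, impressions, clicks).
    """
    return [
        query
        for query, impressions, clicks in rows
        if len(query) >= min_len
        and impressions >= min_impressions
        and clicks <= max_clicks
    ]

# Illustrative data only; a real export would come from your GSC property.
rows = [
    ("best etl tools for small data teams comparing pricing and features", 140, 0),
    ("graphed", 900, 75),  # short branded query: excluded
    ("how to connect ga4 to a bi dashboard without writing sql queries", 60, 1),
]
print(likely_fanout_queries(rows))
```

Reviewing the flagged queries by hand is still necessary; long-tail human searches can trip the same filter.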
Real-time data is no longer a future problem. At Small Data SF by MotherDuck, I sat down with David Yaffe, Co-Founder & CEO at Estuary, to talk about what has changed in the world of data streaming!

A few years ago, real-time data was something most teams put on their "later" list. Expensive. Hard to scale. Too complex for most use cases. But as David shared, that story has shifted fast.

Here are some takeaways from our conversation:

- Streaming is now viable for everyone. With cheaper compute, mature tooling, and simpler developer experiences, real-time data isn't a luxury anymore. The barriers that once made it a niche capability are gone.
- Batch vs real-time: ask the right questions. Before jumping to streaming, David suggests asking what problems you're solving; speed for the sake of speed rarely pays off. Sometimes batch is just fine. The goal is fit, not flash.
- Architecture matters. Moving from batch to streaming means thinking end to end: from schema evolution and error handling to observability. Teams that skip this planning end up redoing pipelines.
- CDC done right. Change Data Capture is powerful, but it's easy to misuse. The most common mistake? Treating CDC as an ETL replacement rather than an event system. Understanding that difference prevents pain later.

The conversation was practical, focused, and refreshing. Real-time isn't about chasing trends; it's about enabling faster insights and cleaner data movement with less friction. If you've been wondering when "real-time" becomes realistic, this one will give you a clear answer.

#data #ai #motherduck #smalldatasf #theravitshow
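The "CDC as an event system" point can be made concrete: each change is an ordered event (insert, update, or delete carrying the row's key and new state), and consumers apply events incrementally to a replica instead of re-copying tables. The event shape below is a generic sketch for illustration, not Estuary's or any specific CDC tool's actual format:

```python
def apply_change(replica: dict, event: dict) -> None:
    """Apply one CDC event, in arrival order, to a local key -> row replica."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        replica[key] = event["after"]  # new row state after the change
    elif op == "delete":
        replica.pop(key, None)

# A hypothetical ordered change stream for a `users` table.
replica = {}
events = [
    {"op": "insert", "key": 1, "after": {"name": "alice", "plan": "free"}},
    {"op": "update", "key": 1, "after": {"name": "alice", "plan": "pro"}},
    {"op": "insert", "key": 2, "after": {"name": "bob", "plan": "free"}},
    {"op": "delete", "key": 2, "after": None},
]
for event in events:
    apply_change(replica, event)
print(replica)
# {1: {'name': 'alice', 'plan': 'pro'}}
```

The contrast with ETL thinking is the ordering guarantee: replaying the same events in a different order, or batching them as a snapshot, loses the delete and the plan upgrade.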
Benedikt turns a year older. Benedicte moves forward despite the curveballs.

Benedikt took a week off work to celebrate his 40th birthday. He spent his birthday week at a few parties with family and friends, and seeing two concerts. On the work front, he and the team built a Snowflake integration; with the ETL infrastructure now in place, this potentially opens doors for other integrations.

Despite another major extended-family upheaval, Benedicte carries on by going on morning walks and focusing on her projects. She recently shipped the new Framer plugin version, made demos, and is planning to get the plugin's documentation done.

Benedikt and Benedicte talk about books, how fast the internet is breaking nowadays, and more.

Mentioned on the show:
- Surrounded by Idiots, a book by Thomas Erikson
- Antifragile, a book by Nassim Nicholas Taleb
- Nonviolent Communication, a book by Marshall B. Rosenberg
In the Pit with Cody Schneider | Marketing | Growth | Startups
Your brand doesn't exist until it ranks on page one, and most founders have no idea how to make that happen.

In this episode, we break down the exact playbook for getting a brand-new domain to show up in Google for your company name. After going through this process firsthand with Graphed.com, you'll learn how to choose a rankable name, build the right backlinks, trigger branded search behavior, and use Google Ads to accelerate the whole process. If you're launching anything new, this is the tactical blueprint you wish you had earlier.

What You'll Learn
- Why ranking for your brand name is the first real trust signal for any startup
- How to pick a name and domain you can actually rank for
- The "first 100 links" strategy that trains Google to recognize your brand
- Simple ways to generate branded search behavior across social and content
- How Google quietly tests your domain, and how to know when it's happening
- How to use Google Search Ads to accelerate ranking and protect your brand
- Why .com still matters more than any other TLD

Timestamps
00:00 — Why your new domain must rank for your own brand name
00:31 — Why ranking for your brand name is a critical early trust signal
01:03 — The rookie mistake founders make when picking a brand name
01:13 — What ideal, non-competitive SERPs should look like
01:35 — Graphed.com's journey to finally ranking in position one
01:45 — Overview of the process to teach Google your brand
01:55 — Step 1: Build backlinks to your homepage
03:29 — Step 2: Drive branded search with social posts & content
04:21 — Step 3: Run Google Search Ads on your exact brand name
05:45 — Why you should always buy the .com for your brand
06:16 — Final thoughts + Graphed free trial

Key Topics & Insights

1. Ranking for Your Brand Name = Early-Stage Trust
If someone Googles your company and doesn't find you, credibility collapses. Ranking for your brand name is one of the first, and easiest, trust signals to secure. Graphed.com took ~2 months to rank, but with this framework it can happen in as little as 24–48 hours.

2. How to Choose a Rankable Name
- Avoid names already used by active companies
- Look for search results filled with noise, not competitors
- Ideal: two words, few syllables, easy to spell
- And always, always buy the .com

3. Build the First 100 Backlinks (Brand-Name Anchors Only)
Your #1 job early on is to teach Google what your company is. Do this by:
- Building backlinks to your homepage
- Using your brand name as the anchor text (not keywords)
These are foundational "identity" links that help Google map brand → domain.
How to build them:
- Submit to software directories
- Use link submission services
- Cold email companies for guest post swaps
- Layer PR on top later

4. Trigger Branded Search Behavior
Once Google sees your backlinks, you need humans to reinforce the signal:
- Search your brand name
- Click your domain
- Spend time on the page
Google then learns: "When people search this name, this is the site they want." You create this behavior through social posts, newsletters, podcast mentions, and repeated use of the brand everywhere.

5. How Google Tests Your Domain
Google will quietly experiment by showing your domain for branded queries. You'll see this in Search Console via:
- Rising impressions
- Increasing CTR
- Sudden jumps in average position
This is the moment Google "decides" you belong on page one.

6. Accelerate Everything With Google Search Ads
Run a brand campaign:
- Exact-match brand keyword
- Minimum bid: around $5
- Send traffic to your homepage
This forces the association between your brand name and your site, and accelerates your rise in organic search.
Brand protection tips:
- Raise bids to block competitors
- Add sitelinks to take more SERP real estate
- Optional: multiple ad accounts (with caution)

7. Why .com Still Beats Every Other Domain
Consumers inherently trust .com more than .io, .co, .xyz, etc. It drives higher CTR and reduces friction in word-of-mouth. If the .com isn't available, pick a new name; don't settle.

Sponsor
Today's episode is brought to you by Graphed, an AI data analyst & BI platform. With Graphed you can:
- Connect data sources like GA4, Facebook Ads, HubSpot, Google Ads, Search Console, and Amplitude
- Build interactive dashboards just by chatting (no Looker Studio/Tableau learning curve)
- Use it as your ETL + data warehouse + BI layer in one place
Ask: "Build me a stacked bar chart of new users vs. all users over time from GA4" and Graphed just builds it for you.
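The "Google is testing your domain" signals from the episode (rising impressions plus a sudden jump in average position) can be spotted programmatically in a Search Console export. A rough sketch; the function name, row shape, and the 5-position threshold are my illustrative assumptions, not a documented Google signal:

```python
def detect_serp_test(days, position_jump=5.0):
    """Return the first date where avg position improves sharply while
    impressions rise: the moment Google appears to be testing the domain.

    `days`: chronological list of (date, impressions, avg_position)
    for branded queries, as exported from Search Console.
    """
    for prev, cur in zip(days, days[1:]):
        _, prev_impressions, prev_position = prev
        date, impressions, position = cur
        # Lower avg_position is better, so improvement = prev - current.
        if impressions > prev_impressions and (prev_position - position) >= position_jump:
            return date
    return None

# Illustrative branded-query data for a brand-new domain.
days = [
    ("2025-03-01", 12, 48.0),
    ("2025-03-02", 15, 47.5),
    ("2025-03-03", 40, 31.0),  # impressions up, position jumps 16.5 spots
]
print(detect_serp_test(days))
# 2025-03-03
```

Day-to-day Search Console data is noisy for new domains, so in practice you would smooth over several days before trusting a single jump.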
An airhacks.fm conversation with Stanislav Bashkyrtsev (@sbashkirtsev) about: scientific software for chemists and drug discovery, peaksel flagship software for analyzing mass spectrometer data, parsing binary instrument formats up to gigabytes in size, mass spectrometry measuring molecular weights using electric fields and detectors, daltons as mass units, isotope patterns for molecule identification, storing experimental data in PostgreSQL with potential big data challenges, S3 storage solutions, drug discovery process from hit identification to molecule modifications, molecular libraries and combinatorial chemistry, enumeration of molecular structures in computers, synthesis reactions mixing reactants with solvents and various conditions, liquid handlers and laboratory automation challenges, return on investment issues in early drug discovery automation, lab of the future concepts, Molbrett product combining excalidraw with chemical structure drawing capabilities, SMILES format for representing molecular structures as strings, graph-based molecular formats storing atom connections and bond types, 2D vs 3D molecular visualization preferences, Meve centralized event system for tracking molecular experiments across different software systems, ETL processes for data integration, Crystalline software for documenting protein crystallography experiments, protein structure determination using X-ray crystallography, Synchrotron facilities for high-energy X-ray generation, crystal growing conditions and documentation, fishing crystals with microscope and lasso wands, liquid nitrogen cooling for crystal preservation, Java backend, JavaScript frontend, minimal dependencies approach, six-person team structure, sponsorship business model for open source scientific software development, free updates for sponsors, subscription model for non-sponsors, checkout: https://elsci.io Stanislav Bashkyrtsev on twitter: @sbashkirtsev
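The episode's mention of SMILES (molecular structures represented as plain strings) is easy to illustrate: ethanol is simply `CCO`, and naive string tokenization already recovers atom counts. This is a deliberately simplified sketch covering only uppercase organic-subset atoms; it ignores aromatic lowercase atoms and bracketed atoms, and real cheminformatics work would use a library such as RDKit:

```python
import re

def count_atoms(smiles: str, symbol: str) -> int:
    """Count atoms of `symbol` in a simple SMILES string.

    Matches two-letter halogens (Cl, Br) before single-letter
    organic-subset atoms so 'Cl' is never miscounted as a carbon.
    Limitation: skips aromatic lowercase forms (c, n, o) and [...] atoms.
    """
    tokens = re.findall(r"Cl|Br|[BCNOPSFI]", smiles)
    return tokens.count(symbol)

print(count_atoms("CCO", "C"))      # ethanol: 2 carbons
print(count_atoms("CC(=O)O", "O"))  # acetic acid: 2 oxygens
```

Because the format is linear text, SMILES strings diff, index, and store like any other string column, which is part of why they remain the lingua franca for molecular libraries and enumeration.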
AWS Morning Brief for the week of December 1st, with Corey Quinn.

Links:
- Protect sensitive data with dynamic data masking for Amazon Aurora PostgreSQL
- Amazon CloudFront announces support for mutual TLS authentication
- Amazon EC2 announces interruptible Capacity Reservations
- Introducing guidelines for network scanning
- Practical implementation considerations to close the AI value gap
- Everything you don't need to know about Amazon Aurora DSQL: Part 4 – DSQL components
- Simplify data integration using zero-ETL from self-managed databases to Amazon Redshift
- AWS Service Quotas adds support for automatic quota management
- Announcing Amazon Route 53 Accelerated Recovery for managing public DNS records
- Announcing Unused NAT Gateway Recommendations in AWS Compute Optimizer
- Amazon EKS introduces Provisioned Control Plane
- AWS Finally Lets You Find Your Idle NAT Gateways
Software Engineering Radio - The Podcast for Professional Software Developers
Flavia Saldanha, a consulting data engineer, joins host Kanchan Shringi to discuss the evolution of data engineering from ETL (extract, transform, load) and data lakes to modern lakehouse architectures enriched with vector databases and embeddings. Flavia explains the industry's shift from treating data as a service to treating it as a product, emphasizing ownership, trust, and business context as critical for AI-readiness. She describes how unified pipelines now serve both business intelligence and AI use cases, combining structured and unstructured data while ensuring semantic enrichment and a single source of truth. She outlines key components of a modern data stack, including data marketplaces, observability tools, data quality checks, orchestration, and embedded governance with lineage tracking. This episode highlights strategies for abstracting tooling, future-proofing architectures, enforcing data privacy, and controlling AI-serving layers to prevent hallucinations. Saldanha concludes that data engineers must move beyond pure ETL thinking, embrace product and NLP skills, and work closely with MLOps, using AI as a co-pilot rather than a replacement. Brought to you by IEEE Computer Society and IEEE Software magazine.
In the Pit with Cody Schneider | Marketing | Growth | Startups
There's a whole narrative right now that "vibe coding is a bubble" and all the MRR from AI-built apps isn't real.

In this episode, we chat with Jacob Klug, founder of the agency Creme, which specializes in building lovable MVPs on top of tools like Lovable and AI coding assistants. Jacob makes the case that most of the "AI apps are trash" discourse is really a skill issue, not a tool issue, and he breaks down the exact process his team uses to ship full platform-level apps in two-week sprints.

We dig into how to scope and design software that doesn't look AI-generated, how to think about personal operating systems vs. SaaS, why ideas are getting worse even as tools get better, and how creators and agencies can turn niche domain expertise into real products. If you're an operator, marketer, or founder trying to figure out how to actually use AI coding tools (instead of just tweeting about them), this one's for you.

Guest
Jacob Klug, founder of Creme, an agency building "lovable MVPs" and full-stack products with Lovable + AI tools; helps founders, startups & enterprises ship production apps in weeks without sacrificing UX.

Guest Links
Website: https://www.creme.digital/
LinkedIn: https://www.linkedin.com/in/jacob-klug-37b254156/
X (Twitter): https://x.com/Jacobsklug

What You'll Learn
- Why the "vibe coding is a bubble" take is mostly a skill and discipline problem
- How Jacob's agency ships full startup-grade products using Lovable and AI
- The PRD-first formula they use before ever opening a builder
- How to decide when to build vs. when to buy software in 2025
- Why we're entering a wave of personal OSes and custom internal tools
- How to avoid shipping janky AI UI and make your app look intentionally designed
- The mindset shift from "I could build anything" to "I will build this one specific thing"
- Why specializing in one AI tool (Lovable, Cursor, n8n, etc.) beats being "the AI guy"
- Tactical content and lead-gen plays for agencies on LinkedIn and YouTube
- How to learn AI tooling without getting paralyzed by the infinite possibilities

Timestamps
00:00 — Vibe coding: bubble or breakthrough?
02:23 — Effective use of no-code tools
05:23 — Stack and scoping for MVP development
07:08 — Trends in personal software development
10:33 — Personal projects: blood work analysis tool
13:00 — Steps to start building custom software
17:49 — Successful and unsuccessful product categories
21:01 — Learning and adopting AI tools
27:45 — Creator collaboration in software development
32:14 — Lead generation strategies for AI-powered agencies

Key Topics & Ideas

1. Bubble or Skill Issue?
- Why early no-code/AI apps looked janky
- How tools like Lovable increased automation from ~50% to ~85%
- The remaining 10–15% where real engineering still matters
- Many failures come from non-devs skipping fundamentals

2. How Creme Builds Lovable MVPs
- Every project starts with a clear PRD (often drafted with ChatGPT)
- AI is used to tighten scope before building
- When Creme stays fully in Lovable vs. moving code to Cursor
- Using Lovable Cloud for hosting, database, and analytics

3. Personal Operating Systems & Internal Tools
- People are replacing SaaS subscriptions with their own custom tools
- In a 20-person cohort, nearly everyone built workflow apps
- Rise of the Personal OS: one system for life + work
Example builds:
- Bloodwork tracker from PDF uploads
- Unified messaging CRM (WhatsApp, Telegram, SMS, email)
- Automated 30-second sales briefings

4. How to Learn AI Coding Tools
- Half the cohort hadn't built anything before starting
- Main blocker: overwhelm, not skill
- Learn core concepts: frontend vs. backend, auth, roles, security
- Build daily reps; focus on the next thing you need, not "all of AI"

5. Designing Apps That Don't Look AI-Generated
- Good design is still the hardest and biggest edge
- Creme process: build a /components library, define buttons/cards/inputs, assign stable IDs
- Tools: Mobbin, Figma Community kits, 21st.dev
- Best prompt: "Here's a screenshot → copy this."

6. What Works in Product Ideas
- Most of Creme's builds are full startup platforms, not micro-tools
- AI makes shipping easier, but ideas are getting worse without depth
- Real advantage = domain expertise + niche problem + AI speed

7. Creators x Software
- Creators can now ship products without capital
- Jacob prefers retainers over equity
- Analogy: like creator brands, most fail, but a few go huge

8. Career Strategy: Specialize
- The future = verticalized expertise, not "AI generalists"
- Specialist lanes: Lovable, Cursor, n8n, automation
- Be the person for one tool + one market

9. Content & Lead Gen
- Jacob's two rules for content: people are selfish and people are bored
- Build content that teaches, sparks emotion, and creates curiosity
- Post ~5x/week, prioritize visual posts
- Long-term: YouTube deep dives for high-intent inbound

Sponsor
Today's episode is brought to you by Graphed, an AI data analyst & BI platform. With Graphed you can:
- Connect data sources like GA4, Facebook Ads, HubSpot, Google Ads, Search Console, and Amplitude
- Build interactive dashboards just by chatting (no Looker Studio/Tableau learning curve)
- Use it as your ETL + data warehouse + BI layer in one place
Ask: "Build me a stacked bar chart of new users vs. all users over time from GA4" and Graphed just builds it for you.
Andy Pernsteiner is the Field CTO at VAST Data, working on large-scale AI infrastructure, serverless compute near data, and the rollout of VAST's AI Operating System.

The GPU Uptime Battle // MLOps Podcast #346 with Andy Pernsteiner, Field CTO of VAST Data. Huge thanks to VAST Data for supporting this episode!

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Most AI projects don't fail because of bad models; they fail because of bad data plumbing. Andy Pernsteiner joins the podcast to talk about what it actually takes to build production-grade AI systems that aren't held together by brittle ETL scripts and data copies. He unpacks why unifying data, rather than moving it, is key to real-time, secure inference, and how event-driven, Kubernetes-native pipelines are reshaping the way developers build AI applications. It's a conversation about cutting out the complexity, keeping data live, and building systems smart enough to keep up with your models.

// Bio
Andy is the Field Chief Technology Officer at VAST, helping customers build, deploy, and scale some of the world's largest and most demanding computing environments. He has spent the past 15 years focused on supporting and building large-scale, high-performance data platform solutions. From humble beginnings as an escalations engineer at pre-IPO Isilon, to leading a team of technical ninjas at MapR, he has consistently been on the front lines solving some of the toughest challenges customers face when implementing big data analytics and next-generation AI solutions.

// Related Links
Website: https://www.vastdata.com
https://www.youtube.com/watch?v=HYIEgFyHaxk
https://www.youtube.com/watch?v=RyDHIMniLro
The Mom Test by Rob Fitzpatrick: https://www.momtestbook.com/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Andy on LinkedIn: /andypernsteiner

Timestamps:
[00:00] Prototype to production gap
[00:21] AI expectations vs reality
[03:00] Prototype vs production costs
[07:47] Technical debt awareness
[10:13] The Mom Test
[15:40] Chaos engineering
[22:25] Data messiness reflection
[26:50] Small data value
[30:53] Platform engineer mindset shift
[34:26] Gradient description comparison
[38:12] Empathy in MLOps
[45:48] Empathy in engineering
[51:04] GPU clusters rolling updates
[1:03:14] Checkpointing strategy comparison
[1:09:44] Predictive vs generative AI
[1:17:51] On growth, community, and new directions
[1:24:21] UX of agents
[1:32:05] Wrap up
In this episode of Crazy Wisdom, host Stewart Alsop talks with Jessica Talisman, founder of Contextually and creator of the Ontology Pipeline, about the deep connections between knowledge management, library science, and the emerging world of AI systems. Together they explore how controlled vocabularies, ontologies, and metadata shape meaning for both humans and machines, why librarianship has lessons for modern tech, and how cultural context influences what we call “knowledge.” Jessica also discusses the rise of AI librarians, the problem of “AI slop,” and the need for collaborative, human-centered knowledge ecosystems. You can learn more about her work at Ontology Pipeline and find her writing and talks on LinkedIn.

Check out this GPT we trained on the conversation

Timestamps
00:00 Stewart Alsop welcomes Jessica Talisman to discuss Contextually, ontologies, and how controlled vocabularies ground scalable systems.
05:00 They compare philosophy's ontology with information science, linking meaning, categorization, and sense-making for humans and machines.
10:00 Jessica explains why SQL and Postgres can't capture knowledge complexity and how neuro-symbolic systems add context and interoperability.
15:00 The talk turns to library science's split from big data in the 1990s, metadata schemas, and the FAIR principles of findability and reuse.
20:00 They discuss neutrality, bias in corporate vocabularies, and why “touching grass” matters for reconciling internal and external meanings.
25:00 Conversation shifts to interpretability, cultural context, and how Western categorical thinking differs from China's contextual knowledge.
30:00 Jessica introduces process knowledge, documentation habits, and the danger of outsourcing how-to understanding.
35:00 They explore knowledge as habit, the tension between break-things culture and library design thinking, and early AI experiments.
40:00 Libraries' strategic use of AI, metadata precision, and the emerging role of AI librarians take focus.
45:00 Stewart connects data labeling, Surge AI, and the economics of good data with Jessica's call for better knowledge architectures.
50:00 They unpack content lifecycle, provenance, and user context as the backbone of knowledge ecosystems.
55:00 The talk closes on automation limits, human-in-the-loop design, and Jessica's vision for collaborative consulting through Contextually.

Key Insights
Ontology is about meaning, not just data structure. Jessica Talisman reframes ontology from a philosophical abstraction into a practical tool for knowledge management—defining how things relate and what they mean within systems. She explains that without clear categories and shared definitions, organizations can't scale or communicate effectively, either with people or with machines.
Controlled vocabularies are the foundation of AI literacy. Jessica emphasizes that building a controlled vocabulary is the simplest and most powerful way to disambiguate meaning for AI. Machines, like people, need context to interpret language, and consistent terminology prevents the “hallucinations” that occur when systems lack semantic grounding.
Library science predicted today's knowledge crisis. Stewart and Jessica trace how, in the 1990s, tech went down the path of “big data” while librarians quietly built systems of metadata, ontologies, and standards like schema.org. Today's AI challenges—interoperability, reliability, and information overload—mirror problems library science has been solving for decades.
Knowledge is culturally shaped. Drawing from Patrick Lambe's work, Jessica notes that Western knowledge systems are category-driven, while Chinese systems emphasize context. This cultural distinction explains why global AI models often miss nuance or moral voice when trained on limited datasets.
Process knowledge is disappearing. The West has outsourced its “how-to” knowledge—what Jessica calls process knowledge—to other countries. Without documentation habits, we risk losing the embodied know-how that underpins manufacturing, engineering, and even creative work.
Automation cannot replace critical thinking. Jessica warns against treating AI as “room service.” Automation can support, but not substitute, human judgment. Her own experience with a contract error generated by an AI tool underscores the importance of review, reflection, and accountability in human–machine collaboration.
Collaborative consulting builds knowledge resilience. Through her consultancy, Contextually, Jessica advocates for “teaching through doing”—helping teams build their own ontologies and vocabularies rather than outsourcing them. Sustainable knowledge systems, she argues, depend on shared understanding, not just good technology.
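The controlled-vocabulary idea Jessica describes can be sketched in a few lines. This is a toy Python example with made-up terms and mappings (nothing here comes from her actual work): variant terms resolve to one preferred label, so humans and machines land on the same concept.

```python
# Toy controlled vocabulary: map variant terms to one preferred label.
# All terms and synonyms below are illustrative, not from the episode.
VOCABULARY = {
    "automobile": "car",
    "auto": "car",
    "car": "car",
    "notebook": "laptop",
    "laptop": "laptop",
}

def normalize(term: str) -> str:
    """Resolve a raw term to its preferred label; unknown terms pass through."""
    key = term.strip().lower()
    return VOCABULARY.get(key, key)

# Two surface forms, one concept:
assert normalize("Automobile") == normalize("auto") == "car"
```

Real controlled vocabularies (thesauri, authority files) also carry broader/narrower and related-term links, but even this flat mapping shows the disambiguation step that gives an AI system consistent terminology.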
Marketing analytics stacks struggle with outdated, siloed data that delays critical business decisions. Noha Rizk, CMO of Incorta, explains how live data integration transforms enterprise analytics capabilities. She demonstrates how questioning "why" behind data patterns unlocks actionable insights and discusses eliminating complex ETL processes through real-time analysis across all business systems. The conversation covers practical frameworks for moving from raw data collection to immediate business intelligence that drives customer behavior understanding.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Revenue Generator Podcast: Sales + Marketing + Product + Customer Success = Revenue Growth
Incorta is the first and only open data delivery platform that enables real-time analysis of live, detailed data across all systems of record—without the need for complex ETL processes.
Most companies rely on stale dashboards while AI demands live data for real-time decisions. Noha Rizk, CMO of Incorta, explains how enterprises can transition from legacy data systems to real-time analytics infrastructure. She covers identifying high-ROI use cases like retail waste optimization and supply chain management, implementing live data without complex ETL processes, and enabling business users to query data instantly for creative problem-solving.
In this episode, I interview Josh Cook, the founder of ETL - Enter the Lion. Josh joined the Baltimore Police Department as a police officer, eventually transferring to the Anne Arundel County Police Department. Feeling called to minister to law enforcement directly, he separated from his department, relocated to Tennessee, and began Enter the Lion (ETL). ETL is a Christian ministry that provides a completely free retreat for first responders and law enforcement officers seeking rest and time in nature. ETL provides biblical counseling, mentorship, and discipleship to those seeking connection with other believers. To contact Josh or inquire about attending a retreat, reach him through his website: www.enterthelion.co
You can access The Tactical Debrief on Apple, Spotify, or Audible Podcasts.
Points of Interest
00:00 – 01:30 – Introduction: Marcel welcomes Parakeeto's Kristen Kelly back to discuss a recurring misconception in agency operations—the belief that a better project management or PSA tool can solve profit management challenges.
01:30 – 03:25 – The PM Tool “Silver Bullet” Myth: Kristen explains how leaders and PMs often adopt new tools to tame chaos, believing marketing promises that they'll also solve utilization, capacity, and profitability issues.
03:25 – 06:00 – Why Agencies Fall for It: Marcel and Kristen note that while PM tools are valuable, they're often oversold as full profit-management systems. Agencies end up frustrated by missing fields, tool quirks, and data limitations.
06:00 – 08:45 – Hitting the Wall: Many teams find themselves with tools that improve delivery workflows but still leave them unable to make key financial or operational decisions because the data remains fragmented across systems.
08:45 – 11:43 – Introducing the Framework → Data → Process Model: Marcel outlines Parakeeto's three-part sequence for solving profit management: define the framework (metrics and formulas), structure the data, and establish ongoing processes for hygiene and cadence.
11:43 – 12:46 – Why Sequencing Matters: Without first defining what needs to be measured, agencies make poor configuration choices in PM tools—creating rework, confusion, and endless tool migrations.
12:46 – 15:19 – Defining the Framework: Agencies must precisely define how metrics like utilization, delivery margin, and project profitability are calculated, and understand the relationships between those measures before configuring tools.
15:19 – 19:54 – The Role of Process and Data Hygiene: Marcel explains that real-time reporting fails if data quality is poor. Clean, reliable reporting requires an ETL (Extract, Transform, Load) process, not direct reporting from source data.
19:54 – 22:55 – The Precision Trap: Kristen and Marcel explore the conflict between PMs needing granular precision and executives needing simple, high-level rollups. Forcing perfect data consistency across teams destroys usability and compliance.
22:55 – 26:28 – Practical Limits of In-Tool Reporting: Marcel describes how building detailed profitability reporting directly in PM tools creates unsustainable complexity, unrealistic data maintenance, and unreliable results.
26:28 – 34:38 – Building a Sustainable Data Architecture: They outline how Parakeeto's ETL pipeline works—extracting time data (person, project, hours), joining it with payroll and project grids, normalizing fields, and applying ongoing QA to ensure accuracy.
34:38 – 42:37 – The Big Takeaway: Kristen and Marcel conclude that PM tools are essential for delivery but not the whole profit solution. Agencies should use them for managing work while relying on a clear framework and data pipeline for accurate reporting.

Show Notes
Connect with Kristen via LinkedIn
Free Agency Toolkit
Parakeeto Foundations Course
Free access to our Model Platform

Love this Episode? Leave us a review here.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
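The extract-join-normalize-QA flow described in the 26:28 segment can be sketched minimally in Python. Field names, rates, and the QA rule below are all hypothetical illustrations, not Parakeeto's actual schema or pipeline:

```python
# Rough sketch of an agency ETL step: extract time entries,
# join them with payroll rates, apply a basic QA filter, compute cost.
# All names and numbers are made up for illustration.
time_entries = [
    {"person": "Ana", "project": "Acme Site", "hours": 6.0},
    {"person": "Ben", "project": "Acme Site", "hours": 4.0},
    {"person": "Cara", "project": "Acme Site", "hours": 0.0},  # fails QA
]
payroll = {"Ana": 50.0, "Ben": 40.0}  # hourly cost per person

def transform(entries, rates):
    rows = []
    for e in entries:
        # QA: drop zero-hour rows and people missing from payroll.
        if e["hours"] <= 0 or e["person"] not in rates:
            continue
        rows.append({**e, "cost": e["hours"] * rates[e["person"]]})
    return rows

loaded = transform(time_entries, payroll)
total_cost = sum(r["cost"] for r in loaded)  # 6*50 + 4*40 = 460.0
```

The point of the episode's argument is that this joining and QA happens outside the PM tool, in a dedicated pipeline, so reporting stays reliable even when the source tools are messy.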
Roger Baudet - "Complainte a Deux" - Musique Électronique Pour La Scène Et L'image 1976 - 1992 https://www.wfmu.org/playlists/shows/156794
This interview was recorded at GOTO Copenhagen 2024.
https://gotocph.com

Michael Nygard - General Manager of Data at Nubank
Dave Farley - Continuous Delivery & DevOps Pioneer, Award-winning Author, Founder & Director of Continuous Delivery Ltd.

RESOURCES
Michael
https://www.linkedin.com/in/mtnygard
https://twitter.com/mtnygard
http://www.michaelnygard.com
Dave
https://bsky.app/profile/davefarley77.bsky.social
https://www.continuous-delivery.co.uk
https://linkedin.com/in/dave-farley-a67927
https://twitter.com/davefarley77
http://www.davefarley.net

Read the full abstract here

RECOMMENDED BOOKS
David Deutsch • The Beginning of Infinity
Michael Nygard • Release It! 2nd Edition
Michael Nygard • Release It! 1st Edition
Zhamak Dehghani • Data Mesh
Dave Farley • Modern Software Engineering
Dave Farley • Continuous Delivery Pipelines
Dave Farley & Jez Humble • Continuous Delivery

Inspiring Tech Leaders - The Technology Podcast
Interviews with Tech Leaders and insights on the latest emerging technology trends.
Listen on: Apple Podcasts | Spotify

Bluesky | Twitter | Instagram | LinkedIn | Facebook

CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks:
https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech

SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
MMPs give you a strong foundation for measuring mobile campaigns. But what if that's not enough? What if the data you're missing could unlock faster growth, smarter user acquisition, and better ROI?

That's where Extract comes in. In this episode of Growth Masterminds, John Koetsier talks with Maayan Schor about why mobile marketers need a next-gen ELT and reverse ETL platform to move raw data in and out of their systems. From app stores to social to ad networks, Extract helps you pull it all together with your MMP data ... and make smarter decisions with more confidence. Leading, of course, to more growth.

We cover:
- Why there's more data than MMPs provide alone
- How to access raw app store, organic social, and granular ad network data
- Real-world use cases from top mobile marketers
- How BI teams and marketers collaborate to make Extract work
- Why flexibility and context are key to growth in mobile marketing

If you're working in mobile marketing, user acquisition, data engineering, or growth analytics, this conversation is packed with insights you can use today.
Smart Agency Masterclass with Jason Swenk: Podcast for Digital Marketing Agencies
Would you like access to our advanced agency training for FREE? https://www.agencymastery360.com/training

Are you still thinking of AI as just “ChatGPT with a better prompt”? Or maybe you've played around with Zapier automations and thought, yeah, that's good enough. Today's featured guest knows that the agencies pulling ahead right now are building full-on AI agent networks that replace routine tasks, streamline data pipelines, and give their teams superpowers. She's re-engineering her agency around AI and will talk about where she finds top-tier talent and why you don't need to code to lead your agency into the future.

Jennifer Bagley is the CEO and founder of CI Web Group, a fully virtual digital marketing agency registered in 22 U.S. states with clients across the United States and Canada. A former corporate operator turned entrepreneur, Jennifer started in real estate and mortgage brokerage before leaning into the marketing work she built to support those businesses. Today she runs a modern, tech-forward agency that's rebuilt its stack around AI, centralized data, and agentic networks, all while carrying the scars and lessons of scaling, pivoting, and re-founding a business from the ground up.

In this episode, we'll discuss:
Feeling trapped by the business
Hiring, firing, and the people reset
AI, reskilling, and the end of “middle” roles
What does this talent cost?

Subscribe: Apple | Spotify | iHeart Radio

Sponsors and Resources
E2M Solutions: Today's episode of the Smart Agency Masterclass is sponsored by E2M Solutions, a web design and development agency that has provided white-label services for the past 10 years to agencies all over the world. Check out e2msolutions.com/smartagency and get 10% off for the first three months of service.

From Corporate Ladder to Accidental Agency Founder
Jennifer came from an operations background: a self-proclaimed black belt in Six Sigma and a certified project manager.
With that corporate background, she had made a promise to herself (“by 30 I'll be an entrepreneur”) and started to build the side hustle that became the main event. She started in real estate and mortgage brokering, where she had to learn marketing the hard way; not because she wanted to be a marketer, but because the survival of her businesses depended on it. Initially, Jennifer didn't set out to build a scalable agency; she built a team to support her broker network. When the market collapsed in 2008, the same team that did marketing for agents suddenly had a market outside real estate. That “we'll just help this painter or HVAC company” phase is where the web group was born: small, service-focused, and useful to people in her network. The accidental turn became a real business because the team solved pressing problems for paying clients, and Jennifer leaned into that.

Trading Time for Freedom: The Hard Pivot
For the first five years, Jennifer describes the business as a “lifestyle” operation: profitable, maybe, but a trap for her time. She was trading billable hours for income and was reaching her limit when she hired a coach who forced a reckoning: if entrepreneurship isn't buying you time, money, and freedom, what's the point? So she made the brutal choice of cutting consulting contracts and burning the bridge to the “safety” of hourly work, effectively giving herself a mulligan. This is the classic founder pivot: you have to choose between growth that keeps you doing the work and growth that scales the business without you. Jennifer's reset wasn't pretty. For a while she lost everything, and she and her son lived in an office, but it bought her the permission to build something salable, not just sustainable. Agency owners who feel trapped in delivery need to remember that sometimes you have to give up short-term revenue to create long-term value.
Feeling Trapped by the Agency and Becoming a CEO
In those first five years, Jennifer continued to run a business that started as supply chain consulting and eventually turned into sales supply chain consulting. This change meant the business was now a good lead generator for the agency, but it also meant Jennifer was essentially selling her image and her time. Until she ran out of time. Once she felt trapped by the business, Jennifer hired a business coach who helped her change the model from “selling Jennifer with marketing on the side” to an actual sustainable business. She had to go back to basics and remember that she, like every entrepreneur, started the business with the idea of having more time, money, and freedom. It took losing everything, but Jennifer knew she didn't want a lifestyle business; she wanted a sellable business. The antidote was delegation plus systems. If you want growth and a future exit, you need to own those CEO responsibilities and be comfortable letting go of the day-to-day.

Hiring, Firing, and Resetting the Team
Jennifer's talent strategy has evolved with each stage of growth. Her early hires were the classic “friends, family, fools” bootstrap crew; later she invested in developers, content teams, project managers, and, over time, more strategic hires like CFOs, a chief of staff, BI teams, and AI engineers. Each five-year arc brought a new set of needs and a new level of sophistication in hiring. Now, she divides her time between promoting her agency's work in podcasts and content and thinking of ways to navigate her business through these volatile and exciting times. Her most recent addition was a technology and transformation team that is revisiting all of the agency's processes, investments, and infrastructure. As a result, she has downsized her team from over 300 W2 employees and refocused it. The takeaway for agency owners: be honest about whether your people are builders or maintainers, and hire accordingly.
The workforce you need for growth is not the same as the workforce you need for stable operations.

Building AI Agent Networks with Centralized Data
Jennifer's agency shifted from WordPress to Webflow and built agentic networks: hundreds of AI agents that crawl competitors, do strategy homework, and automate tasks that humans used to do. More importantly, they rebuilt infrastructure into a hub-and-spoke model with a centralized min.io data layer and ETL pipelines feeding analytics and BI. Two big lessons here. One: invest in your tech stack deliberately so you're not a Frankenstein of five different platforms that don't talk to each other. Two: design your data architecture so your people (and your AI agents) have a single source of truth. That's how you get from fire-fighting in six dashboards to proactive, predictive signals that tell you when a client engagement needs attention.

AI, Reskilling, and Shrinking Middle Roles
Jennifer draws a hard line: the agency now tends to hire either very seasoned client-facing leaders or AI engineers; the middle is shrinking. With agentic networks giving junior staff “superpowers,” the agency can afford fewer mid-level “lever pullers.” At this level there's no room for slow execution or elementary work. That's a cultural and ethical challenge, both for hiring and for workforce development. For agency owners, this raises practical HR questions: do you reskill your people, or replace them? Jennifer suggests building agent-driven systems that augment humans, and being brutally honest about who can grow into that future. It's also a call to action for how we prepare the next generation: schools won't teach this; companies will need to.

Playing with AI Platforms: Why Leaders Need to Just Know Enough to Be Dangerous
Jennifer started like a lot of agency owners dipping into AI, playing around on tools like n8n, Make.com, Relevance, and LangChain.
Her dev team laughed, calling her an “elementary school kid on a tricycle,” but here's the point: she didn't need to master the tech. She needed to know enough to point her team in the right direction. Instead of obsessing over code, she framed the problem differently: “Here's what I don't want a human doing anymore. Can you make that happen?” That mindset shift is key for agency owners. You don't need to be a full-stack AI engineer to lead an agency into the future; you just need to clearly define outcomes and invest in people who can deliver them.

Find Real AI Talent in Unlikely Places
This is where most agencies get stuck. You're not going to find your next AI architect on Upwork. Jennifer leaned on her network, starting with her cousin Chris, a hardcore developer who initially thought AI platforms were “rookie business.” Once Chris realized the power of agentic networks to scale his expertise, he became the backbone of CI Web Group's transformation. Now, she hunts talent in unconventional places: hackathons, LinkedIn, and especially YouTube. Forget the flashy “10x growth hack” videos — she looks for nerds with four views, geeking out about orchestrators and ETL pipelines. Those are the builders who care about solving real problems, not just building hype. Her tip: if you find one, reach out immediately. They don't want sales, they just want to build.

Designing AI Agents Like an Agency Org Chart
Jennifer compares AI agents to a company org chart. You don't hire one person to do everything; that's a recipe for burnout. Same thing with AI. Each agent should focus tightly on a single task, with checks, auditors, and orchestrators overseeing the system. The payoff was massive efficiency gains. Instead of six different platforms that don't talk, her agency built a centralized hub with min.io, ClickHouse, and AI layers on top. That's how you go from patchwork automation to true predictive intelligence.
The Real Cost of AI Talent
If you're wondering how much this all costs, the answer is… a lot. On the high end, seasoned AI engineers can run you a quarter million in salary. On the low end, Jennifer tests new hires on project-based sprints, maybe $6K for a 10-hour challenge. The point isn't to cut costs; it's to prove quickly who can deliver and who can't. Her recruiting process is brutal but effective: give candidates a project and a tight deadline, and see how they perform. If they stall, they're out. If they screen-share fast and solve problems live, they're in. No fluff, no endless interviews.

Do You Want to Transform Your Agency from a Liability to an Asset?
Looking to dig deeper into your agency's potential? Check out our Agency Blueprint. Designed for agency owners like you, our Agency Blueprint helps you uncover growth opportunities, tackle obstacles, and craft a customized blueprint for your agency's success.
In this episode, Adrian is joined by Renaud Anjoran to explore fail-safe design principles: essential thinking for anyone developing most kinds of products. Through real-world examples ranging from Tesla doors to Boeing and consumer electronics, they highlight how designers must ask: “If this fails, what happens to the user?” They break down why it matters, what trade-offs exist, and how structured risk analysis, simplification, redundancy, and error-proofing can dramatically reduce hazards and costly failures.

Episode Sections:
00:00:03 – Introduction
00:01:00 – Tesla door handle fail-safe issue
00:02:32 – Building lock systems vs. car safety
00:05:55 – Structured thinking in fail-safe design
00:07:21 – Designing with users in mind
00:09:02 – Risk analysis methods: FMEA & fault tree analysis
00:11:10 – Catastrophic failures & extreme examples
00:12:18 – Everyday product applications
00:14:21 – Principle: Simplification in design
00:16:13 – Redundancy in critical systems
00:20:30 – Battery management & safety logic
00:20:34 – Human error and mistake-proofing
00:23:09 – Error-proofing examples: tables & plugs
00:23:41 – Trade-offs and cost considerations
00:26:03 – Testing, regulations & standards (UL, ETL, etc.)
00:27:11 – Summary & wrap-up
00:28:07 – Final thoughts & listener takeaway
00:28:19 – Outro

Are you designing a new product? Ask yourself: “If this fails, what happens?” Visit Sofeast.com to learn how our quality, reliability, and product development teams can support you in building safer, more reliable products.

Related content...
Fail Safe Design Principles & Examples | Product Risk Reduction
Alaska Airlines Boeing 737 Max 9 Near Disaster! Quality & Reliability Issues?
Why Product Safety, Quality, and Reliability Are Tightly Linked
Tesla's Cybertruck Debacle: Reliability, Politics, & Plummeting Sales [Podcast]
We can do your manufacturing at Agilian Technology
Get in touch with us
Connect with us on LinkedIn
Contact us via Sofeast's contact page
Subscribe to our YouTube channel
Prefer Facebook? Check us out on FB
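The FMEA method mentioned in the 00:09:02 section typically scores each failure mode on severity, occurrence, and detection, then ranks by the product of the three (the Risk Priority Number). A minimal Python sketch with made-up failure modes and ratings (none of these numbers come from the episode):

```python
# Illustrative FMEA-style scoring: each failure mode gets 1-10 ratings for
# severity (sev), occurrence (occ), and detection (det); RPN = sev * occ * det.
# All modes and ratings below are hypothetical.
failure_modes = [
    {"mode": "door handle loses power", "sev": 9,  "occ": 3, "det": 4},
    {"mode": "battery overheats",       "sev": 10, "occ": 2, "det": 3},
    {"mode": "cosmetic scratch",        "sev": 2,  "occ": 6, "det": 2},
]

for fm in failure_modes:
    fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]

# The highest-RPN mode gets design attention first.
worst = max(failure_modes, key=lambda fm: fm["rpn"])
```

The ranking, not the absolute numbers, is what guides where to apply the episode's remedies: simplification, redundancy, or error-proofing.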
What if AI could tap into live operational data — without ETL or RAG? In this episode, Deepti Srivastava, founder of Snow Leopard, reveals how her company is transforming enterprise data access with intelligent data retrieval, semantic intelligence, and a governance-first approach. Tune in for a fresh perspective on the future of AI and the startup journey behind it.

We explore how companies are revolutionizing their data access and AI strategies. Deepti Srivastava shares her insights on bridging the gap between live operational data and generative AI — and how it's changing the game for enterprises worldwide. We dive into Snow Leopard's innovative approach to data retrieval, semantic intelligence, and governance-first architecture.

04:54 Meeting Deepti Srivastava
14:06 AI with No ETL, no RAG
17:11 Snow Leopard's Intelligent Data Fetching
19:00 Live Query Challenges
21:01 Snow Leopard's Secret Sauce
22:14 Latency
23:48 Schema Changes
25:02 Use Cases
26:06 Snow Leopard's Roadmap
29:16 Getting Started
33:30 The Startup Journey
34:12 A Woman in Technology
36:03 The Contrarian View
I invited Atalia Horenshtien to unpack a topic many leaders are wrestling with right now. Everyone is talking about AI agents, yet most teams are still living with rule-based bots, brittle scripts, and a fair bit of anxiety about handing decisions to software. Atalia has lived through the full arc, from early machine learning and automated pipelines to today's agent frameworks inside large enterprises. She is an AI and data strategist, a former data scientist and software engineer, and has just joined Hakoda, an IBM company, to help global brands move from experiments to outcomes. The timing matters. She starts on the 18th, and this conversation captures how she thinks about responsible progress at exactly the moment she steps into that new role. Here's the thing. Words like autonomy sound glamorous until an agent faces a messy real-world task. Atalia draws a clear line between scripted bots and agents with goals, memory, and the ability to learn from feedback. Her advice is refreshingly grounded. Start internal where you can observe behavior. Put human-in-the-loop review where it counts. Use role-based access rather than feeding an LLM everything you own. Build an observability layer so you can see what the model did, why it did it, and what it cost. We also get into measurements that matter. Time saved, cycle time reduction, adoption, before and after comparisons, and a sober look at LLM costs against any reduction in FTE hours. She shares how custom cost tracking for agents prevents surprises, and why version one should ship even if it is imperfect. Culture shows up as a recurring theme. Leaders need to talk openly about reskilling, coach managers through change, and invite teams to be co-creators. Her story about Hakoda's internal AI Lab is a good example. What began as an engineer's idea for ETL schema matching grew into agent-powered tools that won a CIO 100 award and now help deliver faster, better outcomes for clients. There are lighter moments too.
Atalia explains how she taught an ex-NFL player the basics of time series forecasting using football tactics. Then she takes us behind the scenes with McLaren Racing, where data and strategy collide on the F1 circuit, and admits she has become a committed fan because of that work. If you want a practical playbook for moving from shiny demos to dependable agents, this episode will help you think clearly about scope, safeguards, and speed. Connect with Atalia on LinkedIn, explore Hakoda's work at hakoda.io, and then tell me how you plan to measure your first agent's value.

********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months with a Software QA Engineering Bootcamp from Careerist https://crst.co/OGCLA
During the IT Press Tour, I had the pleasure of speaking with Weimo Liu, CEO and co-founder of PuppyGraph, and hearing firsthand how his team is rethinking graph technology for the enterprise. In this episode of Tech Talks Daily, Weimo joins me to share the story behind PuppyGraph's “zero ETL” approach, which lets organizations query their existing data as a graph without ever moving or duplicating it. We discuss why graph databases, despite their promise, have struggled with mainstream adoption, often because of complex pipelines and heavy infrastructure requirements. Weimo explains how PuppyGraph borrows from his time at TigerGraph and Google's F1 engine to build something new: a distributed query engine that maps tables into a logical graph and delivers subsecond performance on massive datasets. That shift opens the door for use cases in cybersecurity, fraud detection, and AI-driven applications where latency and accuracy matter most. We also unpack the developer experience. Instead of rewriting schemas or reloading data every time requirements change, PuppyGraph allows teams to define nodes and edges directly from existing tables. That design lowers the barrier for SQL-focused teams and accelerates time to value. Weimo even touches on the role of graph in reducing AI hallucinations, showing how structured relationships can make enterprise AI systems more reliable. What struck me most in our conversation is how PuppyGraph's playful branding belies its serious engineering depth. Behind the “puppy” name lies a distributed engine built to scale with today's data volumes, backed by strong early adoption and a team that listens closely to customer needs. Whether you're exploring graph for cybersecurity, AI chatbots, or supply chain analytics, this discussion offers a glimpse of how the next generation of graph tech might finally break free from its niche and go mainstream. 
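The table-to-graph idea Weimo describes is easy to illustrate generically: nodes come from one table, edges from a join table, and the data never moves. This is a toy Python sketch of that mapping, not PuppyGraph's actual API or configuration format, and the tables and names are invented:

```python
# Toy illustration of querying relational rows as a logical graph.
# Nodes from a "users" table, edges from a "transfers" table; the rows
# themselves are never copied into a separate graph store.
users = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
transfers = [{"src": 1, "dst": 2, "amount": 500}]

# Logical mapping: tables -> nodes and edges.
nodes = {u["id"]: u["name"] for u in users}
edges = [(t["src"], t["dst"], t["amount"]) for t in transfers]

# A one-hop traversal over the mapping: who did alice send money to?
recipients = [nodes[dst] for src, dst, _ in edges if nodes[src] == "alice"]
```

In a real engine the mapping is declared once and traversals run against live tables with a graph query language, which is what removes the rebuild-and-reload cycle the episode criticizes.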
Adam Cheyer is a pioneering AI technologist whose innovations have fundamentally shaped today's intelligent interfaces. As co-founder of Siri Inc. (acquired by Apple), he served as a Director of Engineering in Apple's iOS group, and later co-founded Viv Labs (acquired by Samsung) and Sentient Technologies, and played a founding role in Change.org. Adam Cheyer was Chief Architect of CALO, one of DARPA's largest AI projects, has authored over 60 publications, and holds more than 25 patents. In recognition of his achievements, he received his alma mater Brandeis University's 2024 Alumni Achievement Award for transforming a long-standing AI vision into everyday tools used by hundreds of millions. Now represented by Champions Speakers Agency, he continues to speak globally on how organisations can harness AI with responsibility, scale, and impact.

Q1. How do you see the role of data management in enabling AI capabilities and bringing data to life for organisations?

Adam Cheyer: "AI systems are built on two foundations: algorithms and data. The algorithms themselves are well established, but without high-quality, well-organised data, they can't deliver real value. Data is the fuel that powers every AI application, and managing it effectively is now a mission-critical skill for any organisation developing AI.

"With the rapid acceleration of AI in recent years - especially in the past six months - the ability to handle, refine, and govern data has shifted from being a technical advantage to an essential requirement across industries."

Q2. What challenges have you faced when managing large data sets?

Adam Cheyer: "I've been building AI systems for over 30 years, so it's changed a little bit over time. Clearly, the first issue is just storage and management and processing of the data. The data now is so large. Back in the 80s and 90s that wasn't quite as essential; it was smaller data sets, but today the data sets are huge.
"So, you need a system that can store it efficiently in a distributed way, and we've used various systems over the years to do that. You need a system that can process this huge amount of data in parallel at scale. "One of the key areas in data management for me is data quality. Even if you work with data companies - and when we were a start-up, and then even at Apple for instance - many of the data sources come from other places, other vendors, and surprisingly the data is not always in perfect clean form. "So, you need to have a process and tools and a pipeline that goes through and takes that data, cleanses it, adapts it, and often if you have multiple sources you need to integrate data together, and that can be a real challenge. "There are standard systems, ETL systems etc., but sometimes you need proprietary algorithms. As an example, with Siri, when we were a start-up, you would get millions and millions of restaurant name data and business name data. "If you had something like Joe's Restaurant and Joe's Bar and Grill - are they the same or not? That's a real problem. Joe's - probably you'd say yes, but Joe's Pizzeria and Joe's Grill maybe not, right? And so, how do you know? "There's a lot of work that goes into cleansing, integrating data. "And then the final thing I'll mention, which is a big topic in data management, is privacy and security. Once you have data coming in from users, there are standards, issues, and regulations that mean you need to be able to ensure that the data you have is accessible only by the right people, that it is secured and protected, and that it keeps privacy as much as possible - standardised. "At Apple, we had a number of techniques and teams, and there's a lot that goes into that. So, you need good systems, good processes, and to set up your organisation to be able to handle all of these challenges." Q3. How do you manage data privacy when building large AI systems? Adam Cheyer: "Absolutely, so it is a challenge. 
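The business-name matching problem Cheyer describes is usually approached with normalization plus fuzzy string comparison. As an illustration only (this is not Siri's actual pipeline), here is a minimal Python sketch; the generic-word list and the 0.85 threshold are arbitrary assumptions chosen for the example:

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    # Lowercase and drop generic suffix words so "Joe's Restaurant"
    # and "Joe's Bar and Grill" compare on their distinctive parts.
    generic = {"restaurant", "bar", "and", "grill", "pizzeria", "cafe"}
    tokens = [t for t in name.lower().replace("'", "").split() if t not in generic]
    return " ".join(tokens) or name.lower()

def likely_same_business(a: str, b: str, threshold: float = 0.85) -> bool:
    # A high similarity ratio between normalized names suggests a duplicate record.
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold
```

As the interview notes, a threshold like this only gets you so far: "Joe's Pizzeria" and "Joe's Grill" normalize to the same core name here, which is exactly the ambiguous case that real pipelines need extra signals (address, phone number, category) to resolve.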
Your first tendency is, well, we just record everything, but I think that'...
What does it really mean to be data-driven? Mark Gergess, VP of Data and BI at DoubleVerify, joins the show to unpack how data teams can go beyond dashboards to drive meaningful business action. From building an internal consulting lens to evaluating the latest AI tools, Mark shares how his team translates complex data flows into measurable revenue impact. If you've ever wrestled with the gap between insights and outcomes, this conversation will hit home.

Key Takeaways
• Being data-driven is about driving action, not just reporting numbers
• Stakeholders don't care about your data problems—they care about business outcomes
• The biggest challenge with AI adoption isn't the model, it's the use cases
• Efficiency gains from AI should shift focus from ETL tasks to solving real business problems
• Data culture health is measured by how naturally teams rely on data day-to-day

Timestamped Highlights
01:17 How DoubleVerify helps advertisers build safer, more effective digital campaigns
04:55 Why the definition of "data-driven" still varies and why it matters
09:25 Measuring whether data efforts are moving the needle on revenue
13:15 How to separate hype from value when evaluating AI and GenAI tools
17:10 Lessons from the data science boom and why companies must go "all in" with AI
25:31 Can AI act as your junior analyst? Where efficiency gains really show up
27:01 How freeing up time changes the structure of data teams and boosts business impact

A thought worth holding onto
"It's not about dashboards. It's not about reporting. It's about doing something with the information."

Pro Tips
Mark recommends treating AI as a "junior analyst"—let it handle quick, lower-priority questions so your team can focus on bigger business challenges.

Call to Action
Enjoyed the conversation? Share this episode with a colleague who talks about being "data-driven." Subscribe on your favorite podcast platform and connect with me on LinkedIn for more insights from leaders shaping the future of data and technology.
While many people talk about “agents,” Shreya Shankar (UC Berkeley) has been building the systems that make them reliable. In this episode, she shares how AI agents and LLM judges can be used to process millions of documents accurately and cheaply. Drawing from work on projects ranging from databases of police misconduct reports to large-scale customer transcripts, Shreya explains the frameworks, error analysis, and guardrails needed to turn flaky LLM outputs into trustworthy pipelines. We talk through: - Treating LLM workflows as ETL pipelines for unstructured text - Error analysis: why you need humans reviewing the first 50–100 traces - Guardrails like retries, validators, and “gleaning” - How LLM judges work — rubrics, pairwise comparisons, and cost trade-offs - Cheap vs. expensive models: when to swap for savings - Where agents fit in (and where they don't) If you've ever wondered how to move beyond unreliable demos, this episode shows how to scale LLMs to millions of documents — without breaking the bank. LINKS Shreya's website (https://www.sh-reya.com/) DocETL, A system for LLM-powered data processing (https://www.docetl.org/) Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk) Watch the podcast video on YouTube (https://youtu.be/3r_Hsjy85nk) Shreya's AI evals course, which she teaches with Hamel "Evals" Husain (https://maven.com/parlance-labs/evals?promoCode=GOHUGORGOHOME)
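The guardrails Shreya describes (retries, validators, and re-prompting with the failure appended, i.e. "gleaning") can be sketched as a simple loop. This is an illustrative toy, not DocETL's API; `call_llm` is a stand-in stub for a real model call, and the required-keys check is just one possible validator:

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a raw string response.
    return '{"entity": "Officer A", "allegation": "excessive force"}'

def validate(raw: str, required_keys: set):
    # Validator: output must be JSON (an object) with the expected fields.
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(parsed, dict):
        return None
    return parsed if required_keys <= parsed.keys() else None

def extract_with_retries(prompt: str, required_keys: set, max_attempts: int = 3) -> dict:
    # Retry loop: re-ask, appending the validation failure to the prompt
    # ("gleaning"), until the output passes the validator or attempts run out.
    for attempt in range(max_attempts):
        result = validate(call_llm(prompt), required_keys)
        if result is not None:
            return result
        prompt += "\nYour last answer was not valid JSON with the required keys. Try again."
    raise ValueError("LLM output failed validation after retries")
```

The point of the pattern is that a flaky 95%-reliable extraction step, wrapped in a validator and a couple of retries, becomes reliable enough to run over millions of documents unattended.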
In this episode, Dave and Jamison answer these questions:

I'm the CTO of a small startup. We're 3 devs including me, and one of them is a junior developer. My current policy is to discourage the use of AI tools for the junior dev to make sure they build actual skills and don't just prompt their way through tasks. However, I'm more and more questioning my stance, as AI skills will be in demand for jobs to come and I want to prepare this junior dev for a life after my startup. How would you do this? What's the AI coding assistant policy in your companies? Is it the same for all seniority levels?

Hi everyone! Long-time listener here, and I really appreciate all the insights you share. Greetings from Brazil! I recently joined a large company (5,000 employees) that hired around 500 developers in a short time. It seems like they didn't have enough projects aligned with everyone's expertise, so many of us, myself included, were placed in roles that don't match our skill sets. I'm a web developer with experience in Java and TypeScript, but I was assigned to a data-focused project involving Python and ETL pipelines, which is far from my area of interest or strength. I've already mentioned to my manager that I don't have experience in this stack, but the response was that the priority is to place people in projects. He told me to "keep [him] in the loop if you don't feel comfortable", but I'm not sure what I should do. The company culture is chill, and I don't want to come across as unwilling to work or ungrateful. But I also want to grow in the right direction for my career. How can I ask for a project change, ideally one that aligns with my web development background, without sounding negative or uncooperative? Maybe wait for like 3 months inside this project and then ask for a change? Thanks so much for your thoughts!
Joey DeVilla of Tampa Tech fame and accordion playing glory joins Mike to discuss the Tampa Tech scene, some Python goodness, a little Rust and much more. Try Mailtrap for free (https://l.rw.rw/coder_radio_6) Joey's Blog (https://www.joeydevilla.com/) Mike on X (https://x.com/dominucco) Mike on BlueSky (https://bsky.app/profile/dominucco.bsky.social) Coder on X (https://x.com/coderradioshow) Coder on BlueSky (https://bsky.app/profile/coderradio.bsky.social) Show Discord (https://discord.gg/k8e7gKUpEp) Alice (https://alice.dev)
Unlock the power of Alteryx for tax professionals in this insightful episode of Alter Everything! Join us in an interview with Adrian Steller, Director of Tax Technology at Ryan, to explore how Alteryx revolutionizes tax processes, automates data workflows, and enhances efficiency for tax teams. Discover real-world Alteryx use cases in VAT compliance, transfer pricing, and automation, and learn practical tips for transitioning from Excel to Alteryx. Whether you're a tax analyst, data professional, or business leader, this episode provides actionable insights on leveraging Alteryx for tax data transformation, reporting, and analytics.

Panelists:
Adrian Steller, Director @ International Tax Technology - LinkedIn
Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn

Show notes:
Ryan (Company)
Ryan Tax Lab (Podcast)
Alteryx Community Blogs
Alteryx Help Docs

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here! This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music.
Roger Baudet - "Anhamete (Ceremonial) 1991" - Musique Électronique Pour La Scène Et L'image 1976 - 1992 Mariana La Palma - "Hong-Kong Shoes" - SNX va C. Lavender - "An Offering Proclaimed in the Dream" - Rupture in the Eternal Realm Anni-Frid Lyngstad - "Så Synd Du Måste Gå (It Hurts To Say Goodbye)" - The Girls Want The Boys! Sweden's Beat Girls 1964-1970 Secos & Molhados - "Não Digas Nada" - Secos & Molhados Serei Usignolo , Giampiero Boneschi E I Suoi Strumenti Elettronici - "Mitridate - Visione" Brandon Auger - "T24.d02.0315" - Anthology of Experimental Music From Canada va Bernard Parmegiani - "Entropie" - Chants Magnetiques Amedeo Tommasi - "Gemelli" - Zodiac Matia Bazar - "Lili Marleen" - Berlino, Parigi, Londra Marius Constant - "La Publicite (excerpt)" - Eloge De La Folie Nurse With Wound - "A Snake In Your Abdomen (excerpt)" - More Automating Ash Ra Tempel - "Echo Waves (excerpt)" - Inventions For Electric Guitar Brainticket - "Voyage (part 1) excerpt" - Voyage MT Luciani - "Ribellione Del Terzo Mondo" - Situazione Del Le Terzo Mondo https://www.wfmu.org/playlists/shows/154222
This week on The Data Stack Show, Eric welcomes back Ruben Burdin, Founder and CEO of Stacksync, and together they dismantle the myths surrounding zero-copy ETL and traditional data integration methods. Ruben reveals the complex challenges of two-way syncing between enterprise systems like Salesforce, HubSpot, and NetSuite, highlighting how existing tools often create more problems than solutions. He also introduces Stacksync's innovative approach, which uses real-time SQL-based synchronization to simplify data integration, reduce maintenance overhead, and enable more efficient operational workflows. The conversation exposes the limitations of current data transfer techniques and offers a glimpse into a more declarative, flexible approach to managing enterprise data across multiple systems. You won't want to miss it.

Highlights from this week's conversation include:
The Pain of Two-Way Sync and Early Integration Challenges (2:01)
Zero Copy ETL: Hype vs. Reality (3:50)
Data Definitions and System Complexity (7:39)
Limitations of Out-of-the-Box Integrations (9:35)
The CSV File: The Original Two-Way Sync (11:18)
Stacksync's Approach and Capabilities (12:21)
Zero Copy ETL: Technical and Business Barriers (14:22)
Data Sharing, Clean Rooms, and Marketing Myths (18:40)
The Reliable Loop: ETL, Transform, Reverse ETL (27:08)
Business Logic Fragmentation and Maintenance (33:43)
Simplifying Architecture with Real-Time Two-Way Sync (35:14)
Operational Use Case: HubSpot, Salesforce, and Snowflake (39:10)
Filtering, Triggers, and Real-Time Workflows (45:38)
Complex Use Case: Salesforce to NetSuite with Data Discrepancies (48:56)
Declarative Logic and Debugging with SQL (54:54)
Connecting with Ruben and Parting Thoughts (57:58)

The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences.
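For intuition on what a two-way sync engine has to decide, here is a deliberately tiny last-write-wins merge between two in-memory record stores. This is a hedged sketch of the general pattern only, not Stacksync's implementation (which the episode describes as real-time and SQL-based); the record shape and timestamp field are assumptions for the example:

```python
def two_way_sync(system_a: dict, system_b: dict) -> None:
    # Toy last-write-wins merge: each record carries an updated_at stamp,
    # and the newer version of each key overwrites the older on both sides.
    for key in set(system_a) | set(system_b):
        a, b = system_a.get(key), system_b.get(key)
        winner = max((r for r in (a, b) if r), key=lambda r: r["updated_at"])
        system_a[key] = system_b[key] = winner
```

Even this toy shows why two-way sync is harder than it looks: conflict resolution policy (here, "newest wins") is a business decision, and real systems also have to handle deletions, schema differences, and partial failures mid-sync.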
Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
In this decades-spanning episode, Tristan Handy sits down with Lonne Jaffe, Managing Director at Insight Partners and former CEO of Syncsort (now Precisely), to trace the history of the data ecosystem—from its mainframe origins to its AI-infused future. Lonne reflects on the evolution of ETL, the unexpected staying power of legacy tech, and why AI may finally erode the switching costs that have long protected incumbents. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.
Thiersch, JD, speaks with Alex Lirtsman, founder and CEO of CorralData, to explore how medical spas can unlock real-time, HIPAA-compliant insights without changing their existing systems. CorralData integrates everything from your EMR to marketing, payroll, and finance systems, giving med spas the ability to uncover actionable insights that drive profitability, patient retention, and scalable growth. Listen for strategies to ask smarter questions of your data, including: Integrating all of your existing platforms to get actionable insights from your data; How multi-location practices, med spa rollups and private equity develop playbooks; Navigating HIPAA and BAAs with AI companies to create secure data analysis tools; Using reverse ETL to optimize for high lifetime value patients and boost profitability; The questions you can ask your data with conversational AI and large language models; CorralData's tailor-made solutions for Advanced MedAesthetic Partners, and more! -- Music by Ghost Score
We all talk about #AI, but what good is it if your models are powered by stale, outdated data? In Episode 99 of Great Things with Great Tech, Deepti Srivastava, founder and CEO of Snow Leopard and former founding PM of Google Spanner, calls out the broken state of enterprise AI. With decades of experience in distributed systems and data infrastructure, Deepti unveils how Snow Leopard is redefining how AI applications are built by tapping into live, real-time data from SQL and APIs without the need for ETL or pipelines. Instead of relying on static snapshots or disconnected data lakes, Snow Leopard's #agentic platform queries native sources like PostgreSQL, Snowflake, and Salesforce on-demand, empowering AI to live directly in the critical decision path.

In This Episode, We Cover:
Deepti's journey from building Spanner at Google to founding Snow Leopard AI.
Why most enterprise AI fails due to reliance on stale data and outdated pipelines.
How Snow Leopard federates live data across SQL and APIs with zero ETL.
The limitations of vector databases in structured, real-time business use cases.
Why putting AI in the critical path of business decisions unlocks real value.

Snow Leopard is a U.S.-based technology company founded in 2023 and headquartered in San Francisco, California. Snow Leopard specializes in building a platform that enables the development of production-ready AI applications by leveraging live business data. The company's approach focuses on real-time data retrieval directly from sources like SQL databases and APIs, eliminating the need for traditional ETL processes and data pipelines.
This innovation allows for more accurate and timely AI-driven business decisions.

PODCAST LINKS
Great Things with Great Tech Podcast: https://gtwgt.com
GTwGT Playlist on YouTube: https://www.youtube.com/@GTwGTPodcast
Listen on Spotify: https://open.spotify.com/show/5Y1Fgl4DgGpFd5Z4dHulVX
Listen on Apple Podcasts: https://podcasts.apple.com/us/podcast/great-things-with-great-tech-podcast/id1519439787

EPISODE LINKS
Snow Leopard Web: https://www.snowleopard.ai/
Deepti Srivastava on LinkedIn: https://www.linkedin.com/in/thedeepti/
Snow Leopard on LinkedIn: https://www.linkedin.com/company/snow-leopard-ai/

GTwGT LINKS
Support the Channel: https://ko-fi.com/gtwgt
Be on #GTwGT: Contact via Twitter/X @GTwGTPodcast or visit https://www.gtwgt.com
Subscribe to YouTube: https://www.youtube.com/@GTwGTPodcast?sub_confirmation=1
Great Things with Great Tech Podcast Website: https://gtwgt.com

SOCIAL LINKS
Follow GTwGT on Social Media:
Twitter/X: https://twitter.com/GTwGTPodcast
Instagram: https://www.instagram.com/GTwGTPodcast
TikTok: https://www.tiktok.com/@GTwGTPodcast
If you look closely at the Mona Lisa, Leonardo da Vinci's famous painting in the Louvre, one detail stands out immediately: she has neither eyebrows nor eyelashes. A face of incredible precision, an almost living gaze... but a completely bare brow. How can this absence be explained?

A Renaissance fashion? For a long time, the missing eyebrows were thought to be simply a matter of period fashion. In early sixteenth-century Italy, some aristocratic women plucked their eyebrows (and sometimes their hairline) to expose the forehead, then considered a mark of beauty and nobility. By this hypothesis, Mona Lisa (or Lisa Gherardini, according to the majority view) may have followed the aesthetic trend. But this explanation doesn't fully hold up: other portraits of women from the same period clearly show eyebrows, however fine or faint. And would Leonardo da Vinci, known for his obsession with realism, really have omitted such a detail on purpose?

A gradual disappearance. The most credible explanation today rests on the painting's material history. The Mona Lisa is more than 500 years old, and over the centuries it has undergone restorations, cleanings, and varnishings that may have worn away its finest details. A scientific study conducted in 2004 by the specialist Pascal Cotte, using multispectral reflectography, revealed that Leonardo had originally painted eyebrows and eyelashes, very fine and delicate. But these details would have disappeared over time, through natural wear of the paint layer or overly aggressive restorations. In short, the eyebrows were there, but they faded away over the centuries.

An effect that heightens the mystery. The absence of eyebrows also, paradoxically, contributes to the mystery and ambiguity of the Mona Lisa's face. Her indefinable expression, that blend of smile and neutrality, is reinforced by the lack of facial lines that would normally frame the gaze. This blur adds to the timeless, enigmatic character of a painting that has fascinated viewers for centuries.

In short: the Mona Lisa probably had eyebrows, painted with the finesse characteristic of Leonardo da Vinci. But time, restorations, and varnish have erased them. This forgotten detail has become a key element of her mystery. Hosted by Acast. Visit acast.com/privacy for more information.
✨ Heads up! This episode features a demonstration of the SnapLogic UI and its AI Agent Creator towards the end. For the full visual experience, check out the video version on the Spotify app! ✨

(Episode Summary)
Tired of tangled data spread across multiple clouds, on-premise systems, and the edge? In this episode, MongoDB's Shane McAllister sits down with Peter Ngai, Principal Architect at SnapLogic, to explore the future of data integration and management in today's complex tech landscape. Dive into the challenges and solutions surrounding modern data architecture, including:
Navigating the complexities of multi-cloud and hybrid cloud environments.
The secrets to building flexible, resilient data ecosystems that avoid vendor lock-in.
Strategies for seamless data integration and connecting disparate applications using low-code/no-code platforms like SnapLogic.
Meeting critical data compliance, security, and sovereignty demands (think GDPR, HIPAA, etc.).
How AI is revolutionizing data automation and providing faster access to insights (featuring SnapLogic's Agent Creator).
The powerful synergy between SnapLogic and MongoDB, leveraging MongoDB both internally and for customer integrations.
Real-world applications, from IoT data processing to simplifying enterprise workflows.

Whether you're an IT leader, data engineer, business analyst, or simply curious about cloud strategy, iPaaS solutions, AI in business, or simplifying your data stack, Peter offers invaluable insights into making data connectivity a driver, not a barrier, for innovation.

Keywords: Data Integration, Multi-Cloud, Hybrid Cloud, Edge Computing, SnapLogic, MongoDB, AI, Artificial Intelligence, Data Automation, iPaaS, Low-Code, No-Code, Data Architecture, Data Management, Cloud Data, Enterprise Data, API Integration, Data Compliance, Data Sovereignty, Data Security, Business Automation, ETL, ELT, Tech Stack Simplification, Peter Ngai, Shane McAllister.
Ever wondered how companies like Amazon or Pinterest deliver lightning-fast image search? Dive into this episode of MongoDB Podcast Live with Shane McAllister and Nenad, a MongoDB Champion, as they unravel the magic of semantic image search powered by MongoDB Atlas Vector Search!
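Semantic image search of the kind discussed here ranks stored image embeddings by similarity to a query embedding; MongoDB Atlas Vector Search does this at scale with approximate nearest-neighbor indexes. Below is a minimal pure-Python sketch of the core idea, using made-up three-dimensional vectors in place of real model embeddings:

```python
import math

def cosine(u, v):
    # Cosine similarity: the angle between two embedding vectors,
    # a standard relevance score for semantic search.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def search(query_vec, index, top_k=2):
    # Rank every stored image embedding against the query embedding
    # and return the names of the top_k closest matches.
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

A production system replaces the brute-force `sorted` with an ANN index (as Atlas Vector Search does) and derives the vectors from an image-embedding model, but the ranking principle is the same.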
What happens when you hand off your Power BI output to ChatGPT and ask it to make sense of your world? You might be surprised. This week, Rob shares a deeply personal use case. One that ties together two major themes we've been exploring: Gen AI is reshaping the way we think about dashboards. To get real value out of AI, you need more than just data. You need metadata. And yes, that kind of metadata—the kind you create in Power BI when you translate raw data into something meaningful. Along the way, we revisit the old guard of data warehousing. The mighty (and now dusty?) ETL priesthood. And we uncover a delicious little irony about how the future of data looks a lot like its past, just with better tools and smarter questions. The big twist? We're all ETL now. But the "T" might not mean what you think it does anymore. Listen now to find out how a few rows of carefully modeled data, a table visual, and one really good AI assistant changed the game. For Rob and, just possibly, for all of us. Also in this episode: Blind Melon – Change (YouTube) The Data Warehouse Toolkit Raw Data Episode - The Human Side of Data: Using Analytics for Personal Well-Being
In this engaging episode of the HVAC School Podcast, host Bryan sits down with Jesse from NAVAC to dive deep into the evolving landscape of refrigeration technology, focusing primarily on the transition to A2L refrigerants. The conversation offers a refreshingly pragmatic approach to addressing industry concerns about these new, mildly flammable refrigerants, dispelling myths and providing practical insights for HVAC technicians. The discussion begins by addressing the most pressing question for many technicians: Do you need to buy all new tools to work with A2L refrigerants? Jesse from NAVAC provides a nuanced response, emphasizing that while there are currently no regulations mandating new equipment, the company has proactively developed tools that are safety-certified and compatible with the new refrigerant types. They explore the intricacies of safety certifications like UL and CSA, explaining the differences between UL Listed and UL Verified, and highlighting the importance of intrinsically safe equipment, especially for tools like vacuum pumps and recovery machines. NAVAC's approach goes beyond mere product promotion, with Jesse positioning himself as an educator first. The podcast delves into the technical details of A2L refrigerants, challenging common misconceptions and providing context about their flammability. Bryan and Jesse draw parallels with previous refrigerant transitions, noting how technicians were initially skeptical about R-410A but eventually adapted. They emphasize the importance of best practices, proper training, and understanding the actual risks associated with these new refrigerants, rather than succumbing to fear-based narratives. The episode also showcases NAVAC's latest technological innovations, including smart probes, a Bluetooth scale, a smart valve for charging and recovery, and an advanced vacuum pump with a one-touch oil testing feature. 
These tools represent the company's commitment to improving technician efficiency and safety, with features that address real-world challenges faced by HVAC professionals.

Key Topics Covered:
A2L Refrigerants
- Myths and misconceptions about flammability
- Comparison with previous refrigerant transitions
- Safety considerations and best practices
Safety Certifications
- Differences between UL Listed and UL Verified
- Importance of intrinsically safe equipment
- CSA and ETL certifications
NAVAC's New Tools
- Smart probes with Bluetooth connectivity
- Advanced vacuum pump with automatic oil testing
- Flex manifold with digital accuracy and analog feel
- Battery-operated pumps with improved run times
Industry Trends
- Preparation for A2L and future refrigerant transitions
- Regulatory changes and efficiency standards
- Importance of technician education and adaptation

Additional Insights:
- No current regulations require new tools for A2L refrigerants
- Proper training and best practices are crucial
- Technicians should focus on understanding new technologies
- Safety is about awareness and proper procedures, not fear

Have a question that you want us to answer on the podcast? Submit your questions at https://www.speakpipe.com/hvacschool. Purchase your tickets or learn more about the 6th Annual HVACR Training Symposium at https://hvacrschool.com/symposium. Subscribe to our podcast on your iPhone or Android. Subscribe to our YouTube channel. Check out our handy calculators here or on the HVAC School Mobile App for Apple and Android