Thinking of becoming a certified penetration tester? Your journey starts here. In Part 1 of our PenTest+ prep series, we lay the groundwork for mastering offensive security—from tools and techniques to frameworks and real-world attack insights. Whether you're targeting the CompTIA PenTest+ or sharpening your pentesting skills, this session helps you build the mindset and toolkit of a professional ethical hacker.
Nikolay and Michael discuss case-insensitive data — when we want to treat columns as case-insensitive, and the pros and cons of using citext, functions like lower(), or a custom collation. Here are some links to things they mentioned:
citext https://www.postgresql.org/docs/current/citext.html
Our episode on over-indexing https://postgres.fm/episodes/over-indexing
Nondeterministic collations https://www.postgresql.org/docs/current/collation.html#COLLATION-NONDETERMINISTIC
How to migrate from Django's PostgreSQL CI Fields to use a case-insensitive collation (blog post by Adam Johnson) https://adamj.eu/tech/2023/02/23/migrate-django-postgresql-ci-fields-case-insensitive-collation
The collation versioning problem with ICU 73 (blog post by Daniel Vérité) https://postgresql.verite.pro/blog/2023/10/20/icu-73-versioning.html
amcheck https://www.postgresql.org/docs/current/amcheck.html
~~~
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
~~~
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With credit to:
Jessie Draws for the elephant artwork
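For a sense of the trade-offs, here is a minimal sketch of the three approaches in Postgres; the table and column names are hypothetical, not from the episode:

-- 1) citext: the column type compares case-insensitively.
CREATE EXTENSION IF NOT EXISTS citext;
CREATE TABLE users_citext (email citext UNIQUE);

-- 2) lower(): plain text column plus an expression index;
--    queries must also wrap the column in lower() to use the index.
CREATE TABLE users_lower (email text);
CREATE UNIQUE INDEX ON users_lower (lower(email));
SELECT * FROM users_lower WHERE lower(email) = lower('Foo@Example.com');

-- 3) Nondeterministic ICU collation (Postgres 12+); note that LIKE and
--    other pattern matching don't work with nondeterministic collations.
CREATE COLLATION case_insensitive (
  provider = icu, locale = 'und-u-ks-level2', deterministic = false
);
CREATE TABLE users_coll (email text COLLATE case_insensitive UNIQUE);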
This is episode 300 recorded on July 31st, 2025, where John & Jason talk the Microsoft Fabric July 2025 Feature Summary including tags in Fabric Domains, Fabric Data Agent integration with Microsoft Copilot Studio, enhancements to Activator, manual control of Auto-Refresh in pipelines, and Cosmos DB now in preview in Fabric. For show notes please visit www.bifocal.show
UK & Ireland Director of Intelligence Enterprise at GlobalLogic, Tim Hatton, explores how principles of control theory, exemplified by SpaceX's Starship, apply to the design of effective enterprise agentic AI systems.

Reaching for the stars has always been the pinnacle of human ingenuity. The relentless desire to push beyond known boundaries is what drives innovation and advancement all around the globe. The recent example of SpaceX's latest Starship spacecraft soaring into the skies and returning with precision isn't just a milestone in aerospace engineering - it's a vivid illustration of what's possible when our boundless creativity fuels cutting-edge technologies. SpaceX's success demonstrates that autonomous software can effectively control a sophisticated system and steer it toward defined goals. This seamless blend of autonomy, awareness, intelligent adaptability, and results-driven decision-making offers a compelling analogy for enterprises. It's a beacon for a future where agentic AI systems revolutionise workflows, drive innovation, and transform industries.

Control theory: A proven framework
Control theory underpins self-regulating systems that balance performance and adaptability. It dates from the 19th century, when Scottish physicist and mathematician James Clerk Maxwell first described the operation of centrifugal 'governors'. Its core principles - feedback loops, stability, controllability, and predictability - brought humanity into the industrial age, from stabilising windmill velocity up to today's spaceflights, nuclear stations and nation-spanning electricity grids. We see control theory in action when landing a rocket, for example. The manoeuvre relies on sensors to measure actual parameters, controllers to adjust based on feedback, and the system to execute corrections. Comparing real-time data to desired outcomes minimises errors, ensuring precision and safety. It's a framework that extends to enterprise workflows. Employees function as systems, supervisors as controllers, and tasks as objectives. A seasoned worker might self-correct without managerial input, paralleling autonomous systems' ability to adapt dynamically.

Challenges in agentic AI
Agentic AI systems combine traditional control frameworks' precision with advanced AI models' generative power. However, while rockets rely on the time-tested principles of control theory, AI-driven systems are powered by large language models (LLMs). This introduces new layers of complexity that make designing resilient AI agents that deliver precision, adaptability, and trustworthiness uniquely challenging.

Computational irreducibility: LLMs like GPT-4 defy simplified modelling. They are so complex and their internal workings so intricate that we cannot predict their exact outputs without actually running them. Predicting outputs requires executing each computational step, complicating reliability and optimisation. A single prompt tweak can disrupt workflows, making iterative testing essential, yet time-consuming.

Nonlinearity and high dimensionality: Operating in high-dimensional vector spaces, with millions of input elements, LLMs process data in nonlinear ways. This means outputs are sensitive to minor changes. Testing and optimising the performance of single components of complex workflows, like text-to-SQL queries, under these parameters becomes a monumental task.

Blurring code and data: Traditional systems separate code and data. In contrast, LLMs embed instructions within prompts, mixing the two.
This blurring of ever-growing data sets with prompts introduces variability that is difficult to model and predict, raising testing, reliability, and security issues and compounding the dimensionality problem described above.

Stochastic behaviour: LLMs may produce different outputs for the same input due to factors like sampling methods during generation. This means they introduce randomness - an asset for creati...
Fredrik talks to Matt Topol about Arrow and how the Arrow ecosystem is evolving. Arrow is an open source, columnar in-memory data format designed for efficient data processing and analytics - which means passing data between things without needing to transform it, and ideally even without needing to copy it. What makes the ecosystem grow, and why is it very cool to have Arrow on the GPU? What is the connection between Arrow, machine learning, and Hugging Face? Matt emphasizes the value of open standards; even as they work with or within more closed systems they can help open things up, and help bring about more modular solutions so that developers can focus on doing their core area really well. This episode can be seen as a follow-up to episode 567, where Matt first joined to discuss everything Arrow. Recorded during Øredev 2024. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi.

Links
Matt
Matt's Øredev 2023 talks: State of the Apache Arrow ecosystem: How your project can leverage Arrow! and Leveraging Apache Arrow for ML workflows
Previous episodes with Matt
Øredev 2024
Matt's Øredev 2024 talks - on Arrow ADBC and Composable and modular data systems
ADBC - Arrow database connectivity
Arrow
Snowflake
Snowflake drivers for ADBC
Bigquery
The Bigquery driver
Microsoft Fabric
Duckdb
Postgres
SQLite
Arrow flight - RPC framework for services based on Arrow data
Arrow flight SQL
Microsoft Power BI
Velox
Apache datafusion
Query planning
Substrait - query IR
Polaris
Libcudf
Nvidia RAPIDS
Pytorch
Tensorflow
Arrow device interface
DLPack - in-memory tensor structure
Tensors
Nanoarrow
Voltron data - where Matt used to work. He's now at Columnar
Theseus GPU compute engine
The composable data management system manifesto
Support us on Ko-fi!
Matt's book - In-memory analytics with Apache Arrow
Spark
Spark connect
RPC
UDFs
Photon
Datafusion
Apache Cassandra
ODBC
JDBC
R - programming language for statistical computing
Hugging Face
Ray
Stringview - “German-style strings”
Scaling up with R and Arrow - the book on using Arrow with R

Titles
It's gotten a lot bigger
The bones of it are in the repo
(Powered by ADBC)
Individual compute components
Feed it substrate
Where the ecosystem is going
Arrow on the GPU
The data stays on the GPU
A forced copy
Leverage that device interface
Without forcing the copy
Shy of that last mile
Turtles all the way down
The guy who said yes
German-style strings
For memberships: join this channel as a member here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join

Summary
In this conversation, Nitish Tiwari discusses Parseable, an observability platform designed to address the challenges of managing and analyzing large volumes of data. The discussion covers the evolution of observability systems, the design principles behind Parseable, and the importance of efficient data ingestion and storage in S3. Nitish explains how Parseable allows for flexible deployment, handles data organization, and supports querying through SQL. The conversation also touches on the correlation of logs and traces, failure modes, scaling strategies, and the optional nature of indexing for performance optimization.

References:
Parseable: https://www.parseable.com/
GitHub Repository: https://github.com/parseablehq/parseable
Architecture: https://parseable.com/docs/architecture

Chapters:
00:00 Introduction to Parseable and Observability Challenges
05:17 Key Features of Parseable
12:03 Deployment and Configuration of Parseable
18:59 Ingestion Process and Data Handling
32:52 S3 Integration and Data Organisation
35:26 Organising Data in Parseable
38:50 Metadata Management and Retention
39:52 Querying Data: User Experience and SQL
44:28 Caching and Performance Optimisation
46:55 User-Friendly Querying: SQL vs. UI
48:53 Correlating Logs and Traces
50:27 Handling Failures in Ingestion
53:31 Managing Spiky Workloads
54:58 Data Partitioning and Organisation
58:06 Creating Indexes for Faster Reads
01:00:08 Parseable's Architecture and Optimisation
01:03:09 AI for Enhanced Observability
01:05:41 Getting Involved with Parseable

Don't forget to like, share, and subscribe for more insights!
=============================================================================
Like building stuff? Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. Use the link below to signup and get 40% off on paid subscription.
https://app.codecrafters.io/join?via=geeknarrator
=============================================================================
Database internals series: https://youtu.be/yV_Zp0Mi3xs
Popular playlists:
Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA-
Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17
Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d
Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN
Stay Curious! Keep Learning!
#database #s3 #objectstorage #opentelemetry #logs #metrics
What's up everyone, today we have the pleasure of sitting down with István Mészáros, Founder and CEO of Mitzu.io.
(00:00) - Intro
(01:00) - In This Episode
(03:39) - How Warehouse Native Analytics Works
(06:54) - BI vs Analytics vs Measurement vs Attribution
(09:26) - Merging Web and Product Analytics With a Zero-Copy Architecture
(14:53) - Feature or New Category? What Warehouse Native Really Means For Marketers
(23:23) - How Decoupling Storage and Compute Lowers Analytics Costs
(29:11) - How Composable CDPs Work with Lean Data Teams
(34:32) - How Seat-Based Pricing Works in Warehouse Native Analytics
(40:00) - What a Data Warehouse Does That Your CRM Never Will
(42:12) - How AI-Assisted SQL Generation Works Without Breaking Trust
(50:55) - How Warehouse Native Analytics Works
(52:58) - How To Navigate Founder Burnout While Raising Kids

Summary: István built a warehouse-native analytics layer that lets teams define metrics once, query them directly, and skip the messy syncs across five tools trying to guess what “active user” means. Instead of fighting over numbers, teams walk through SQL together, clean up logic, and move faster. One customer dropped their bill from $500K to $1K just by switching to seat-based pricing. István shares how AI helps, but only if you still understand the data underneath. This conversation shows what happens when marketing, product, and data finally work off the same source without second-guessing every report.

About István
István is the Founder and CEO of Mitzu.io, a warehouse-native product analytics platform built for modern data stacks like Snowflake, Databricks, BigQuery, Redshift, Athena, Postgres, Clickhouse, and Trino. Before launching Mitzu.io in 2023, he spent over a decade leading high-scale data engineering efforts at companies like Shapr3D and Skyscanner. At Shapr3D, he defined the long-term data strategy and built self-serve analytics infrastructure. At Skyscanner, he progressed from building backend systems serving millions of users to leading data engineering and analytics teams. Earlier in his career, he developed real-time diagnostic and control systems for the Large Hadron Collider at CERN.

How Warehouse Native Analytics Works
Marketing tools like Mixpanel, Amplitude, and GA4 create their own versions of your customer. Each one captures data slightly differently, labels users in its own format, and forces you to guess how their identity stitching works. The warehouse-native model removes this overhead by putting all customer data into a central location before anything else happens. That means your data warehouse becomes the only source of truth, not just another system to reconcile.
István explained the difference in blunt terms. “The data you're using is owned by you,” he said. That includes behavioral events, transactional logs, support tickets, email interactions, and product usage data. When everything lands in one place first (BigQuery, Redshift, Snowflake, Databricks) you get to define the logic. No more retrofitting vendor tools to work with messy exports or waiting for their UI to catch up with your question.
In smaller teams, especially B2C startups, the benefits hit early. Without a shared warehouse, you get five tools trying to guess what an active user means. With a warehouse-native setup, you define that metric once and reuse it everywhere. You can query it in SQL, schedule your campaigns off it, and sync it with downstream tools like Customer.io or Braze.
That way you can work faster, align across functions, and stop arguing about whose numbers are right. “You do most of the work in the warehouse for all the things you want to do in marketing,” István said. “That includes measurement, attribution, segmentation, everything starts from that central point.”
Centralizing your stack also changes how your data team operates. Instead of reacting to reporting issues or chasing down inconsistent UTM strings, they build shared models the whole org can trust. Marketing ops gets reliable metrics, product teams get context, and leadership gets reports that actually match what customers are doing. Nobody wins when your attribution logic lives in a fragile dashboard that breaks every other week.
Key takeaway: Warehouse native analytics gives you full control over customer data by letting you define core metrics once in your warehouse and reuse them everywhere else. That way you can avoid double-counting, reduce tool drift, and build a stable foundation that aligns marketing, product, and data teams. Store first, define once, activate wherever you want.

BI vs Analytics vs Measurement vs Attribution
Business intelligence means static dashboards. Not flexible. Not exploratory. Just there, like laminated truth. István described it as the place where the data expert's word becomes law. The dashboards are already built, the metrics are already defined, and any changes require a help ticket. BI exists to make sure everyone sees the same numbers, even if nobody knows exactly how they were calculated.
Analytics lives one level below that, and it behaves very differently. It is messy, curious, and closer to the raw data. Analytics splits into two tracks: the version done by data professionals who build robust models with SQL and dbt, and the version done by non-technical teams poking around in self-serve tools. Those non-technical users rarely want to define warehouse logic from scratch. They want fast answers from big datasets without calling in reinforcements.
“We used to call what we did self-service BI, because the word analytics didn't resonate,” István said. “But everyone was using it for product and marketing analytics. So we changed the copy.”
The difference between analytics and BI has nothing to do with what the tool looks like. It has everything to do with who gets to use it and how. If only one person controls the dashboard, that is BI. If your whole team can dig into campaign performance, break down cohorts, and explore feature usage trends without waiting for data engineering, that is analytics. Attribution, ML, and forecasting live on top of both layers. They depend on the raw data underneath, and they are only useful if the definitions below them hold up.
Language often lags behind how tools are actually used. István saw this firsthand. The product stayed the same, but the positioning changed. People used Mitzu for product analytics and marketing performance, so that became the headline. Not because it was a trend, but because that is what users were doing anyway.
Key takeaway: BI centralizes truth through fixed dashboards, while analytics creates motion by giving more people access to raw data. When teams treat BI as the source of agreement and analytics as the source of discovery, they stop fighting over metrics and start asking better questions.
That way you can maintain trusted dashboards for executive reporting and still empower teams to explore data without filing tickets or waiting days for answers.

Merging Web and Product Analytics With a Zero-Copy Architecture
Most teams trying to replace GA4 end up layering more tools onto the same mess. They drop in Amplitude or Mixpanel for product analytics, keep something else for marketing attribution, and sync everything into a CDP that now needs babysitting. Eventually, they start building one-off pipelines just to feed the same events into six different systems, all chasing slightly different answers to the same question.
István sees this fragmentation as a byproduct of treating product and marketing analytics as separate functions. In categorie...
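As a hedged illustration of the define-once idea described above (the table, columns, and threshold are hypothetical, not from the interview), the shared definition can live in the warehouse as a single view that every downstream tool queries:

CREATE OR REPLACE VIEW active_users_30d AS
SELECT user_id
FROM events
WHERE event_time >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY user_id
HAVING COUNT(*) >= 3;  -- the team's agreed threshold, defined once

-- Dashboards, campaign schedulers, and sync jobs all read the same view:
SELECT COUNT(*) FROM active_users_30d;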
Miss us? Feels like it's been a few weeks since we've had something new to share. This week, we're excited to share a video we recorded just before the release of First of Kind. Grant Lee fits the profile of a founder indie is built to support: worked in investment banking, and joined a hot YC startup that didn't end up being the rocket ship they'd planned. Joined another venture-backed startup that found a successful outcome, but has made intentional decisions to build differently now that he's working on something of his own. The wave we predicted in our Indie Era of Startups talk has started to crest, and founders, like Grant, are demonstrating the benefits of this new way of building. In this conversation, we break down Grant's approach to building Gamma into a few discrete buckets. The first is a small team of what he calls player-coaches. These team members are not interested in growing their org so much as focusing on results. And not just results, but results that they can manage from idea to execution. Bye-bye middle managers, hello player-coaches. Grant explains this approach best in his pinned tweet:
Instead of creating specialist silos, we hire versatile generalists who can solve problems across domains. Rather than building management hierarchies, we find player-coaches who both lead and execute. Our team leverages AI tools throughout our workflow - Claude for data analysis, Cursor for coding efficiency, NotebookLM for customer research synthesis. These aren't just productivity hacks; they're force multipliers. Examples: — When our growth PM needed better analytics, he didn't file a ticket with a data team—he built a self-serve system that anyone can use without SQL knowledge. — When our marketing lead needed to understand our customers better, she fed thousands of interactions into an LLM and created actionable personas that now guide our entire strategy. — When our design team needs to test a hypothesis, we create a rapid prototype and show it to our power users. What we're seeing isn't just about "doing more with less." It's about fundamentally changing what's possible per person. The most valuable employees aren't specialists who excel in narrow domains - they're resourceful problem-solvers who continuously expand their capabilities. This approach creates remarkable resilience. Since everyone understands multiple functions, we don't have single points of failure when someone leaves or moves to another project. If you're building today, the question isn't how quickly you can scale headcount — it's how much impact you can create with the smallest possible team. The future belongs to tiny teams of extraordinary people.
The next is to embrace constraints. At the time of this recording, Gamma was doing $50M in ARR and had over 50M users. Yes, you read that right. And, yes, they've done this while keeping their team small and wildly profitable for over a year. They do not see profitability as a lack of imagination or ambition, but the fuel for them to continue building on their own terms and timelines. This was a phenomenal conversation, and one that touches on many of the ideas we've been advocating for with indie over the years. We hope you see in Grant and Gamma something to aspire to as a founder that goes far deeper than hitting the next fundable milestone. We hope you enjoy listening as much as we enjoyed recording this one.
Nikolay talks to Michael about Postgres AI's new monitoring tool — what it is, how it's different to other tools, and some of the thinking behind it. Here are some links to things they mentioned:
postgres_ai monitoring https://gitlab.com/postgres-ai/postgres_ai
DB Lab 4.0 announcement https://github.com/postgres-ai/database-lab-engine/releases/tag/v4.0.0
pganalyze https://pganalyze.com
postgres-checkup https://gitlab.com/postgres-ai/postgres-checkup
Percona Monitoring and Management (PMM) https://github.com/percona/pmm
pgwatch https://github.com/cybertec-postgresql/pgwatch
pgwatch Postgres AI Edition https://gitlab.com/postgres-ai/pgwatch2
libpg_query https://github.com/pganalyze/libpg_query
The Four Golden Signals https://sre.google/sre-book/monitoring-distributed-systems/#xref_monitoring_golden-signals
logerrors https://github.com/munakoiso/logerrors
~~~
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
~~~
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With credit to:
Jessie Draws for the elephant artwork
In this episode of Big Data Hebdo, Vincent Heuschling and Quentin Ambard look back at Databricks' Data and AI Summit 2025. Among other things, they discuss:
The acquisition of Neon to provide a database on top of the Lakehouse
Lakeflow Designer for a low-code approach
AI integration
Databricks One, making the interface more accessible
Improvements to Databricks' SQL engine
Agent Bricks, which simplifies the development of AI agents
Data governance with Unity Catalog
Vector Search on top of the lakehouse
The inevitable trolling of Snowflake
In this episode of Elixir Wizards, Charles Suggs sits down with Chris Grainger, co-founder and CTO of Amplified and creator of the Explorer library. Chris explains how Explorer brings the familiar data-frame workflows of R's dplyr and Python's pandas into the Elixir world. We explore (pun intended!) how Explorer integrates with Ecto, Nx, and LiveView to build end-to-end data pipelines without leaving the BEAM, and how features like lazy evaluation and distributed frames let you tackle large datasets. Whether you're generating reports or driving interactive charts in LiveView, Explorer makes tabular data accessible to every Elixir developer. We wrap up by looking ahead to SQL-style backends, ADBC connectivity, and other features on the Explorer roadmap.
Key topics discussed in this episode:
dplyr- and pandas-inspired data manipulation in Elixir
Polars integration via Rust NIFs for blazing performance
Immutable data frames and BEAM-friendly concurrency
Lazy evaluation to work with arbitrarily large tables
Distributed data-frame support for multi-node processing
Seamless integration with Ecto schemas and queries
Zero-copy interoperability between Explorer and Nx tensors
Apache Arrow and ADBC protocols for cross-language I/O
Exploring SQL-style backends for remote query execution
Building interactive dashboards and charts in LiveView
Consolidating ETL workflows into a single Elixir API
Streaming data pipelines for memory-efficient processing
Tidy data principles and behavior-based API design
Real-world use cases: report generation, patent analysis, and more
Future roadmap: new backends, query optimizations, and community plugins
Links mentioned:
https://hexdocs.pm/explorer/Explorer.html
https://www.amplified.ai/
https://www.r-project.org/
https://vita.had.co.nz/papers/tidy-data.pdf
https://www.tidyverse.org/
https://www.python.org/
https://dplyr.tidyverse.org/
https://go.dev/
https://hexdocs.pm/nx/Nx.html
https://github.com/pola-rs/polars
https://github.com/rusterlium/rustler
https://www.rust-lang.org/
https://www.postgresql.org/
https://hexdocs.pm/ecto/Ecto.html
https://www.elastic.co/elasticsearch
https://arrow.apache.org/
Chris Grainger & Chris McCord Keynote ElixirConf 2024: https://youtu.be/4qoHPh0obv0
https://dbplyr.tidyverse.org/
https://spark.posit.co/
https://hexdocs.pm/pythonx/Pythonx.html
https://hexdocs.pm/vegalite/VegaLite.html
10 Minutes to Explorer: https://hexdocs.pm/explorer/exploringexplorer.html
https://github.com/elixir-nx/scholar
https://scikit-learn.org/stable/
https://github.com/cigrainger
https://erlef.org/slack-invite/erlef
https://bsky.app/profile/cigrainger.bsky.social
https://github.com/cigrainger
This week on The Data Stack Show, John chats with Paul Blankley, Founder and CTO of Zenlytic, live from Denver! Paul and John discuss the rapid evolution of AI in business intelligence, highlighting how AI is transforming data analysis and decision-making. Paul also explores the potential of AI as an "employee" that can handle complex analytical tasks, from unstructured data processing to proactive monitoring. Key insights include the increasing capabilities of AI in symbolic tasks like coding, the importance of providing business context to AI models, and the future of BI tools that can flexibly interact with both structured and unstructured data. Paul emphasizes that the next generation of AI tools will move beyond traditional dashboards, offering more intelligent, context-aware insights that can help businesses make more informed decisions. It's an exciting conversation you won't want to miss.
Highlights from this week's conversation include:
Welcoming Paul Back and Industry Changes (1:03)
AI Model Progress and Superhuman Domains (2:01)
AI as an Employee: Context and Capabilities (4:04)
Model Selection and User Experience (7:37)
AI as a McKinsey Consultant: Decision-Making (10:18)
Structured vs. Unstructured Data Platforms (12:55)
MCP Servers and the Future of BI Interfaces (16:00)
Value of UI and Multimodal BI Experiences (18:38)
Pitfalls of DIY Data Pipelines and Governance (22:14)
Text-to-SQL, Semantic Layers, and Trust (28:10)
Democratizing Semantic Models and Personalization (33:22)
Inefficiency in Analytics and Analyst Workflows (35:07)
Reasoning and Intelligence in Monitoring (37:20)
Roadmap: Proactive AI by 2026 (39:53)
Limitations of BI Incumbents, Future Outlooks and Parting Thoughts (41:15)
The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
This is episode 299 recorded on July 21st, 2025, where John & Jason talk the Microsoft Power BI July 2025 Feature Summary including expanded data sharing with Microsoft 365, Copilot updates, field parameters going GA, Organizational Themes, and more. For show notes please visit www.bifocal.show
In this special episode, guest host Brian Kennedy sits down with Chris Gaffney to explore how supply chain professionals can take control of their careers by embracing artificial intelligence. Chris introduces the “AI Maturity Ladder,” a step-by-step roadmap that helps individuals and teams evolve from foundational tools like Excel to advanced capabilities like predictive analytics, machine learning, and AI agents.
The conversation covers:
The evolution of AI in supply chain roles
Practical skills to stay relevant in a data-driven market
How tools like Python, SQL, and Power BI tie into career growth
Why applied analytics beats theoretical knowledge in today's job market
Strategies for leaders to upskill their teams and create a culture of innovation
Whether you're a student, mid-career professional, or supply chain leader, this episode offers clear, actionable guidance for climbing the AI ladder and ensuring you're leading change instead of reacting to it.
Product managers for BI platforms have it easy. They "just" need to have the dev team build a tool that gives all types of users access to all of the data they should be allowed to see in a way that is quick, simple, and clear while preventing them from pulling data that can be misinterpreted. Of course, there are a lot of different types of users—from the C-level executive who wants ready access to high-level metrics all the way to the analyst or data scientist who wants to drop into a SQL flow state to everyone in between. And sometimes the tool needs to provide structured dashboards, while at other times it needs to be a mechanism for ad hoc analysis. Maybe the product manager's job is actually…impossible? Past Looker CAO and current Omni CEO Colin Zima joined this episode for a lively discussion on the subject! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Back in 2010, Tableau beat smarter tools with a better demo. No brain, all charm, and the market loved it. Fast-forward to now: same playbook, new costume. The AI dashboard crowd is selling “natural language BI” with zero semantic model, zero memory, and a whole lot of LinkedIn swagger. In this episode, Rob and Justin revisit why Tableau's empty-calorie approach won the first round, and how that same mistake is about to flood the AI + BI space all over again. Turns out, you can still sell snake oil if you call it GenAI. Rob breaks down how an elite MIT course managed to skip LLMs entirely, how a flashy Tableau blog post went viral for connecting a CSV, and why “AI-ready” vendors keep duct-taping chat interfaces onto raw SQL and hoping no one looks under the hood. But the real story? Microsoft is sitting on the most powerful data brain in the game, and if they land the front end, it's game over. This isn't just a history lesson. It's a blueprint for seeing through the hype and betting on what actually works. If you're building, buying, or betting on AI tools, listen in before you get dazzled by the demo. Also on this episode: Early Experiments in Tableau's New MCP Service
Nikolay and Michael are joined by Andrew Johnson and Nate Brennand from Metronome to discuss MultiXact member space exhaustion — what it is, how they managed to hit it, and some tips to prevent running into it at scale. Here are some links to things they mentioned:
Nate Brennand https://postgres.fm/people/nate-brennand
Andrew Johnson https://postgres.fm/people/andrew-johnson
Metronome https://metronome.com
Root Cause Analysis: PostgreSQL MultiXact member exhaustion incidents (blog post by Metronome) https://metronome.com/blog/root-cause-analysis-postgresql-multixact-member-exhaustion-incidents-may-2025
Multixacts and Wraparound (docs) https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-MULTIXACT-WRAPAROUND
multixact.c source code https://github.com/postgres/postgres/blob/master/src/backend/access/transam/multixact.c
Add pg_stat_multixact view for multixact membership usage monitoring (patch proposal by Andrew, needing review!) https://commitfest.postgresql.org/patch/5869/
PostgreSQL subtransactions considered harmful (blog post by Nikolay) https://postgres.ai/blog/20210831-postgresql-subtransactions-considered-harmful
vacuum_multixact_failsafe_age doesn't account for MultiXact member exhaustion (thread started by Peter Geoghegan) https://www.postgresql.org/message-id/flat/CAH2-WzmLPWJk3gbAxy8dHY%2BA-Juz_6uGwfe6DkE8B5-dTDvLcw%40mail.gmail.com
Amazon S3 Vectors https://aws.amazon.com/blogs/aws/introducing-amazon-s3-vectors-first-cloud-storage-with-native-vector-support-at-scale/
MultiXacts in PostgreSQL: usage, side effects, and monitoring (blog post by Shawn McCoy and Divya Sharma from AWS) https://aws.amazon.com/blogs/database/multixacts-in-postgresql-usage-side-effects-and-monitoring/
Postgres Aurora multixact monitoring queries https://gist.github.com/natebrennand/0924f723ff61fa897c4106379fc7f3dc
And finally an apology and a correction: the membership space is ~4B, not ~2B as said by Michael in the episode! Definition here:
https://github.com/postgres/postgres/blob/f6ffbeda00e08c4c8ac8cf72173f84157491bfde/src/include/access/multixact.h#L31
And here's the formula discussed for calculating how the member space can grow quadratically with the number of overlapping transactions:
Members can be calculated via: aₙ = 2 + (sum from k=3 to n+1 of k)
This simplifies to: aₙ = ((n+1)(n+2))/2 - 1
~~~
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
~~~
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With special thanks to:
Jessie Draws for the elephant artwork
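Member-space usage itself isn't directly observable without something like Andrew's proposed pg_stat_multixact view, but as a rough starting point you can watch multixact ID age per database using built-in Postgres functions. A sketch, with alert thresholds left as a workload-dependent assumption:

SELECT datname,
       mxid_age(datminmxid) AS multixact_id_age
FROM pg_database
ORDER BY multixact_id_age DESC;

-- Compare against the aggressive-vacuum setting (default 400 million):
SHOW autovacuum_multixact_freeze_max_age;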
This week on The Data Stack Show, Eric welcomes back Ruben Burdin, Founder and CEO of Stacksync, and together they dismantle the myths surrounding zero-copy ETL and traditional data integration methods. Ruben reveals the complex challenges of two-way syncing between enterprise systems like Salesforce, HubSpot, and NetSuite, highlighting how existing tools often create more problems than solutions. He also introduces Stacksync's innovative approach, which uses real-time SQL-based synchronization to simplify data integration, reduce maintenance overhead, and enable more efficient operational workflows. The conversation exposes the limitations of current data transfer techniques and offers a glimpse into a more declarative, flexible approach to managing enterprise data across multiple systems. You won't want to miss it.
Highlights from this week's conversation include:
The Pain of Two-Way Sync and Early Integration Challenges (2:01)
Zero Copy ETL: Hype vs. Reality (3:50)
Data Definitions and System Complexity (7:39)
Limitations of Out-of-the-Box Integrations (9:35)
The CSV File: The Original Two-Way Sync (11:18)
Stacksync's Approach and Capabilities (12:21)
Zero Copy ETL: Technical and Business Barriers (14:22)
Data Sharing, Clean Rooms, and Marketing Myths (18:40)
The Reliable Loop: ETL, Transform, Reverse ETL (27:08)
Business Logic Fragmentation and Maintenance (33:43)
Simplifying Architecture with Real-Time Two-Way Sync (35:14)
Operational Use Case: HubSpot, Salesforce, and Snowflake (39:10)
Filtering, Triggers, and Real-Time Workflows (45:38)
Complex Use Case: Salesforce to NetSuite with Data Discrepancies (48:56)
Declarative Logic and Debugging with SQL (54:54)
Connecting with Ruben and Parting Thoughts (57:58)
The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Madison Schott joins me to chat about her journey from working as an analytics engineer to creating content about dbt, SQL, data modeling, and more.
In this episode of Building Better Developers with AI, Rob Broadhead and Michael Meloche revisit a popular question: What Happens When Software Fails? Originally titled When Coffee Hits the Fan: Developer Disaster Recovery, this AI-enhanced breakdown explores real-world developer mistakes, recovery strategies, and the tools that help turn chaos into control. Whether you're managing your first deployment or juggling enterprise infrastructure, you'll leave this episode better equipped for the moment when software fails.

When Software Fails and Everything Goes Down
The podcast kicks off with a dramatic (but realistic) scenario: CI passes, coffee is in hand, and then production crashes. While that might sound extreme, it's a situation many developers recognize. Rob and Michael cover some familiar culprits:
Dropping a production database
Misconfigured cloud infrastructure costing hundreds overnight
Accidentally publishing secret keys
Over-provisioned “default” environments meant for enterprise use
Takeaway: Software will fail. Being prepared is the difference between a disaster and a quick fix.

Why Software Fails: Avoiding Costly Dev Mistakes
Michael shares an all-too-common situation: connecting to the wrong environment and running production-breaking SQL. The issue wasn't the code—it was the context. Here are some best practices to avoid accidental failure (sketched in SQL after the challenge below):
Color-code terminal environments (green for dev, red for prod)
Disable auto-commit in production databases
Always preview changes with a SELECT before running DELETE or UPDATE
Back up databases or individual tables before making changes
These simple habits can save hours—or days—of cleanup.

How to Recover When Software Fails
Rob and Michael outline a reliable recovery framework that works in any team or tech stack:
Monitoring and alerts: Tools like Datadog, Prometheus, and Sentry help detect issues early
Rollback plans: Scripts, snapshots, and container rebuilds should be ready to go
Runbooks: Documented recovery steps prevent chaos during outages
Postmortems: Blameless reviews help teams learn and improve
Clear communication: Everyone on the team should know who's doing what during a crisis
Pro Tip: Practice disaster scenarios ahead of time. Simulations help ensure you're truly ready.

Essential Tools for Recovery
Tools can make or break your ability to respond quickly when software fails. Rob and Michael recommend:
Docker & Docker Compose for replicable environments
Terraform & Ansible for consistent infrastructure
GitHub Actions, GitLab CI, Jenkins for automated testing and deployment
Chaos Engineering tools like Gremlin and Chaos Monkey
Snapshot and backup automation to enable fast data restoration
Michael emphasizes: containers are the fastest way to spin up clean environments, test recovery steps, and isolate issues safely.

Mindset Matters: Staying Calm When Software Fails
Technical preparation is critical—but so is mindset. Rob notes that no one makes smart decisions in panic mode. Having a calm, repeatable process in place reduces pressure when systems go down. Cultural and team-based practices:
Use blameless postmortems to normalize failure
Avoid root access in production whenever possible
Share mistakes in standups so others can learn
Make local environments mirror production using containers
Reminder: Recovery is a skill—one you should build just like any feature. Think you're ready for a failure scenario? Prove it.
This week, simulate a software failure in your development environment:
Turn off a service your app depends on
Delete (then restore) a local database from backup
Use Docker to rebuild your environment from scratch
Trigger a mock alert in your monitoring tool
Then answer these questions:
How fast can you recover?
What broke that you didn't expect?
What would you do differently in production?
Recovery isn't just theory—it's a skill you build through practice. Start now, while the stakes are low.

Final Thought
Software fails. That's a reality of modern development. But with the right tools, smart workflows, and a calm, prepared team, you can recover quickly—and even improve your system in the process. Learn from failure. Build with resilience. And next time something breaks, you'll know exactly what to do.

Stay Connected: Join the Developreneur Community
We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development.

Additional Resources
System Backups – Prepare for the Worst
Using Dropbox To Provide A File Store and Reliable Backup
Testing Your Backups – Disaster Recovery Requires Verification
Virtual Systems On A Budget – Realistic Cloud Pricing
Building Better Developers With AI Podcast Videos – With Bonus Content
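Here's a minimal SQL sketch of the safe-update habits described above; the table name and predicate are hypothetical, not from the episode:

-- Back up the table you're about to touch:
CREATE TABLE orders_backup AS SELECT * FROM orders;

-- Preview what a DELETE would affect before running it:
SELECT COUNT(*) FROM orders WHERE status = 'stale';

-- With auto-commit disabled, wrap the change so you can inspect it:
BEGIN;
DELETE FROM orders WHERE status = 'stale';
-- If the affected row count doesn't match the SELECT above:
ROLLBACK;
-- Otherwise: COMMIT;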
This is episode 298 recorded on July 9th, 2025, where John & Jason talk the Microsoft Fabric June 2025 Feature Summary including lots of Notebook updates in Data Engineering, lower cost for AI functions in Data Science, Copilot for RTI dashboards, and more. For show notes please visit www.bifocal.show
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SSH Tunneling in Action: direct-tcp requests
Attackers are compromising SSH servers to abuse them as relays. The attacker will configure port forwarding direct-tcp connections to forward traffic to a victim. In this particular case, the Yandex mail server was the primary victim of these attacks. https://isc.sans.edu/diary/SSH%20Tunneling%20in%20Action%3A%20direct-tcp%20requests%20%5BGuest%20Diary%5D/32094

Fortiguard FortiWeb Unauthenticated SQL injection in GUI (CVE-2025-25257)
An improper neutralization of special elements used in an SQL command ('SQL Injection') vulnerability [CWE-89] in FortiWeb may allow an unauthenticated attacker to execute unauthorized SQL code or commands via crafted HTTP or HTTPS requests. https://www.fortiguard.com/psirt/FG-IR-25-151

Ruckus Virtual SmartZone (vSZ) and Ruckus Network Director (RND) contain multiple vulnerabilities
Ruckus products suffer from a number of critical vulnerabilities. There is no patch available, and users are advised to restrict access to the vulnerable admin interface. https://kb.cert.org/vuls/id/613753
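For readers new to CWE-89, here's a generic illustration of the vulnerability class (not FortiWeb's actual code): concatenating user input into query text lets the input rewrite the query, while a parameterized statement keeps it as data. The table and statement names are hypothetical.

-- If user input  ' OR '1'='1  is concatenated into the query text,
-- the WHERE clause collapses to always-true and returns every row:
--   SELECT * FROM accounts WHERE name = '' OR '1'='1';

-- Parameterized form: the input is bound as a value, never parsed as SQL.
PREPARE find_account(text) AS
  SELECT * FROM accounts WHERE name = $1;
EXECUTE find_account(''' OR ''1''=''1');  -- treated literally; matches nothing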
Nikolay and Michael are joined by Sugu Sougoumarane to discuss Multigres — a project he's joined Supabase to lead, building an adaptation of Vitess for Postgres! Here are some links to things they mentioned:
Sugu Sougoumarane https://postgres.fm/people/sugu-sougoumarane
Supabase https://supabase.com
Announcing Multigres https://supabase.com/blog/multigres-vitess-for-postgres
Vitess https://github.com/vitessio/vitess
SPQR https://github.com/pg-sharding/spqr
Citus https://github.com/citusdata/citus
PgDog https://github.com/pgdogdev/pgdog
Myths and Truths about Synchronous Replication in PostgreSQL (talk by Alexander Kukushkin) https://www.youtube.com/watch?v=PFn9qRGzTMc
Consensus algorithms at scale (8 part series by Sugu) https://planetscale.com/blog/consensus-algorithms-at-scale-part-1
A More Flexible Paxos (blog post by Sugu) https://www.sougou.io/a-more-flexible-paxos
libpg_query https://github.com/pganalyze/libpg_query
PL/Proxy https://github.com/plproxy/plproxy
PlanetScale Postgres Benchmarking https://planetscale.com/blog/benchmarking-postgres
MultiXact member exhaustion incidents (blog post by Cosmo Wolfe / Metronome) https://metronome.com/blog/root-cause-analysis-postgresql-multixact-member-exhaustion-incidents-may-2025
~~~
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
~~~
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With special thanks to:
Jessie Draws for the elephant artwork
This week on The Data Stack Show, Eric and John welcome back Matt Kelliher-Gibson for another edition of the Cynical Data Guy. The group explores the current state of data engineering and team dynamics while critically examining the evolving landscape of analytics engineering, dissecting the hype around the modern data stack and its tools. The conversation also explores the challenges of data team management, including headcount reductions, rising technology costs, and the struggle to maintain efficiency. Key discussions revolve around the need for open standards, the impact of AI on data roles, the complex hiring practices in tech startups, and so much more.
Highlights from this week's conversation include:
The Evolution of Analytics Engineer Roles (1:53)
Job Titles and Role Consolidation in Data (3:20)
Standardization and Open Data Standards (7:51)
SQL as a Universal Standard & Vendor Lock-In (11:58)
Modern Data Stack: Hype vs. Reality (13:29)
The State of Data Teams in 2025 (18:12)
Morale and Job Market Realities for Data Professionals (25:17)
Bonus Round: Extreme Work Culture Satire (28:41)
Honesty in Hiring and Team Building (33:18)
Challenges of Building and Leading Data Teams (37:31)
Final Thoughts and Takeaways (41:15)
The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Appsec still deals with ancient vulns like SQL injection and XSS. And now LLMs are generating code alongside humans. Sandy Carielli and Janet Worthington join us once again to discuss what all this new code means for appsec practices. On a positive note, the prevalence of those ancient vulns seems to be diminishing, but the rising use of LLMs is expanding a new (but not very different) attack surface. We look at where orgs are investing in appsec, who appsec teams are collaborating with, and whether we need security awareness training for LLMs. Resources: https://www.forrester.com/blogs/application-security-2025-yes-ai-just-made-it-harder-to-do-this-right/ Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-338
This is episode 297 recorded on July 4th, 2025, where John & Jason talk the Microsoft Power BI June 2025 Feature Summary including updates to Visual Calcs, Sparklines are now GA (after 4 years), Azure Maps breaking changes, Org Apps updates & revelations, and Power Query editing of Import models in the web.
This show has been flagged as Explicit by the host.

New hosts
There were no new hosts this month.

Last Month's Shows
Id | Day | Date | Title | Host
4391 | Mon | 2025-06-02 | HPR Community News for May 2025 | HPR Volunteers
4392 | Tue | 2025-06-03 | The Water is Wide, and the sheet music should be too | Jezra
4393 | Wed | 2025-06-04 | Journal like you mean it. | Some Guy On The Internet
4394 | Thu | 2025-06-05 | Digital Steganography Intro | mightbemike
4395 | Fri | 2025-06-06 | Second Life | Lee
4396 | Mon | 2025-06-09 | AI and Sangria | operat0r
4397 | Tue | 2025-06-10 | Transfer files from desktop to phone with qrcp | Klaatu
4398 | Wed | 2025-06-11 | Command line fun: downloading a podcast | Kevie
4399 | Thu | 2025-06-12 | gpg-gen-key | oxo
4400 | Fri | 2025-06-13 | Isaac Asimov: Other Asimov Novels of Interest | Ahuka
4401 | Mon | 2025-06-16 | hajime | oxo
4402 | Tue | 2025-06-17 | pinetab2 | Brian in Ohio
4403 | Wed | 2025-06-18 | How to get your very own copy of the HPR database | norrist
4404 | Thu | 2025-06-19 | Kevie nerd snipes Ken by grepping xml | Ken Fallon
4405 | Fri | 2025-06-20 | What did I do at work today? | Lee
4406 | Mon | 2025-06-23 | SVG Files: Cyber Threat Hidden in Images | ko3moc
4407 | Tue | 2025-06-24 | A 're-response' Bash script | Dave Morriss
4408 | Wed | 2025-06-25 | Lynx - Old School Browsing | Kevie
4409 | Thu | 2025-06-26 | H D R Ridiculous Monitor | operat0r
4410 | Fri | 2025-06-27 | Civilization V | Ahuka
4411 | Mon | 2025-06-30 | The Pachli project | thelovebug

Comments this month
These are comments which have been made during the past month, either to shows released during the month or to past shows. There are 29 comments in total.

Past shows
There are 4 comments on 3 previous shows:
hpr4375 (2025-05-09) "Long Chain Carbons,Eggs and Dorodango?" by operat0r.
Comment 4: Torin Doyle on 2025-06-06: "Reply to @Bob"
hpr4378 (2025-05-14) "SQL to get the next_free_slot" by norrist.
Comment 1: Torin Doyle on 2025-06-12: "Cheers for this."
hpr4388 (2025-05-28) "BSD Overview" by norrist.
Comment 4: Henrik Hemrin on 2025-06-02: "Learned more about BSD."
Comment 5: norrist on 2025-06-02: "Additional info for OpenBSD Router"

This month's shows
There are 25 comments on 10 of this month's shows:
hpr4391 (2025-06-02) "HPR Community News for May 2025" by HPR Volunteers.
Comment 1: Torin Doyle on 2025-06-06: "Very disappointed."
Comment 2: Ken Fallon on 2025-06-06: "Thanks for your feedback."
Comment 3: Torin Doyle on 2025-06-09: "Reply to Ken [Comment 2]"
Comment 4: norrist on 2025-06-09: "Watch the Queue for a show about how to find all the comments"
Comment 5: Torin Doyle on 2025-06-10: "Comment #3 typo."
Comment 6: Torin Doyle on 2025-06-11: "Reply to Comment #4 by norrist"
Comment 7: Torin Doyle on 2025-06-11: "Got the link."
hpr4394 (2025-06-05) "Digital Steganography Intro" by mightbemike.
Comment 1: Henrik Hemrin on 2025-06-05: "Fascinating topic"
Comment 2: oxo on 2025-06-05: "Good show! "
hpr4395 (2025-06-06) "Second Life" by Lee.
Comment 1: Antoine on 2025-06-08: "Brings philosophical thoughts"
hpr4397 (2025-06-10) "Transfer files from desktop to phone with qrcp" by Klaatu.
Comment 1: Laindir on 2025-06-18: "The perfect kind of recommendation"
hpr4398 (2025-06-11) "Command line fun: downloading a podcast" by Kevie.
Comment 1: Henrik Hemrin on 2025-06-11: "Tempted to have fun"
Comment 2: Ken Fallon on 2025-06-22: "Personal message to redhat (nprfan)"
hpr4403 (2025-06-18) "How to get your very own copy of the HPR database" by norrist.
Comment 1: Torin Doyle on 2025-06-18: "Appreciated!"
Comment 2: Torin Doyle on 2025-06-18: "Database size."
Comment 3: norrist on 2025-06-18: "Also an SQLite version"
Comment 4: Torin Doyle on 2025-06-25: "Not able to use database to find my comments."
hpr4404 (2025-06-19) "Kevie nerd snipes Ken by grepping xml" by Ken Fallon.
Comment 1: Henrik Hemrin on 2025-06-22: "More to digest"
Comment 2: Alec Bickerton on 2025-06-29: "Shorter version"
Comment 3: Alec Bickerton on 2025-06-29: "Shorter version"
Comment 4: Alec Bickerton on 2025-06-29: "XML parsing without xmlstarlet"
hpr4405 (2025-06-20) "What did I do at work today?" by Lee.
Comment 1: Dave Morriss on 2025-06-25: "Thanks for bringing us along..."
hpr4406 (2025-06-23) "SVG Files: Cyber Threat Hidden in Images" by ko3moc.
Comment 1: oxo on 2025-06-23: "Interesting! "
Comment 2: ko3moc on 2025-06-24: "response "
hpr4408 (2025-06-25) "Lynx - Old School Browsing" by Kevie.
Comment 1: Henrik Hemrin on 2025-06-29: "Review ALT texts"

Mailing List discussions
Policy decisions surrounding HPR are taken by the community as a whole. This discussion takes place on the Mailing List which is open to all HPR listeners and contributors. The discussions are open and available on the HPR server under Mailman. The threaded discussions this month can be found here:
https://lists.hackerpublicradio.com/pipermail/hpr/2025-June/thread.html

Events Calendar
With the kind permission of LWN.net we are linking to The LWN.net Community Calendar. Quoting the site: This is the LWN.net community event calendar, where we track events of interest to people using and developing Linux and free software. Clicking on individual events will take you to the appropriate web page.

Provide feedback on this episode.
The big tech company conferences continued this summer with Vercel hosting Vercel Ship 2025. As you'd expect there was lots of talk about AI and Vercel's AI Cloud: tools, infrastructure, and platform enhancements to build AI agents and help AI agents use Vercel.
On July 1, hosting platform Cloudflare declared Content Independence Day, and changed its settings to block AI crawlers by default unless they pay creators for their content. While we absolutely support this move, Cloudflare's future vision of a marketplace where content creators and AI companies come together and compensation is based on how much content “furthers knowledge” seems idealistic, but we'll have to wait and see.
Serverless Postgres database company Neon has a new product called Neon Launchpad that can create an instant Neon database with zero configuration or account creation. Users get an automatically generated connection string, 72 hours to claim a new database, and even automatic database seeding with SQL scripts for schema and data initialization.
Timestamps:
2:13 - Vercel Ship event updates
7:49 - Cloudflare declares content independence day
16:12 - Neon Launchpad
20:03 - Figma IPO
22:24 - Deno v. Oracle trademark update
25:10 - Anthropic lets Claude run a vending machine
32:21 - What's making us happy
Links:
News:
Paige - Cloudflare declares July 1 Content Independence Day
Jack - Neon Launchpad instant DBs
TJ - Vercel Ship 2025
Lightning:
Figma has filed for an IPO to trade on the stock exchange as “FIG”
Claude ran a vending machine, and the first attempt at “vibe management” wasn't great
Deno v. Oracle trademark update
What Makes Us Happy this Week:
Paige - Squid Game season 3
Jack - F1: The Movie
TJ - Bobby Bonilla Day
Thanks as always to our sponsor, the Blue Collar Coder channel on YouTube. You can join us in our Discord channel, explore our website and reach us via email, or talk to us on X, Bluesky, or YouTube.
Front-end Fire website
Blue Collar Coder on YouTube
Blue Collar Coder on Discord
Reach out via email
Tweet at us on X @front_end_fire
Follow us on Bluesky @front-end-fire.com
Subscribe to our YouTube channel @Front-EndFirePodcast
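As a hypothetical example of the kind of seed script Launchpad can run (schema first, then data; the table and rows are illustrative, not from the show):

CREATE TABLE todos (
  id         serial PRIMARY KEY,
  title      text NOT NULL,
  done       boolean NOT NULL DEFAULT false,
  created_at timestamptz NOT NULL DEFAULT now()
);

INSERT INTO todos (title, done) VALUES
  ('Claim this database within 72 hours', false),
  ('Wire up the generated connection string', false);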
In this episode of the Biz To Biz Podcast, we dive into the future of business intelligence with Sarah Kabakoff, CEO of Genetica and a driving force behind intelligent automation in modern industries. With a deep background leading revenue and product strategy at AI-driven SaaS startups, Sarah and her team have spent the last decade transforming operations in restaurants, retail, and regulated markets through cutting-edge technology. At Genetica, Sarah is leading the charge on ServeAI, a data intelligence platform that merges enriched SQL layers, LLM-powered agents, and real-time business logic. The result? A revolutionary tool that eliminates the need for static dashboards, analysts, or complex BI systems.
In the final episode of this series on Oracle GoldenGate 23ai, Lois Houston and Nikita Abraham welcome back Nick Wagner, Senior Director of Product Management for GoldenGate, to discuss how parameters shape data replication. This episode covers parameter files, data selection, filtering, and transformation, providing essential insights for managing GoldenGate deployments. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Podcast Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! This is the last episode in our Oracle GoldenGate 23ai series. Previously, we looked at how you can manage Extract Trails and Files. If you missed that episode, do go back and give it a listen. 00:50 Lois: Today, Nick Wagner, Senior Director of Product Management for GoldenGate, is back on the podcast to tell us about parameters, data selection, filtering, and transformation. These are key components of GoldenGate because they allow us to control what data is replicated, how it's transformed, and where it's sent. Hi Nick! Thanks for joining us again. So, what are the different types of parameter files? Nick: We have a GLOBALS parameter file and your runtime parameter files. The global one is going to affect all processes within a deployment. It's going to be things like the location and name of your checkpoint table, things like the heartbeat table. You want to have a single one of these across your entire deployment, so it makes sense to keep it within a single file. We also have runtime parameter files. These are going to be associated with a specific extract or replicat process. These files are located in your OGG_ETC_HOME/conf/ogg. The GLOBALS file is simply named GLOBALS, in all capitals, and the parameter files for the processes themselves are named process.prm. So if my extract process is EXTDEMO, my parameter file name will be extdemo.prm. When you make changes to parameter files, they don't take effect until the process is restarted. So in the case of a GLOBALS parameter file, you need to restart the administration service. And in a runtime parameter file, you need to restart that specific process before any changes will take effect. We also have what we call a managed process setting profile, which allows you to set up auto-restart profiles for each process. In the GoldenGate classic architecture, this was contained within the GLOBALS parameter file and handled by the manager. In Microservices it's a little bit different: it's handled by the Service Manager itself, and we now actually set up profiles. 02:41 Nikita: Ok, so what can you tell us about the extract parameter file specifically?
Nick: There are a couple of things within the extract parameter file in common use. First, we want to tell it what the group name is. So in this case, it would be our extract name. We need to put in information on where the extract process is going to write the data it captures, and that would be our trail files; an extract process can write to one or more trail files. We also want to list out the tables and schemas that we're going to be capturing, as well as any kind of DDL changes. If we're doing an initial load, we want to set up the SQL predicate to determine which tables are being captured and put a WHERE clause on those to speed up performance. We can also do filtering within the extract process as well, so we write just the information that we need to the trail file. 03:27 Nikita: And what are the common parameters within an extract process? Nick: There are a couple of common parameters within your extract process. We have TABLE to list out the tables that GoldenGate is going to be capturing from. These can be wildcarded. So I can simply do TABLE *.* and GoldenGate will capture all the tables in that database. I can also do TABLE schema.* and it will capture all the tables within a schema. We have our EXTTRAIL command, which tells GoldenGate which trail to write to. If I want to filter out certain rows and columns, I can use the FILTER, COLS, and COLSEXCEPT parameters. GoldenGate can also capture sequence changes, so we would use the SEQUENCE parameter. And then we can also set some high-level database options for GoldenGate that affect all the tables, and that's configured using the TRANLOGOPTIONS parameter. 04:14 Lois: Nick, can you talk a bit about the different types of TRANLOGOPTIONS settings? How can they be used to control what the extract process does? Nick: So one of the first ones is EXCLUDETAG. GoldenGate has the ability to exclude tagged transactions. Within the database itself, you can actually specify a transaction to be tagged using a DBMS SET_TAG option. GoldenGate replicat also sets its transactions with a tag so that the GoldenGate process knows which transactions were done by the replicat and can exclude them automatically. You can do EXCLUDETAG with a plus sign. That simply means to exclude any transaction that's been tagged with any value. You can also exclude specific tags. Another good option for TRANLOGOPTIONS is enabling procedural replication. This allows GoldenGate to actually capture and replicate database procedure calls, and this would be things like DBMS_AQ enqueue or dequeue operations. So if you're using Oracle Advanced Queuing and you need GoldenGate to replicate those changes, it can. Another valuable TRANLOGOPTIONS setting is enabling auto capture. Within the Oracle Database, you can run an ALTER TABLE command that says ALTER TABLE, enable logical replication. Or when you create a table, you can use the enable logical replication option at the end of the CREATE TABLE statement. This tells GoldenGate to automatically capture that table. One of the nice features here is that I don't need to specify that table in my parameter file, and it'll automatically enable supplemental logging on that table for me using scheduling columns. So it makes it very easy to set up replication between Oracle databases. 06:01 Nikita: Can you tell us about replicat parameters, Nick?
Nick: Within a replicat, we'll have the group name, and some other common parameters: a mapping parameter that allows us to map the source-to-target table relationships. We can do transformation within the replicat, as well as error handling and controlling group operations to improve performance. Some common replicat parameters include the REPLICAT parameter itself, which tells us the name of that replicat. We have our MAP statement, which allows us to map a source object to a target object. We have things like REPERROR that control how to handle errors. INSERTALLRECORDS allows us to convert update and delete operations into inserts. We can do things like COMPARECOLS, which helps with active-active replication in determining which columns are used in the GoldenGate WHERE clause. We also have the ability to use macros and column mapping to do additional transformation and make the parameter file look elegant. 07:07 AI is being used in nearly every industry…healthcare, manufacturing, retail, customer service, transportation, agriculture, you name it! And it's only going to get more prevalent and transformational in the future. It's no wonder that AI skills are the most sought-after by employers. If you're ready to dive into AI, check out the OCI AI Foundations training and certification that's available for free! It's the perfect starting point to build your AI knowledge. So, get going! Head on over to mylearn.oracle.com to find out more. 07:47 Nikita: Welcome back! Let's move on to some of the most interesting topics within GoldenGate… data mapping, selection, and transformation. As I understand, users can do pretty cool things with GoldenGate. So Nick, let's start with how GoldenGate can manipulate, change, and map data between two different databases. Nick: The MAP statement within a replicat parameter file allows you to provide specifications on how you're going to map source and target objects. You can also use a MAP in an extract, but it's pretty rare. That would be used if you needed to write the object name inside the trail files as a different name than the actual object name that you're capturing from. GoldenGate can also do different data selection, mapping, and manipulation, and this is all controlled within the extract and replicat parameter files. In the classic architecture of GoldenGate, you could do a rudimentary level of transformation and filtering within the extract pump. Now, the distribution service only allows you to do filtering. Any transformation that you had within the pump would need to be moved to the extract or the replicat process. The other thing that you can do within GoldenGate is select and filter data based on different levels and conditions. So within your parameter clause, you have your TABLE and MAP statements. That's the core of everything. You have your filtering. You have COLS and COLSEXCEPT, which allow you to determine which columns you're going to include or exclude from replication. The TABLE and MAP statements work at the table level. FILTER works at the row level. And COLS and COLSEXCEPT work at the column level. We also have the ability to filter by operation type too. So GoldenGate has some very easy parameters called GETINSERTS, GETUPDATES, GETDELETES, and conversely IGNOREUPDATES, IGNOREDELETES, IGNOREINSERTS. And those will affect the operation type. 09:40 Lois: Nick, are there any features that GoldenGate provides to make data replication easier?
Nick: The first thing is that GoldenGate is going to automatically match your source and target column names with a parameter called USEDEFAULTS. You can specify it inside of your COLMAP clause, but again, it's a default, so you don't need to worry about it. We also handle all data type and character set conversion. Because we store the metadata in the trail, we know what that source data type is like. When we go to apply the record to the target table, the replicat process is going to look up the definition of that record and keep a repository of that in memory. So when it knows that, hey, this value coming in from the trail file is going to be of a date data type, and this value in the target database is going to be a character data type, it knows how to convert that date to a character, and it'll do it for you. Most of the conversion is going to be done automatically for data types. Where we don't do automatic data type conversion is if you're using abstract data types or user-defined data types, collections, arrays, and then some types of CLOB operations. For example, if you're going from a BLOB to a JSON, that's not really going to work very well. Character set conversion is also done automatically. It's not necessarily done directly by GoldenGate, but by the database engine. So there is a character set value inside that source database. And when GoldenGate goes to apply those changes into the target system, it's ensuring that that character set is visible and named so that the database can do the necessary translation. You can also do advanced filtering and transformation. There are tokens that you can attach from the source environment, database, or records into a record itself on the trail file. And then there's also a bunch of metadata that GoldenGate can use to attach to the record itself. And then of course, you can use data transformation within your COLMAP statement. 11:28 Nikita: Before we wrap up, what types of data transformations can we perform, Nick? Nick: So there are quite a few different data transformations. We can do constructive or destructive transformation, aesthetic, and structural. 11:39 Lois: That's it for the Oracle GoldenGate 23ai: Fundamentals series. I think we covered a lot of ground this season. Thank you, Nick, for taking us through it all. Nikita: Yeah, thank you so much, Nick. And if you want to learn more, head over to mylearn.oracle.com and search for the Oracle GoldenGate 23ai: Fundamentals course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 12:04 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
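To make the parameters from this episode concrete, here is a rough sketch of what a matching pair of extract and replicat parameter files might look like. Everything here is invented for illustration (the extdemo/repdemo groups, the ogg_admin credential alias, the et trail, and the hr/rpt schemas), and the files are written out with a little Python only so the example is self-contained; in a real deployment you would edit the .prm files under OGG_ETC_HOME/conf/ogg directly.

    # Hypothetical GoldenGate parameter files combining parameters discussed above.
    # All names are invented; this is a sketch, not a tested configuration.
    EXTRACT_PRM = """\
    EXTRACT extdemo
    USERIDALIAS ogg_admin
    -- trail this extract writes captured changes to
    EXTTRAIL et
    -- skip tagged transactions, e.g. ones applied by a replicat
    TRANLOGOPTIONS EXCLUDETAG +
    -- capture every table in the hr schema
    TABLE hr.*;
    """

    REPLICAT_PRM = """\
    REPLICAT repdemo
    USERIDALIAS ogg_admin
    -- stop (abend) on any unhandled apply error
    REPERROR (DEFAULT, ABEND)
    -- USEDEFAULTS matches same-named columns, full_name is derived,
    -- and FILTER keeps matching rows only
    MAP hr.employees, TARGET rpt.employees,
      COLMAP (USEDEFAULTS,
              full_name = @STRCAT(first_name, " ", last_name)),
      FILTER (salary > 0);
    """

    for filename, text in (("extdemo.prm", EXTRACT_PRM), ("repdemo.prm", REPLICAT_PRM)):
        with open(filename, "w") as f:
            f.write(text)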
There have been lots of social media posts declaring things to be dead - SQL, R, data engineering, BI, etc. I give my thoughts on these proclamations, why it's the wrong way to think about our space, and more.
Welcome to episode 308 of The Cloud Pod – where the forecast is always cloudy! Justin and Matt are on hand and ready to bring you an action packed episode. Unfortunately, this one is also lullaby free. Apologies. This week we're talking about Databricks and Lakebridge, Cedar Analysis, Amazon Q, Google's little hiccup, and updates to SQL – plus so much more! Thanks for joining us. Titles we almost went with this week: KV Phone Home: When Your Key-Value Store Goes AWOL When Your Coreless Service Finds Its Core Problem Oracle’s Vanity Fair: Pretty URLs for Pretty Penny From Warehouse to Lakehouse: Your Free Ticket to Cloud Town 1⃣Databricks Uno: Because One is the Loneliest Number Free as in Beer, Smart as in Data Science Cedar Analysis: Because Your Authorization Policies Wood Never Lie Cedar Analysis: Teaching Old Policies New Proofs Amazon Q Finally Learns to Talk to Other Apps Tomorrow: Visual Studio’s Predictive Edit Revolution The Ghost of Edits Future: AI Haunts Your Code Before You Write It IAM What IAM: Google’s Identity Crisis Breaks the Internet Permission Denied: The Day Google Forgot Who Everyone Was 403 Forbidden: When Google’s Bouncer Called in Sick AWS Brings the Heat to Fusion Research Larry’s Cloud Nine: Oracle Stock Soars on Forecast Raise OCI You Later: Oracle Bets Big on Cloud Growth Oracle’s Crystal Ball Shows 40% Cloud Growth Ahead Meta Scales Up Its AI Ambitions with $14 Billion Investment From FAIR to Scale: Meta’s $14 Billion AI Makeover Congratulations Databricks one, you are now the new low code solution. AWS burns power to figure out how power works AI Is Going Great – Or How ML Makes Money 02:12 Zuckerberg makes Meta’s biggest bet on AI, $14 billion Scale AI deal Meta is finalizing a $14 billion investment for a 49% stake in Scale AI, with CEO Alexandr Wang joining to lead a new AI research lab at Meta. This follows similar moves by Google and Microsoft acquiring AI talent through investments rather than direct acquisitions to avoid regulatory scrutiny. Scale AI specializes in data labeling and annotation services critical for training AI models, serving major clients including OpenAI, Google, Microsoft, and Meta. The company’s expertise covers approximately 70% of all AI models being built, providing Meta with valuable intelligence on competitor approaches to model development. The deal reflects Meta’s struggles with its Llama AI models, particularly the underwhelming reception of Llama 4 and delays in releasing the more powerful “Behemoth” model due to concerns about competitiveness with OpenAI and
Nikolay and Michael are joined by Gwen Shapira to discuss multi-tenant architectures — the high-level options, the pros and cons of each, and how they're trying to help with Nile. Here are some links to things they mentioned: Gwen Shapira https://postgres.fm/people/gwen-shapira Nile https://www.thenile.dev SaaS Tenant Isolation Strategies (AWS whitepaper) https://docs.aws.amazon.com/whitepapers/latest/saas-tenant-isolation-strategies/saas-tenant-isolation-strategies.html Row Level Security https://www.postgresql.org/docs/current/ddl-rowsecurity.html Citus https://github.com/citusdata/citus Postgres.AI Bot https://postgres.ai/blog/20240127-postgres-ai-bot RLS Performance and Best Practices https://supabase.com/docs/guides/troubleshooting/rls-performance-and-best-practices-Z5Jjwv Case Gwen mentioned about the planner thinking an optimisation was unsafe Re-engineering Postgres for Millions of Tenants (Gwen's recent talk at PGConf.dev) https://www.youtube.com/watch?v=EfAStGb4s88 Multi-tenant database the good, the bad, the ugly (talk by Pierre Ducroquet at PgDay Paris) https://www.youtube.com/watch?v=4uxuPfSvTGU ~~~What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!~~~Postgres FM is produced by: Michael Christofides, founder of pgMustard Nikolay Samokhvalov, founder of Postgres.ai With special thanks to: Jessie Draws for the elephant artwork
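One pattern from the episode - all tenants in shared tables, isolated with Postgres Row Level Security - is compact enough to sketch. The table, column, and setting names below are invented, and the connection handling is elided; note that RLS is bypassed by superusers and, by default, by table owners, so the application should connect as an ordinary role.

    # Sketch of shared-table multi-tenancy with Row Level Security.
    # Table/column/GUC names are invented for illustration.
    import psycopg2

    RLS_SETUP = """
    -- run once, as the schema owner
    ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
    CREATE POLICY tenant_isolation ON invoices
        USING (tenant_id = current_setting('app.tenant_id')::uuid);
    """

    def fetch_invoices(conn, tenant_id):
        """Run a query scoped to one tenant; the RLS policy filters the rows."""
        with conn.cursor() as cur:
            # set_config(..., true) makes the setting local to this transaction
            cur.execute("SELECT set_config('app.tenant_id', %s, true)", (str(tenant_id),))
            cur.execute("SELECT id, total FROM invoices")
            return cur.fetchall()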
In today's episode, James Maude chats with Robin Wood—better known as “DigiNinja”—the creator of DVWA and co-founder of SteelCon. Robin shares wild stories from his hacking career, including an infamous SQL injection that accidentally overwrote every customer's credit card info on a gambling site, how he took down entire client networks with just two packets, and the origins of the UK's most eccentric security conference, SteelCon—featuring 450 stuffed whippets and full-on Nerf gun warfare.
In this episode of AI + a16z, dbt Labs founder and CEO Tristan Handy sits down with a16z's Jennifer Li and Matt Bornstein to explore the next chapter of data engineering — from the rise (and plateau) of the modern data stack to the growing role of AI in analytics and data engineering. As they sum up the impact of AI on data workflows: "The interesting question here is human-in-the-loop versus human-not-in-the-loop. AI isn't about replacing analysts — it's about enabling self-service across the company. But without a human to verify the result, that's a very scary thing." Among other specific topics, they also discuss how automation and tooling like SQL compilers are reshaping how engineers work with data; dbt's new Fusion Engine and what it means for developer workflows; and what to make of the spate of recent data-industry acquisitions and ambitious product launches. Follow everyone on X: Tristan Handy Jennifer Li Matt Bornstein Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
At 23, Isaac is already jaded about software reliability - and frankly, he's got good reason to be. When your grandmother can't access her medical records because a username change broke the entire system, when bugs routinely make people's lives harder, you start to wonder: why do we just accept that software is broken most of the time? Isaac's answer isn't just better testing - it's a whole toolkit of techniques working together. He advocates scattering "little bombs" throughout your code via runtime assertions, adding the right amount of static typing, building feedback loops that page you when invariants break, and running nightly SQL queries to catch the bugs that slip through everything else - all building what he sees as a pyramid of software reliability. Weaving into that, we also dive into the Roc programming language and its unique platform architecture that tailors development to specific domains. Software reliability isn't just about the end-user experience - Roc feeds in the idea that we can make reliability easier by tailoring the language domain to the problem at hand. – Isaac's Homepage: https://isaacvando.com/ Episode on Property Testing: https://youtu.be/wHJZ0icwSkc Property Testing Walkthrough: https://youtu.be/4bpc8NpNHRc Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices Support Developer Voices on YouTube: https://www.youtube.com/@developervoices/join Isaac on LinkedIn: https://www.linkedin.com/in/isaacvando/ Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social Kris on Mastodon: http://mastodon.social/@krisajenkins Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
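Two of those techniques - the runtime-assertion "little bombs" and the nightly SQL invariant check - are easy to picture in code. A minimal sketch, with an invented schema and print() standing in for a real paging/alerting hook:

    # Sketch: runtime assertions plus a nightly SQL invariant check.
    import sqlite3

    def apply_discount(price: float, percent: float) -> float:
        # a "little bomb": blow up loudly the moment an invariant is violated
        assert 0 <= percent <= 100, f"discount out of range: {percent}"
        return price * (1 - percent / 100)

    INVARIANTS = {
        "every order belongs to an existing user":
            "SELECT COUNT(*) FROM orders o "
            "LEFT JOIN users u ON o.user_id = u.id WHERE u.id IS NULL",
    }

    def nightly_check(conn: sqlite3.Connection) -> None:
        # catches the bugs that slipped past types, asserts, and tests
        for name, query in INVARIANTS.items():
            (violations,) = conn.execute(query).fetchone()
            if violations:
                print(f"PAGE: invariant broken: {name} ({violations} rows)")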
This is episode 296 recorded on June 6th, 2025, where John & Jason talk the Microsoft Fabric May 2025 Feature Summary including a REST API updates for Fabric, updates to User Data Functions, Copilot in Power BI support for Fabric data agents, CosmosDB in Fabric, DataFlows Gen 2 CI/CD support is now GA, updates to Data Pipelines & Mirroring, and much more. For show notes please visit www.bifocal.show
In this episode, Dave and Jamison answer these questions: A listener named Mike says, To what degree do you think it's appropriate to talk with your peer managers about people that have moved from their team to yours? How much weight do you give their criticisms of an IC that they used to manage that is working out just fine under your leadership? How do you know if it was mostly due to a conflict in their relationship, or if there's a nugget of truth you need to look out for? Hi, thanks for a great show. I've listened to 400 episodes in a year - thanks for making my commute fun! I've been at my current job as a software developer for a year. It's a great company overall, but we rely on a 30-year-old in-house ticket system that also doubles as a time reporting tool. It lacks many basic features, and project managers often resort to SQL and Excel just to get an overview. As you can imagine, things get forgotten and lost easily. Everyone dislikes it, but the old-timers are used to it. They want any replacement to be cheap and also handle time reporting, which really limits our options. I suggested to keep using the old system for time reporting only for now, but the reaction made me feel like I'd suggested going back to pen and paper. While the company is old and set in its ways in some areas, it has made big changes in others, so I'm not ready to give up hope just yet. How can I at least nudge the company toward adopting a more modern ticket system to improve visibility and planning? I've shown examples that save time and offer better overviews, but it hasn't made much impact. Where should I focus my efforts—or do I just have to learn to live with it? Some more context: This is in Europe and the culture at the company is generally open to feedback and discussions from anyone. I have 10+ years experience and a relatively good influence. My manager is driving change successfully to make the company more modern but I suspect he might have given up on this one.
Bob Ward is a Principal Architect for the Microsoft Azure Data team, which owns the development for Microsoft SQL Edge to Cloud. Bob has worked for Microsoft for 31-plus years on every version of SQL Server shipped, from OS/2 1.1 to SQL Server 2025, including Azure SQL. Bob is a well-known speaker on SQL Server, Azure SQL, AI, and Microsoft Fabric, often presenting talks on new releases, internals, and specialized topics at events such as SQLBits, Microsoft Build, Microsoft Ignite, PASS Summit, DevIntersection, and VS Live. You can also learn Azure SQL from him on the popular series https://aka.ms/azuresql4beginners. You can follow him on X at @bobwardms or linkedin.com/in/bobwardms. Bob is the author of the books Pro SQL Server on Linux, SQL Server 2019 Revealed, Azure SQL Revealed with a 2nd edition, and SQL Server 2022 Revealed available from Apress Media. Topics of Discussion: [1:38] Bob reflects on nearly 30 years at Microsoft, growing alongside SQL Server since 1993. [4:16] Transitioning from engineering to advocacy: why Bob now focuses on helping developers unlock the power of SQL Server. [6:12] Debunking myths about SQL Server — yes, it's cloud-ready, developer-friendly, and supports containers and Linux. [10:15] Key tools and features for developers using SQL: containers, Bicep templates, SQLCMD, and DevOps pipelines. [16:23] SQL projects and source control: how modern database DevOps practices improve reliability and testing. [19:32] Common challenges in database development: fear of breaking production, limited test data, and cultural silos. [22:55] Bob's perspective on responsible database change management and the importance of a good rollback plan. [26:02] The evolution of developer tooling in SQL Server, and how Microsoft is making the CLI and APIs first-class citizens. [30:47] Advice for new developers: SQL isn't going anywhere, and it's easier than ever to get started. [34:00] Resources and community support: Bob highlights docs, GitHub samples, training courses, and his book. Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) Bob Ward: SQL Server - Episode 321 Bob Ward LinkedIn Bob Ward — Microsoft Azure SQL Revealed: The Next-Generation Cloud Database with AI and Microsoft Fabric Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
Nikolay and Michael discuss looking at queries by mean time — when it makes sense, why ordering by a percentile (like p99) might be better, and the merits of approximating percentiles in pg_stat_statements using the standard deviation column. Here are some links to things they mentioned: Approximate the p99 of a query with pg_stat_statements (blog post by Michael) https://www.pgmustard.com/blog/approximate-the-p99-of-a-query-with-pgstatstatements pg_stat_statements https://www.postgresql.org/docs/current/pgstatstatements.html Our episode about track_planning https://postgres.fm/episodes/pg-stat-statements-track-planning pg_stat_monitor https://github.com/percona/pg_stat_monitor statement_timeout https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-STATEMENT-TIMEOUT ~~~What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!~~~Postgres FM is produced by: Michael Christofides, founder of pgMustard Nikolay Samokhvalov, founder of Postgres.ai With credit to: Jessie Draws for the elephant artwork
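The approximation under discussion boils down to one line of arithmetic: if you assume a query's latencies are roughly normally distributed - a strong assumption for real workloads - then p99 ≈ mean + 2.33 × stddev, since 2.33 is roughly the 99th-percentile z-score of a normal distribution. A minimal sketch using the mean_exec_time and stddev_exec_time columns that pg_stat_statements exposes (Postgres 13+ column names):

    # Sketch: approximate per-query p99 latency from pg_stat_statements.
    Z_P99 = 2.33  # ~99th-percentile z-score of a standard normal distribution

    def approx_p99_ms(mean_exec_time: float, stddev_exec_time: float) -> float:
        # only an estimate: real latency distributions are usually skewed
        return mean_exec_time + Z_P99 * stddev_exec_time

    # The same idea expressed directly in SQL:
    QUERY = """
    SELECT queryid,
           mean_exec_time + 2.33 * stddev_exec_time AS approx_p99_ms
    FROM pg_stat_statements
    ORDER BY approx_p99_ms DESC
    LIMIT 10;
    """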
This episode kicks off with a look at Rock RMS version 17.1 updates, including a fix for Insights reports and guidance on unknown marital statuses. The team also addresses and clears up rumors around vendor incentives. Then, John lays out your “Mission Possible” for the summer — with practical ideas to level up in SQL, Lava, UI styling, communication, learning management, and more. Discover how to grow, lead, and prepare for RX! Hosted on Acast. See acast.com/privacy for more information.
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
OctoSQL & Vulnerability Data OctoSQL is a neat tool to query files in different formats using SQL. This can, for example, be used to query the JSON vulnerability files from CISA or NVD and create interesting joins between different files. https://isc.sans.edu/diary/OctoSQL+Vulnerability+Data/32026 Mirai vs. Wazuh The Mirai botnet has now been observed exploiting a vulnerability in the open-source EDR tool Wazuh. https://www.akamai.com/blog/security-research/botnets-flaw-mirai-spreads-through-wazuh-vulnerability DNS4EU The European Union created its own public recursive resolver to offer a public resolver compliant with European privacy laws. This resolver is currently operated by ENISA, but the intent is to have a commercial entity operate and support it. https://www.joindns4.eu/ WordPress FAIR Package Manager Recent legal issues around different WordPress-related entities have made it more difficult to maintain diverse sources of WordPress plugins. With WordPress plugins usually being responsible for many of the security issues, the Linux Foundation has come forward to support the FAIR Package Manager, a tool intended to simplify the management of WordPress packages. https://github.com/fairpm
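As a flavor of the OctoSQL item: the tool takes a SQL query on the command line and treats local files as tables, so joining two vulnerability feeds is a single invocation. The file and field names below are invented - see the linked diary for working queries against the real CISA/NVD files:

    # Sketch: calling OctoSQL from Python to join two hypothetical JSON files.
    import subprocess

    query = (
        "SELECT kev.cveID, nvd.score "
        "FROM ./kev.json kev "
        "JOIN ./nvd.json nvd ON kev.cveID = nvd.id "
        "LIMIT 10"
    )
    # octosql takes the query as its single argument
    result = subprocess.run(["octosql", query], capture_output=True, text=True)
    print(result.stdout)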
This is episode 295 recorded on June 5th, 2025, where John & Jason talk the Power BI May 2025 Feature Summary including a new fabric roadmap tool, Copilot & AI enhancements, Translytical task flows, TMDL view enhancements, and more. For show notes please visit www.bifocal.show
Nikolay and Michael discuss logging in Postgres — mostly what to log, and why changing quite a few settings can pay off big time in the long term. Here are some links to things they mentioned: What to log https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT Our episode about Auditing https://postgres.fm/episodes/auditing Our episode on auto_explain https://postgres.fm/episodes/auto_explain Here are the parameters they mentioned changing: log_checkpoints, log_autovacuum_min_duration, log_statement, log_connections and log_disconnections, log_lock_waits, log_temp_files, log_min_duration_statement, log_min_duration_sample and log_statement_sample_rate. And finally, some very useful tools they meant to mention but forgot to! https://pgpedia.info https://postgresqlco.nf https://why-upgrade.depesz.com/show?from=16.9&to=17.5 ~~~What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!~~~Postgres FM is produced by: Michael Christofides, founder of pgMustard Nikolay Samokhvalov, founder of Postgres.ai With credit to: Jessie Draws for the elephant artwork
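As a sketch of what "changing quite a few settings" can look like in practice, here is one way to apply a batch of the parameters above with ALTER SYSTEM. The values are illustrative starting points only, not recommendations for every workload; ALTER SYSTEM needs superuser privileges, and these particular settings take effect on a config reload:

    # Sketch: apply several logging settings at once (illustrative values only).
    import psycopg2

    SETTINGS = {
        "log_checkpoints": "on",
        "log_autovacuum_min_duration": "0",    # log every autovacuum run
        "log_statement": "ddl",
        "log_connections": "on",
        "log_disconnections": "on",
        "log_lock_waits": "on",
        "log_temp_files": "0",                 # log all temp-file usage
        "log_min_duration_statement": "1000",  # milliseconds
    }

    conn = psycopg2.connect("dbname=postgres")  # placeholder connection string
    conn.autocommit = True  # ALTER SYSTEM can't run inside a transaction block
    with conn.cursor() as cur:
        for name, value in SETTINGS.items():
            # name comes from our own dict above; value is passed as a literal
            cur.execute(f"ALTER SYSTEM SET {name} = %s", (value,))
        cur.execute("SELECT pg_reload_conf()")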
In this kickoff episode of Decoded, Phillip Jackson sits down with Pini Yakuel to explore the concept of "positionless marketing" — a radical rethinking of how marketing teams operate in an AI-powered world. Drawing inspiration from the evolution of positionless basketball, Pini argues that marketing, like sports, is evolving toward roles defined by agility and capability, not titles or silos. The conversation weaves through leadership, startup culture, and how Optimove is enabling marketers to work faster, smarter, and more autonomously.Key TakeawaysPositionless marketing is a mindset — It's about autonomy, adaptability, and eliminating bottlenecks, not just rearranging the org chart.Modern teams thrive when roles are fluid — Inspired by positionless basketball, today's marketers succeed through cross-functionality and creative flexibility, not rigid specialization.Gen AI is the new creative exoskeleton — Like an Iron Man suit, AI tools enhance marketers' abilities, enabling faster, smarter, and more creative execution.Speed is the native language of startups — Startups operate positionlessly by necessity, while legacy orgs must dismantle silos and empower self-service to keep up.Positionless isn't chaos—it's craftsmanship — The best managers focus less on blocking and tackling, and more on elevating outcomes by distributing capability and unlocking human potential at scale.Key Quotes[00:12:25] “Let's look at the Renaissance man... the celebration of the wide gamut of human talent — that's what this could be.” – Pini[00:24:53] “It's not that departments will disappear. It's that the type of work they do will start to change.” – Pini[00:26:23] “Almost every person in our exec team started their job at Optimove by writing SQL.” – Pini[00:30:12] “A team should be small enough to be fed by two pizzas — and fully autonomous.” – Pini (on the Bezos principle)[00:34:07] “You're already positionless — that's why you get to focus on what actually matters: the work.” – Pini, on Phillip's agile team setupAssociated Links:Learn more about Optimove's platformsLearn more about Positionless MarketingCheck out Future Commerce on YouTubeCheck out Future Commerce+ for exclusive content and save on merch and printSubscribe to Insiders and The Senses to read more about what we are witnessing in the commerce worldListen to our other episodes of Future CommerceHave any questions or comments about the show? Let us know on futurecommerce.com, or reach out to us on Twitter, Facebook, Instagram, or LinkedIn. We love hearing from our listeners!
PostgreSQL is an open-source database known for its robustness, extensibility, and compliance with SQL standards. Its ability to handle complex queries and maintain high data integrity has made it a top choice for both start-ups and large enterprises. Heikki Linnakangas is a leading developer for the PostgreSQL project and a co-founder at Neon. The post Building PostgreSQL for the Future with Heikki Linnakangas appeared first on Software Engineering Daily.
Talk Python To Me - Python conversations for passionate developers
Python has many string formatting styles, which have been added to the language over the years. Early Python used the % operator to inject formatted values into strings. And we have str.format(), which offers several powerful styles. Both were verbose and indirect, so f-strings were added in Python 3.6. But these f-strings lacked security features (think little bobby tables) and they manifested as fully-formed strings to runtime code. Today we talk about the next evolution of Python string formatting for advanced use-cases (SQL, HTML, DSLs, etc.): t-strings. We have Paul Everitt, David Peck, and Jim Baker on the show to introduce this upcoming new language feature. Episode sponsors Posit Auth0 Talk Python Courses Links from the show Guests: Paul on X: @paulweveritt Paul on Mastodon: @pauleveritt@fosstodon.org Dave Peck on Github: github.com Jim Baker: github.com PEP 750 – Template Strings: peps.python.org tdom - Placeholder for future library on PyPI using PEP 750 t-strings: github.com PEP 750: Tag Strings For Writing Domain-Specific Languages: discuss.python.org How To Teach This: peps.python.org PEP 501 – General purpose template literal strings: peps.python.org Python's new t-strings: davepeck.org PyFormat: Using % and .format() for great good!: pyformat.info flynt: A tool to automatically convert old string literal formatting to f-strings: github.com Examples of using t-strings as defined in PEP 750: github.com htm.py issue: github.com Exploits of a Mom: xkcd.com pyparsing: github.com Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
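Since t-strings only land in Python 3.14 (PEP 750), here's a minimal sketch of the bobby-tables case the episode alludes to: unlike an f-string, a t-string evaluates to a Template whose raw values stay separate from the static text, so processing code can bind them as parameters instead of splicing them into the SQL. The render_sql helper below is invented for illustration:

    # Requires Python 3.14+ (PEP 750). A sketch of building safe SQL from a t-string.
    from string.templatelib import Interpolation, Template

    def render_sql(template: Template) -> tuple[str, list]:
        """Turn a t-string into (sql_with_placeholders, parameter_list)."""
        parts, params = [], []
        for item in template:  # yields static strings and Interpolations in order
            if isinstance(item, Interpolation):
                parts.append("?")          # placeholder instead of the raw value
                params.append(item.value)
            else:
                parts.append(item)
        return "".join(parts), params

    name = "Robert'); DROP TABLE students;--"
    sql, params = render_sql(t"SELECT * FROM students WHERE name = {name}")
    # sql    -> "SELECT * FROM students WHERE name = ?"
    # params -> ["Robert'); DROP TABLE students;--"]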