POPULARITY
When I found out that Django developer and Python Software Foundation chair Dawn Wages has a chapter in her upcoming Domain-Driven Django book called “Just Use Postgres”, I knew we had to get her on the show. In this episode of Talking Postgres with Claire Giordano, Dawn breaks down why so many Python and Django developers have such an affinity for Postgres. And we dive into the Djangonaut Space mentoring program (where contributors launch), learn why “free as in puppies” beats “free as in cake” for open source vibes, and dig into why Python is the second-best language for everything.Links mentioned in this episode:Project page: psycopgDocumentation: Psycopg 3 – PostgreSQL database adapter for PythonProject page: PostgreSQL open source projectGit repo: code for PostgreSQL.org websiteConference: PyCon US 2025, happening May 14-22 in PittsburghConference: PGConf.dev 2025 Schedule, happening May 13-16 in Montreal CanadaConference: Prague PostgreSQL Developer Day 2025 (P2D2) Schedule, which took place Jan 28-29Wikipedia page: Model-view-controller software design patternBook: Professional ASP.NET MVC 1.0, affectionately called “the four heads book”Podcast episode: Working in Public with Simon Willison & Marco SlotBlog: Simon Willison's TILs, aka Things I've learnedSimon Willison's Weblog: Here's how I use LLMs to help me write codeSimon Willison's Weblog: How I make annotated presentationsSurvey: Python Developers Survey 2023 ResultsPython Docs: What's new in Python 3.14Mentorship program: Djangonaut SpaceMentorship program: Media & Talks about Djangonaut SpacePodcast episode: Why mentor Postgres developers with Robert HaasSlides: PGConf EU 2024 talk by Claire Giordano about Contributions to Postgres, including maps showing how global the Postgres project isVideo of POSETTE 2024 talk by Paolo Melchiorre: Semantic search with Django, PostgreSQL, & pgvectorVideo of Citus Con 2023 talk: Maps with Django (and PostGIS), by Paolo MelchiorreVideo of Citus Con 2022 talk: Django with PostgreSQL superpowers, by Paolo MelchiorreConference: DjangoCon Africa 2025, happening August 11-15 in Tanzania Calendar invite: LIVE recording of Ep26 of Talking Postgres to happen on Wed Apr 02, 2025 with guest Bruce Momjian, to talk about Open Source Leadership
Michael and Nikolay are joined by Lukas Eder, the creator of jOOQ, to discuss what it is, some nice developer experience features it has, and some fun things he's come across from a Postgres perspective. Here are some links to things they mentioned:Lukas Eder https://postgres.fm/people/lukas-ederjOOQ https://www.jooq.org/ DSL https://en.wikipedia.org/wiki/Domain-specific_language SQL Dialects https://www.jooq.org/javadoc/latest/org.jooq/org/jooq/SQLDialect.htmlMERGE https://www.postgresql.org/docs/current/sql-merge.html match_recognize https://modern-sql.com/feature/match_recognize JOOQ, joy of SQL (talk by Kevin Davin) https://www.youtube.com/watch?v=8Ej47GZX9D8 BUFFERS enabled for EXPLAIN ANALYZE by default (commit for Postgres 18) https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c2a4078ebad71999dd451ae7d4358be3c9290b07 PostGIS https://postgis.net/ 10 SQL Tricks That You Didn't Think Were Possible (blog post by Lukas) https://blog.jooq.org/10-sql-tricks-that-you-didnt-think-were-possible/ jOOQ questions on Stack Overflow https://stackoverflow.com/questions/tagged/jooq Our episode on NULLs https://postgres.fm/episodes/nulls-the-good-the-bad-the-ugly-and-the-unknown ~~~What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!~~~Postgres FM is produced by:Michael Christofides, founder of pgMustardNikolay Samokhvalov, founder of Postgres.aiWith special thanks to:Jessie Draws for the elephant artwork
Mit Hilfe von Spatial Index-Strukturen einen schnellen Zugriff auf Geodaten gewährleistenDie Welt ist groß und wird weiter digitalisiert. Um alles Auffindbar und durchsuchbar zu machen, werden Geodaten von alles und jedem festgehalten: Nicht nur Längen- und Breitengrade (wenn es sich um die Erde handelt), sondern auch Höhe bzw. Tiefe, Zeit und etliche andere Metadaten. Diese Art von Daten nennen sich Spatial-Data oder auch Geospatial-Data.Um in großen Datenmengen einen schnellen Zugriff zu gewährleisten, verwenden Softwaresysteme, wie zB Datenbanken, Indexstrukturen, auch Indizes, genannt. Eine zusätzliche Form der Speicherung durch die Nutzung hoch optimierter Datenstrukturen. Welche Indexstrukturen werden eigentlich bei Geospatial-Daten genutzt? Das ist das Thema dieser Episode. Wir sprechen über die Anwendungsfälle von Geospatial-Data, warum eine klassische Struktur wie ein B-Baum nicht für diese Art von Daten geeignet ist, was Gridfiles, Quadtrees, KD-Trees, R-Trees und Geohashing ist und wie diese funktionieren, ob all dies selbst implementiert werden muss oder wir auf bereits existierende Datenbanksysteme zurückgreifen können und klären, was der Fluch der Dimensionalität ist und was dies mit dem Thema AI zu tun hat.Bonus: MongoDBs Marketing-Initiative auf Basis von Spatial-Index-Strukturen.Unsere aktuellen Werbepartner findest du auf https://engineeringkiosk.dev/partnersDas schnelle Feedback zur Episode:
Elizabeth Christensen discusses effective management of Postgres and PostGIS databases, focusing on operations, resource management, and the importance of indexing. Highlights
Elizabeth Christensen of Crunchy Data walks us through how to use open source tooling to avoid paying the Esri tax. It was a great tour of the options and also a nice vibe check of the industry. A headline here is she echoes former guest Stephanie May in endorsing DuckDB. She also wanted to pass on that a great way to find out more is to attend PostGIS Day! More here. On the topic of resources, Elizabeth has been very helpful and provided the following set of links: #PostgreSQL, open source relational database https://www.postgresql.org/ #PostGIS, open source GIS data store https://postgis.net/ # PostGIS Day 2024 https://www.crunchydata.com/community/events/postgis-day-2024 #Crunchy Data, Postgres and PostGIS services provider https://www.crunchydata.com/ # Open Source Geospatial Foundation https://www.osgeo.org/ #QGIS download, open source mapping https://www.qgis.org/ #Simple map SQL queries as QGIS layers https://www.crunchydata.com/blog/connecting-qgis-to-postgres-and-postgis #pg_tileserv - Tile server for PostGIS https://github.com/CrunchyData/pg_tileserv #pg_featureserv - API JSON server for PostGIS https://github.com/CrunchyData/pg_featureserv/ #OpenLayers project https://openlayers.org/ #OpenLayers + PgRouting + pg_tileserv + pg_featureserv sample code https://github.com/CrunchyData/pg_featureserv/tree/master/demo #PostGIS day videos https://www.youtube.com/@CrunchyDataPostgres#Crunchy Data's Postgres Playground https://www.crunchydata.com/developers/tutorials #Really cool open source GIS people to follow Paul Ramsey @ Crunchy Data / cleverelephant Regina Obe @ Paragon Ryan Lambert @ RustProofLabs Cliff Patterson @ Luna Geospatial Matt Forrest @ Whereabots #Elizabeth's crunchy blogs https://www.crunchydata.com/blog/author/elizabeth-christensen #Elizabeth's LinkedIn https://www.linkedin.com/in/elizabeth-garrett-christensen/ #Elizabeth's Twitter https://twitter.com/sqlliz THE GEOSPATIAL INDEX The Geospatial Index is a comprehensive listing of all publicly traded geospatial businesses worldwide. Why? The industry is growing at ~5% annually (after inflation and after adjusting for base rates). This rate varies significantly, however, by sub index. For $480,000 to start, this growth rate is $5,000,000 over a working life. This channel, Bluesky account, newsletter, watchlist and podcast express the view that you are serious about geospatial if you take the view of an investor, venture capitalist or entrepreneur. You are expected to do your own research. This is not a replacement for that. This is not investment advice. Consider it entertainment. NOT THE OPINION OF MY EMPLOYER NOT YOUR FIDUCIARY NOT INVESTMENT ADVICE Bluesky: https://bsky.app/profile/geospatialindex.bsky.social LinkedIn: https://uk.linkedin.com/in/geospatialindex Watchlist: https://www.tradingview.com/watchlists/123254792/ Newsletter: https://www.geospatial.money/ Podcast: https://open.spotify.com/show/5gpQUsaWxEBpYCnypEdHFC
Have you ever eavesdropped on other people's conversations? Former co-host Pino de Candia joins Claire Giordano on this episode of Talking Postgres (formerly Path To Citus Con) to share their experience on podcasting about Postgres. Is listening to a podcast the next best thing to being in the hallway track at a conference? Does it bring the community together? How beneficial has it been to have a parallel chat while recording live? What is the “sweet spot” for the number of guests to have per episode? Is structure important for a podcast? Also discussed: this podcast's rename, a walk down memory lane reflecting on the past 16 episodes, and shout-outs to other podcasts about Postgres.Links mentioned in this episode:Cal invite for next Ep18 of Talking Postgres with David RowleyPodcast: Talking Postgres Talking Postgres Ep01: Working in public on open source with Simon Willison & Marco SlotEp02: How to get Postgres ready for the next 100 million usersEp03: Why giving talks at Postgres conferences matterswith Álvaro Herrera and Boriss MejíasVideo: Postgres Storytelling: What's going on with Synchronous Replication | POSETTE 2024 by Boriss MejíasVideo: Postgres Storytelling: Support in the Darkest Hour | Citus Con 2023, by Boriss MejíasEp04: How I got started as a dev and in Postgres with Melanie Plageman & Thomas MunroEp05: My favorite ways to learn more about PostgreSQL with Grant Fritchey & Ryan BoozVideo: Fibonacci Spirals and Ways to Contribute to Postgres—Beyond Code | Citus Con 2022, by Claire GiordanoEp06: You're probably already using Postgres with Chelsea Dole & Floor DreesWikipedia: Object–relational mappingVideo: How to work with other people | POSETTE 2024, by Floor Drees and Jimmy AngelakosEp07: Why people care about PostGIS and Postgres with Paul Ramsey & Regina ObeEp08: How I got started as a developer (& in Postgres) with Andres Freund & Heikki LinnakangasSatya Nadella's LinkedIn post about Andres Freund's xz backdoor discoveryEp09: Solving every data problem in SQL with Dimitri Fontaine & Vik FearingWikipedia: Advent of CodeEp10: My Journey into Postgres Monitoring with Lukas Fittl & Rob TreatEp11: My Journey into Performance Benchmarking with Jelte Fennema-Nio & Marco SlotEp12: From developer to PostgreSQL specialist with Derk van VeenEp13: Spinning up on Postgres & AI with Arda AytekinEp14: Becoming expert at using PostgreSQL with Chris EllisVideo: Electric Elephants | pgDay Paris 2024, by Chris EllisEp15: My Journey to Explaining Explain with Michael ChristofidesPodcast: Postgres FMEp16: The Making of POSETTE: An Event for Postgres with Teresa Giacomini & Aaron WislangPodcast: Scaling PostgreSQLPodcast: Postgres FM Ep99 with guest Claire Giordano: Sponsoring the communityPodcast: Hacking PostgresPlaylist: 5mins of Postgres
It's not a conference unless you can confer, right? POSETTE organizers Teresa Giacomini and Aaron Wislang join Claire Giordano on the Path To Citus Con podcast to share backstage perspectives on the making of POSETTE: An Event for Postgres. How do you feel about captions: love or hate? Should livestream talks be pre-recorded or presented live? Why rename from Citus Con to POSETTE? Where did the inspiration for POSETTE come from? And can the hallway track at a conference actually be fun—if it is virtual? Also discussed: Avett Brothers lyrics, the surprising number of POSETTE speakers with chickens, and the existential question of whether the work in organizing a conference is worth it.Links mentioned in this episode: Blog post: What's in a name? About the naming of POSETTE: An Event for PostgresFOSDEM: the conference whose name inspired the POSETTE namePlaylist of all 42 talks from POSETTE: An Event for Postgres 2024Playlist of the 4 unique livestreams from POSETTE 2024 CFP is open: PGDay Lowlands 2024 Call for Papers will close July 9, 2024Virtual conference that POSETTE organizers were inspired by: P99 ConfDiscord: Microsoft Open Source Discord, Home for virtual hallway track for #posetteconfAdam Wølk's speaker page for POSETTESpeaker interview with Polina Bungina at POSETTEBlog post: About Talk Selection for POSETTE: An Event for Postgres 2024, by Claire GiordanoBlog post: Building the PGConf.dev Programme, by Paul RamseypgDay Paris 2024 note about talk selection processKeynote: All The Postgres Things at Microsoft, POSETTE edition, by Charles FeddersenKeynote: The Open Source Geospatial Community, PostGIS, & Postgres, by Regina ObeKeynote: Why I love open source development & what I learned from K8s, by Sarah NovotnyKeynote: A Walking Tour of PostgreSQL, by Thomas MunroLyrics from The Perfect Space by The Avett BrothersVideo: Lessons Learned benchmarking & profiling distributed PostgreSQL, by Lotte FeliusVideo: Postgres Storytelling: Support in the Darkest Hour | Citus Con 2023, by Boriss Mejías Video: Postgres Storytelling: What's going on with Synchronous Replication?, by Boriss MejíasVideo: Vindicating ZFS with PostgreSQL: Unleashing the Power of Scalability, includes a bit of jazz music by Federico CampoliBlog post: Ultimate Guide to POSETTE: An Event for Postgres, 2024 editionSocial post: Tweet by Kelsey Hightower with advice to conference organizersVideo from PGConfEU 2023: So you want a PGDay in your city, by Henrietta Dombrovskaya & Teresa GiacominiBlog post: The Story Behind the Activity Book for Postgres, by Teresa Giacomini
Summary Eddie Pickle from Crunchy Data talks about modernizing geospatial workflows by starting with the database and working with Crunchy Data and the Postgres database. Moving to an open source database can create a more modern and effective architecture, with technical and infrastructural benefits. Innovating from within government and investing in open source software is encouraged. Crunchy Data provides foundation database software and support for modernizing geospatial workflows using Postgres and PostGIS. Highlights
Use Postgres as a managed service on Azure. As you build generative AI apps, explore advantages of Azure Database for Postgres Flexible Server such as integration with Azure AI services, as well as extensibility and compatibility, integrated enterprise capabilities to protect data, and controls for managing business continuity. Charles Feddersen, product team lead for Postgres on Azure, joins host Jeremy Chapman to share how Flexible Server is a complete Postgres platform for enterprise and developers. ► QUICK LINKS: 00:00 - Azure Database for PostgreSQL Flexible Server 00:51 - Microsoft and Postgres 01:40 - Open-source Postgres 03:18 - Vector embeddings for data 04:32 - How it works with an app 06:59 - Azure AI Vision 08:14 - Azure AI extension using PostgreSQL 09:37 - Text generation using Azure AI extension 10:30 - High availability and disaster recovery 12:45 - Wrap up ► Link References Get started with the Azure Database for Postgres flexible server at https://aka.ms/postgresql Stay current with all the updates at https://aka.ms/AzurePostgresBlog ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics #PostgreSQL #OpenSource #Azure #Database
Summary: Feeding 911 data from QGIS/PostGIS is discussed by Randal Hale in his FOSS4GNA 2023 presentation. He shares insights into transitioning from proprietary to open-source tools, recounting experiences with a 911 proof-of-concept project in the Caribbean. The talk covers setting up PostGIS servers, overcoming challenges, and expanding GIS capabilities over time. Noteworthy aspects include leveraging GitHub for collaborative development, improving SQL scripts for efficiency, and adopting GeoPackage for streamlined data management. Highlights:
Summary David Bitner discusses TiPG, an OGC API features service that serves OGC features and tiles directly from Postgres and PostGIS databases. TiPG leverages the power of PostGIS, FastAPI, and other standard libraries, making it easy to display spatial data without extensive configuration. The project supports various OGC features, enables full filtering using CQL, and utilizes the FastAPI framework for efficient service development. Additionally, Bitner introduces EAPI, an opinionated bundle of tools, including TiPG, for seamless integration. The ease of use, templating capabilities, and support for set-returning functions make TiPG a versatile solution for spatial data services. Highlights
Ever wonder what it's like to build a geospatial extension for PostgreSQL? Join us for Episode 6 of Hacking Postgres, where Ry sits down with Regina Obe and Paul Ramsey, two of the key creators of PostGIS.
The MapScaping Podcast - GIS, Geospatial, Remote Sensing, earth observation and digital geography
Embracing Open-Source Geospatial Technology is easy as an individual but what if you want your organization to use FOSS4G How do you get strategic buy-in? It turns out that the software does not sell itself and that even in the age of AI we still have to convince a human if we want organizational change to to happen. I think the temptation is to say hey look at this long list of specifications and notice how FOSS4G is often better or equal to the close source equivalent. Or hey look at the price tag … it costs nothing which is way cheaper than this other thing which costs more than nothing. While this might be all the argumentation you need in some cases … in general, making change happen is hard, and it's going to require more than that. That's why I have invited Todd Barr back on the podcast to walk us through what it takes to get an organization to Embrace Open-Source Geospatial Technology. Here are a few of the key points Getting buy-in for open-source software and addressing concerns about security and IP protection Perspectives of External and internal stakeholders on open-source Software Importance of collaboration, empathy, and understanding in decision-making and stakeholder management Challenges of implementing open source technology in a corporate environment Benefits of using open source solutions, such as faster analysis, increased stability, and flexibility for innovation Accessibility and support in open source communities, including direct interaction with developers and availability of external consultants Customization and development work required for creating vertical solutions with open-source components Finding skilled developers and training them in geospatial technology Cost-saving advantages of open source technology in cloud computing Leading arguments for implementing open source software: cost savings and freedom to modify and customize Advocacy for supporting and integrating with the open source community. The last time Todd was on the podcast we talked about Leadership and Mentorship in the Geospatial community https://mapscaping.com/podcast/skills-leadership-mentorship-and-the-geospatial-community/ If you are interested in FOSS4G you might enjoy these previous episodes A Business built on Open Source GIS https://mapscaping.com/podcast/a-business-built-on-open-source-gis/ Monetizing an open-source geospatial project https://mapscaping.com/podcast/monetizing-an-open-source-geospatial-project/ Or just scroll through the archive to find episodes about QGIS, PostGIS, Geoserver, Geonode, Python, and a bunch of other open-source projects I could use some support! please consider supporting this podcast on Patreon https://www.patreon.com/MapScaping
The geospatial world of Postgres is so much more than mapping. Paul Ramsey and Regina Obe join Claire Giordano and Pino de Candia to explore the "where" on Path To Citus Con, the podcast for developers who love Postgres. What are some of the unexpected use cases for PostGIS, one of the most popular extensions to Postgres? How have Large Language Models helped in the geospatial world? Can you really model almost anything with pgRouting? “Where” is the universal foreign key. They talk about communities and governments using geospatial data and how it's very difficult to build a database that does not have some sort of spatial component to it. Why do people care about PostGIS? Find out more about OpenStreetMap and its place in the open source geospatial world. Finally, Paul and Regina share the origin story for the PostGIS extension to Postgres. Links mentioned in this episode, in the order they were covered:PostGIS: https://postgis.net/ FOSS4G NA: https://foss4gna.org/ Ushahidi: https://www.ushahidi.com/ Humanitarian Open Street Map: https://www.hotosm.org/ OpenStreetMap: https://www.openstreetmap.org/ pgRouting: https://pgrouting.org/ Regina Obe's books: https://locatepress.com/book/pgr Regina's book “PostGIS In Action”: https://www.manning.com/books/postgis-in-action-third-edition?experiment=B MobilityDB: https://github.com/MobilityDB/MobilityDB Blog: Analyzing GPS trajectories at scale with Postgres, PostGIS, MobilityDB, & Citus: https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/analyzing-gps-trajectories-at-scale-with-postgres-mobilitydb-amp/ba-p/1859278 OSGeo: https://www.osgeo.org/ Simon Willison's presentation on "The weird world of LLMs": https://simonwillison.net/2023/Aug/3/weird-world-of-llms/ QGIS: https://qgis.org/en/site/ QGIS “Gentle Introduction” documentation: https://docs.qgis.org/3.28/en/docs/gentle_gis_introduction/ PostGIS Workshops: https://postgis.net/documentation/training/#workshop Locate Press: https://locatepress.com/ FedGeoDay 2023: https://www.fedgeo.us/about-2023 Schedule of FOSS4G NA 2023: https://foss4gna.org/schedule.html#schedule FOSS4G Brazil, December 2024: https://www.osgeo.org/foundation-news/foss4g-2024-has-been-awarded-to-belem-brazil/ Paul's keynote talk at PGConfEU in Lisbon in 2018, titled "Put some "where" in your WHERE clause": https://docs.google.com/presentation/d/1xyXA4-0wmNX7WfiLeH9h10bIkZxrej278-mMaClagys/edit?usp=sharing
A VerySpatial Podcast | Discussions on Geography and Geospatial Technologies
News: Hillary hits the west coast of North America Planet completes acquisition, growing Earth Data platform PostGIS 3.4 released Microsoft AI suggests food bank as a “cannot miss” tourist spot in Canada Topic: Highlighting Geography (and geospatial) as students return to classrooms Events: Geography 2050: 16-17 November 2023, New York City, NY 6th National Cohesive Wildland Fire Mgmt Strategy Workshop: 6-10 November, Santa Fe, NM SotM 2024: 6-8 September, Nairobi, Kenya
Drop the fear, not the tables. Chelsea Dole and Floor Drees join Claire Giordano and Pino de Candia to explore the app developer perspective on Path To Citus Con, the podcast for developers who love Postgres. If you're an app developer, you're probably already using Postgres. Now what? What do you need to know? Are databases your best friend or your worst enemy? They talk about the steps to becoming more Postgres-savvy. Should you go depth-first or breadth-first in order to learn more about the underlying database? What are Postgres extensions and how do you go about adopting them? Find out more about the strength of what Floor calls “boring technology.” Finally, both guests tell stories of their non-traditional entries into Postgres that led to their deep work with databases today.Links mentioned in this episode: Fintech startup where Chelsea works, Brex: https://www.brex.com/ Open source data platform where Floor works, Aiven: https://aiven.io/ “Mission-Critical PostgreSQL Databases on Kubernetes" by Karen Jex at KubeCon Europe 2023: https://www.youtube.com/watch?v=_NBQ9JmOMko The Imposters Handbook by Rob Conery: https://bigmachine.io/products/the-imposters-handbook/ Designing Data-Intensive Applications by Martin Kleppmann: https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/ Devopsdays Amsterdam: https://devopsdays.org/events/2023-amsterdam/welcome/ Building Community in Open Source with Floor Drees on the Last Week in AWS podcast: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/building-community-in-open-source-with-floor-drees/ pg_stat_statements: https://www.postgresql.org/docs/current/pgstatstatements.html PostGIS: https://postgis.net/ “Postgres tips for optimizing Django & Python performance, from my PyCon workshop” by Louise Grandjonc: https://www.citusdata.com/blog/2020/05/20/postgres-tips-for-django-and-python/ Video of Louise's PyCon talk, Optimize Django & Python performance with Postgres superpowers: https://youtu.be/dyBLGjCQJHsGrafana: https://grafana.com/pganalyze: https://pganalyze.com/ auto_explain: https://www.postgresql.org/docs/current/auto-explain.html EXPLAIN ANALYZE in PostgreSQL: https://www.postgresql.org/docs/current/sql-explain.html psql: https://www.postgresguide.com/utilities/psql/ Path To Citus Con Episode 05: My favorite ways to learn more about PostgreSQL with Grant Fritchey and Ryan Booz: https://pathtocituscon.transistor.fm/episodes/my-favorite-ways-to-learn-more-about-postgresql-with-grant-fritchey-and-ryan-booz Coffee Meets Bagel (dating app): https://coffeemeetsbagel.com/
The MapScaping Podcast - GIS, Geospatial, Remote Sensing, earth observation and digital geography
Distributing Geospatial Data - Every wondered why you might what to do this? Or maybe you understand the why but are unsure about the how? Perhaps you have heard people talk about partitioning data or sharding data, you might have heard some of these terms used in the context of enterprise-scale geospatial systems and parallel processing and thought … Wow … that sounds daunting! This podcast episode is meant to be a soft introduction to the world of distributed geospatial databases and some of the concepts surrounding them. Along the way, you will be introduced to something called Apache Sedona which is a cluster computing system for processing large-scale spatial data, and hear the surprising stat that 97% of enterprise data remains unused! … perhaps because of the lack of context? You can connect with Mo Sarwat on Twitter https://twitter.com/MoSarwat Or at https://mosarwat.org/ During the conversation, we mention PostgreSQL and PostGIS a few times which are topics that we have covered in previous podcast episodes Servicing Dynamic Vector Tiles from PostGIS https://mapscaping.com/podcast/dynamic-vector-tiles-straight-from-the-database/ An introduction to PostgreSQL https://mapscaping.com/podcast/postgresql-an-open-source-geospatial-database-for-gis-practitioners/ Spatial SQL https://mapscaping.com/podcast/spatial-sql-gis-without-the-gis/ Toward the end of the conversation, we touch on the idea of cloud-native geospatial formats, and if you are interested in understanding this concept you might find these two previous episodes helpful Cloud Optimized Point Clouds https://mapscaping.com/podcast/cloud-optimized-point-clouds/ Cloud Native Geospatial https://mapscaping.com/podcast/cloud-native-geospatial/ If you have any questions or comments please feel free to reach out! I would love to hear from you
In this episode of Scaling Postgres, we discuss how PG15 helps reduce replication lag, how to get the parameters used in prepared statements, PostGIS Day and how to use multiple pgbouncers. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode: https://pganalyze.com/blog/5mins-postgres-15-maintenance-io-concurrency-reduce-replication-lag https://www.cybertec-postgresql.com/en/explain-that-parameterized-statement/ https://blog.rustprooflabs.com/2022/11/route-the-interesting-things-postgis-day2022 https://www.crunchydata.com/blog/postgres-at-scale-running-multiple-pgbouncers https://www.crunchydata.com/blog/postgres-query-boost-using-any-instead-of-in https://fluca1978.github.io/2022/11/16/psqlEmacsClient.html https://people.planetpostgresql.org/devrim/index.php?/archives/121-Improved-aarch64-arm64-support-is-available-in-the-PostgreSQL-RPM-repository..html https://postgrespro.com/blog/pgsql/5969859 https://www.crunchydata.com/blog/multi-cloud-strategies-with-crunchy-postgres-for-kubernetes https://pgsqlpgpool.blogspot.com/2022/11/dynamic-spare-process-management-in.html https://www.enterprisedb.com/blog/how-use-external-database-pgadmin-user-settings https://postgres.fm/episodes/hot-updates https://postgresql.life/post/onder_kalaci/ https://www.rubberduckdevshow.com/episodes/66-top-down-or-bottom-up-testing/
In this episode of Scaling Postgres, we discuss why and what to do when updates are slow, use cases for lateral joins, moving from serial to identity for auto-incrementing ids and changes to Postgres 15. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode: https://www.cybertec-postgresql.com/en/why-are-my-postgresql-updates-getting-slower/ https://pganalyze.com/blog/5mins-postgres-debug-UPDATE-bloated-tables-auto-explain-pageinspect https://sqlfordevs.com/for-each-loop-lateral-join https://adamj.eu/tech/2022/10/21/migrate-postgresql-ids-serial-identity-django-4.1/ https://blog.rustprooflabs.com/2022/10/postgres-15-config-changes https://pgconfig.rustprooflabs.com/param/change/14/15 https://www.enterprisedb.com/blog/exclusive-backup-mode-finally-removed-postgres-15 https://fluca1978.github.io/2022/10/21/PostgreSQL15JsonLogs.html https://www.enterprisedb.com/blog/postgres-15-adds-copy-header-and-matching https://www.endpointdev.com/blog/2022/10/upgrading-postgresql-14-to-15-on-fedora-centos-rocky-alma-linux/ https://www.bostongis.com/blog/index.php?/archives/278-Using-pg_upgrade-to-upgrade-PostgreSQL-9.6-PostGIS-2.4-to-PostgreSQL-15-3.3-on-Yum.html https://www.softwareandbooz.com/postgresql-at-the-pass-data-community-summit-2022/ https://postgres.fm/episodes/stored-procedures https://postgresql.life/post/cedric_duprez/ https://www.rubberduckdevshow.com/episodes/62-staying-off-the-rails-with-joel-drapper/
Here are links to a few things we mentioned: EXPLAIN parameters (PostgreSQL documentation)EXPLAIN (ANALYZE) needs BUFFERS (blog post by Nikolay)Using BUFFERS for query optimization (blog post by Michael) H3 indexes on PostGIS data (blog post by Ryan Lambert)Turning BUFFERS on by default (latest patch)pgMustard explain.depesz.comexplain.dalibo.comDatabase Lab Engine------------------------What did you like or not like? What should we discuss next time? Let us know by tweeting us on @samokhvalov and @michristofidesIf you would like to share this episode, here's a good link (and thank you!)Postgres FM is brought to you by:Nikolay Samokhvalov, founder of Postgres.aiMichael Christofides, founder of pgMustardWith special thanks to:Jessie Draws for the amazing artwork
About BenjaminBenjamin Anderson is CTO, Cloud at EDB, where he is responsible for developing and driving strategy for the company's Postgres-based cloud offerings. Ben brings over ten years' experience building and running distributed database systems in the cloud for multiple startups and large enterprises. Prior to EDB, he served as chief architect of IBM's Cloud Databases organization, built an SRE practice at database startup Cloudant, and founded a Y Combinator-funded hardware startup.Links Referenced: EDB: https://www.enterprisedb.com/ BigAnimal: biganimal.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.I come bearing ill tidings. Developers are responsible for more than ever these days. Not just the code that they write, but also the containers and the cloud infrastructure that their apps run on. Because serverless means it's still somebody's problem. And a big part of that responsibility is app security from code to cloud. And that's where our friend Snyk comes in. Snyk is a frictionless security platform that meets developers where they are - Finding and fixing vulnerabilities right from the CLI, IDEs, Repos, and Pipelines. Snyk integrates seamlessly with AWS offerings like code pipeline, EKS, ECR, and more! As well as things you're actually likely to be using. Deploy on AWS, secure with Snyk. Learn more at Snyk.co/scream That's S-N-Y-K.co/screamCorey: This episode is sponsored by our friends at Fortinet. Fortinet's partnership with AWS is a better-together combination that ensures your workloads on AWS are protected by best-in-class security solutions powered by comprehensive threat intelligence and more than 20 years of cybersecurity experience. Integrations with key AWS services simplify security management, ensure full visibility across environments, and provide broad protection across your workloads and applications. Visit them at AWS re:Inforce to see the latest trends in cybersecurity on July 25-26 at the Boston Convention Center. Just go over to the Fortinet booth and tell them Corey Quinn sent you and watch for the flinch. My thanks again to my friends at Fortinet.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode is brought to us by our friends at EDB. And not only do they bring us this promoted episode, they bring me their CTO for Cloud, Benjamin Anderson. Benjamin, thank you so much for agreeing to suffer the slings and arrows that I will no doubt throw at you in a professional context, because EDB is a database company, and I suck at those things.Benjamin: [laugh]. Thanks, Corey. Nice to be here.Corey: Of course. So, databases are an interesting and varied space. I think we can all agree—or agree to disagree—that the best database is, of course, Route 53, when you misuse TXT records as a database. Everything else is generally vying for number two. EDB was—back in the days that I was your customer—was EnterpriseDB, now rebranded as EDB, which is way faster to say, and I approve of that.But you were always the escalation point of last resort. When you're stuck with a really weird and interesting Postgres problem, EDB was where you went because if you folks couldn't solve the problem, it was likely not going to get solved. I always contextualized you folks as a consulting shop. That's not really what you do. You are the CTO for Cloud.And, ah, interesting. Do databases behave differently in cloud environments? Well, they do when you host them as a managed service, which is an area you folks have somewhat recently branched into. How'd you get there?Benjamin: Ah, that's interesting. So, there's a bunch of stuff to unpack there. I think EDB has been around for a long time. It's something like 13, 14, 15 years, something like that, and really it's just been kind of slowly growing, right? We did start very much as a product company. We built some technology to help customers get from Oracle database on to Postgres, way back in 2007, 2008.That business has just slowly been growing. It's been going quite well. Frankly, I only joined about 18 months ago, and it's really cool tech, right? We natively understand some things that Oracle is doing. Customers don't have to change their schemas to migrate from Oracle to Postgres. There's some cool technology in there.But as you point out, I think a lot of our position in the market has not been that product focused. There's been a lot of people seeing us as the Postgres experts, and as people who can solve Postgres problems, in general. We have, for a long time, employed a lot of really sharp Postgres people. We still employ a lot of really sharp Postgres people. That's very much, in a lot of ways, our bread and butter. That we're going to fix Postgres problems as they come up.Now, over the past few years, we've definitely tried to shift quite a bit into being more of a product company. We've brought on a bunch of people who've been doing more enterprise software product type development over the past few years, and really focusing ourselves more and more on building products and investing in ourselves as a product company. We're not a services company. We're not a consulting company. We do, I think, provide the best Postgres support in the market. But it's been a journey. The cloud has been a significant part of that as well, right? You can't get away.Corey: Oh, yeah. These days, when someone's spinning up a new workload, it's unlikely—in most cases—they're going to wind up spinning up a new data center, if they don't already have one. Yes, there's still a whole bunch of on-prem workloads. But increasingly, the default has become cloud. Instead of, “Why cloud?” The question's become, “Why not?”Benjamin: Right, exactly. Then, as people are more and more accepting of managed services, you have to be a product company. You have to be building products in order to support your database customers because what they want his managed services. I was working in managed databases and service, something like, ten years ago, and it was like pulling teeth. This is after RDS launched. This was still pulling teeth trying to get people to think about, oh, I'm going to let you run my database. Whereas, now obviously, it's just completely different. We have to build great products in order to succeed in the database business, in general.Corey: One thing that jumped out at me when you first announced this was the URL is enterprisedb.com. That doesn't exactly speak to, you know, non-large companies, and EDB is what you do. You have a very corporate logo, but your managed service is called BigAnimal, which I absolutely love. It actually expresses a sense of whimsy and personality that I can no doubt guess that a whole bunch of people argued against, but BigAnimal, it is. It won through. I love that. Was that as contentious as I'm painting it to be, or people actually have a sense of humor sometimes?Benjamin: [laugh]. Both, it was extremely contentious. I, frankly, was one of the people who was not in favor of it at first. I was in favor of something that was whimsical, but maybe not quite that whimsical.Corey: Well, I call it Postgres-squeal, so let's be very clear here that we're probably not going to see eye-to-eye on most anything in pronunciation things. But we can set those differences aside and have a conversation.Benjamin: Absolutely, no consider that. It was deliberate, though, to try to step away a little bit from the blue-suit-and-tie, enterprise, DB-type branding. Obviously, a lot of our customers are big enterprises. We're good at that. We're not trying to be the hip, young startup targeting business in a lot of ways. We have a wide range of customers, but we want to branch out a little bit.Corey: One of the challenges right now is if I spin up an environment inside of AWS, as one does, and I decide I certainly don't want to take the traditional approach of running a database on top of an EC2 instance—the way that we did in the olden days—because RDS was crappy. Now that it's slightly less crappy, that becomes a not ideal path. I start looking at their managed database offerings, and there are something like 15 distinct managed databases that they offer, and they never turn anything off. And they continue to launch things into the far future. And it really feels, on some level, like 20 years from now—what we call a DBA today—their primary role is going to look a lot more like helping a company figure out which of Amazon's 40 managed databases is the appropriate fit for this given workload. Yet, when I look around at what the industry has done, it seems that when we're talking about relational databases. Postgres has emerged back when I was, more or less, abusing servers in person in my data center days, it was always MySQL. These days, Postgres is the de facto standard, full stop. I admit that I was mostly keeping my aura away from any data that was irreplaceable at that time. What happened? What did I miss?Benjamin: It's a really good question. And I certainly am not a hundred percent on all the trends that went on there. I know there's a lot of folks that are not happy about the MySQL acquisition by Oracle. I think there's a lot of energy that was adopted by the NoSQL movement, as well. You have people who didn't really care about transactional semantics that were using MySQL because they needed a place to store their data. And then, things like MongoDB and that type of system comes along where it's significantly easier than MySQL, and that subset of the population just sort of drifts away from MySQL.Corey: And in turn, those NoSQL projects eventually turn into something where, okay, now we're trying to build a banking system on top of it, and it's, you know, I guess you can use a torque wrench as a hammer if you're really creative about it, but it seems like there's a better approach.Benjamin: Yeah, exactly. And those folks are coming back around to the relational databases, exactly. At the same time, the advancements in Postgres from the early eight series to today are significant, right? We shouldn't underestimate how much Postgres has really moved forward. It wasn't that long ago that replication was hardly a thing and Postgres, right? It's been a journey.Corey: One thing that your website talks about is that you accelerate your open-sourced database transformation. And this is a bit of a hobby horse I get on from time to time. I think that there are a lot of misunderstandings when people talk about this. You have the open-source purists—of which I shamefully admit I used to be one—saying that, “Oh, it's about the idea of purity and open and free as in software.” Great. Okay, awesome. But when I find that corporate customers are talking about when they say open-source database, they don't particularly care if they have access to the source code because they're not going to go in and patch a database engine, we hope. But what they do care about is regardless of where they are today—even if they're perfectly happy there—they don't want to wind up beholden to a commercial database provider, and/or they don't want to wind up beholden to the environment that is running within. There's a strategic Exodus that's available in theory, which on some level serves to make people feel better about not actually Exodus-ing, but it also means if they're doing a migration at some point, they don't also have to completely redo their entire data plan.Benjamin: Yeah, I think that's a really good point. I mean, I like to talk—there's a big rat's nest of questions and problems in here—but I generally like talk to about open APIs, talk about standards, talk about how much is going to have to change if you eliminate this vendor. We're definitely not open-source purists. Well, we employ a lot of open-source purists. I also used to be an open—Corey: Don't let them hear you say that, then. Fair enough. Fair enough.Benjamin: [laugh] we have proprietary software at EDB, as well. There's a kind of wide range of businesses that we participate in. Glad to hear you also mention this where-it's-hosted angle, as well. I think there's some degree to which people are—they figured out that having at least open APIs or an open-source-ish database is a good idea rather than being beholden to proprietary database. But then, immediately forget that when they're picking a cloud vendor, right? And realizing that putting their data in Cloud Vendor A versus Cloud Vendor B is also putting them in a similar difficult situation. They need to be really wary of when they're doing that. Now, obviously, I work at an independent software company, and I have some incentive to say this, but I do think it's true. And you know, there's meaningful data gravity risk.Corey: I assure you, I have no incentive. I don't care what cloud provider you're on. My guidance has been, for years, to—as a general rule—pick a provider, I care about which one, and go all in until there's a significant reason to switch. Trying to build an optionality, “Oh, everything we do should be fully portable at an instance notice.” Great. Unless you're actually doing it, you're more or less, giving up a whole bunch of shortcuts and feature velocity you could otherwise have, in the hopes of one day you'll do a thing, but all the assumptions you're surrounded by baked themselves in regardless. So, you're more or less just creating extra work for yourself for no defined benefit. This is not popular in some circles, where people try to sell something that requires someone to go multi-cloud, but here we are.Benjamin: No, I think you're right. I think people underestimate the degree to which the abstractions are just not very good, right, and the degree to which those cloud-specific details are going to leak in if you're going to try to get anything done, you end up in kind of a difficult place. What I see more frequently is situations where we have a big enterprise—not even big, even medium-sized companies where maybe they've done an acquisition or two, they've got business units that are trying to do things on their own. And they end up in two or three clouds, sort of by happenstance. It's not like they're trying to do replication live between two clouds, but they've got one business unit in AWS and one business unit and Azure, and somebody in the corporate—say enterprise architect or something like that—really would like to make things consistent between the two so they get a consistent security posture and things like that. So, there are situations where the multi-cloud is a reality at a certain level, but maybe not at a concrete technical level. But I think it's still really useful for a lot of customers.Corey: You position your cloud offering in two different ways. One of them is the idea of BigAnimal, and the other—well, it sort of harkens back to when I was in sixth grade going through the American public school system. They had a cop come in and talk to us and paint to this imaginary story of people trying to push drugs. “Hey, kid. You want to try some of this?” And I'm reading this and it says EDB, Postgres for Kubernetes. And I'm sent back there, where it's like, “Hey, kid. You want to run your stateful databases on top of Kubernetes?” And my default answer to that is good lord, no. What am I missing?Benjamin: That's a good question. Kubernetes has come a long way—I think is part of that.Corey: Oh, truly. I used to think of containers as a pure story for stateless things. And then, of course, I put state into them, and then, everything exploded everywhere because it turns out, I'm bad at computers. Great. And it has come a long way. I have been tracking a lot of that. But it still feels like the idea being that you'd want to have your database endpoints somewhere a lot less, I guess I'll call it fickle, if that makes sense.Benjamin: It's an interesting problem because we are seeing a lot of people who are interested in our Kubernetes-based products. It's actually based on—we recently open-sourced the core of it under a project called cloud-native PG. It's a cool piece of technology. If you think about sort of two by two. In one corner, you've got self-managed on-premise databases. So, you're very, very slow-moving, big-iron type, old-school database deployments. And on the opposite corner, you've got fully-managed, in the cloud, BigAnimal, Amazon RDS, that type of thing. There's a place on that map where you've got customers that want a self-service type experience. Whether that's for production, or maybe it's even for dev tests, something like that. But you don't want to be giving the management capability off to a third party.For folks that want that type of experience, trying to build that themselves by, like, wiring up EC2 instances, or doing something in their own data center with VMware, or something like that, can be extremely difficult. Whereas if you've go to a Kubernetes-based product, you can get that type of self-service experience really easily, right? And customers can get a lot more flexibility out of how they run their databases and operate their databases. And what sort of control they give to, say application developers who want to spin up a new database for a test or for some sort of small microservice, that type of thing. Those types of workloads tend to work really well with this first-party Kubernetes-based offering. I've been doing databases on Kubernetes in managed services for a long time as well. And I don't, frankly, have any concerns about doing it. There are definitely some sharp edges. And if you wanted to do to-scale, you need to really know what you're doing with Kubernetes because the naive thing will shoot you in the foot.Corey: Oh, yes. So, some it feels almost like people want to cosplay working for Google, but they don't want to pass the technical interview along the way. It's a bit of a weird moment for it.Benjamin: Yeah, I would agree.Corey: I have to go back to my own experiences with using RDS back at my last real job before I went down this path. We were migrating from EC2-Classic to VPC. So, you could imagine what dates me reasonably effectively. And the big problem was the database. And the joy that we had was, “Okay, we have to quiesce the application.” So, the database is now quiet, stop writes, take a snapshot, restore that snapshot into the environment. And whenever we talk to AWS folks, it's like, “So, how long is this going to take?” And the answer was, “Guess.” And that was not exactly reassuring. It went off without a hitch because every migration has one problem. We were sideswiped in an Uber on the way home. But that's neither here nor there. This was two o'clock in the morning, and we finished in half the maintenance time we had allotted. But it was the fact that, well, guess we're going to have to take the database down for many hours with no real visibility, and we hope it'll be up by morning. That wasn't great. But that was the big one going on, on an ongoing basis, there were maintenance windows with a database. We just stopped databasing for a period of time during a fairly broad maintenance window. And that led to a whole lot of unfortunate associations in my mind with using relational databases for an awful lot of stuff. How do you handle maintenance windows and upgrading and not tearing down someone's application? Because I have to assume, “Oh, we just never patch anything. It turns out that's way easier,” is in fact, the wrong answer.Benjamin: Yeah, definitely. As you point out, there's a bunch of fundamental limitations here, if we start to talk about how Postgres actually fits together, right? Pretty much everybody in RDS is a little bit weird. The older RDS offerings are a little bit weird in terms of how they do replication. But most folks are using Postgres streaming replication, to do high availability, Postgres in managed services. And honestly, of course—Corey: That winds up failing over, or the application's aware of both endpoints and switches to the other one?Benjamin: Yeah—Corey: Sort of a database pooling connection or some sort of proxy?Benjamin: Right. There's a bunch of subtleties that get into their way. You say, well, did the [vit 00:16:16] failover too early, did the application try to connect and start making requests before the secondaries available? That sort of thing.Corey: Or you misconfigure it and point to the secondary, suddenly, when there's a switchover of some database, suddenly, nothing can write, it can only read, then you cause a massive outage on the weekend?Benjamin: Yeah. Yeah.Corey: That may have been of an actual story I made up.Benjamin: [laugh] yeah, you should use a managed service.Corey: Yeah.Benjamin: So, it's complicated, but even with managed services, you end up in situations where you have downtime, you have maintenance windows. And with Postgres, especially—and other databases as well—especially with Postgres, one of the biggest concerns you have is major version upgrades, right? So, if I want to go from Postgres 12 to 13, 13 to 14, I can't do that live. I can't have a single cluster that is streaming one Postgres version to another Postgres version, right?So, every year, people want to put things off for two years, three years sometimes—which is obviously not to their benefit—you have this maintenance, you have some sort of downtime, where you perform a Postgres upgrade. At EDB, we've got—so this is a big problem, this is a problem for us. We're involved in the Postgres community. We know this is challenging. That's just a well-known thing. Some of the folks that are working EDB are folks who worked on the Postgres logical replication tech, which arrived in Postgres 10. Logical replication is really a nice tool for doing things like change data capture, you can do Walter JSON, all these types of things are based on logical replication tech.It's not really a thing, at least, the code that's in Postgres itself doesn't really support high availability, though. It's not really something that you can use to build a leader-follower type cluster on top of. We have some techs, some proprietary tech within EDB that used to be called bi-directional replication. There used to be an open-source project called bi-directional replication. This is a kind of a descendant of that. It's now called Postgres Distributed, or EDB Postgres Distributed is the product name. And that tech actually allows us—because it's based on logical replication—allows us to do multiple major versions at the same time, right? So, we can upgrade one node in a cluster to Postgres 14, while the other nodes in the clusters are at Postgres 13. We can then upgrade the next node. We can support these types of operations in a kind of wide range of maintenance operations without taking a cluster down from maintenance.So, there's a lot of interesting opportunities here when we start to say, well, let's step back from what your typical assumptions are for Postgres streaming replication. Give ourselves a little bit more freedom by using logical replication instead of physical streaming replication. And then, what type of services, and what type of patterns can we build on top of that, that ultimately help customers build, whether it's faster databases, more highly available databases, so on and so forth.Corey: Let's face it, on-call firefighting at 2am is stressful! So there's good news and there's bad news. The bad news is that you probably can't prevent incidents from happening, but the good news is that incident.io makes incidents less stressful and a lot more valuable. incident.io is a Slack-native incident management platform that allows you to automate incident processes, focus on fixing the issues and learn from incident insights to improve site reliability and fix your vulnerabilities. Try incident.io, recover faster and sleep more.Corey: One approach that I took for, I guess you could call it backup sort of, was intentionally staggering replication between the primary and the replica about 15 minutes or so. So, if I drop a production table or something like that, I have 15 short minutes to realize what has happened and sever the replication before it is now committed to the replica and now I'm living in hell. It felt like this was not, like, option A, B, or C, or the right way to do things. But given that meeting customers where they are as important, is that the sort of thing that you support with BigAnimal, or do you try to talk customers into not being ridiculous?Benjamin: That's not something we support now. It's not actually something that I hear that many asks for these days. It's kind of interesting, that's a pattern that I've run into a lot in the past.Corey: I was an ancient, grumpy sysadmin. Again, I'm dating myself here. These days, I just store everything at DNS text records, and it's way easier. But I digress.Benjamin: [laugh] yeah, it's something that we see a lot for and we had support for a point-in-time restore, like pretty much anybody else in the business at this point. And that's usually the, “I fat-fingered something,” type response. Honestly, I think there's room to be a bit more flexible and room to do some more interesting things. I think RDS is setting a bar and a lot of database services out there and kind of just meeting that bar. And we all kind of need to be pushing a little bit more into more interesting spaces and figuring out how to get customers more value, get customers to get more out of their money for the database, honestly.Corey: One of the problems we tend to see, in the database ecosystem at large, without naming names or companies or anything like that, is that it's a pretty thin and blurry line between database advocate, database evangelist, and database zealot. Where it feels like instead, we're arguing about religion more than actual technical constraints and concerns. So, here's a fun question that hopefully isn't too much of a gotcha. But what sort of workloads would you actively advise someone not to use BigAnimal for in the database world? But yes, again, if you try to run a DNS server, it's probably not fit for purpose without at least a shim in the way there. But what sort of workloads are you not targeting that a customer is likely to have a relatively unfortunate time with?Benjamin: Large-scale analytical workloads is the easy answer to that, right? If you've got a problem where you're choosing between Postgres and Snowflake, you're seriously considering—you actually have as much data that you seriously be considering Snowflake? You probably don't want to be using Postgres, right? You want to be using something that's column, or you want to be using a query planner that really understands a columnar layout that's going to get you the sorts of performance that you need for those analytical workloads. We don't try to touch that space.Corey: Yeah, we're doing some of that right now with just the sheer volume of client AWS bills we have. We don't really need a relational model for a lot of it. And Athena is basically fallen down on the job in some cases, and, “Oh, do you want to use Redshift, that's basically Postgres.” It's like, “Yeah, it's Postgres, if it decided to run on bars of gold.” No, thank you. It just becomes this ridiculously overwrought solution for what feels like it should be a lot similar. So, it's weird, six months ago or so I wouldn't have had much of an idea what you're talking about. I see it a lot better now. Generally, by virtue of trying to do something the precise wrong way that someone should.Benjamin: Right. Yeah, exactly. I think there's interesting room for Postgres to expand here. It's not something that we're actively working on. I'm not aware of a lot happening in the community that Postgres is, for better or worse, extremely extensible, right? And if you see the JSON-supported Postgres, it didn't exist, I don't know, five, six years ago. And now it's incredibly powerful. It's incredibly flexible. And you can do a lot of quote-unquote, schemaless stuff straight in Postgres. Or you look at PostGIS, right, for doing GIS geographical data, right? That's really a fantastic integration directly in the database.Corey: Yeah, before that people start doing ridiculous things almost looks similar to a graph database or a columnar store somehow, and yeah.Benjamin: Yeah, exactly. I think sometimes somebody will do a good column store that's an open-source deeply integrated into Postgres, rather than—Corey: I've seen someone build one on top of S3 bucket with that head, a quarter of a trillion objects in it. Professional advice, don't do that.Benjamin: [laugh]. Unless you're Snowflake. So, I mean, it's something that I'd like to see Postgres expand into. I think that's an interesting space, but not something that, at least especially for BigAnimal, and frankly, for a lot of EDB customers. It's not something we're trying to push people toward.Corey: One thing that I think we are seeing a schism around is the idea that some vendors are one side of it, some are on the other, where on the one side, you have, oh, every workload should have a bespoke, purpose-built database that is exactly for this type of workload. And the other school of thought is you should generally buy us for a general-purpose database until you have a workload that is scaled and significant to a point where running that on its own purpose-built database begins to make sense. I don't necessarily think that is a binary choice, where do you tend to fall on that spectrum?Benjamin: I think everybody should use Postgres. And I say not just because I work in a Postgres company.Corey: Well, let's be clear. Before this, you were at IBM for five years working on a whole bunch of database stuff over there, not just Postgres. And you, so far, have not struck me as the kind of person who's like, “Oh, so what's your favorite database?” “The one that pays me.” We've met people like that, let's be very clear. But you seem very even-handed in those conversations.Benjamin: Yeah, I got my start in databases, actually, with Apache CouchDB. I am a committer on CouchDB. I worked on a managed at CouchDB service ten years ago. At IBM, I worked on something in nine different open-source databases and managed services. But I love having conversations about, like, well, I've got this workload, should I use Postgres, rr should I use Mongo, should I use Cassandra, all of those types of discussions. Frankly, though, I think in a lot of cases people are—they don't understand how much power they're missing out on if they don't choose a relational database. If they don't understand the relational model well enough to understand that they really actually want that. In a lot of cases, people are also just over-optimizing too early, right? It's just going to be much faster for them to get off the ground, get product in customers hands, if they start with something that they don't have to think twice about. And they don't end up with this architecture with 45 different databases, and there's only one guy in the company that knows how to manage the whole thing.Corey: Oh, the same story of picking a cloud provider. It's, “Okay, you hire a team, you're going to build a thing. Which cloud provider do you pick?” Every cloud provider has a whole matrix and sales deck, and the rest. The right answer, of course, is the one your team's already familiar with because learning a new cloud provider while trying not to run out of money at your startup, can't really doesn't work super well.Benjamin: Exactly. Yeah.Corey: One thing that I think has been sort of interesting, and when I saw it, it was one of those, “Oh, I sort of like them.” Because I had that instinctive reaction and I don't think I'm alone in this. As of this recording a couple of weeks ago, you folks received a sizable investment from private equity. And default reaction to that is, “Oh, well, I guess I put a fork in the company, they're done.” Because the narrative is that once private equity takes an investment, well, that company's best days are probably not in front of it. Now, the counterpoint is that this is not the first time private equity has invested in EDB, and you folks from what I can tell are significantly better than you were when I was your customer a decade ago. So clearly, there is something wrong with that mental model. What am I missing?Benjamin: Yeah. Frankly, I don't know. I'm no expert in funding models and all of those sorts of things. I will say that my experience has been what I've seen at EDB, has definitely been that maybe there's private equity, and then there's private equity. We're in this to build better products and become a better product company. We were previously owned by a private equity firm for the past four years or so. And during the course of those four years, we brought on a bunch of folks who were very product-focused, new leadership. We made a significant acquisition of a company called 2ndQuadrant, which they employed a lot of the European best Postgres company. Now, they're part of EDB and most of them have stayed with us. And we built the managed cloud service, right? So, this is a pretty significant—private equity company buying us to invest in the company. I'm optimistic that that's what we're looking at going forward.Corey: I want to be clear as well, I'm not worried about what I normally would be in a private equity story about this, where they're there to save money and cut costs, and, “Do we really need all these database replicas floating around,” and, “These backups, seems like that's something we don't need.” You have, at last count, 32 Postgres contributors, 7 Postgres committers, and 3 core members. All of whom would run away screaming loudly and publicly, in the event that such a thing were taking place. Of all the challenges and concerns I might have about someone running a cloud service in the modern day. I do not have any fear that you folks are not doing what will very clearly be shown to be the right thing by your customers for the technology that you're building on top of. That is not a concern. There are companies I do not have that confidence in, to be clear.Benjamin: Yeah, I'm glad to hear that. I'm a hundred percent on board as well. I work here, but I think we're doing the right thing, and we're going to be doing great stuff going forward.Corey: One last topic I do want to get into a little bit is, on some level, launching in this decade, a cloud-hosted database offering at a time when Amazon—whose product strategy of yes is in full display—it seems like something ridiculous, that is not necessarily well thought out that why would you ever try to do this? Now, I will temper that by the fact that you are clearly succeeding in this direction. You have customers who say nice things about you, and the reviews have been almost universally positive anywhere I can see things. The negative ones are largely complaining about databases, which I admit might be coming from me.Benjamin: Right, it is a crowded space. There's a lot of things happening. Obviously, Amazon, Microsoft, Google are doing great things, both—Corey: Terrible things, but great, yes. Yes.Benjamin: [laugh] right, there's good products coming in. I think AlloyDB is not necessarily a great product. I haven't used it myself yet, but it's an interesting step in the direction. I'm excited to see development happening. But at the end of the day, we're a database company. Our focus is on building great databases and supporting great databases. We're not entering this business to try to take on Amazon from an infrastructure point of view. In fact, the way that we're structuring the product is really to try to get the strengths of both worlds. We want to give customers the ability to get the most out of the AWS or Azure infrastructure that they can, but come to us for their database.Frankly, we know Postgres better than anybody else. We have a greater ability to get bugs fixed in Postgres than anybody else. We've got folks working on the database in the open. We got folks working on the database proprietary for us. So, we give customers things like break/fix support on that database. If there is a bug in Postgres, there's a bug in the tech that sits around Postgres. Because obviously, Postgres is not a batteries-included system, really. We're going to fix that for you. That's part of the contract that we're giving to our customers. And I know a lot of smaller companies maybe haven't been burned by this sort of thing very much. We start to talk about enterprise customers and medium, larger-scale customers, this starts to get really valuable. The ability to have assurance on top of your open-source product. So, I think there's a lot of interesting things there, a lot of value that we can provide there.I think also that I talked a little bit about this earlier, but like the box, this sort of RDS-shaped box, I think is a bit too small. There's an opportunity for smaller players to come in and try to push the boundaries of that. For example, giving customers more support by default to do a good job using their database. We have folks on board that can help consult with customers to say, “No, you shouldn't be designing your schemas that way. You should be designing your schemas this way. You should be using indexes here,” that sort of stuff. That's been part of our business for a long time. Now, with a managed service, we can bake that right into the managed service. And that gives us the ability to kind of make that—you talk about shared responsibility between the service writer and the customer—we can change the boundaries of that shared responsibility a little bit, so that customers can get more value out of the managed database service than they might expect otherwise.Corey: There aren't these harsh separations and clearly defined lines across which nothing shall pass, when it makes sense to do that in a controlled responsible way.Benjamin: Right, exactly. Some of that is because we're a database company, and some of that is because, frankly, we're much smaller.Corey: I'll take it a step further beyond that, as well, that I have seen this pattern evolve a number of times where you have a customer running databases on EC2, and their AWS account managers suggests move to RDS. So, they do. Then, move to Aurora. So, they do. Then, I move this to DynamoDB. At which point, it's like, what do you think your job is here, exactly? Because it seems like every time we move databases, you show up in a nicer car. So, what exactly is the story here, and what are the incentives? Where it just feels like there is a, “Whatever you're doing is not the way that it should be done. So, it's time to do, yet, another migration.”There's something to be said for companies who are focused around a specific aspect of things. Then once that is up and working and running, great. Keep on going. This is fine. As opposed to trying to chase the latest shiny, on some level. I have a big sense of, I guess, affinity for companies that wind up knowing where they start, and most notably, where they stop.Benjamin: Yeah, I think that's a really good point. I don't think that we will be building an application platform anytime soon.Corey: “We're going to run Lambda functions on top of a database.” It's like, “Congratulations. That is the weirdest stored procedure I can imagine this week, but I'm sure we can come up with a worse one soon.”Benjamin: Exactly.Corey: I really want to thank you for taking the time to speak with me so much about how you're thinking about this, and what you've been building over there. If people want to learn more, where's the best place to go to find you?Benjamin: biganimal.com.Corey: Excellent. We will throw a link to that in the show notes and it only just occurred to me that the Postgres mascot is an elephant, and now I understand why it's called BigAnimal. Yeah, that's right. He who laughs last, thinks slowest, and today, that's me. I really want to thank you for being so generous with your time. I appreciate it.Benjamin: Thank you. I really appreciate it.Corey: Benjamin Anderson, CTO for Cloud at EDB. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that you then wind up stuffing into a SQLite database, converting to Base64, and somehow stuffing into the comment field.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
Vor über drei Jahren hatten wir ja schon einmal eine Episode über Datenbanken. Da das ja nun schon ein bisschen her ist, dachten wir dass es vielleicht an der Zeit wäre, mal wieder über dieses Thema zu reden. Dazu haben wir (Dominik und Jochen) uns diesmal mit Susanne zusammengesetzt, die seit vielen Jahren Consulting und Schulungen zum Thema anbietet. Die alte Datenbank-Episode war unsere längste Episode bisher, und irgendwie ist auch diese hier länger als gewöhnlich geworden. Offenbar gibt es über Datenbanken mehr zu sagen als zu anderen Themen
This episode originally aired on Software Engineering Radio.A few topics covered Building on top of open source Forking their GoTrue dependency Relying on Postgres features like row level security Adding realtime support based on Postgres's write ahead log Generating an API layer based on the database schema with PostgREST Creating separate EC2 instances for each customer's database How Postgres could scale in the future Monitoring postgres Common support tickets Permissive open source licenses Related Links @antwilson Supabase Supabase GitHub Firebase Airtable PostgREST GoTrue Elixir Prometheus VictoriaMetrics Logflare BigQuery Netlify Y Combinator Postgres PostgreSQL Write-Ahead Logging Row Security Policies pg_stat_statements pgAdmin PostGIS Amazon Aurora Transcript You can help edit this transcript on GitHub. [00:00:00] Jeremy: Today I'm talking to Ant Wilson, he's the co-founder and CTO of Supabase. Ant welcome to software engineering radio.[00:00:07] Ant: Thanks so much. Great to be here. [00:00:09] Jeremy: When I hear about Supabase, I always hear about it in relation to two other products. The first is Postgres, which is a open source relational database. And second is Firebase, which is a backend as a service product from Google cloud that provides a no SQL data store.It provides authentication and authorization. It has a functions as a service component. It's really meant to be a replacement for you needing to have your own server, create your own backend. You can have that all be done from Firebase. I think a good place for us to start would be walking us through what supabase is and how it relates to those two products.[00:00:55] Ant: Yeah. So, so we brand ourselves as the open source Firebase alternativethat came primarily from the fact that we ourselves do use the, as the alternative to Firebase. So, so my co-founder Paul in his previous startup was using fire store. And as they started to scale, they hit certain limitations, technical scaling limitations and he'd always been a huge Postgres fan.So we swapped it out for Postgres and then just started plugging in. The bits that we're missing, like the real-time streams. Um, He used the tool called PostgREST with a T for the, for the CRUD APIs. And sohe just built like the open source Firebase alternative on Postgres, And that's kind of where the tagline came from.But the main difference obviously is that it's relational database and not a no SQL database which means that it's not actually a drop-in replacement. But it does mean that it kind of opens the door to a lot more functionality actually. Um, Which, which is hopefully an advantage for us. [00:02:03] Jeremy: it's a, a hosted form of Postgres. So you mentioned that Firebase is, is different. It's uh NoSQL. People are putting in their, their JSON objects and things like that. So when people are working with Supabase is the experience of, is it just, I'm connecting to a Postgres database I'm writing SQL.And in that regard, it's kind of not really similar to Firebase at all. Is that, is that kind of right?[00:02:31] Ant: Yeah, I mean, the other thing, the other important thing to notice that you can communicate with Supabase directly from the client, which is what people love about fire base. You just like put the credentials on the client and you write some security rules, and then you just start sending your data. Obviously with supabase, you do need to create your schema because it's relational.But apart from that, the experience of client side development is very much the same or very similar the interface, obviously the API is a little bit different. But, but it's similar in that regard. But I, I think, like I said, we're moving, we are just a database company actually. And the tagline, just explained really, well, kind of the concept of, of what it is like a backend as a service. It has the real-time streams. It has the auth layer. It has the also generated APIs. So I don't know how long we'll stick with the tagline. I think we'll probably outgrow it at some point. Um, But it does do a good job of communicating roughly what the service is.[00:03:39] Jeremy: So when we talk about it being similar to Firebase, the part that's similar to fire base is that you could be a person building the front end part of the website, and you don't need to necessarily have a backend application because all of that could talk to supabase and supabase can handle the authentication, the real-time notifications all those sorts of things, similar to Firebase, where we're basically you only need to write the front end part, and then you have to know how to, to set up super base in this case.[00:04:14] Ant: Yeah, exactly. And some of the other, like we took w we love fire based, by the way. We're not building an alternative to try and destroy it. It's kind of like, we're just building the SQL alternative and we take a lot of inspiration from it. And the other thing we love is that you can administer your database from the browser.So you go into Firebase and you have the, you can see the object tree, and when you're in development, you can edit some of the documents in real time. And, and so we took that experience and effectively built like a spreadsheet view inside of our dashboard. And also obviously have a SQL editor in there as well.And trying to, create this, this like a similar developer experience, because that's where Firebase just excels is. The DX is incredible. And so we, we take a lot of inspiration from it in, in those respects.[00:05:08] Jeremy: and to to make it clear to our listeners as well. When you talk about this interface, that's kind of like a spreadsheet and things like that. I suppose it's similar to somebody opening up pgAdmin, I suppose, and going in and editing the rows. But, but maybe you've got like another layer on top that just makes it a little more user-friendly a little bit more like something you would get from Firebase, I guess.[00:05:33] Ant: Yeah.And, you know, we, we take a lot of inspiration from pgAdmin. PG admin is also open source. So I think we we've contributed a few things and, or trying to upstream a few things into PG admin. The other thing that we took a lot of inspiration from for the table editor, what we call it is airtable.And because airtable is effectively. a a relational database and that you can just come in and, you know, click to add your columns, click to add a new table. And so we just want to reproduce that experience again, backed up by a full Postgres dedicated database. [00:06:13] Jeremy: so when you're working with a Postgres database, normally you need some kind of layer in front of it, right? That the person can't open up their website and connect directly to Postgres from their browser. And you mentioned PostgREST before. I wonder if you could explain a little bit about what that is and how it works.[00:06:34] Ant: Yeah, definitely. so yeah, PostgREST has been around for a while. Um, It's basically an, a server that you connect to, to your Postgres database and it introspects your schemas and generates an API for you based on the table names, the column names. And then you can basically then communicate with your Postgres database via this restful API.So you can do pretty much, most of the filtering operations that you can do in SQL um, uh, equality filters. You can even do full text search over the API. So it just means that whenever you obviously add a new table or a new schema or a new column the API just updates instantly. So you, you don't have to worry about writing that, that middle layer which is, was always the drag right.When, what have you started a new project. It's like, okay, I've got my schema, I've got my client. Now I have to do all the connecting code in the middle of which is kind of, yeah, no, no developers should need to write that layer in 2022.[00:07:46] Jeremy: so this the layer you're referring to, when I think of a traditional. Web application. I think of having to write routes controllers and, and create this, this sort of structure where I know all the tables in my database, but the controllers I create may not map one to one with those tables. And so you mentioned a little bit about how PostgREST looks at the schema and starts to build an API automatically.And I wonder if you could explain a little bit about how it does those mappings or if you're writing those yourself. [00:08:21] Ant: Yeah, it basically does them automatically by default, it will, you know, map every table, every column. When you want to start restricting things. Well, there's two, there's two parts to this. There's one thing which I'm sure we'll get into, which is how is this secure since you are communicating direct from the client.But the other part is what you mentioned giving like a reduced view of a particular date, bit of data. And for that, we just use Postgres views. So you define a view which might be, you know it might have joins across a couple of different tables or it might just be a limited set of columns on one of your tables. And then you can choose to just expose that view. [00:09:05] Jeremy: so it sounds like when you would typically create a controller and create a route. Instead you create a view within your Postgres database and then PostgREST can take that view and create an end point for it, map it to that.[00:09:21] Ant: Yeah, exactly (laughs) . [00:09:24] Jeremy: And, and PostgREST is an open source project. Right. I wonder if you could talk a little bit about sort of what its its history was. How did you come to choose it? [00:09:37] Ant: Yeah.I think, I think Paul probably read about it on hacker news at some point. Anytime it appears on hacker news, it just gets voted to the front page because it's, it's So awesome. And we, we got connected to the maintainer, Steve Chavez. At some point I think he just took an interest in, or we took an interest in Postgres and we kind of got acquainted.And then we found out that, you know, Steve was open to work and this kind of like probably shaped a lot of the way we think about building out supabase as a project and as a company in that we then decided to employ Steve full time, but just to work on PostgREST because it's obviously a huge benefit for us.We're very reliant on it. We want it to succeed because it helps our business. And then as we started to add the other components, we decided that we would then always look for existing tools, existing opensource projects that exist before we decided to build something from scratch. So as we're starting to try and replicate the features of Firebase we would and auth is a great example.We did a full audit of what are all the authorization, authentication, authentication open-source tools that are out there and which one was, if any, would fit best. And we found, and Netlify had built a library called gotrue written in go, which did pretty much exactly what we needed. So we just adopted that.And now obviously, you know, we, we just have a lot of people on the team contributing to, to gotrue as well.[00:11:17] Jeremy: you touched on this a little bit earlier. Normally when you connect to a Postgres database your user has permission to, to basically everything I guess, by default, anyways. And so. So, how does that work? Where when you want to restrict people's permissions, make sure they only get to see records they're allowed to see how has that all configured in PostgREST and what's happening behind the scenes?[00:11:44] Ant: Yeah, we, the great thing about Postgres is it's got this concept of row level security, which actually, I don't think I even rarely looked at until we were building out this auth feature where the security rules live in your database as SQL. So you do like a create policy query, and you say anytime someone tries to select or insert or update apply this policy.And then how it all fits together is our auth server go true. Someone will basically make a request to sign in or sign up with email and password, and we create that user inside the, database. They get issued a URL. And they get issued a JSON, web token, a JWT, and which, you know, when they, when they have it on the, client side, proves that they are this, you, you ID, they have access to this data.Then when they make a request via PostgREST, they send the JWT in the authorization header. Then Postgres will pull out that JWT check the sub claim, which is the UID and compare it to any rows in the database, according to the policy that you wrote. So, so the most basic one is you say in order to, to access this row, it must have a column you UID and it must match whatever is in the JWT.So we basically push the authorization down into the database which actually has, you know, a lot of other benefits in that as you write new clients, You don't need to have, have it live, you know, on an API layer on the client. It's kind of just, everything is managed from the database.[00:13:33] Jeremy: So the, the, you, you ID, you mentioned that represents the user, correct. [00:13:39] Ant: Yeah. [00:13:41] Jeremy: Is that, does that map to a user in post graphs or is there some other way that you're mapping those permissions?[00:13:50] Ant: Yeah. When, so when you connect go true, which is the auth server to your Postgres database for the first time, it installs its own schema. So you'll have an auth schema and inside will be all start users with a list of the users. It'll have a uh, auth dot tokens which will store all the access tokens that it's issued.So, and one of the columns on the auth start user's table will be UUID, and then whenever you write application specific schemers, you can just join a, do a foreign key relation to the author users table. So, so it all gets into schema design and and hopefully we do a good job of having some good education content in the docs as well.Because one of the things we struggled with from the start was how much do we abstract away from SQL away from Postgres and how much do we educate? And we actually landed on the educate sides because I mean, once you start learning about Postgres, it becomes kind of a superpower for you as a developer.So we'd much rather. Have people discover us because we're a firebase alternatives frontend devs then we help them with things like schema design landing about row level security. Because ultimately like every, if you try and abstract that stuff it gets kind of crappy. And maybe not such a great experience. [00:15:20] Jeremy: to make sure I understand correctly. So you have GoTrue, which is uh, a Netlify open-source project that GoTrue project creates some tables in your, your database that has like, you've mentioned the tokens, the, the different users. Somebody makes a request to GoTrue. Like here's my username, my password go true.Gives them back a JWT. And then from your front end, you send that JWT to the PostgREST endpoint. And from that JWT, it's able to know which user you are and then uses postgres' built in a row level security to figure out which rows you're, you're allowed to bring back. Did I, did I get that right?[00:16:07] Ant: That is pretty much exactly how it works. And it's impressive that you garnered that without looking at a single diagram (laughs) But yeah, and, and, and obviously we, we provide a client library supabase JS, which actually does a lot of this work for you. So you don't need to manually attach the JJ JWT in a header.If you've authenticated with supabase JS, then every request sent to PostgREST. After that point, the header will just be attached automatically, and you'll be in a session as that user. [00:16:43] Jeremy: and, and the users that we're talking about when we talk about Postgres' row level security. Are those actual users in PostgreSQL. Like if I was to log in with psql, I could actually log in with those users.[00:17:00] Ant: They're not, you could potentially structure it that way. But it would be more advanced it's it's basically just users in, in the auth.users table, the way, the way it's currently done. [00:17:12] Jeremy: I see and postgrest has the, that row level security is able to work with that table. You, you don't need to have actual Postgres users.[00:17:23] Ant: Exactly. And, and it's, it's basically turing complete. I mean, you can write extremely complex auth policies. You can say, you know, only give access to this particular admin group on a Thursday afternoon between six and 8:00 PM. You can get really, yeah. really as fancy as you want. [00:17:44] Jeremy: Is that all written in SQL or are there other languages they allow you to use?[00:17:50] Ant: Yeah. It's the default is plain SQL. Within Postgres itself, you can useI think you can use, like there's a Python extension. There's a JavaScript extension, which is a, I think it's a subsets of, of JavaScripts. I mean, this is the thing with Postgres, it's super extensible and people have probably got all kinds of interpreters.So you, yeah, you can use whatever you want, but the typical user will just use SQL. [00:18:17] Jeremy: interesting. And that applies to logic in general, I suppose, where if you were writing a rails application, you might write Ruby. Um, If you're writing a node application, you write JavaScript, but you're, you're saying in a lot of cases with PostgREST, you're actually able to do what you want to do, whether that's serialization or mapping objects, do that all through SQL.[00:18:44] Ant: Yeah, exactly, exactly. And then obviously like there's a lot of awesome other stuff that Postgres has like this postGIS, which if you're doing geo, if you've got like a geo application, it'll load it up with a geo types for you, which you can just use. If you're doing like encryption and decryption, we just added PG libsodium, which is a new and awesome cryptography extension.And so you can use all of these, these all add like functions, like SQL functions which you can kind of use in, in any parts of the logic or in the role level policies. Yeah.[00:19:22] Jeremy: and something I thought was a little unique about PostgREST is that I believe it's written in Haskell. Is that right?[00:19:29] Ant: Yeah, exactly. And it makes it fairly inaccessible to me as a result. But the good thing is it's got a thriving community of its own and, you know, people who on there's people who contribute probably because it's written in haskell. And it's, it's just a really awesome project and it's an excuse to, to contribute to it.But yeah. I, I think I did probably the intro course, like many people and beyond that, it's just, yeah, kind of inaccessible to me. [00:19:59] Jeremy: yeah, I suppose that's the trade-off right. Is you have a, a really passionate community about like people who really want to use Haskell and then you've got the, the, I guess the group like yourselves that looks at it and goes, oh, I don't, I don't know about this.[00:20:13] Ant: I would, I would love to have the time to, to invest in uh, but not practical right now. [00:20:21] Jeremy: You talked a little bit about the GoTrue project from Netlify. I think I saw on one of your blog posts that you actually forked it. Can you sort of explain the reasoning behind doing that?[00:20:34] Ant: Yeah, initially it was because we were trying to move extremely fast. So, so we did Y Combinator in 2020. And when you do Y Combinator, you get like a part, a group partner, they call it one of the, the partners from YC and they add a huge amount of external pressure to move very quickly. And, and our biggest feature that we are working on in that period was auth.And we just kept getting the question of like, when are you going to ship auth? You know, and every single week we'd be like, we're working on it, we're working on it. And um, and one of the ways we could do it was we just had to iterate extremely quickly and we didn't rarely have the time to, to upstream things correctly.And actually like the way we use it in our stack is slightly differently. They connected to MySQL, we connected to Postgres. So we had to make some structural changes to do that. And the dream would be now that we, we spend some time upstream and a lot of the changes. And hopefully we do get around to that.But the, yeah, the pace at which we've had to move over the last uh, year and a half has been kind of scary and, and that's the main reason, but you know, hopefully now we're a little bit more established. We can hire some more people to, to just focus on, go true and, and bringing the two folks back together. [00:22:01] Jeremy: it's just a matter of, like you said speed, I suppose, because the PostgREST you, you chose to continue working off of the existing open source project, right? [00:22:15] Ant: Yeah, exactly. Exactly. And I think the other thing is it's not a major part of Netlify's business, as I understand it. I think if it was and if both companies had more resource behind it, it would make sense to obviously focus on on the single codebase but I think both companies don't contribute as much resource as as we would like to, but um, but it's, it's for me, it's, it's one of my favorite parts of the stack to work on because it's written in go and I kind of enjoy how that it all fits together.So Yeah. I, I like to dive in there. [00:22:55] Jeremy: w w what about go, or what about how it's structured? Do you particularly enjoy about the, that part of the project?[00:23:02] Ant: I think it's so I actually learned learned go through, gotrue and I'm, I have like a Python and C plus plus background And I hate the fact that I don't get to use Python and C plus posts rarely in my day to day job. It's obviously a lot of type script. And then when we inherited this code base, it was kind of, as I was picking it up I, it just reminded me a lot of, you know, a lot of the things I loved about Python and C plus plus, and, and the tooling around it as well. I just found to be exceptional. So, you know, you just do like a small amounts of conflig. Uh config, And it makes it very difficult to, to write bad code, if that makes sense.So the compiler will just, boot you back if you try and do something silly which isn't necessarily the case with, with JavaScript. I think TypeScript is a little bit better now, but Yeah, I just, it just reminded me a lot of my Python and C days.[00:24:01] Jeremy: Yeah, I'm not too familiar with go, but my understanding is that there's, there's a formatter that's a part of the language, so there's kind of a consistency there. And then the language itself tries to get people to, to build things in the same way, or maybe have simpler ways of building things. Um, I don't, I don't know.Maybe that's part of the appeal.[00:24:25] Ant: Yeah, exactly. And the package manager as well is great. It just does a lot of the importing automatically. and makes sure like all the declarations at the top are formatted correctly and, and are definitely there. So Yeah. just all of that tool chain is just really easy to pick up.[00:24:46] Jeremy: Yeah. And I, and I think compiled languages as well, when you have the static type checking. By the compiler, you know, not having things blow up and run time. That's, that's just such a big relief, at least for me in a lot of cases,[00:25:00] Ant: And I just loved the dopamine hits of when you compile something on it actually compiles this. I lose that with, with working with JavaScript. [00:25:11] Jeremy: for sure. One of the topics you mentioned earlier was how super base provides real-time database updates. And which is something that as far as I know is not natively a part of Postgres. So I wonder if you could explain a little bit about how that works and how that came about.[00:25:31] Ant: Yeah. So, So Postgres, when you add replication databases the way it does is it writes everything to this thing called the write ahead log, which is basically all the changes that uh, have, are going to be applied to, to the database. And when you connect to like a replication database. It basically streams that log across.And that's how the replica knows what, what changes to, to add. So we wrote a server, which basically pretends to be a Postgres rep, replica receives the right ahead log encodes it into JSON. And then you can subscribe to that server over web sockets. And so you can choose whether to subscribe, to changes on a particular schema or a particular table or particular columns, and even do equality matches on rows and things like this.And then we recently added the role level security policies to the real-time stream as well. So that was something that took us a while to, cause it was probably one of the largest technical challenges we've faced. But now that it's in the real-time stream is, is fully secure and you can apply these, these same policies that you apply over the CRUD API as well.[00:26:48] Jeremy: So for that part, did you have to look into the internals of Postgres and how it did its row level security and try to duplicate that in your own code?[00:26:59] Ant: Yeah, pretty much. I mean it's yeah, it's fairly complex and there's a guy on our team who, well, for him, it didn't seem as complex, let's say (laughs) , but yeah, that's pretty much it it's just a lot of it's effectively a SQL um, a Postgres extension itself, uh which in-in interprets those policies and applies them to, to the, to the, the right ahead log.[00:27:26] Jeremy: and this piece that you wrote, that's listening to the right ahead log. what was it written in and, and how did you choose that, that language or that stack?[00:27:36] Ant: Yeah. That's written in the Elixir framework which is based on Erlang very horizontally scalable. So any applications that you write in Elixir can kind of just scale horizontally the message passing and, you know, go into the billions and it's no problem. So it just seemed like a sensible choice for this type of application where you don't know.How large the wall is going to be. So it could just be like a few changes per second. It could be a million changes per second, then you need to be able to scale out. And I think Paul who's my co-founder originally, he wrote the first version of it and I think he wrote it as an excuse to learn Elixir, which is how, a lot of probably how PostgREST ended up being Haskell, I imagine.But uh, but it's meant that the Elixir community is still like relatively small. But it's a group of like very passionate and very um, highly skilled developers. So when we hire from that pool everyone who comes on board is just like, yeah, just, just really good and really enjoy is working with Elixir.So it's been a good source of a good source for hires as well. Just, just using those tools. [00:28:53] Jeremy: with a feature like this, I'm assuming it's where somebody goes to their website. They make a web socket connection to your application and they receive the updates that way. How have you seen how far you're able to push that in terms of connections, in terms of throughput, things like that?[00:29:12] Ant: Yeah, I don't actually have the numbers at hand. But we have, yeah, we have a team focused on obviously maximizing that but yeah, I don't I don't don't have those numbers right now. [00:29:24] Jeremy: one of the last things you've you've got on your website is a storage project or a storage product, I should say. And I believe it's written in TypeScript, so I was curious, we've got PostGrest, which is in Haskell. We've got go true and go. Uh, We've got the real-time database part in elixir.And so with storage, how did we finally get to TypeScript?[00:29:50] Ant: (Laughs) Well, the policy we kind of landed on was best tool for the job. Again, the good thing about being an open source is we're not resource constrained by the number of people who are in our team. It's by the number of people who are in the community and I'm willing to contribute. And so for that, I think one of the guys just went through a few different options that we could have went with, go just to keep it in line with a couple of the other APIs.But we just decided, you know, a lot of people well, everyone in the team like TypeScript is kind of just a given. And, and again, it was kind of down to speed, like what's the fastest uh we can get this up and running. And I think if we use TypeScript, it was, it was the best solution there. But yeah, but we just always go with whatever is best.Um, We don't worry too much uh, about, you know, the resources we have because the open source community has just been so great in helping us build supabase. And building supabase is like building like five companies at the same time actually, because each of these vertical stacks could be its own startup, like the auth stack And the storage layer, and all of this stuff.And you know, each has, it does have its own dedicated team. So yeah. So we're not too worried about the variation in languages.[00:31:13] Jeremy: And the storage layer is this basically a wrapper around S3 or like what is that product doing?[00:31:21] Ant: Yeah, exactly. It's it's wraparound as three. It, it would also work with all of the S3 compatible storage systems. There's a few Backblaze and a few others. So if you wanted to self host and use one of those alternatives, you could, we just have everything in our own S3 booklets inside of AWS.And then the other awesome thing about the storage system is that because we store the metadata inside of Postgres. So basically the object tree of what buckets and folders and files are there. You can write your role level policies against the object tree. So you can say this, this user should only access this folder and it's, and it's children which was kind of. Kind of an accident. We just landed on that. But it's one of my favorite things now about writing applications and supervisors is the rollover policies kind of work everywhere.[00:32:21] Jeremy: Yeah, it's interesting. It sounds like everything. Whether it's the storage or the authentication it's all comes back to postgres, right? At all. It's using the row level security. It's using everything that you put into the tables there, and everything's just kind of digging into that to get what it needs.[00:32:42] Ant: Yeah. And that's why I say we are a database company. We are a Postgres company. We're all in on postgres. We got asked in the early days. Oh, well, would you also make it my SQL compatible compatible with something else? And, but the amounts. Features Postgres has, if we just like continue to leverage them then it, it just makes the stack way more powerful than if we try to you know, go thin across multiple different databases.[00:33:16] Jeremy: And so that, that kind of brings me to, you mentioned how your Postgres companies, so when somebody signs up for supabase they create their first instance. What's what's happening behind the scenes. Are you creating a Postgres instance for them in a container, for example, how do you size it? That sort of thing.[00:33:37] Ant: Yeah. So it's basically just easy to under the hood for us we, we have plans eventually to be multi-cloud. But again, going down to the speed of execution that the. The fastest way was to just spin up a dedicated instance, a dedicated Postgres instance per user on EC2. We do also package all of the API APIs together in a second EC2 instance.But we're starting to break those out into clustered services. So for example, you know, not every user will use the storage API, so it doesn't make sense to Rooney for every user regardless. So we've, we've made that multitenant, the application code, and now we just run a huge global cluster which people connect through to access the S3 bucket.Basically and we're gonna, we have plans to do that for the other services as well. So right now it's you got two EC2 instances. But over time it will be just the Postgres instance and, and we wanted. Give everyone a dedicated instance, because there's nothing worse than sharing database resource with all the users, especially when you don't know how heavily they're going to use it, whether they're going to be bursty.So I think one of the things we just said from the start is everyone gets a Postgres instance and you get access to it as well. You can use your Postgres connection string to, to log in from the command line and kind of do whatever you want. It's yours.[00:35:12] Jeremy: so did it, did I get it right? That when I sign up, I create a super base account. You're actually creating an two instance for me specifically. So it's like every customer gets their, their own isolated it's their own CPU, their own Ram, that sort of thing.[00:35:29] Ant: Yeah, exactly, exactly. And, and the way the. We've set up the monitoring as well, is that we can expose basically all of that to you in the dashboard as well. so you can, you have some control over like the resource you want to use. If you want to a more powerful instance, we can do that. A lot of that stuff is automated.So if someone scales beyond the allocated disk size, the disk will automatically scale up by 50% each time. And we're working on automating a bunch of these, these other things as well.[00:36:03] Jeremy: so is it, is it where, when you first create the account, you might create, for example, a micro instance, and then you have internal monitoring tools that see, oh, the CPU is getting heady hit pretty hard. So we need to migrate this person to a bigger instance, that kind of thing.[00:36:22] Ant: Yeah, pretty much exactly. [00:36:25] Jeremy: And is that, is that something that the user would even see or is it the case of where you send them an email and go like, Hey, we notice you're hitting the limits here. Here's what's going to happen. [00:36:37] Ant: Yeah.In, in most cases it's handled automatically. There are people who come in and from day one, they say has my requirements. I'm going to have this much traffic. And I'm going to have, you know, a hundred thousand users hitting this every hour. And in those cases we will over-provisioned from the start.But if it's just the self service case, then it will be start on a smaller instance and an upgrade over time. And this is one of our biggest challenges over the next five years is we want to move to a more scalable Postgres. So cloud native Postgres. But the cool thing about this is there's a lot of.Different companies and individuals working on this and upstreaming into Postgres itself. So for us, we don't need to, and we, and we would never want to fork Postgres and, you know, and try and separate the storage and the the computes. But more we're gonna fund people who are already working on this so that it gets upstreamed into Postgres itself.And it's more cloud native. [00:37:46] Jeremy: Yeah. So I think the, like we talked a little bit about how Firebase was the original inspiration and when you work with Firebase, you, you don't think about an instance at all, right? You, you just put data in, you get data out. And it sounds like in this case, you're, you're kind of working from the standpoint of, we're going to give you this single Postgres instance.As you hit the limits, we'll give you a bigger one. But at some point you, you will hit a limit of where just that one instance is not enough. And I wonder if there's you have any plans for that, or if you're doing anything currently to, to handle that.[00:38:28] Ant: Yeah. So, so the medium goal is to do replication like horizontal scaling. We, we do that for some users already but we manually set that up. we do want to bring that to the self serve model as well, where you can just choose from the start. So I want, you know, replicas in these, in these zones and in these different data centers.But then, like I said, the long-term goal is that. it's not based on. Horizontally scaling a number of instances it's just a Postgres itself can, can scale out. And I think we will get to, I think, honestly, the race at which the Postgres community is working, I think we'll be there in two years.And, and if we can contribute resource towards that, that goal, I think yeah, like we'd love to do that, but yeah, but for now, it's, we're working on this intermediate solution of, of what people already do with, Postgres, which is, you know, have you replicas to make it highly available.[00:39:30] Jeremy: And with, with that, I, I suppose at least in the short term, the goal is that your monitoring software and your team is handling the scaling up the instance or creating the read replicas. So to the user, it, for the most part feels like a managed service. And then yeah, the next step would be to, to get something more similar to maybe Amazon's Aurora, I suppose, where it just kind of, you pay per use.[00:40:01] Ant: Yeah, exactly. Exactly. Aurora was kind of the goal from the start. It's just a shame that it's proprietary. Obviously. [00:40:08] Jeremy: right. Um, but it sounds, [00:40:10] Ant: the world would be a better place. If aurora was opensource. [00:40:15] Jeremy: yeah. And it sounds like you said, there's people in the open source community that are, that are trying to get there. just it'll take time. to, to all this, about making it feel seamless, making it feel like a serverless experience, even though internally, it really isn't, I'm guessing you must have a fair amount of monitoring or ways that you're making these decisions.I wonder if you can talk a little bit about, you know, what are the metrics you're looking at and what are the applications you're you have to, to help you make these decisions?[00:40:48] Ant: Yeah. definitely. So we started with Prometheus which is a, you know, metrics gathering tool. And then we moved to Victoria metrics which was just easier for us to scale out. I think soon we'll be managing like a hundred thousand Postgres databases will have been deployed on, on supabase. So definitely, definitely some scale. So this kind of tooling needs to scale to that as well. And then we have agents kind of everywhere on each application on, on the database itself. And we listen for things like the CPU and the Ram and the network IO. We also poll. Uh, Postgres itself. Th there's a extension called PG stats statements, which will give us information about what are, the intensive queries that are running on that, on that box.So we just collect as much of this as possible um, which we then obviously use internally. We set alerts to, to know when, when we need to upgrade in a certain direction, but we also have an end point where the dashboard subscribes to these metrics as well. So the user themselves can see a lot of this information.And we, I think at the moment we do a lot of the, the Ram the CPU, that kind of stuff, but we're working on adding just more and more of these observability metrics uh, so people can can know it could, because it also helps with Let's say you might be lacking an index on a particular table and not know about it.And so if we can expose that to you and give you alerts about that kind of thing, then it obviously helps with the developer experience as well.[00:42:29] Jeremy: Yeah. And th that brings me to something that I, I hear from platform as a service companies, where if a user has a problem, whether that's a crash or a performance problem, sometimes it can be difficult to distinguish between is it a problem in their application or is this a problem in super base or, you know, and I wonder how your support team kind of approaches that.[00:42:52] Ant: Yeah, no, it's, it's, it's a great question. And it's definitely something we, we deal with every day, I think because of where we're at as a company we've always seen, like, we actually have a huge advantage in that.we can provide. Rarely good support. So anytime an engineer joins super base, we tell them your primary job is actually frontline support.Everything you do afterwards is, is secondary. And so everyone does a four hour shift per week of, of working directly with the customers to help determine this kind of thing. And where we are at the moment is we are happy to dive in and help people with their application code because it helps our engineers land about how it's being used and where the pitfalls are, where we need better documentation, where we need education.So it's, that is all part of the product at the moment, actually. And, and like I said, because we're not a 10,000 person company we, it's an advantage that we have, that we can deliver that level of support at the moment. [00:44:01] Jeremy: w w what are some of the most common things you see happening? Like, is it I would expect you mentioned indexing problems, but I'm wondering if there's any specific things that just come up again and again,[00:44:15] Ant: I think like the most common is people not batching their requests. So they'll write an application, which, you know, needs to, needs to pull 10,000 rows and they send 10,000 requests (laughs) . That that's, that's a typical one for, for people just getting started maybe. Yeah. and, then I think the other thing we faced in the early days was. People storing blobs in the database which we obviously solve that problem by introducing file storage. But people will be trying to store, you know, 50 megabytes, a hundred megabyte files in Postgres itself, and then asking why the performance was so bad.So I think we've, we've mitigated that one by, by introducing the blob storage.[00:45:03] Jeremy: and when you're, you mentioned you have. Over a hundred thousand instances running. I imagine there have to be cases where an incident occurs, where something doesn't go quite right. And I wonder if you could give an example of one and how it was resolved.[00:45:24] Ant: Yeah, it's a good question. I think, yeah, w w we've improved the systems since then, but there was a period where our real time server wasn't able to handle rarely large uh, right ahead logs. So w there was a period where people would just make tons and tons of requests and updates to, to Postgres. And the real time subscriptions were failing. But like I said, we have some really great Elixir devs on the team, so they were able to jump on that fairly quickly. And now, you know, the application is, is way more scalable as a result. And that's just kind of how the support model works is you have a period where everything is breaking and then uh, then you can just, you know, tackle these things one by one. [00:46:15] Jeremy: Yeah, I think any, anybody at a, an early startup is going to run into that. Right? You put it out there and then you find out what's broken, you fix it and you just get better and better as it goes along.[00:46:28] Ant: Yeah, And the funny thing was this model of, of deploying EC2 instances. We had that in like the first week of starting super base, just me and Paul. And it was never intended to be the final solution. We just kind of did it quickly and to get something up and running for our first handful of users But it's scaled surprisingly well.And actually the things that broke as we started to get a lot of traffic and a lot of attention where was just silly things. Like we give everyone their own domain when they start a new project. So you'll have project ref dot super base dot in or co. And the things that were breaking where like, you know, we'd ran out of sub-domains with our DNS provider and then, but, and those things always happen in periods of like intense traffic.So we ha we were on the front page of hacker news, or we had a tech crunch article, and then you discover that you've ran out of sub domains and the last thousand people couldn't deploy their projects. So that's always a fun a fun challenge because you are then dependent on the external providers as well and theirs and their support systems.So yeah, I think. We did a surprisingly good job of, of putting in good infrastructure from the start. But yeah, all of these crazy things just break when obviously when you get a lot of, a lot of traffic[00:48:00] Jeremy: Yeah, I find it interesting that you mentioned how you started with creating the EC2 instances and it turned out that just work. I wonder if you could walk me through a little bit about how it worked in the beginning, like, was it the two of you going in and creating instances as people signed up and then how it went from there to where it is today?[00:48:20] Ant: yeah. So there's a good story about, about our fast user, actually. So me and Paul used to contract for a company in Singapore, which was an NFT company. And so we knew the lead developer very well. And we also still had the Postgres credentials on, on our own machines. And so what we did was we set up the th th the other funny thing is when we first started, we didn't intend to host the database.We, we thought we were just gonna host the applications that would connect to your existing Postgres instance. And so what we did was we hooked up the applications to, to the, to the Postgres instance of this, of this startup that we knew very well. And then we took the bus to their office and we sat with the lead developer, and we said, look, we've already set this thing up for you.What do you think. know, when, when you think like, ah, we've, we've got the best thing ever, but it's not until you put it in front of someone and you see them, you know, contemplating it and you're like, oh, maybe, maybe it's not so good. Maybe we don't have anything. And we had that moment of panic of like, oh, maybe we just don't maybe this isn't great.And then what happened was he didn't like use us. He didn't become a supabase user. He asked to join the team. [00:49:45] Jeremy: nice, nice.[00:49:46] Ant: that was a good a good kind of a moment where we thought, okay, maybe we have got something, maybe this is maybe this isn't terrible. So, so yeah, so he became our first employee. Yeah. [00:49:59] Jeremy: And so yeah, so, so that case was, you know, the very beginning you set everything up from, from scratch. Now that you have people signing up and you have, you know, I don't know how many signups you get a day. Did you write custom infrastructure or applications to do the provisioning or is there an open source project that you're using to handle that[00:50:21] Ant: Yeah. It's, it's actually mostly custom. And you know, AWS does a lot of the heavy lifting for you. They just provide you with a bunch of API end points. So a lot of that is just written in TypeScript fairly straightforward and, and like I said, you never intended to be the thing that last. Two years into the business.But it's, it's just scaled surprisingly well. And I'm sure at some point we'll, we'll swap it out for some I don't orchestration tooling like Pulumi or something like this. But actually the, what we've got just works really well.[00:50:59] Ant: Be because we're so into Postgres our queuing system is a Postgres extension called PG boss. And then we have a fleet of workers, which are. Uh, We manage on EC ECS. Um, So it's just a bunch of VMs basically which just subscribed to the, to the queue, which lives inside the database.And just performs all the, whether it be a project creation, deletion modification a whole, whole suite of these things. Yeah. [00:51:29] Jeremy: very cool. And so even your provisioning is, is based on Postgres.[00:51:33] Ant: Yeah, exactly. Exactly (laughs) . [00:51:36] Jeremy: I guess in that case, I think, did you say you're using the right ahead log there to in order to get notifications?[00:51:44] Ant: We do use real time, and this is the fun thing about building supabase is we use supabase to build supabase. And a lot of the features start with things that we build for ourselves. So the, the observability features we have a huge logging division. So, so w we were very early users of a tool called a log flare, which is also written in Elixir.It's basically a log sync backed up by BigQuery. And we loved it so much and we became like super log flare power users that it was kind of, we decided to eventually acquire the company. And now we can just offer log flare to all of our customers as well as part of using supabase. So you can query your logs and get really good business intelligence on what your users um, consuming in from your database.[00:52:35] Jeremy: the lock flare you're mentioning though, you said that that's a log sink and that that's actually not going to Postgres, right. That's going to a different type of store.[00:52:43] Ant: Yeah. That is going to big query actually. [00:52:46] Jeremy: Oh, big query. Okay. [00:52:47] Ant: yeah, and maybe eventually, and this is the cool thing about watching the Postgres progression is it's become. It's bringing like transactional and analytical databases together. So it's traditionally been a great transactional database, but if you look at a lot of the changes that have been made in recent versions, it's becoming closer and closer to an analytical database.So maybe at some point we will use it, but yeah, but big query works just great. [00:53:18] Jeremy: Yeah. It's, it's interesting to see, like, I, I know that we've had episodes on different extensions to Postgres where I believe they change out how the storage works. So there's yeah, it's really interesting how it's it's this one database, but it seems like it can take so many different forms. [00:53:36] Ant: It's just so extensible and that's why we're so bullish on it because okay. Maybe it wasn't always the best database, but now it seems like it is becoming the best database and the rate at which it's moving. It's like, where's it going to be in five years? And we're just, yeah, we're just very bullish on, on Postgres.As you can tell from the amount of mentions it's had in this episode.[00:54:01] Jeremy: yeah, we'll have to count how many times it's been said. I'm sure. It's, I'm sure it's up there. Is there anything else we, we missed or think you should have mentioned.[00:54:12] Ant: No, some of the things we're excited about are cloud functions. So it's the thing we just get asked for the most at anytime we post anything on Twitter, you're guaranteed to get a reply, which is like when functions. And we're very pleased to say that it's, it's almost there. So um, that will hopefully be a really good developer experience where also we launched like a, a graph QL Postgres extension where the resolver lives inside of Postgres.And that's still in early alpha, but I think I'm quite excited for when we can start offering that on the on the hosted platform as well. People will have that option to, to use GraphQL instead of, or as well as the restful API.[00:55:02] Jeremy: the, the common thread here is that PostgreSQL you're able to take it really, really far. Right. In terms of scale up, eventually you'll have the read replicas. Hopefully you'll have. Some kind of I don't know what you would call Aurora, but it's, it's almost like self provisioning, maybe not sharing what, how you describe it.But I wonder as a, as a company, like we talked about big query, right? I wonder if there's any use cases that you've come across, either from customers or in your own work where you're like, I just, I just can't get it to fit into Postgres.[00:55:38] Ant: I think like, not very often, but sometimes we'll, we will respond to support requests and recommend that people use Firebase. they're rarelylike if, if they really do have like large amounts of unstructured data, which is which, you know, documented storage is, is kind of perfect for that. We'll just say, you know, maybe you should just use Firebase.So we definitely come across things like that. And, and like I said, we love, we love Firebase, so we're definitely not trying to, to uh, destroy as a tool. I think it, it has its use cases where it's an incredible tool yeah. And provides a lot of inspiration for, for what we're building as well. [00:56:28] Jeremy: all right. Well, I think that's a good place to, to wrap it up, but where can people hear more about you hear more about supabase?[00:56:38] Ant: Yeah, so supeabase is at supabase.com. I'm on Twitter at ant Wilson. Supabase is on Twitter at super base. Just hits us up. We're quite active on the and then definitely check out the repose gets up.com/super base. There's lots of great stuff to dig into as we discussed. There's a lot of different languages, so kind of whatever you're into, you'll probably find something where you can contribute. [00:57:04] Jeremy: Yeah, and we, we sorta touched on this, but I think everything we've talked about with the exception of the provisioning part and the monitoring part is all open source. Is that correct? [00:57:16] Ant: Yeah, exactly.And as, yeah. And hopefully everything we build moving forward, including functions and graph QL we'll continue to be open source.[00:57:31] Jeremy: And then I suppose the one thing I, I did mean to touch on is what, what is the, the license for all the components you're using that are open source?[00:57:41] Ant: It's mostly Apache2 or MIT. And then obviously Postgres has its own Postgres license. So as long as it's, it's one of those, then we, we're not too precious. I, As I said, we inherit a fair amounts of projects. So we contribute to and adopt projects. So as long as it's just very permissive, then we don't care too much.[00:58:05] Jeremy: As far as the projects that your team has worked on, I've noticed that over the years, we've seen a lot of companies move to things like the business source license or there's, there's all these different licenses that are not quite so permissive. And I wonder like what your thoughts are on that for the future of your company and why you think that you'll be able to stay permissive.[00:58:32] Ant: Yeah, I really, really, rarely hope that we can stay permissive. forever. It's, it's a philosophical thing for, for us. You know, when we, we started the business, it's what just very, just very, as individuals into the idea of open source. And you know, if, if, if AWS come along at some point and offer hosted supabase on AWS, then it will be a signal that where we're doing something.Right. And at that point we just, I think we just need to be. The best team to continue to move super boost forward. And if we are that, and I, I think we will be there and then hopefully we will never have to tackle this this licensing issue. [00:59:19] Jeremy: All right. Well, I wish you, I wish you luck.[00:59:23] Ant: Thanks. Thanks for having me. [00:59:25] Jeremy: This has been Jeremy Jung for software engineering radio. Thanks for listening.
In this episode of Scaling Postgres, we discuss working with multiranges, missing Postgres metrics, PostGIS for newbies and videos from CitusCon. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode: https://www.cybertec-postgresql.com/en/multiranges-in-postgresql-14/ https://coroot.com/blog/pg-missing-metrics https://blog.crunchydata.com/blog/postgis-for-newbies https://www.youtube.com/playlist?list=PLlrxD0HtieHjSzUZYCMvqffEU5jykfPTd#cituscon2022 https://pganalyze.com/blog/5mins-postgres-sql-json-table-postgres15 https://yugonagata-pgsql.blogspot.com/2022/04/pgivm-postgresql-extension-providing.html https://www.postgresql.org/about/news/update-on-the-trademark-actions-against-the-postgresql-community-2437/ https://pgstef.github.io/2022/04/15/pgbackrest_multi-repositories_tips_and_tricks.html https://postgresql.life/post/jehan-guillaume_de_rorthais/ https://www.rubberduckdevshow.com/episodes/38-small-projects/
Mapy.cz jsou klenotem českého Seznamu a troufám si odhadnout, že neexistuje Čech, který by je neznal. Mnoho z nás je běžně používá ať už pro plánování a navigaci na pěších, či cyklo výletech, jiní i jakou auto-navigaci. Osobně jsem s Mapy.cz absolvoval cestu až do řeckých Atén a offline mapy, mě bezpečně provedly přes několik zemí bez jediného zaváhání. V naší zemi nám Mapy.cz nabízejí mnohem více možností, než konkurenční celosvětoví SW hegemoni jako je Google nebo Apple. Proto mi bylo velkou ctí v Kafemlejnku uvítat Martina Baka, který tým vyvíjející tuto aplikaci vede. Hned v úvodu se od Martina dozvídáme, že Mapy.cz jsou provozovány v Kubernetes clusteru, frontend je napsán převážně ve vanilla JavaScriptu a pouze jedna izolovaná část je napsána v Reactu. V týmu stále probíhá diskuse, zda-li se vydají cestou Reactu nebo nikoliv – kdysi se používala knihovna JAK, která je v současné době dostupná jako open-source. Data jsou ukládána převážně v PosgreSQL a jeho rozšíření PostGIS. Martin překvapivě tvrdí, že současní vývojáři nemusí nijak zásadně řešit geografickou matematiku, protože ta byla “oprogramována” před lety a od té doby funguje v podstatě beze změny. Grafické dlaždice jsou uloženy v jedném velikém souboru, kde jsou dostupné na konkrétních souřadnicích. Aktuálně používají 9+9 fyzických strojů na poskytování rastrových dat z výše uvedeného velkého souboru a obsluhují 20 tisíc reqs/sec. Do budoucna plánují postupnou migraci na nové řešení založené na CDN, které Martin v podcastu celkem podrobně popisuje. Pro vykreslování map používají knihovnu Mapnik a to jak pro vlastní dataset, tak i pro data přebíraná z OpenStreetMap. Na toto téma jsme se pustili i do určitého fabulování na téma schvalování úprav v OpenStreetMap, takže pokud někdo z posluchačů je v této oblasti odborníkem, budeme rádi za upřesnění informací, třeba v komentáři – díky! Snažíme se z Martina vytáhnout více informací o tom, jak funguje algoritmus pro vyhledání optimální trasy z bodu A do bodu B avšak neúspěšně. Každopádně se dozvídáme, že algoritmus funguje samostatně jak na klientovi (přímo v aplikaci), tak na serveru. Vyhledání trasy tedy funguje i v offline režimu. Překvapením pro nás bylo poodkrytí způsobu, jak jsou generovány návrhy na výlety v požadované délce. Čekali jsme kdoví jak sofistikovaný algoritmus pro návrhy oblíbených POI, ale vlastně se jedná jen o nalezení okruhu v okolí tak, aby uživatel neprošel 2x stejným úsekem. Plno funkcí, kde jsme čekali nějakou magii, skrývá jen velmi jednoduchá, ale praktická řešení. Zajímavá je informace o spolupráci s Windy.com, jejímž výsledkem je i pokus o rozšíření českých map do světa pod značkou WindyMaps. Narazili jsme i na funkce pro monitorování šíření Covidu, která je v mapách stále dostupná a zamýšleli se nad její smysluplností. Tuhle část zde nebudu podrobněji komentovat a doporučuji poslechnutí záznamu od 48 minuty. Tým pro Mapy.cz stále hledá nové vývojáře. Hledají především Python, C++ vývojáře. Umožňují práci z domu, ale preferují setkávání 2x týdně v Praze. Probíráme, jak vypadá přijímací pohovor do Seznamu a co mohou případní uchazeči při práci v Seznamu čekat.
Steven interviews Paul Ramsey, whom Steven describes as "a superstar of the OSGeo community". Paul is one of the inventors of the PostGIS extension to Postgres, the most well known and widely used spatial database in the world. Paul tells the story of PostGIS's start all the way back in 2001, and how it has since grown to become a foundational piece of geospatial software. The perspective then turns to the future with Paul musing on upcoming trends in geospatial computing. Finally Paul and Steven discuss the current state of the open source world and funding for open source projects, before closing with a discussion of Paul's work for election campaigns. Show notes on the Geomob website, where you can also learn more about Geomob events and sign up for our monthly newsletter.
В этом выпуске: как заморозить и очистить форум, а также зачем обновлять Mumble Server; четыре ка камера от Logitech; почему страдают Java-программисты; PostGIS, GEOS и Shapely; структурируем команды и геораспределенные системы; и, конечно же, темы наших слушателей. Шоуноты: [00:04:17] Что мы узнали за неделю [00:13:19] Семь лет [00:22:02] Форум заморожен и очищен [00:26:14] Ubuntu 20.04… Читать далее →
Paolo’s personal website20tabGithub profileTwitter profileDjangoCon Europe 2020 - A Pythonic Full-Text SearchDjangoCon Europe 2019 - Maps with GeoDjango, PostGIS and LeafletFull-Text Search in Django with PostgreSQLWorld Food ProgramMaps with Django (part 1): GeoDjango, SpatiaLite and LeafletSupport the ShowThis podcast is a labor of love and does not have any ads or sponsors. To support the show, consider purchasing or recommending a book from LearnDjango.com or signing up for the free weekly Django News newsletter.
In this episode of Scaling Postgres, we discuss an unattended upgrade, ARM Postgres benchmarks, how to explore new databases and PostGIS performance. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode: https://seiler.us/2020-11-18-unattended-upgrades/ https://info.crunchydata.com/blog/postgresql-benchmarks-apple-arm-m1-macbook-pro-2020 https://www.craigkerstiens.com/2020/11/14/exploring-a-new-postgres-database/ https://info.crunchydata.com/blog/waiting-for-postgis-3.1-performance http://blog.cleverelephant.ca/2020/11/waiting-for-postgis-3-1-mvt.html https://elephanttamer.net/?p=47 https://www.commandprompt.com/blog/is-of/ https://www.youtube.com/watch?v=qpdiR4NnxK0 https://arunsori.me/posts/postgres-clickhouse-fdw-in-go/ http://postgis.net/2020/11/20/postgis-3.0.3/ https://www.citusdata.com/blog/2020/11/14/citus-9-5-whats-new/ https://rjuju.github.io/postgresql/2020/11/17/queryid-reporting-in-plpgsql_check.html https://www.cybertec-postgresql.com/en/pg_timetable-v3-is-out/ https://postgresql.life/post/markus_wanner/
James and Bill go over how they use GIS on MacOS. From GDAL and QGIS to Safe FME and PostGIS, they cover all the easy ways it is to get started. James also leaves Spatial Networks and is looking for something to do.
We're all in lockdown but Bill and James are still up to things. From Bill playing with PostGIS, Node and Koop to James writing bash scripts so he can save 3 seconds on Git commits, they are working hard. Plus James gives a look into the GeoJSON Ballparks creation process.
The MapScaping Podcast - GIS, Geospatial, Remote Sensing, earth observation and digital geography
Episode #53 - Serving dynamic vector tiles might not be as difficult as you think. Paul Ramsey from Crunchy Data starts off by walking us through the early days of PostGIS, explaining what a vector tile is and the advantages over image tiles before explaining how you can easily and safely serve your own data as vector tiles straight from a PostgreSQL database. further reading Querying Spatial Features with pg_featureserv Spatial Tile Serving with PostgreSQL Functions Tile Serving with Dynamic Geometry This episode is sponsored by GraphHopper GraphHopper provides web services to add for route planning, navigation and route optimization to your application in the food delivery industry or for traveling salesmen. We made some socks for you! You can check them out here You are more than welcome to reach out to me on social media, I would love to hear from you! Map_View on Instagram MapScaping on Twitter MapScaping on Facebook
Sponsored By: #28 Cover art uses imagery by Humanitarian OpenStreetMap Team (https://www.hotosm.org/) © OpenStreetMap contributors (https://www.openstreetmap.org/copyright). Panelists Justin Dorfman | Richard Littauer Guest Maggie Cawley (https://twitter.com/maggiemaps?lang=en) OpenStreetMap US (https://www.openstreetmap.us/) Alyssa Wright (https://twitter.com/alyssapwright?lang=en) OpenStreetMap US (https://www.openstreetmap.us/) | Open Source Collective (https://opencollective.com/opensource) Show Notes In this episode Justin and Richard talk with Maggie Cawley, Executive Director of the American OpenStreetMap US and Alyssa Wright, current Board Member of OpenStreetMap US. If you’ve never heard of OpenStreetMap US you will learn all about what it is, how you can use it, how you can edit it (YES, it’s editable like Wikipedia), and how it differs from Google Maps. 1:40** What is OpenStreetMap and OpenStreetMap US? How are they different? The ladies also share how many people are all together in the OpenStreetMap world in their biosphere and how many contributing editors there are. 04:41 Justin wonders how the map data is licensed and was it a hard sell to the community or an easy one? 08:24 OSMF and OSM US are slighty different. Richard is interested to know what Maggie and Alyssa are doing for each one and how do they see them growing. 10:43 Richard wants to know the Code of Conduct and was questioning if it was just started or did it change. 12:40 Justin wants to know what percentage Craig, from Craigslist, give to their annual budget. Also, regarding sustainability, the discussion of what it costs to run OpenStreetMap as a service is answered. 13:34 Maggie talks about who is involved in their membership which is made of many different people. 17:00 Richard wonders if anything is difficult now and what does sustainability look like and if there are any problems. The ladies talk about all the challenges there have been. 23:09 Why would people use OSM vs. Google Maps? Empowerment is a big factor with OSM. 27:18 Justin wonders how Maggie and Alyssa deal with possible misinformation from bad edits, especially with the earthquakes or tornados or if there have been any issues. 32:04 The ladies answer some questions from Richard regarding what sustainability means for OSM and how do you make sure everyone is working on the same product, doing it with the same intention, but with the same sort of guidance. Quotes 05:43 “ODbL, it’s a unique license.” 15:15 “I always point to my phone, it’s like, you know, this is essentially a map that we’re carrying around with us constantly and whether we’re talking about data privacy in our location, movement, or crime statistics or this Coronavirus, this is grounded in space, and we’re grounded in space.” 21:08 “Billions of people in this world do not have an address. So, without an address that means you don’t have a legal right to your land.” 23:17 “It’s very well known that Google is going to map where they have commercial interest.” Spotlight 34:07 Justin’s spotlight this week is Dracula PRO-Episode 27. 34:40 Richard’s spotlight is Leaflet JS. 34:57 Maggie’s spotlight is QGIS project. 35:17 Alyssa’s spotlight is PostGIS community. Links Maggie Cawley Twitter (https://twitter.com/maggiemaps?lang=en) Alyssa Wright Twitter (https://twitter.com/alyssapwright?lang=en) State of the Map U.S. Conference (https://2020.stateofthemap.us/) Craigslist Charitable Fund Donation (https://www.openstreetmap.us/2019/10/craigslist-charitable-fund/) OpenStreetMap.US (https://www.openstreetmap.us/) OpenStreetMap.org (https://www.openstreetmap.org/#map=4/38.01/-95.84) OpenStreetMap on Slack (https://slack.openstreetmap.us/) QGIS (https://qgis.org/en/site/) PostGIS (https://postgis.net/) TeachOSM (https://teachosm.org/) Dracula Pro-Episode27 (https://sustain.codefund.fm/27) Leaflet JS (https://leafletjs.com/) Special Guests: Alyssa Wright and Maggie Cawley.
In Bucharest this year was the free and open source conference for GIS softwares. 314 videos from the conference have been posted online on their website at https://media.ccc.de/foss4g2019/ Lots of cool talks about public works, private use cases, qGIS, PostGIS, OpenLayers, and more were given. Check it out!
In this episode of Scaling Postgres, we discuss combining indexes, regular expressions, parallelism in Postgres and PostGIS content. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode: https://www.cybertec-postgresql.com/en/combined-indexes-vs-separate-indexes-in-postgresql/ https://www.2ndquadrant.com/en/blog/postgresql-regular-expressions-and-pattern-matching/ https://www.percona.com/blog/2019/07/30/parallelism-in-postgresql/ https://www.percona.com/blog/2019/08/02/out-of-memory-killer-or-savior/ https://www.percona.com/blog/2019/07/31/postgresql-simple-c-extension-development-for-a-novice-user/ https://www.percona.com/blog/2019/07/30/using-plpgsql_check-to-find-compilation-errors-and-profile-functions/ https://paquier.xyz/postgresql-2/postgres-12-replication-slot-copy/ https://www.depesz.com/2019/07/31/waiting-for-postgresql-13-add-support-for-jobs-in-reindexdb/ http://blog.cleverelephant.ca/2019/07/postgis-overlays.html http://blog.cleverelephant.ca/2019/07/simple-sql-gis.html https://info.crunchydata.com/blog/waiting-for-postgis-3-st_asmvt-performance
James and BIll talk about GDAL and OGR. James can't pronounce it correctly but he knows how to use it. Bill talks about how much it has been part of his workflows in the past 15 years. There is even more that didn't make the podcast via Howard Butler (https://twitter.com/howardbutler): GDAL Barn Raising Geodesy is a property of the data. When all our data came from paper maps, geodesy didn't matter as much except for the pocket protector crowd, but now that most of our positioning comes from space, geodesy buried in our data is an unavoidable topic. The open source stack of software based on GDAL and PROJ was deficient in the geodesy realm because PROJ was a cartographic transformation library (for making pretty maps!) rather than a geodetic library (for serious data management and engineering). Kristian Evers and Thomas Knudsen of Denmark SDFE (~ Denmark USGS) added the bits necessary to PROJ to allow it to act as a geodetic library as of the PROJ 5 release. The Barn Raising was needed to wire up those bits in PROJ 5 with the dictionaries that describe all of the transformations. This effort was three pieces – swapping out the GDALDATA and PROJLIB CSV files for a single, unified SQLite db, putting a OGC Well Known Text v2 engine in PROJ, wiring up GDAL, PROJ, libgeotiff, PostGIS, QGIS, and whoever else to use it all directly. The result of this effort is GDAL 3.0 and PROJ 6.1 Nyall Dawson has an excellent post describing how this effort improved QGIS https://lists.osgeo.org/pipermail/qgis-developer/2019-June/057525.html by making the software work better, more simple, and more unified with the rest of the other projects. gdalbarn.com has the history of the efforts of who pushed in money and Even's progress reports as he did the development. gdal.org refactor Even, myself, Mateusz, and Dan Baston at the OSGeo Community Sprint in Minneapolis refactored the gdal.org website to be based on Sphinx like geotools.org, mapserver.org, pdal.io, proj.org, and many other ReadTheDocs-based efforts. A primary goal was to allow you to hit the front page and be able grep for a driver name right away. The docs were buried, and this was an effort to bubble them up, and make it easier for people to contribute to them. For example, there's now a PDF https://gdal.org/gdal.pdf (900 pages!) and people can use Sphinx Intersphinx capability to link to GDAL documentation in their own projects. COG Driver Even just pushed an explicit Cloud Optimized GeoTIFF COG driver for GDAL that will be released in the next GDAL release that handles the writing side of COG. Your audience should know about COG.
In this episode of Scaling Postgres, we discuss videos from PGCon, Postgres tips & tricks, PostGIS parallel performance and using strings. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode: https://www.youtube.com/channel/UCer4R0y7DrLsOXo-bI71O6A/videos https://www.youtube.com/user/PerconaMySQL/videos https://momjian.us/main/writings/pgsql/tips.pdf https://www.youtube.com/watch?v=vFq9Yg8a3CE http://blog.cleverelephant.ca/2019/06/parallel-postgis-4b.html https://www.2ndquadrant.com/en/blog/beautiful-things-strings/ http://rhaas.blogspot.com/2019/06/the-contributors-team.html https://habr.com/en/company/postgrespro/blog/452900/ https://pgdash.io/blog/postgres-incremental-backup-recovery.html https://www.scalingpostgres.com/tutorials/postgresql-backup-point-in-time-recovery/ https://severalnines.com/blog/how-optimize-postgresql-logical-replication https://rjuju.github.io/postgresql/2019/06/05/powa-4-new-in-powa-archivist.html
A huge episode this month. We're not starting this podcast slowly, we dive right into spatial databases. Topics this episode include PostGIS, Spatial SQL, connecting to the database and how to get help. There is so much packed into this 60 minute episode, so please make sure you check out the links below.
A huge episode this month. We're not starting this podcast slowly, we dive right into spatial databases. Topics this episode include PostGIS, Spatial SQL, connecting to the database and how to get help. There is so much packed into this 60 minute episode, so please make sure you check out the links below.
So it starts when someone talks about what they wish for in podcasts and ends when you fire up Skype and start recording. James and Bill talk about why they started the podcast and what they use. A good intro into what Cageyjames & GeoBabbler will become.
Panel: Mark Ericksen Charles Max Wood Special Guest: Kamil Lelonek In this episode of Elixir Mix, the panel talks with Kamil Lelonek who is a full-stack developer and programmer. Chuck, Mark, and Kamil talk about Elixir, Postgrex, databases, and so much more! Check it out! Show Topics: 0:00 – Advertisement: Get A Coder Job! 0:48 – Chuck: Hello! Our panel today is Mark and myself. Friendly reminder to listeners: check out my show the DevRev. Our guest today is Kamil Lelonek! 1:23 – Guest. 1:43 – Chuck: Today, we are talking about databases. 1:55 – Guest. 3:10 – Mark: We have your blog that you wrote in our show notes. Talk about your experience with exploring these features? 3:39 – Chuck. 3:46 – Mark: I didn’t know those features are in Postgrex. Can you talk about your experience and your journey? 4:10 – Guest. 6:17 – Mark: I am curious, what problem were you trying to solve? 6:31 – Guest. 8:12 – Mark: I like you saying: rather than modifying the application code itself, you created a separate application. I think Elixir is a good fit for that – what’s your experience with this? 8:40 – Guest: I agree with this, but let’s think about it in the other way. 9:48 – Mark: Yeah I can see that. It’s good to be aware of the upsides and downsides. It’s an interesting idea. 10:40 – Guest. 11:38 – Chuck: My experience is mostly in Rails. The other way I have solved this problem is “pulling” but this way is more elegant. Before we have talked with Chris McCord about LiveVue. Is there a way to hook this handler up to LiveVue to stream the changes all the way up to the frontend of web application with Phoenix? 12:20 – Guest. 12:55 – Mark talks about Elixir and GenServer. 13:29 – Guest. 13:49 – Mark: Please go and read Kamil’s blog post because it’s simple and it’s written well! Mark: I think Elixir is a great usage for GenServers. 14:28 – Guest. 14:35 – Chuck: You setup a store procedure, which I don’t see a lot of people doing within the communities. How necessary is that store procedure that you’ve created there? 15:00 – Guest. 16:16 – Chuck: What if you want to do targeted notifications? 16:28 – Guest. 17:33 – Mark: I am curious if you have experimented with the practical limitations of this? Like at one point does it start to break down? 18:00 – Guest. 20:00 – Chuck: I will be honest I am kind of lazy. Outside of the general use I don’t go looking for these, but when I hear about them I say: wow! 20:09 – Guest. 20:57 – Chuck. 21:15 – Guest talks about solutions that he’s found. 22:08 – FreshBooks! 23:17 – Mark: What other kind of databases have you had experience with for comparison reasons? 23:40 – Guest. 24:56 – Mark: You talked about defaults and I want to come back to this topic. 25:08 – Mark asks Chuck a question. 25:12 – Chuck: I don’t know. 25:23 – Mark talks about the databases that his work utilizes. 26:45 – Mark and Chuck go back-and-forth. 27:49 – Guest mentions a solution to the before-mentioned problem that Mark gave. 28:47 – Mark: It can get messy. I don’t repose this as a permanent solution, but it allows you do a staged-migration. 29:15 – Chuck: Do you run into problems with Postgrex? Most technologies if you don’t run into problems you aren’t pushing it enough (at least that’s my experience). 29:29 – Guest answers the question. 30:26 – Mark talks about active, active, active. 31:14 – Guest. 33:25 – Mark: In Elixir, we talk about the things that are in the box and one thing that comes up is “mnesia.” Can you talk about this please? 33:47 – Guest talks about mnesia. 35:17 – Mark talks about mnesia some more. Mark: It is an available option (mnesia), but I don’t know if it’s something that people want when they are looking for something more traditional. 37:04 – Guest. 37:30 – Mark: Yeah something people should be aware of. If you are encountering problems it’s good to know the different tools that are out there and available. 38:42 – Mark: One question: What are some of your favorite features of Postgrex? 38:57 – Guest. 41:08 – Mark talks about Postgrex’s features. 42:14 – Guest. 43:10 – Mark: I had a case where Elixir and Erlang and you can convert term to binary and binary to term. I took some data structure and converted it to a binary and using Ecto and tell it: serialize this and when it loads back out it is a native Elixir type. It’s not always the right solution, but in my cases it actually worked. 43:59 – Guest talks about a library that he wrote back-in-the-day. 44:40 – Chuck: Anything else? Nope? Okay – Picks! 44:52 – Ad: Lootcrate.com END – CacheFly! Links: Ruby Elixir Elixir: GenServer GenServers Elm JavaScript Visual Studio Code React “How to use LISTEN and NOTIFY PostgreSQL commands in Elixir?" By Kamil Lelonek Guest’s Medium Blog Postgrex.Notifications Redis.io Event Store PostgreSQL MongoDB Erlang: mnesia GitHub: cachex GitHub: meh / amnesia PostGIS When to use Ecto, when to use Mnesia PostgreSQL Ecto.Type GitHub: Exnumerator YouTube: Entreprogrammers Kamil’s Twitter Sponsors: Loot Crate Get a Coder Job! Fresh Books CacheFly Picks: Mark Being professionally proactive! Chuck Get A Coder Job eBook Challenge: Pomodoro Technique Kamil Book: Thinking, Fast and Slow by Daniel Kahneman My Blog My Site
Panel: Mark Ericksen Charles Max Wood Special Guest: Kamil Lelonek In this episode of Elixir Mix, the panel talks with Kamil Lelonek who is a full-stack developer and programmer. Chuck, Mark, and Kamil talk about Elixir, Postgrex, databases, and so much more! Check it out! Show Topics: 0:00 – Advertisement: Get A Coder Job! 0:48 – Chuck: Hello! Our panel today is Mark and myself. Friendly reminder to listeners: check out my show the DevRev. Our guest today is Kamil Lelonek! 1:23 – Guest. 1:43 – Chuck: Today, we are talking about databases. 1:55 – Guest. 3:10 – Mark: We have your blog that you wrote in our show notes. Talk about your experience with exploring these features? 3:39 – Chuck. 3:46 – Mark: I didn’t know those features are in Postgrex. Can you talk about your experience and your journey? 4:10 – Guest. 6:17 – Mark: I am curious, what problem were you trying to solve? 6:31 – Guest. 8:12 – Mark: I like you saying: rather than modifying the application code itself, you created a separate application. I think Elixir is a good fit for that – what’s your experience with this? 8:40 – Guest: I agree with this, but let’s think about it in the other way. 9:48 – Mark: Yeah I can see that. It’s good to be aware of the upsides and downsides. It’s an interesting idea. 10:40 – Guest. 11:38 – Chuck: My experience is mostly in Rails. The other way I have solved this problem is “pulling” but this way is more elegant. Before we have talked with Chris McCord about LiveVue. Is there a way to hook this handler up to LiveVue to stream the changes all the way up to the frontend of web application with Phoenix? 12:20 – Guest. 12:55 – Mark talks about Elixir and GenServer. 13:29 – Guest. 13:49 – Mark: Please go and read Kamil’s blog post because it’s simple and it’s written well! Mark: I think Elixir is a great usage for GenServers. 14:28 – Guest. 14:35 – Chuck: You setup a store procedure, which I don’t see a lot of people doing within the communities. How necessary is that store procedure that you’ve created there? 15:00 – Guest. 16:16 – Chuck: What if you want to do targeted notifications? 16:28 – Guest. 17:33 – Mark: I am curious if you have experimented with the practical limitations of this? Like at one point does it start to break down? 18:00 – Guest. 20:00 – Chuck: I will be honest I am kind of lazy. Outside of the general use I don’t go looking for these, but when I hear about them I say: wow! 20:09 – Guest. 20:57 – Chuck. 21:15 – Guest talks about solutions that he’s found. 22:08 – FreshBooks! 23:17 – Mark: What other kind of databases have you had experience with for comparison reasons? 23:40 – Guest. 24:56 – Mark: You talked about defaults and I want to come back to this topic. 25:08 – Mark asks Chuck a question. 25:12 – Chuck: I don’t know. 25:23 – Mark talks about the databases that his work utilizes. 26:45 – Mark and Chuck go back-and-forth. 27:49 – Guest mentions a solution to the before-mentioned problem that Mark gave. 28:47 – Mark: It can get messy. I don’t repose this as a permanent solution, but it allows you do a staged-migration. 29:15 – Chuck: Do you run into problems with Postgrex? Most technologies if you don’t run into problems you aren’t pushing it enough (at least that’s my experience). 29:29 – Guest answers the question. 30:26 – Mark talks about active, active, active. 31:14 – Guest. 33:25 – Mark: In Elixir, we talk about the things that are in the box and one thing that comes up is “mnesia.” Can you talk about this please? 33:47 – Guest talks about mnesia. 35:17 – Mark talks about mnesia some more. Mark: It is an available option (mnesia), but I don’t know if it’s something that people want when they are looking for something more traditional. 37:04 – Guest. 37:30 – Mark: Yeah something people should be aware of. If you are encountering problems it’s good to know the different tools that are out there and available. 38:42 – Mark: One question: What are some of your favorite features of Postgrex? 38:57 – Guest. 41:08 – Mark talks about Postgrex’s features. 42:14 – Guest. 43:10 – Mark: I had a case where Elixir and Erlang and you can convert term to binary and binary to term. I took some data structure and converted it to a binary and using Ecto and tell it: serialize this and when it loads back out it is a native Elixir type. It’s not always the right solution, but in my cases it actually worked. 43:59 – Guest talks about a library that he wrote back-in-the-day. 44:40 – Chuck: Anything else? Nope? Okay – Picks! 44:52 – Ad: Lootcrate.com END – CacheFly! Links: Ruby Elixir Elixir: GenServer GenServers Elm JavaScript Visual Studio Code React “How to use LISTEN and NOTIFY PostgreSQL commands in Elixir?" By Kamil Lelonek Guest’s Medium Blog Postgrex.Notifications Redis.io Event Store PostgreSQL MongoDB Erlang: mnesia GitHub: cachex GitHub: meh / amnesia PostGIS When to use Ecto, when to use Mnesia PostgreSQL Ecto.Type GitHub: Exnumerator YouTube: Entreprogrammers Kamil’s Twitter Sponsors: Loot Crate Get a Coder Job! Fresh Books CacheFly Picks: Mark Being professionally proactive! Chuck Get A Coder Job eBook Challenge: Pomodoro Technique Kamil Book: Thinking, Fast and Slow by Daniel Kahneman My Blog My Site
Open source software is changing the world and growing a strong community from around the globe. Free software that is contributed to by the developer community underpins many of the spatial solutions that we build and Paul Ramsay, co-founder of PostGIS, explains why we all need to pay attention and start contributing to the project. In this episode: 0:48 Why have you devoted so much time to open source software in your life? 3:12 What have you learned along the way since your first version of PostGIS up until now 17 years later? 7:05 There is a battle between open source and commercial products. When do you use each of them? 8:52 We have some clients who have had success with open source 9:36 The keynote at the conference was a call to arms - why should people use and contribute to open source? 12:08 Why wouldn't an organisation commit time and money to the open source community? 13:29 For those who do want to contribute to open source software, where do you recommend they start? 17:15 In a corporate setting, how do you sell the vision and convince them to commit funds or man hours to open source? 20:08 What are the most interesting conversations you've had and interesting things you've seen at the FOSS4G conference? Links: PostGIS - https://postgis.net/ FOSS4G Conference Melbourne - https://foss4g-oceania.org/ Maori Maps cultural mapping - http://www.maorimaps.com
Matevž in Robi se v tokratnji oddaji podkasta pogovarjata z Anko Lisec, izr. prof. na Fakulteti za gradbeništvo in geodezijo, Univerza v Ljubljani. Anka dela in raziskuje na področju geoinformatike in katastra nepremičnin - pogovarjali smo se o povezavah med BIM in GIS ter različnih scenarijih uporabe geolokacijskih podatkov.
Panel: Charles Max Wood Eric Berry Special Guests: Daniel Azuma and Tee Parham In this episode of Ruby Rogues, the panel discusses geospatial programming with Daniel Azuma and Tee Parham. Daniel is a developer at Google and has been doing Ruby for about 14 years. Tee is co-founder and CTO of Neighborland, which is built on Ruby on Rails. Before that, he founded, managed and led technical projects for a small startup for about 8 years. They discuss what geospatial programming is, what RGeo Gem is, and other interesting aspects of geospatial programming. In particular, we dive pretty deep on: Daniel and Tee intros What is the landscape when it comes to geospatial programming? What is geospatial programming? Google Maps Get ahold of a lot of data Wide angle of data available Large amount of application available RGeo Gem What does RGeo do? CrimeReports.com Draw shapes on a map to outline the cities or space A lot goes into Geospatial programming What drove you to create this project? Why Ruby? Created out of necessity PostGIS When did this project begin to gain traction? Open sourced it so that other people could use it When did Tee get involved? Rails And much, much more! Links: Ruby Ruby on Rails Neighborland Google Maps RGeo Gem CrimeReports.com PostGIS @TeeParham Tee’s GitHub Daniel’s Blog Daniel’s GitHub Picks: Charles CodeSponsor Eric Polymail CodeSponsor Daniel Project Euler Bloom County Tee Observable MapSchool.io GeoJSON.io The Memory Illusion by Julia Shaw Retrospective Album
Panel: Charles Max Wood Eric Berry Special Guests: Daniel Azuma and Tee Parham In this episode of Ruby Rogues, the panel discusses geospatial programming with Daniel Azuma and Tee Parham. Daniel is a developer at Google and has been doing Ruby for about 14 years. Tee is co-founder and CTO of Neighborland, which is built on Ruby on Rails. Before that, he founded, managed and led technical projects for a small startup for about 8 years. They discuss what geospatial programming is, what RGeo Gem is, and other interesting aspects of geospatial programming. In particular, we dive pretty deep on: Daniel and Tee intros What is the landscape when it comes to geospatial programming? What is geospatial programming? Google Maps Get ahold of a lot of data Wide angle of data available Large amount of application available RGeo Gem What does RGeo do? CrimeReports.com Draw shapes on a map to outline the cities or space A lot goes into Geospatial programming What drove you to create this project? Why Ruby? Created out of necessity PostGIS When did this project begin to gain traction? Open sourced it so that other people could use it When did Tee get involved? Rails And much, much more! Links: Ruby Ruby on Rails Neighborland Google Maps RGeo Gem CrimeReports.com PostGIS @TeeParham Tee’s GitHub Daniel’s Blog Daniel’s GitHub Picks: Charles CodeSponsor Eric Polymail CodeSponsor Daniel Project Euler Bloom County Tee Observable MapSchool.io GeoJSON.io The Memory Illusion by Julia Shaw Retrospective Album
Panel: Charles Max Wood Eric Berry Special Guests: Daniel Azuma and Tee Parham In this episode of Ruby Rogues, the panel discusses geospatial programming with Daniel Azuma and Tee Parham. Daniel is a developer at Google and has been doing Ruby for about 14 years. Tee is co-founder and CTO of Neighborland, which is built on Ruby on Rails. Before that, he founded, managed and led technical projects for a small startup for about 8 years. They discuss what geospatial programming is, what RGeo Gem is, and other interesting aspects of geospatial programming. In particular, we dive pretty deep on: Daniel and Tee intros What is the landscape when it comes to geospatial programming? What is geospatial programming? Google Maps Get ahold of a lot of data Wide angle of data available Large amount of application available RGeo Gem What does RGeo do? CrimeReports.com Draw shapes on a map to outline the cities or space A lot goes into Geospatial programming What drove you to create this project? Why Ruby? Created out of necessity PostGIS When did this project begin to gain traction? Open sourced it so that other people could use it When did Tee get involved? Rails And much, much more! Links: Ruby Ruby on Rails Neighborland Google Maps RGeo Gem CrimeReports.com PostGIS @TeeParham Tee’s GitHub Daniel’s Blog Daniel’s GitHub Picks: Charles CodeSponsor Eric Polymail CodeSponsor Daniel Project Euler Bloom County Tee Observable MapSchool.io GeoJSON.io The Memory Illusion by Julia Shaw Retrospective Album
In episode 106 of Does Not Compute, Sean and Paul talk about the importance of HTTPS and VPNs, 1Password and RememBear, and using PostGIS to build location aware features for into your app.
Добрый день уважаемые слушатели. Представляем новый выпуск подкаста RWpod. В этом выпуске: Ruby Ruby 2.4.2 Released и Roda 3.0.0 has been released Create a Filesystem with Ruby and FUSE, GraphQL and Performance in Rails и Polymorphic Associations (video) JavaScript Announcing CoffeeScript 2, Planning for Babel 7.0, How JavaScript works: memory management + how to handle 4 common memory leaks и On React and WordPress Build An Interactive Game of Thrones Map (Part I) - Node.js, PostGIS, and Redis, Electron: The Bad Parts и Jquery-punchcard - jquery plugin for creating GitHub Punchcard like graphs
Derek and Sean discuss Microsoft's interest in open source, improving the Rails development story on Windows, and Sean's progress implementing an ORM in Rust. Developing on Windows PowerShell Cygwin Does Windows fork? Modern.ie ievms Internet Explorer compatibility mode PostGIS Using Arel to Compose SQL Queries Ransack Rust specialization Rust grapples with orphan instances Sean's in-progress Rust ORM
Neste episódio do DatabaseCast Mauro Pichilian (@pichiliani) e Wagner Crivelini (@wcrivelini) e o convidado Gabriel Moreira (@gspmoreira) se perdem no meio do caminho falando sobre GIS. Neste episódio você vai saber o que é um teodolito, quem ainda tem um guia físico de ruas guardado, quais são as tecnologias de back-end e front-em para se trabalhar com GIS, porque um DATUM pode dar dor de cabeça e simpatizar com quem foi assistir ao filme cinquenta tons de cinza. Confirma o livro "Conversando sobre banco de dados" em: http://clubedeautores.com.br/book/126042--Conversando_sobre_banco_de_dados Confira as camisetas com estampas fractais do DatabaseCast: http://www.zazzle.com.br/databasecast Não deixe de nos incentivar digitando o seu comentário no final deste artigo, mandando e-mail para databasecast@gmail.com, seguindo o nosso twitter @databasecast, vendo informações de bastidores no nosso Tumblr e curtindo a nossa página no Facebook e no Google+.
Neste episódio do DatabaseCast Mauro Pichilian (@pichiliani) e Wagner Crivelini (@wcrivelini) e o convidado Gabriel Moreira (@gspmoreira) se perdem no meio do caminho falando sobre GIS. Neste episódio você vai saber o que é um teodolito, quem ainda tem um guia físico de ruas guardado, quais são as tecnologias de back-end e front-em para se trabalhar com GIS, porque um DATUM pode dar dor de cabeça e simpatizar com quem foi assistir ao filme cinquenta tons de cinza. Confirma o livro "Conversando sobre banco de dados" em: http://clubedeautores.com.br/book/126042--Conversando_sobre_banco_de_dados Confira as camisetas com estampas fractais do DatabaseCast: http://www.zazzle.com.br/databasecast Não deixe de nos incentivar digitando o seu comentário no final deste artigo, mandando e-mail para databasecast@gmail.com, seguindo o nosso twitter @databasecast, vendo informações de bastidores no nosso Tumblr e curtindo a nossa página no Facebook e no Google+.
Speakers: Dr. Dennis Zielstra – GIS Administrator, Seminole Tribe of Florida and Dr. Francesco Tonini – Postdoctoral Research Scholar, Center for Geospatial Analytics
GeoKettle GeoKettle ist ein ETL-Programm für räumliche Daten. ETL steht für Extract, Load und Transform. GeoKettle basiert auf der OpenSource Software Pentaho Data Integration (Kettle) und ist mit der LPGL lizensiert. GeoKettle unterstützt dabei u.a. die OpenSource Bibliotheken GeoTools, Degree und gdal/oge und sextante. Ausgangslage Bei der Werraenergie, einem Energiedienstleister in Thüringen, stand aufgrund eines Systemwechsels im Bereich des CAD auch ein Umzug der Geodaten von einer Oracle-Datenbank in eine PostgreSQL Datenbank mit PostGIS an. Die Daten der Werraenergie lagen in einem nicht dokumentierten Datenmodell vor und konnten nur mit einem CAD als Shapedateien exportiert werden. Nach dem Export lagen 247 verschiedene Shapefiles vor. Das Versorgungsgebiet der Werraenergie ist allerdings so groß, dass der Export nur in 40 sich zum Teil überlagernden räumlichen Einheiten durchgeführt werden konnte, was in insgesamt 11.000 Shapefiles resultierte. Die Inhalte der Shapedateien ware zudem nicht eindeutig, sodass in einem Shapefile für Leitungen, sowohl die Leitungen mit Ihren Attributen als auch Hilfslinien für die Beschriftungen mit den gleichen Attributen beinhalten konnten. GeoKettle in action Beim Import der Daten in die PostgreSQL-Datenbank mussten folgende Arbeiten vorgenommen werden: Import der 11.000 Shapefile mussten in zwei verschiedene Datenbanken (Gas und Grundkarte) Verteilen von Objekten in einem Shapefile auf mehrere Tabellen Zusammenfassen von Objekte aus mehreren Shapefiles mit unterschiedlichen Attributfeldern in einer Tabelle Veränderung von Attributen Entfernen von redundanten Daten … Fazit Ohne den Einsatz von GeoKettle wäre die Datenmigration bei der Werraenergie kaum zu stemmen gewesen. Dabei hat sich GeoKettle als gut dokumentierte und leicht zu erlernende Software heraus gestellt, die eine echte Alternative zur FME darstellt.
Für ein Forschungsvorhaben wurden von einer Vielzahl von Waldbesitzern aus ganz Deutschland Geo- und Sachinformationen zu nutzungsfreien Wäldern bereitgestellt. Diese umfangreiche Datengrundlage enthält detaillierte Informationen zu ca. 400.000 Hektar Waldfläche in etwa 90 verschiedenen Datenlieferungen. Hierbei lagen die einzelnen Lieferungen in einer z.T. sehr heterogenen Form vor. Die Haltung, Bearbeitung und Bereitstellung aller Daten wurde in der Datenbank PostgreSQL mit der Erweiterung PostGIS durchgeführt. Wesentliche Arbeitsschritte bezüglich der Geoinformationen waren die Transformation in ein einheitliches Koordinatensystem, eine Überlagerungsanalyse der einzelnen Datenlieferungen, die Abbildung von überregionalen Daten auf die Waldflächen und die Identifizierung von zusammenhängenden oder benachbarten Flächenkomplexen. Hierbei wurden eine Reihe von Funktionen (PL/pgSQL) geschrieben, die die Grundfunktionalitäten von PostGIS zu spezialisierten Werkzeugen kombinieren. Beispielsweise wurde für die Verschneidung der Waldflächen mit deutschlandweit vorliegenden sehr großen Daten wie dem Digitalen Landbedeckungsmodell DLM-DE Funktionen entwickelt, die quadrantenweise eine sukzessive Abarbeitung der Gesamtfläche der Bundesrepublik ermöglichen. Auf diese Weise konnte eine übermäßige Auslastung des Arbeitsspeichers vermieden werden. Die Realisierung des Projektes wurde möglich durch die Nutzung von OpenSource Werkzeugen. Die verwendeten Werkzeuge stellen eine professionelle und höchst aktuelle Arbeitsumgebung dar, welche in dieser Form als kostenpflichtige Variante keines Falls hätte hergestellt werden können. Hintergrund: Die Bundesregierung hat das Ziel formuliert bis zum Stichjahr 2020 5 % der Waldfläche in Deutschland einer natürlichen Entwicklung zu überlassen. Da bisher keine verlässliche Bilanzierungsgrundlage existierte, wurde vom Bundesministerium für Umwelt, Naturschutz und Reaktorsicherheit ein Vorhaben beauftragt, welches die entsprechende Grundlage erarbeitet. Das Vorhaben ist weitgehend abgeschlossen.
Geologische Untersuchungskampagnen aus den Jahren 1950 bis 1980 auf dem Flächengebiet des Bundeslands Brandenburg haben einen gewaltigen Fundus an geologischen Fachinformationen hervorgebracht. In dem Projekt Brandenburg 3D (kurz: B3D) wurde auf Grundlage dieser Tiefbohrungen und geologischen Schnitte ein dreidimensionales Untergrundmodell bestehend aus Schichtgrenzen, Störungszonen und Salzstöcken modelliert. Um die Ergebnisse des Projekts an zentraler Stelle zu veröffentlichen und ohne fachspezifische Software zugänglich zu machen, wurde eine kombinierte WebGIS-Architektur entwickelt, die es erlaubt sowohl die vorhandenen zwei- als auch dreidimensionalen Daten über den Browser abzurufen. Zu den hier eingesetzten Komponenten zählen OpenLayers, GeoExt 2, Ext JS 4, X3DOM, PostGIS 2 und GeoServer. Alle Daten zu Bohrungen, Tiefenprofilen und geologischen Verbreitungsflächen wurden aufbereitet, in einer PostGreSQL/PostGIS Datenbank abgelegt und über den GeoServer publiziert. Für die 3D-Daten kam hierbei der W3DS (Web 3D Service) zum Einsatz, der die Daten als Szenengraph liefert und clientsetig eine volle Interaktion und Navigation ermöglicht. Die Basis für den 3D-Client ist das Framework X3DOM, welches im Rahmen des Projekts über eine eigene Bibliothek mit Ext JS verbunden wurde, sodass eine einfache Kommunikation zwischen diesen stattfinden kann. Das entwickelte WebGIS basiert rein auf JavaScript und es besteht damit keine Notwendigkeit von Browser-Plugins. Dieser Vortrag präsentiert die technischen Grundlagen der eingesetzten Architektur mit einem Fokus auf die 3D-Visualisierung und die hierauf aufbauenden Client-Funktionalitäten, z.B. das Generieren virtueller Profilschnitte an beliebiger Stelle des Untergrundmodells.
Since about 2008, the planning world has been experiencing a paradigm shift that began in places like California and Oregon that have adopted legislation requiring the linking of land use and transportation plans to outcomes, specifically to the reduction of greenhouse gases (GHGs). In response to this need, Calthorpe Associates has developed a new planning tool, called UrbanFootprint, on a fully Open Source platform (i.e. Ubuntu Linux, PostGIS, PostGreSQL, etc.). As a powerful and dynamic web and mobile-enabled geo-spatial scenario creation and modeling tool with full co-benefits analysis capacity, UrbanFootprint has great utility for urban planning and research at multiple scales, from general plans, to project assessments, to regional and state-wide scenario development and analysis. Scenario outcomes measurement modules include: a powerful ‘sketch’ transportation model that produces travel and emissions impacts; a public health analysis engine that measures land use impacts on respiratory disease, obesity, and related impacts and costs; climate-sensitive building energy and water modeling; fiscal impacts analysis; and greenhouse gas and other emissions modeling.
This podcast ventures into the exciting realm of digital humanities by taking a look at the aims and methods of the HESTIA Project! We will read Elton Barker (Principal Investigator), Stefan Bouzarovski (Co-Investigator), Chris Pelling (Co-Investigator) and Leif Isaksen (ICT Consultant)'s 2010 article, "Mapping an ancient historian in a digital age: the Herodotus Encoded Space-Text-Image Archive (HESTIA)."ABSTRACT: "HESTIA (the Herodotus Encoded Space-Text-Imaging Archive) employs the latest digital technology to develop an innovative methodology to the study of spatial data in Herodotus’ Histories. Using a digital text of Herodotus, freely available from the Perseus on-line library, to capture all the place-names mentioned in the narrative, we construct a database to house that information and represent it in a series of mapping applications, such as GIS, GoogleEarth and GoogleMap Timeline. As a collaboration of academics from the disciplines of Classics, Geography, and Archaeological Computing, HESTIA has the twin aim of investigating the ways geography is represented in the Histories and of bringing Herodotus’ world into people’s homes."The Article Itself. Link to the Leeds International Classical Studies JournalPodcast Link. Links to Some Other Projects Mentioned in the Article:1. Perseus Project2. PostgreSQL3. PostGIS4. TimeMap5. Nick Rabinowitz's Blog
This time around we’re joined by Selena Deckelmann, who as a Postgres contributor talks to us about some of the burning questions many PHP developers often have about Postgres when considering it as a replacement for the PHP’s defacto-standard relational DB: MySQL. We ask “Why are so few of the “cool kids” using Postgresql?” and “What are the most important differences between MySQL & Postgres?”, and after a bunch of useful discussion on things like true full text search, complex indexes, PostGIS and Foreign Data Wrappers we end up discussing cider and chickens. Standard. Planet Postgres Taps - Simple database import/export app Multicorn - Unified data access library SQL Alchemy pgFouine - a PostgreSQL log analyzer JSON datatype - Store JSON directly in a Postgres field. Redis4You - Postgres 9.1 foreign data wrapper interface One man vs. his pet hate in life, Oreo cream - The proper way of solving life’s problems Discuss this on Reddit.
A VerySpatial Podcast | Discussions on Geography and Geospatial Technologies
Main topic: EarthDay 2012 around the corner. News: 1940 Census, PostGIS, and Weather.