Hacking the Postgres wire protocol (pgdog.dev)
212 points by levkk 21 hours ago | 48 comments

The best part I think is "pg_query is special. It doesn’t actually implement parsing SQL. It works by extracting C source code directly from Postgres and wraps it with a nice Rust interface. This allows PgDog to understand all queries that Postgres can."
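For illustration, here's roughly what that looks like from the Rust side (a minimal sketch assuming the pg_query 6.x crate; tables() is one of its convenience accessors over the AST):

    fn main() -> Result<(), pg_query::Error> {
        // parse() runs the actual Postgres parser extracted by libpg_query.
        let result = pg_query::parse("SELECT id, email FROM users WHERE id = $1")?;
        // Tables referenced by the query, pulled out of the AST.
        assert_eq!(result.tables(), vec!["users"]);
        Ok(())
    }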

One of the things that puts metaprogramming on wheels is being able to pull in grammars, schemas and such for external systems. Think how many cool tools we could have if we could do this in a more general way.


> One of the things that puts metaprogramming on wheels is being able to pull in grammars, schemas and such for external systems.

That's what I've been trying to do with: https://github.com/schemamap/schemamap

For a well-constrained Postgres schema, a deterministic SQL compiler can be built (with plenty of logic programming/constraint-solving/hypergraph-analysis) that can integrate data from arbitrary external systems.

While this is NP-hard in general, with some clever use of embeddings and ontologies, and by using every single constraint AST within the DB to reduce the search space, it becomes a feasible problem.

For any Clojurists interested, I've packaged `pg_query`, so you can use it in your applications: https://github.com/schemamap/pg-query-clj

At the moment I'm saving up for the next development cycle, so that more than just PG->PG schema mappings can be solved for (JSON Schema is next!). Hope this sounds interesting :)


JSON Schema layer support sounds interesting. Truth be told, I didn't immediately figure out how your project works.

I had such high hopes for tree-sitter but once it went all "and then, $CC -c -o" all was lost :-(

What do you mean?

Unless the system that wishes to consume tree-sitter grammars has access to a linker, it being written in C, and then compiled to machine code, gravely limits the places it can be consumed. That's in contrast to any one of the hundreds of grammar compiler compiler systems that allow targeting a platform of choice without mandating a platform of choice. I do see that tree-sitter alleges to emit wasm, but I am not deep enough in that ecosystem to know if it means "wasm for the browser," or "wasm for extism[1]," or "wasm, but if you already have WASI" or what

1: e.g. https://github.com/1Password/onepassword-sdk-go/blob/v0.2.1/...


Supabase's postgres LSP works in a similar way iirc.

Yes, the same way. It's all based on the extremely useful `https://github.com/pganalyze/libpg_query` project, which is where the "extracted the parser from Postgres" part comes in.

Supabase's LSP also uses tree-sitter for corrections and autocomplete, because one drawback of using the server's source is that pg_query only works on well-formed/executable SQL - when it detects a malformed query, it formulates an error and exits, since that's what you want in an SQL server. So for partially-correct syntax and fill-forward, tree-sitter covers the gaps.
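You can see the gap directly (a sketch against the Rust pg_query crate; the underlying C library behaves the same way):

    fn main() {
        // pg_query only accepts what the real server would accept, so
        // partial input you'd see mid-keystroke in an editor is an Err:
        assert!(pg_query::parse("SELECT * FROM").is_err());
        assert!(pg_query::parse("SELECT * FROM users").is_ok());
    }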


I agree. Does anyone know how much heavy lifting is done by pg_query in wrapping the Postgres code, vs. by Postgres in expressing that code in a manner that makes pg_query possible?

Tends to be a matter of opinion. Postgres does not expose the relevant functions, so https://github.com/pganalyze/libpg_query has to do some heavy lifting to convert their source code into a nice library. Conversely, Postgres is very well written code, in an extremely common language, with a stable release cadence, and such a long track record that it is seen as the reference implementation for correctly parsing SQL.

Yeah, as one of the main authors of libpg_query, I think the primary things that make this easier are that Postgres has good abstractions internally, and that the parser works independently from other parts (e.g. the community discourages adding settings that affect parser behavior).

Over the years we've only had to maintain a small set of patches on top of the Postgres source [0], together with some mocks and our libclang-based extraction logic [1]. Of course it would be nice if Postgres just packaged this directly like it packages the client libraries, but there is non-trivial effort involved to do that. From what I recall, the main issue is that error handling and memory allocations work differently in the client-side libraries (and so that would have to either also be moved out of the backend source, or use some other abstraction).

[0]: https://github.com/pganalyze/libpg_query/tree/17-latest/patc...

[1]: https://github.com/pganalyze/libpg_query/blob/17-latest/scri...


Unrelated to your comment but a big fan of pganalyze. Makes Postgres infinitely more grokable for developers who aren’t experts at running and tuning a database. Keep up the good work!

Thanks, glad to hear! I like to think that one of the reasons pganalyze is a good product (though there are always parts I'd like to improve, and feedback is always welcome) is because we like to use it ourselves to optimize our own database, where we can :)

Having written several pg txn poolers, I like the implementation. I also love your simple branding and I think this could make a fantastic product for enterprise customers. However, my biggest concern as a buyer is longevity. PgCat (another popular pooler in Rust) will likely lose funding shortly. Meanwhile, PgBouncer is battle-tested, small in scope, and is starting to get consistent new contributions.

Thanks!

Re: pgcat and longevity, it's actually the opposite. PgCat is/was an open source project that I worked on in my spare time. Thankfully, I found a couple of engineers at large corps (plus the awesome community contributions) to push it over the line.

PgDog is a startup, so this is now my full time job.


Glad to see you found some footing! I just saw the email about PgML today.

You should mention the timeline of projects leading up to PgDog in your marketing. I was looking for references to pgcat (very briefly) but didn’t see any. Your background will bring a lot of peace of mind to technology adopters in big enterprise.


Will do. You're not the first one to recommend this, it's about time I listen.

Wrapping up the actual Postgres code is a brilliant idea. Then you never have to worry about updating your code to keep up with their changing API.

And the fact that this works at all is another example of why Postgres is the default data store on the internet. It's battle tested and has a massive ecosystem of support tools, like this one.


> we need to ... understand SQL syntax using a parser, Rust ecosystem has a great library called pg_query

I'm impressed both that the Rust ecosystem had the right library available and that it was high enough quality code for the author's team to use it as a reference for building their own additional functionality.


pg_query was created by the pganalyze team for their own purposes, I believe initially for features like index recommendation tooling, but it was planned as open source from the start. It is indeed a very high quality project; the underlying C implementation has wrappers for a number of languages[1].

[1] https://github.com/pganalyze/libpg_query/blob/15-latest/READ...


Oddly, it actually originates from the Ruby ecosystem: `pganalyze` initially created `libpg_query` to parse Postgres queries in Ruby. `libpg_query` in C does the "magic" of reformulating Postgres's query parser into an AST generator and serializes the resulting AST into a Protobuf. Then `pg_query` bridges that into Rust.

We're using it to rewrite queries too. It's a pretty cool library.
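Rewriting works by mutating that protobuf AST and deparsing it back to SQL. A rough sketch of the idea (assuming the pg_query 6.x crate; the shard-suffixed table name is hypothetical and this is not PgDog's actual code):

    use pg_query::NodeEnum;

    fn rewrite_for_shard() -> Result<String, pg_query::Error> {
        let mut result = pg_query::parse("SELECT * FROM users WHERE id = 1")?;
        if let Some(raw) = result.protobuf.stmts.first_mut() {
            if let Some(NodeEnum::SelectStmt(select)) =
                raw.stmt.as_mut().and_then(|n| n.node.as_mut())
            {
                for from in select.from_clause.iter_mut() {
                    if let Some(NodeEnum::RangeVar(rv)) = from.node.as_mut() {
                        // Hypothetical: point the query at one shard's table.
                        rv.relname = "users_0".to_string();
                    }
                }
            }
        }
        // Serialize the (modified) AST back into SQL text.
        result.deparse()
    }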

I am long on this project and excited about it.

I am wondering why it took so long for something like this to happen for Postgres, whereas Vitess has existed for MySQL for more than a decade. I am not an expert, but I hear Citus is not really a fair comparison with Vitess.


> I am wondering, why it took so long for something like this to happen for Postgres

We were thinking about adding PostgreSQL support to Vitess several years ago. However, we realized that it would be more efficient if each tool was focused on a specific task.

https://github.com/vitessio/vitess/issues/7084#issuecomment-...


pgdog represents maybe just the VTGate portion of Vitess. There's a whole lot more to Vitess, or to any replication-on-top-of-a-database type of service, than just the query router.

> pgdog represents maybe just the VTGate portion of Vitess

That’s today. The project is developing fast, so I am sure more things will be added :)


Exactly :)

pgdog looks interesting, but I read the docs wondering how it handles foreign keys between tables, and the docs don't seem to cover it. It was the first question I had, and I'd assume would be an important question for most users. The project states "Data relationships are not sacrificed to go faster" and that it operates "transparently to the application", but it's not clear how it does that in regard to foreign keys.

Additionally, maybe this is just a pet peeve with Postgres documentation in general, but the docs use "database" to mean both a single database and a Postgres server interchangeably. On some pages this makes the content harder to understand. I think it's good practice to distinguish between databases and "clusters" (no, not those clusters, actually servers), and perhaps between clusters and groups of servers. While some of the naming is unfortunate for historical reasons, re-using the same names for different concepts just causes more confusion.


There is a bit of documentation about configuring foreign keys here: https://docs.pgdog.dev/configuration/pgdog.toml/sharded_tabl...

Medium term, I can see it detecting foreign keys automatically (if the constraints exist).


I saw some docs around this but it doesn’t seem to answer the main problem.

If you have users and posts, with posts having a userid field referencing users.id, and you shard the users table on id, how does it behave?

Now if I request posts for a user id, is it smart enough to map the posts field to the users field? Where are posts stored? What if one user has all of the posts? Then my data will be unbalanced across shards. Can I shard on post.id? If so, what are the implications for queries joined to users?

The naive answer is to break foreign key integrity in the database. But it specifically says that pgdog doesn’t require sacrificing data safety.


Am I right to think this could be used to "inject" limits on the number of rows returned by a user query, or otherwise restrict what users are allowed to do?

I know it sounds silly/crazy but I have a use case where I would like to allow "mostly" trusted users to access the database directly and not through an API, but I need to apply some restrictions on their output.


It can, but it's not the primary goal at the moment. If you want to restrict the number of rows returned, you can rewrite the query to add a LIMIT clause. To control which rows your users can see, you can use row-level security.

One thing I was thinking of doing is generating query plans asynchronously and blocking/cancelling queries that would otherwise be expensive and cause downtime. That's on the roadmap.


I feel I am missing something with the approach of these routers. If I am doing cursor pagination over a couple million rows with some ordering, this will pull a huge amount of data from each server and then perform the sort in memory at the router, if I understand correctly.

The Neon approach of decoupling storage from processing, but keeping the processing of a query local to one server, seems better to me, but maybe I am missing something.
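For what it's worth, a cross-shard ORDER BY ... LIMIT doesn't have to materialize everything at the router: if each shard returns its rows already sorted, the router can stream a k-way merge and stop at the limit. A generic sketch of that technique (not PgDog's actual implementation):

    use std::cmp::Reverse;
    use std::collections::BinaryHeap;

    // Merge per-shard, already-sorted result sets, keeping `limit` rows.
    fn merge_sorted_shards(shards: &[Vec<i64>], limit: usize) -> Vec<i64> {
        // Heap entries: (value, shard index, row index within that shard).
        let mut heap = BinaryHeap::new();
        for (s, rows) in shards.iter().enumerate() {
            if let Some(&v) = rows.first() {
                heap.push(Reverse((v, s, 0usize)));
            }
        }
        let mut out = Vec::with_capacity(limit);
        while let Some(Reverse((v, s, i))) = heap.pop() {
            out.push(v);
            if out.len() == limit {
                break;
            }
            if let Some(&next) = shards[s].get(i + 1) {
                heap.push(Reverse((next, s, i + 1)));
            }
        }
        out
    }

Deep OFFSET pagination is still painful, since every shard must produce offset + limit rows; keyset/cursor pagination (WHERE key > last_seen) keeps the per-shard work bounded.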


Paginating over millions of rows isn't really done in OLTP use cases that PgDog is targeting, as far as I know. Would be great to learn about yours though. Feel free to reach out!

> More complex examples, like IN (1, 2, 3) or id != 25 can be handled as well. For the former, we can hash all values and route the query to the matching shards. For the latter, we can do the opposite.

This makes it sound like the query would be routed to all shards except the one matching HASH(25). But wouldn't id != 25 need to go to all shards, since there could be values in the id set that hash to the same shard as 25?
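In sketch form (illustrative hashing only; a real router would use Postgres-compatible hash functions so results line up with declarative partitioning):

    use std::collections::hash_map::DefaultHasher;
    use std::collections::HashSet;
    use std::hash::{Hash, Hasher};

    fn shard_for(value: i64, num_shards: u64) -> u64 {
        let mut h = DefaultHasher::new();
        value.hash(&mut h);
        h.finish() % num_shards
    }

    // id IN (1, 2, 3): route to the union of the matching shards.
    fn shards_for_in(values: &[i64], num_shards: u64) -> HashSet<u64> {
        values.iter().map(|v| shard_for(*v, num_shards)).collect()
    }

    // id != 25: qualifying rows can live on any shard, including the
    // one 25 hashes to, so the query must fan out to all shards.
    fn shards_for_not_eq(num_shards: u64) -> Vec<u64> {
        (0..num_shards).collect()
    }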


You're right. Good catch, I'll fix this in the article.

> with 4 bytes added for itself

Why would one want to do that? The only reason I can think of is so that values < 4 can be reserved for future uses, like new versions of the client-server protocol that agree on a 64-bit payload if the first 4 bytes are zero, or something like that. But there are better ways.


I'm guessing so it's never zero and can't be confused with something else, like a bunch of NULs. There are a few messages that don't have a payload, e.g. ParseComplete.
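A sketch of reading one post-startup message in this format (the startup packet itself has no tag byte; this is the standard wire layout, not PgDog's actual code):

    use std::io::{self, Read};

    // One Postgres protocol message: a 1-byte type tag, a big-endian
    // i32 length that counts itself (4 bytes) but not the tag, then
    // `len - 4` bytes of payload.
    fn read_message<R: Read>(stream: &mut R) -> io::Result<(u8, Vec<u8>)> {
        let mut tag = [0u8; 1];
        stream.read_exact(&mut tag)?;
        let mut len = [0u8; 4];
        stream.read_exact(&mut len)?;
        let len = i32::from_be_bytes(len) as usize;
        // Because the length includes its own 4 bytes, it is never 0;
        // a payload-less message like ParseComplete has len == 4.
        let mut payload = vec![0u8; len.saturating_sub(4)];
        stream.read_exact(&mut payload)?;
        Ok((tag[0], payload))
    }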

Makes sense, thanks.

Is there something like pg_query for Oracle? I want to use it to make ASTs out of Oracle SQL queries.

Is there even a grammar for Oracle SQL? I took a swing at trying to slurp one out of the "alt text" of the railroad diagrams in the many, many HTML files in the docs (e.g. https://docs.oracle.com/en/database/oracle/oracle-database/2... -> https://docs.oracle.com/en/database/oracle/oracle-database/2... ), but then I thankfully moved off any project that used Oracle and let it go.

I'm aware of https://github.com/orafce/orafce and https://github.com/pgoracle before that (the commit history also references ora2pg and searching for that surfaces https://github.com/darold/ora2pg which was just updated yesterday!) but I've never tried to use any of them in anger


There are firewalls that claim to do deep packet inspection of Oracle queries for security. They are not without faults.

Perhaps you could directly use or decompile the OCI libs. Though that probably is against some license.


Has someone done this for MySQL?


It would be nice if they went into more detail about how they decide whether a query is a read or a write. The casual mention makes it sound like they originally took a too-naive approach (looking at the first word), but even ignoring CTEs, you can still have an UPDATE within a SELECT subquery, which makes the problem very hard without parsing the whole query. Perhaps the C code extracted from Postgres handles this for them and returns some metadata that indicates whether there are any writes buried in the query?

Forgot to add that to the article, but yes, we parse the statement and route SelectStmt[1] to replicas. If there is an UPDATE in the SELECT, we don't handle that at the moment, but that can easily be added.

[1] https://docs.rs/pg_query/6.0.0/pg_query/protobuf/struct.Sele...
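For the curious, the check could look something like this (a hedged sketch against the pg_query crate, not PgDog's actual routing code):

    use pg_query::NodeEnum;

    // Route to a replica only if every top-level statement is a
    // SelectStmt. A write hidden in a CTE or subquery would need a
    // full tree walk on top of this.
    fn is_read(query: &str) -> bool {
        match pg_query::parse(query) {
            Ok(result) => result.protobuf.stmts.iter().all(|raw| {
                matches!(
                    raw.stmt.as_ref().and_then(|n| n.node.as_ref()),
                    Some(NodeEnum::SelectStmt(_))
                )
            }),
            // If it doesn't parse, send it to the primary to be safe.
            Err(_) => false,
        }
    }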


Do you have to traverse the syntax tree to determine that?

Yup.


