
pg-tube: a library to pipe a replication stream (with initial full copy) to external destinations
Tubes can be created (tube_ensure_exists(tube, partitions)) and removed (tube_ensure_absent(tube)) at run-time. When removing a tube, all of its tables are automatically detached.

The list of existing tubes is returned by the tube_list() function.

Tables can be attached to a tube (tube_table_ensure_attached(tube, table, shard)). When attaching, you must also provide a "shard number" value, which will then be recorded in the tube along with the ids. (If the shard number is not provided, pg-tube tries to infer it from the table name's numeric suffix.) Once attached, every insert/update/delete operation on the table results in a "touch" pod being added to the corresponding tube.

Tables can be detached from a tube (tube_table_ensure_detached(tube, table)).

A backfill is run with tube_backfill_step1(), tube_backfill_step2() and then tube_backfill_step3(step1_result, tube, table, order_col, shard). This bulk-inserts all the ids from the table into the tube as op='B' records. Ordering by the provided column in step3 is important: for encrypted databases (e.g. ECIES), the ephemeral shared key for neighboring rows will likely be cached with a short expiration time, so the closer in time the updates we process are, the higher the chance of a Diffie-Hellman cache hit.

The tube_stats() function shows all the details about the current tubes' structure and content.

By their nature, tubes are eventually consistent, mainly because of the enrichment process, which takes time and delivers results with unpredictable latency. This means there is no way to rely on an external source of ordering for the events (such as a row version); the only way to solve this is to never replay the same id concurrently. We also strictly rely on the downstream of the replay being eventually consistent itself: e.g. it must never reorder writes A and B for the same id once an ack for write A has been received.
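The eventual-consistency argument above can be modeled in a few lines. This is a sketch, not the library's API: the in-memory dicts and the drain() helper are illustrative assumptions standing in for Postgres tables and a real worker.

```python
# Minimal model of pg-tube's consistency argument. Pods carry only an id
# ("touch"), never the row data or a version: the worker re-reads the
# source of truth at processing time (enrichment).

def drain(tube, source, downstream):
    """Serially replay touch pods; enrichment makes pod order irrelevant."""
    for pod_id in tube:                   # strictly serial: no concurrency per id
        row = source.get(pod_id)          # enrichment: load the *current* state
        if row is None:
            downstream.pop(pod_id, None)  # the row was deleted upstream
        else:
            downstream[pod_id] = row      # insert/update: latest state wins
    tube.clear()                          # pods are removed after processing

source = {1: "a2", 2: "b"}                # current state of the table
downstream = {1: "a1", 3: "stale"}        # lagging external destination

# Duplicated and "out of order" touches still converge, because each replay
# observes the latest source state rather than a payload captured earlier:
drain([3, 1, 1, 2], source, downstream)
assert downstream == source
```

Note that a touch for id 3 results in a delete downstream, while touches for ids 1 and 2 result in upserts; the pod itself does not need to say which, matching the remark that a tube record cannot tell a deletion from an insert/update.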
The application code should constantly monitor all existing tubes. One tube may correspond to one logical unit of work (e.g. a live ES index, a spare ES index and a cache-invalidation tube would make 3 tubes).

For each tube, the application spawns a beforehand-known number of separate replication stream workers, each processing only the ids from its own set of shards (e.g. a "shard % 3" formula can be used to allocate touch events to workers 0, 1 and 2). Each worker reads ids from the tube, processes them strictly serially, ordered from lowest seq to highest (so the same id is guaranteed to never be processed concurrently), and then removes them from the tube. To reach eventual consistency, the workers must do enrichment, i.e. load the actual data from the database after receiving a touch event for some id.

It is critical that each shard is processed by exactly one worker (no parallelism within a shard) and that the same id is never processed concurrently. These assumptions allow us to NOT keep record versions in the tube and simply rely on natural eventual-consistency ordering (we cannot even tell whether a record in the tube corresponds to a deletion or to an insert/update).
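The worker-allocation scheme can be sketched as follows. The worker count of 3 and the pod dicts are illustrative assumptions; in the real library pods live in Postgres and workers are long-running processes.

```python
# Sketch of routing touch pods to a fixed set of workers via shard % N.

N_WORKERS = 3

def allocate(pods, n_workers=N_WORKERS):
    """Route each pod to exactly one worker using the shard % n_workers formula."""
    buckets = {w: [] for w in range(n_workers)}
    for pod in pods:
        buckets[pod["shard"] % n_workers].append(pod)
    # Each worker processes its pods strictly serially, lowest seq first.
    # Since a given id always carries the same shard number, it always maps
    # to the same worker and is never processed concurrently.
    for bucket in buckets.values():
        bucket.sort(key=lambda p: p["seq"])
    return buckets

pods = [
    {"seq": 5, "shard": 4, "id": 10},
    {"seq": 1, "shard": 1, "id": 10},   # shards 1 and 4 both map to worker 1
    {"seq": 3, "shard": 0, "id": 7},
]
buckets = allocate(pods)
assert [p["seq"] for p in buckets[1]] == [1, 5]   # serial, lowest seq first
assert [p["seq"] for p in buckets[0]] == [3]
```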
Backfill events (op='B') are processed in exactly the same way as insert/update/delete/touch events, but with lower priority (the application uses an ORDER BY op='B', seq clause in its SELECT, which matches the index exactly). Also, once a control pod with a type=backfill_end, start_seq=S payload is received, it signals the worker that pods with seq < S are no longer needed on the downstream and should be removed (garbage collected).
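Both behaviors can be sketched in a few lines. This is a model, not the library's code: the field names and helper functions are assumptions, and the sort key mirrors the ORDER BY op='B', seq clause (in SQL, false sorts before true, so live pods come first).

```python
# Sketch of backfill prioritization and garbage collection.

def next_batch(pods):
    """Live insert/update/delete/touch pods first, then backfill pods, by seq."""
    return sorted(pods, key=lambda p: (p["op"] == "B", p["seq"]))

def gc_on_backfill_end(pods, start_seq):
    """On a backfill_end control pod, drop pods superseded by the backfill."""
    return [p for p in pods if p["seq"] >= start_seq]

pods = [
    {"seq": 1, "op": "B"},
    {"seq": 2, "op": "U"},
    {"seq": 3, "op": "B"},
    {"seq": 4, "op": "D"},
]
assert [p["seq"] for p in next_batch(pods)] == [2, 4, 1, 3]
assert [p["seq"] for p in gc_on_backfill_end(pods, start_seq=3)] == [3, 4]
```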
Use yarn test and yarn test:db to run the automated tests.