
github.com/teambenny/goetl


Package goetl is a library for performing data pipeline / ETL tasks in Go.

The main construct in goetl is Pipeline. A Pipeline has a series of PipelineStages, each of which performs some type of data processing and then sends new data on to the next stage. Each PipelineStage consists of one or more Processors, which are responsible for receiving, processing, and then sending data on to the next stage of processing. Each Processor runs in its own goroutine, so all data processing can execute concurrently.

Consider a fairly simple conceptual Pipeline consisting of 3 PipelineStages: the first stage has a Processor that runs queries on a SQL database, the second does custom transformation work on that data, and the third branches into 2 Processors, one writing the resulting data to a CSV file and the other inserting it into another SQL database.

In this example, Stage 1 and Stage 3 use built-in Processors (see the "processors" package/subdirectory), while Stage 2 uses a custom implementation of Processor. By combining built-in processors with support for writing arbitrary Go code to process data, goetl makes it possible to build highly customized and fast data pipeline systems. See the Processor documentation to learn more.

Since each Processor runs in its own goroutine, SQLReader can continue pulling and sending data while each subsequent stage is also processing data. Optimally designed pipelines have processors that each run in isolation, processing data without having to worry about what comes next down the pipeline.

All data payloads sent between Processors implement the etldata.Payload interface. Built-in processors send data using the type etldata.JSON, which provides a good balance of consistency and flexibility. See the "etldata" package for details and helper functions for dealing with etldata.Payload and etldata.JSON. Another good read on handling JSON data in Go is http://blog.golang.org/json-and-go.

Note that many of the concepts in goetl were taken from the Go blog's post on pipelines (http://blog.golang.org/pipelines). While the details discussed in that post are largely abstracted away by goetl, it is still an interesting read and helps explain the general concepts being applied.

There are two ways to construct and run a Pipeline. The first is a basic, non-branching Pipeline: for example, a 3-stage Pipeline that queries some SQL data in stage 1, does some custom data transformation in stage 2, and then writes the resulting data to a SQL table in stage 3. The second way is to use a PipelineLayout. This method allows for more complex configurations that support branching between stages running multiple Processors. In a (fairly complex) 4-stage layout, each Processor chooses which Processors in the subsequent stage should receive the data it sends; a SQLReader in stage 2, for example, might send data to only 2 processors in the next stage, while a custom Processor in the same stage sends its data to 3. Sketches of the code for constructing and running both kinds of Pipeline follow below; they are only conceptual, the main points being to explain the flexibility you have when designing your Pipeline's layout and to demonstrate the syntax for constructing a new PipelineLayout.
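As an illustration of the basic construction, here is a minimal sketch in Go. It assumes goetl keeps the API shape of the ratchet v3 code it was forked from (goetl.NewPipeline, a Run method returning an error channel, and processors.NewSQLReader / processors.NewSQLWriter constructors); treat these names as assumptions and check the GoDoc for exact signatures.

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/go-sql-driver/mysql" // driver assumed for the example DSNs

        "github.com/teambenny/goetl"
        "github.com/teambenny/goetl/etldata"
        "github.com/teambenny/goetl/processors"
    )

    // passthrough is a stand-in custom Processor for stage 2; it simply
    // forwards payloads unchanged. A fuller custom-processor sketch
    // appears later in this README.
    type passthrough struct{}

    func (passthrough) ProcessData(d etldata.Payload, outputChan chan etldata.Payload, killChan chan error) {
        outputChan <- d
    }

    func (passthrough) Finish(outputChan chan etldata.Payload, killChan chan error) {}

    func main() {
        appDB, err := sql.Open("mysql", "user:pass@/appdb")
        if err != nil {
            log.Fatal(err)
        }
        reportDB, err := sql.Open("mysql", "user:pass@/reportdb")
        if err != nil {
            log.Fatal(err)
        }

        // Stage 1 queries SQL data, stage 2 runs the custom transform,
        // and stage 3 writes the results to a table in another database.
        read := processors.NewSQLReader(appDB, "SELECT * FROM users")
        write := processors.NewSQLWriter(reportDB, "report_users")
        pipeline := goetl.NewPipeline(read, passthrough{}, write)

        // Run returns a channel delivering the first fatal error,
        // or nil once the pipeline finishes.
        if err := <-pipeline.Run(); err != nil {
            log.Fatal(err)
        }
    }

For the branching construction, here is a sketch of the PipelineLayout syntax, again assuming the ratchet-style goetl.NewPipelineLayout / NewPipelineStage / Do(...).Outputs(...) / NewBranchingPipeline helpers carry over:

    // Conceptual only: sqlReader, transformA, transformB, csvWriter,
    // and sqlWriter stand for Processors constructed as above. Each
    // Do(...) wraps a Processor, and Outputs(...) names the Processors
    // in the next stage that should receive its data.
    layout, err := goetl.NewPipelineLayout(
        goetl.NewPipelineStage(
            goetl.Do(sqlReader).Outputs(transformA, transformB),
        ),
        goetl.NewPipelineStage(
            goetl.Do(transformA).Outputs(csvWriter),
            goetl.Do(transformB).Outputs(csvWriter, sqlWriter),
        ),
        goetl.NewPipelineStage(
            goetl.Do(csvWriter),
            goetl.Do(sqlWriter),
        ),
    )
    if err != nil {
        log.Fatal(err)
    }
    pipeline := goetl.NewBranchingPipeline(layout)
    err = <-pipeline.Run()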



goetl

This package was forked from dailyburn/ratchet. When I left my job at Daily Burn, I was removed as a maintainer on the Ratchet project. Unfortunately, that project no longer appears to be maintained, but I was still using the original code whenever I could.

goetl starts off from the release/v3.0.0 tag of the Daily Burn repo and also implements the requested payload abstraction (the etldata.Payload interface). It stands on the original work of @stephenb, who put together the original Ratchet implementation in a week. Thanks, Stephen.

A library for performing data pipeline / ETL tasks in Go.

The Go programming language's simplicity, execution speed, and concurrency support make it a great choice for building data pipeline systems that can perform custom ETL (Extract, Transform, Load) tasks. goetl is a library written 100% in Go that lets you easily build custom data pipelines by writing your own Go code.

goetl provides a set of built-in, useful data processors, while also providing an interface to implement your own. Conceptually, data processors are organized into stages, and those stages are run within a pipeline.

Each data processor receives, processes, and then sends data to the next stage in the pipeline. All data processors run in their own goroutines, so all processing happens concurrently. Go channels connect each stage of processing, so the syntax for sending data will be intuitive to anyone familiar with Go.
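For example, a custom transformation step is just a type implementing the Processor interface. Here is a minimal sketch; it assumes the interface mirrors ratchet's DataProcessor (a ProcessData method plus a Finish hook, exchanging etldata.Payload values over channels). The Payload.Parse method and etldata.NewJSON helper are likewise assumptions based on the original data package, so check the GoDoc for the exact names.

    package transforms

    import (
        "strings"

        "github.com/teambenny/goetl/etldata"
    )

    // UppercaseNames is a hypothetical custom Processor that upper-cases
    // the "name" field of each JSON row flowing through its stage.
    type UppercaseNames struct{}

    // ProcessData receives one payload, transforms it, and sends the result
    // downstream on outputChan; unrecoverable errors go to killChan.
    func (UppercaseNames) ProcessData(d etldata.Payload, outputChan chan etldata.Payload, killChan chan error) {
        var rows []map[string]interface{}
        if err := d.Parse(&rows); err != nil { // assumed decode helper
            killChan <- err
            return
        }
        for _, row := range rows {
            if name, ok := row["name"].(string); ok {
                row["name"] = strings.ToUpper(name)
            }
        }
        out, err := etldata.NewJSON(rows) // assumed etldata.JSON constructor
        if err != nil {
            killChan <- err
            return
        }
        outputChan <- out
    }

    // Finish runs after all upstream data has been processed; a buffering
    // processor would flush here. Nothing to do for this one.
    func (UppercaseNames) Finish(outputChan chan etldata.Payload, killChan chan error) {}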

Getting Started

  • Check out the full GoDoc reference: https://pkg.go.dev/github.com/teambenny/goetl
  • Get goetl: go get github.com/teambenny/goetl

While not necessary, it may be helpful to understand some of the pipeline concepts used within goetl's internals: https://blog.golang.org/pipelines

Why would I use this?

goetl could be used anytime you need to perform some type of custom ETL. At Benny AI we use goetl mainly to handle extracting data from our application databases, transforming it into reporting-oriented formats, and then loading it into our dedicated reporting databases.

Another good use-case is when you have data stored in disparate locations that can't be easily tied together. For example, if you have some CSV data stored on S3, some related data in a SQL database, and want to combine them into a final CSV or SQL output.
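As a sketch of that combine-and-merge scenario, a two-source layout could funnel both readers into a single joining Processor and write one output. This is conceptual: the reader and writer names below are placeholders (goetl's actual built-ins live in the processors package), and merge stands for a custom Processor that correlates the two streams.

    // csvReader pulls the CSV data from S3, sqlReader queries the
    // database, merge joins the two streams, and csvWriter emits the
    // combined result. All five are Processors constructed elsewhere.
    layout, err := goetl.NewPipelineLayout(
        goetl.NewPipelineStage(
            goetl.Do(csvReader).Outputs(merge),
            goetl.Do(sqlReader).Outputs(merge),
        ),
        goetl.NewPipelineStage(
            goetl.Do(merge).Outputs(csvWriter),
        ),
        goetl.NewPipelineStage(
            goetl.Do(csvWriter),
        ),
    )
    if err != nil {
        log.Fatal(err)
    }
    err = <-goetl.NewBranchingPipeline(layout).Run()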

In general, goetl tends to solve the type of data-related tasks that you would otherwise end up accomplishing with a pile of custom, hard-to-maintain scripts.
