Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly, opening a connection with sql.Open("postgres", connStr), where connStr is a connection string made up of space-separated key=value pairs (for example "user=pqgotest dbname=pqgotest sslmode=verify-full"). You can also connect to a database using a URL, such as "postgres://pqgotest:password@localhost/pqgotest?sslmode=verify-full".

Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter.

For compatibility with libpq, the following special connection parameters are supported:

    dbname - The name of the database to connect to
    user - The user to sign in as
    password - The user's password
    host - The host to connect to. Values that start with / are for unix domain sockets. (default is localhost)
    port - The port to bind to. (default is 5432)
    sslmode - Whether or not to use SSL (default is require, this is not the default for libpq)
    fallback_application_name - An application_name to fall back to if one isn't provided.
    connect_timeout - Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely.
    sslcert - Cert file location. The file must contain PEM encoded data.
    sslkey - Key file location. The file must contain PEM encoded data.
    sslrootcert - The location of the root certificate file. The file must contain PEM encoded data.

Valid values for sslmode are:

    disable - No SSL
    require - Always SSL (skip verification)
    verify-ca - Always SSL (verify that the certificate presented by the server was signed by a trusted CA)
    verify-full - Always SSL (verify that the certificate presented by the server was signed by a trusted CA and the server host name matches the one in the certificate)

See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters.

Use single quotes for values that contain whitespace, for example "user=pqgotest password='with spaces'". A backslash will escape the next character in values, for example "user=space\ man password='it\'s valid'".

Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value.

In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html.

Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters.

The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly.

database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers ($1, $2, and so on). The same marker can be reused for the same parameter.

pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call (see the sketch following this overview). For more details on RETURNING, see the Postgres documentation for INSERT, UPDATE, and DELETE. For additional instructions on querying see the documentation for the database/sql package.

Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format.

This package returns the following types for values from the PostgreSQL backend:

    - integer types smallint, integer, and bigint are returned as int64
    - floating-point types real and double precision are returned as float64
    - character types char, varchar, and text are returned as string
    - temporal types date, time, timetz, timestamp, and timestamptz are returned as time.Time
    - the boolean type is returned as bool
    - the bytea type is returned as []byte

All other types are returned directly from the backend as []byte values in text format.

pq may return errors of type *pq.Error, which can be interrogated for error details such as the error code. See the pq.Error type for details.

You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data.
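The following is a minimal sketch of the patterns described above: opening a connection, using ordinal parameter markers with RETURNING, inspecting a *pq.Error, and bulk-importing rows with pq.CopyIn. The connection string and the table and column names (users, name, age) are placeholders for illustration, not part of pq itself.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        "github.com/lib/pq"
    )

    func main() {
        // Placeholder connection string; a URL form such as
        // "postgres://user:password@localhost/pqgotest?sslmode=disable" also works.
        db, err := sql.Open("postgres", "user=pqgotest dbname=pqgotest sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Ordinal parameter markers with RETURNING instead of LastInsertId().
        var id int
        err = db.QueryRow("INSERT INTO users(name, age) VALUES($1, $2) RETURNING id", "beatrice", 93).Scan(&id)
        if err != nil {
            // Errors may be of type *pq.Error and can be interrogated for details.
            if pqErr, ok := err.(*pq.Error); ok {
                log.Fatalf("pq error %s: %v", pqErr.Code.Name(), pqErr)
            }
            log.Fatal(err)
        }
        fmt.Println("inserted row with id", id)

        // Bulk import with CopyIn inside an explicit transaction.
        txn, err := db.Begin()
        if err != nil {
            log.Fatal(err)
        }
        stmt, err := txn.Prepare(pq.CopyIn("users", "name", "age"))
        if err != nil {
            log.Fatal(err)
        }
        rows := []struct {
            name string
            age  int
        }{{"alice", 30}, {"bob", 42}}
        for _, r := range rows {
            if _, err := stmt.Exec(r.name, r.age); err != nil {
                log.Fatal(err)
            }
        }
        // Flush all buffered data with a final no-argument Exec.
        if _, err := stmt.Exec(); err != nil {
            log.Fatal(err)
        }
        if err := stmt.Close(); err != nil {
            log.Fatal(err)
        }
        if err := txn.Commit(); err != nil {
            log.Fatal(err)
        }
    }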
Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally; it is not possible to COPY outside of an explicit transaction in pq. The sketch above shows basic CopyIn usage.

PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism.

To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection cannot be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener.

A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in.

The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server.

You can find a complete, working example of Listener usage at http://godoc.org/github.com/lib/pq/example/listen.
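As a rough illustration of the Listener flow described above (not a substitute for the complete example linked), the following sketch opens a Listener, subscribes to one notification channel, and distinguishes real notifications from the nil sent after a reconnect. The connection string and the channel name "getwork" are placeholders.

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/lib/pq"
    )

    func main() {
        connStr := "dbname=pqgotest sslmode=disable" // placeholder

        // The event callback reports state changes of the underlying connection.
        reportProblem := func(ev pq.ListenerEventType, err error) {
            if err != nil {
                log.Println("listener event error:", err)
            }
        }

        listener := pq.NewListener(connStr, 10*time.Second, time.Minute, reportProblem)
        if err := listener.Listen("getwork"); err != nil {
            log.Fatal(err)
        }

        for {
            select {
            case n := <-listener.Notify:
                if n == nil {
                    // A nil notification means the connection was re-established;
                    // some notifications may have been missed in the meantime.
                    fmt.Println("reconnected, re-checking application state")
                    continue
                }
                fmt.Printf("notification on channel %q: %s\n", n.Channel, n.Extra)
            case <-time.After(90 * time.Second):
                // Periodically check that the connection is still alive.
                go listener.Ping()
            }
        }
    }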
Package tada (TAble DAta) enables test-driven data pipelines. tada combines concepts from pandas, spreadsheets, R, Apache Spark, and SQL. Its most common use cases are cleaning, aggregating, transforming, and analyzing data.

Some notable features of tada:

    * flexible constructor that supports most primitive data types
    * seamlessly handles null data and type conversions
    * robust datetime support
    * advanced filtering, lookups and merging, grouping, sorting, and pivoting
    * multi-level labels and columns
    * complete test coverage
    * interoperable with existing pandas dataframes via Apache Arrow

The key data types are Series, DataFrames, and groupings of each. A Series is analogous to one column of a spreadsheet, and a DataFrame is analogous to a whole spreadsheet. Printing either data type will render an ASCII table.

Both Series and DataFrames have one or more "label levels". On printing, these appear as the leftmost columns in a table, and typically have values that help identify ("label") specific rows. They are analogous to the "index" concept in pandas.

For more detail and implementation notes, see https://docs.google.com/document/d/18DvZzd6Tg6Bz0SX0fY2SrXOjE8d9xDhU6bDEnaIc_rM/
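The Series and label-level model described above can be pictured with a short, purely illustrative sketch. Note that the constructor name NewSeries, its signature, and the import path are assumptions based on the description above rather than confirmed API; consult the package documentation for the actual constructors.

    package main

    import (
        "fmt"

        "github.com/ptiger10/tada" // assumed import path
    )

    func main() {
        // Assumed constructor: a Series built from a slice of values plus one label level.
        // The exact name and signature are assumptions, not confirmed API.
        s := tada.NewSeries([]float64{1.5, 2.5, 3.5}, []string{"a", "b", "c"})

        // Printing renders an ASCII table with the label level as the leftmost column.
        fmt.Println(s)
    }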