Package metrics is a telemetry client designed for Uber's software networking team. It prioritizes performance on the hot path and integration with both push- and pull-based collection systems. Like Prometheus and Tally, it supports metrics tagged with arbitrary key-value pairs. Like Prometheus, but unlike Tally, metric names should be relatively long and descriptive - generally speaking, metrics from the same process shouldn't share names. (See the documentation for the Root struct below for a longer explanation of the uniqueness rules.) For example, prefer "grpc_successes_by_procedure" over "successes", since "successes" is common and vague. Where relevant, metric names should indicate their unit of measurement (e.g., "grpc_success_latency_ms"). Counters represent monotonically increasing values, like a car's odometer. Gauges represent point-in-time readings, like a car's speedometer. Both counters and gauges expose not only write operations (set, add, increment, etc.), but also atomic reads. This makes them easy to integrate directly into your business logic: you can use them anywhere you'd otherwise use a 64-bit atomic integer. This package doesn't support analogs of Tally's timer or Prometheus's summary, because they can't be accurately aggregated at query time. Instead, it approximates distributions of values with histograms. These require more up-front work to set up, but are typically more accurate and flexible when queried. See https://prometheus.io/docs/practices/histograms/ for a more detailed discussion of the trade-offs involved. Plain counters, gauges, and histograms have a fixed set of tags. However, it's common to encounter situations where a subset of a metric's tags vary constantly. For example, you might want to track the latency of your database queries by table: you know the database cluster, application name, and hostname at process startup, but you need to specify the table name with each query. To model these situations, this package uses vectors. Each vector is a local cache of metrics, so accessing them is quite fast. Within a vector, all metrics share a common set of constant tags and a list of variable tags. In our database query example, the constant tags are cluster, application, and hostname, and the only variable tag is table name. Usage examples are included in the documentation for each vector type. This package integrates with StatsD- and M3-based collection systems by periodically pushing differential updates. (Users can integrate with other push-based systems by implementing the push.Target interface.) It integrates with pull-based collectors by exposing an HTTP handler that supports Prometheus's text and protocol buffer exposition formats. Examples of both push and pull integration are included in the documentation for the root struct's Push and ServeHTTP methods. If you're unfamiliar with Tally and Prometheus, you may want to consult their documentation:
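To make the vector idea above concrete, here is a small self-contained sketch of a metric vector: a set of counters that share constant tags and are cached by the value of a single variable tag. This is plain illustrative Go, not this package's API; the type and field names are invented for the example.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    // counterVector caches one counter per value of a single variable tag
    // (for example "table"), on top of a fixed set of constant tags.
    type counterVector struct {
        constantTags map[string]string // e.g. cluster, application, hostname
        mu           sync.RWMutex
        counters     map[string]*int64 // variable tag value -> counter
    }

    // get returns the counter for one value of the variable tag,
    // creating it on first use so later lookups are just a map read.
    func (v *counterVector) get(tagValue string) *int64 {
        v.mu.RLock()
        c, ok := v.counters[tagValue]
        v.mu.RUnlock()
        if ok {
            return c
        }
        v.mu.Lock()
        defer v.mu.Unlock()
        if c, ok = v.counters[tagValue]; !ok {
            c = new(int64)
            v.counters[tagValue] = c
        }
        return c
    }

    func main() {
        queries := &counterVector{
            constantTags: map[string]string{"cluster": "db01", "application": "api", "hostname": "host-1"},
            counters:     map[string]*int64{},
        }
        // On the hot path: look up the counter for the "users" table and increment it,
        // exactly as you would use a 64-bit atomic integer.
        atomic.AddInt64(queries.get("users"), 1)
        fmt.Println(atomic.LoadInt64(queries.get("users")))
    }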
Package xcore is a set of basic objects for programming (XCache for caches, XDataset for data sets, XLanguage for languages and XTemplate for templates). For Go, the existing code includes:

- XCache: application memory caches for any purpose, with time control and quantity control of the objects in the cache, and optional checks for changes against the original source. It is a thread-safe cache.
- XDataset: basic nested data structures for any purpose (template injection, configuration files, database records, etc).
- XLanguage: language-dependent text tables for internationalization of code. The sources can be text or XML file definitions.
- XTemplate: a template system with a metalanguage to create complex documents (compatible with any text language: HTML, CSS, JS, PDF, XML, etc), heavily used in CMS systems and others. It is already used on sites that serve more than 60 million pages a month (500 pages per second at peak hour) and can be used safely in multithreaded environments.

XCache is a library to cache any data you want in the current application memory, for very fast access to the data. Access to the data supports multithreading and concurrency. For the same reason, this type of cache is not persistent (it is lost if you exit the application) and cannot grow too much (memory is the limit). However, you can set a timeout for each cache entry, and optionally a comparison function against a source (file, database, etc) to invalidate the cache.

1. Declare a new XCache with the NewXCache() function:

2. Fill in the cache: Once you have declared the cache, you can fill it with anything you want. The main cache object is an interface{}, so you can put anything you need here, from simple variables to complex structures. You need to use the Set function: Note that the ID is always a string, so convert a database key to a string if needed.

3. To use the cache, just ask for your entry with the Get function:

4. To maintain the cache: You may need the Del function to delete a specific entry (maybe because you deleted the record in the database). You may also need the Clean function to delete a percentage of the cache, or Flush to delete it all. The Verify function is used to check cache entries against their sources through the Validator function. Be very careful: if the cache is big or the Validator function is complex (it may, for example, ask a remote server for information), the verification may be VERY slow and use a lot of CPU. The Count function gives some stats about the cache.

5. How to use the Verify function: This function is recommended when the source is local and fast to check (for instance a language file or a template file). When the source is distant (another cluster's database, any RPC source on another network, an integration of many parts, etc), it is better to create a function that will delete the cache entry when needed (on-demand cache change). The validator function is a function of type func(id string, t time.Time) bool. The first parameter is the ID of the entry in the cache, the second parameter is the time the entry was created. The validator function returns true if the cache entry is still valid, or false if it needs to be invalidated.

The XCache is thread safe. The cache can be limited in quantity of entries and in timeout for data. The cache is auto-managed (expired data is invalidated) and can be cleaned partially or totally manually.
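As an illustration of the flow described above, here is a minimal sketch. The import path, constructor and method signatures shown (NewXCache taking an id, a maximum number of entries and a timeout; Set/Get with string keys) are assumptions based on this description, not a verbatim copy of the package's API.

    package main

    import (
        "fmt"
        "time"

        "github.com/webability-go/xcore" // assumed import path
    )

    func main() {
        // 1. Declare the cache: id, max entries (0 assumed to mean unlimited), timeout per entry.
        // The exact signature is an assumption based on the description above.
        cache := xcore.NewXCache("users", 0, 5*time.Minute)

        // 2. Fill the cache: the ID is always a string, the value is any interface{}.
        cache.Set("user:123", map[string]string{"name": "Fred"})

        // 3. Read an entry back; the boolean is assumed to report whether the entry is valid.
        if data, ok := cache.Get("user:123"); ok {
            fmt.Println("cached:", data)
        }

        // 4. Maintenance: delete one entry, or flush everything.
        cache.Del("user:123")
        cache.Flush()
    }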
The XLanguage table of text entries can be loaded from an XML file, an XML string, or a plain text file or string. It is used to keep a table of id=value entries in any language you need, so it is easy to switch between XLanguage instances based on the required language. Obviously, every XLanguage you load, in any language, should have the same id entries translated, for the same use. The XLanguage object is thread safe.

1. Loading: You can load any file or XML string directly into the object.

1.1 The XML format is: NAMEOFTABLE is the name of your table entry, for example "loginform", "user_report", etc. LG is the ISO 639-1 two-letter language ID, for example "es" for Spanish, "en" for English, "fr" for French, etc. ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text for your entry, for example "Hello", "You are:", "Save" if your table is in English. STATUSVALUE is the status of the entry. You may put any value to control your translation over time and processes.

1.2 The flat text format is: ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text for your entry, for example "Hello", "You are:", "Save" if your table is in English. There is no table name or language in this format (you "know" what you are loading). The advantage of the XML format is that you have more control over your language and can eventually add attributes to your entries; for instance you may add attributes such as translated="yes/no", verified="yes/no", and any other data that your system could insert. The XLanguage will ignore those attributes when loading the table.

2. Creation: To create a new, empty XLanguage structure: There are 4 functions to create the language from a file or string, flat text or XML text: Then you can use the set of basic access functions: SetName/SetLanguage are used to set the table name and language of the object (generally to build an object from scratch). GetName/GetLanguage are used to get the table name and language of the object (generally when you load it from some source). Set/Get/Del are used to add or modify an entry, read an entry, or delete an entry in the object. SetStatus/GetStatus are used to add or get a status for an entry in the object. To create an XML file from the object, you can use the GetXML() function.

1. Overview: The XDataset is a set of interfaces and basic classes, ready to use, to build a standard set of data, optionally nested and hierarchical, that can be used for any purpose:
- Keep complex data in memory.
- Create JSON structures.
- Inject data into templates.
- Interchange database data (record sets and records).

You can store generic supported data in it, as well as any complex interface structures:
- Int
- Float
- String
- Time
- Bool
- []Int
- []Float
- []Time
- []Bool
- XDatasetDef (anything extended with this interface)
- []String
- Anything else (interface{})
- XDatasetCollectionDef (anything extended with this interface)

The generic supported data comes with a set of functions to get/set this data directly in the XDataset. Example: Note that all references to XDataset and XDatasetCollection are always pointers (to be able to modify their values).

2. XDatasetDef interface: It is the interface describing a simple set of data mapped as "name": value, where value can be of any type. The interface implements a good number of basic methods to get the value in various formats, such as GetString("name"), GetInt("name"), etc (see below). If the value is of another type than the one asked for, the method should convert it if possible. For instance, "key":123 requested through GetString("key") should return "123".
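A minimal sketch of this conversion behavior follows. It assumes the xcore import path and that an XDataset literal can be built like an ordinary map, and that the getters return a (value, ok) pair; treat those details as assumptions based on the description above.

    package main

    import (
        "fmt"

        "github.com/webability-go/xcore" // assumed import path
    )

    func main() {
        // XDataset is described above as a simple map[string]interface{}
        // with getter methods that convert between types where possible.
        data := &xcore.XDataset{"key": 123, "name": "Fred"}

        // GetString/GetInt are assumed here to return (value, ok).
        s, _ := data.GetString("key") // the int 123 is converted to the string "123"
        n, _ := data.GetInt("key")    // read back as an integer

        fmt.Println(s, n) // "123" 123
    }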
For instance "key":123 required through GetString("key") should return "123". The XDataset type is a simple map[string]interface{} with all the implemented methods and should be enough to use for almost all required cases. However, you can build any complex structure that extends the interface and implements all the required functions to stay compatible with the XDatasetDef. 3. XDatasetCollectionDef Interface: This is the interface used to extend any type of data as a Collection, i-e an array of XDatasetDef. This is a slice of any XDatasetDef compatible data. The interface implements some methods to work on array structure such as Push, Pop, Shift, Unshift and some methods to search data into the array. The XDatasetCollection type is a simple []DatasetDef with all the implemented methods and should be enough to use for almost all required cases. 1. Overview: The XDataSetTS is a DatasetDef structure, thread safe. It is build on the XDataset with the same properties, but is thread safe to protect Read/Write accesses from different thread. Example: You may also build a XDatasetTS to encapsulate a XDatasetDef that is not thread safe, to use it safely Note that all references to XDatasetTS are pointers, always (to be able to modify the values of them). The DatasetTS meet the XDatasetDef interface 1. Overview: This is a class to compile and keep a Template that can be injected with an XDataSet structure of data, with a metalanguage to inject the data. The metalanguage is extremely simple and is made to be useful and **really** separate programation from template code (not like other many generic template systems that just mix code and data). A template is a set of HTML/XML (or any other language) string with a meta language to inject variables and build a final string. The XCore XTemplate system is based on the injection of parameters, language translation strings and data fields directly into the HTML (Or any other language you need) template. The HTML itself (or any other language) is a text code not directly used by the template system, but used to dress the data you want to represent in your preferred language. The variables to inject must be into a XDataSet structure or into a structure extended from XDataSetDef interface. The injection of data is based on a XDataSet structure of values that can be nested into another XDataSet and XDataSetConnection and so on. The template compiler recognize nested arrays to automatically make loops on the information. Templates are made to store reusable HTML code, and overall easily changeable by people that do not know how to write programs. A template can be as simple as a single character (no variables to inject) to a very complex nested, conditional and loops sub-templates. Yes. this is a template, but a very simple one without need to inject any data. Let's go more complex: Having an array of data, we want to paint it beautifull: We can create a template to inject this data into it: 2. Create and use XTemplateData: In sight to create and use templates, you have all those possible options to use: Creates the XTemplate from a string or a file or any other source: Clone the XTemplate: 3. Metalanguage Reference: 3.1 Comments: %-- and --% You may use comments into your template. The comments will be discarded immediately at the compilation of the template and do not interfere with the rest of your code. 
Example:

3.2 Nested Templates: [[...]] and [[]] You can define new nested templates inside your main template. A nested template is defined by: The templateid is any combination of lowercase letters (a-z), numbers (0-9), and 3 special chars: . (point), - (dash) and _ (underline). The template is closed with [[]]. There is no limit on nesting templates. Any nested template will inherit all of its parent's elements and can use the parent's elements too. To call a sub-template, you need to use the &&templateid&& syntax (described below in this document). Example: You may use more than one id for the same template to avoid repeating the same code. The different ids are separated with a pipe |. Important note: a template is visible only at the level of its declaration. For example, if you put a subtemplate "b" into a subtemplate "a", it will not be visible as &&b&& from the top level, but only inside the subtemplate "a".

3.3 Simple Elements: ##...## and {{...}} There are 2 types of simple elements: language elements and data injector elements (also called field elements). We "logically" define the 2 types of elements. The separation is only for human logic and template filling; the language information can perfectly well fit into the data to inject (and not use ## entries).

3.3.1 Language elements: ##entry## All the language elements should have the format ##entry##. A language entry is generally anything written in your code or page that does not come from a database and should adapt to the language of the client visiting your site. Using the language elements may depend on the internationalization of your page. If your page is going to be in a single language forever, you really don't need to use language entries. The language elements generally carry titles, menu options, table headers, etc. The language entries are set into the "#" entry of the main template XDataset to inject, and it is an XLanguage table. Example: With data to inject:

3.3.2 Field elements: {{fieldname}} Field values should have the format {{fieldname}}. Your field source can be a database or any other preferred data repository. Example: You can access an element by its path in the data set to inject, separating each field level with a > (greater than). This will take the name of the second hobby in the dataset defined above (collections are 0-indexed). The 1 denotes the second record of the hobbies XDatasetCollection. If the field is not found, it will be replaced with an empty string. Technically your field names can be any string in the dataset. However, do not use { } or > in the names of your fields or the XTemplate may not handle them correctly. We recommend using lowercase names with numbers and ._- Accents and UTF-8 symbols are also welcome.

3.3.3 Scope: When you use an id to point to a value, the template will first search the available ids of the local level. If no id is found, then it will search the upper levels, if any, and so on. Example: At the level of 'data2', using {{appname}} will get back 'DomCore'. At the level of 'key1', using {{appname}} will get back 'Nested App'. At the level of 'key2', using {{appname}} will get back 'DomCore'. At the level of root, 'data1' or 'detail', using {{appname}} will get back an empty string.

3.3.4 Path access: id>id>id>id At any level in the data array, you can access any entry in the subset array. For instance, taking the previous array of data to inject, let's suppose we are in a nested meta element at the 'data1' level.
You may want to access the 'Juan' entry directly. The path will be: The José's status value from the root will be:

3.4 Meta Elements: They consist of an injection of an XDataset, called the "data to inject", into the template. The metalanguage is applied directly to the structure of the data array. The data to inject is a nested set of variables and values with the structure you want (there are no specific construction rules). You can inject nearly anything into a template's meta elements. Example of a data array to inject: You can directly access any data in the array with its relative path (relative to the level you are at when the meta elements are applied, see below). There are 4 structured meta elements in XTemplate templates to use the data to inject: Reference, Loops, Condition and Debug. The structure of the meta elements in the template must follow the structure of the data to inject.

3.4.1 References to another template: &&order&&

3.4.1.1 When order is a single id (characters a-z0-9.-_), it will make a call to a sub-template with the same set of data and replace the &&...&& with the result. The level in the data set is not changed. Example based on the previous array of Fred's data:

3.4.1.2 When order contains 2 parameters separated by a colon :, the second parameter is used to change the level of the data array to the subset with this id. The level in the data set is changed to this subset. Example based on the previous array of Fred's data:

3.4.1.3 When order contains 3 parameters separated by a colon :, the second and third parameters are used to search the name of the new template based on the data fields to inject. This is an indirect access to the template. The name of the subtemplate is built with parameter3 as prefix and the content of the parameter2 value. The third parameter must be empty.

3.4.2 Loops: @@order@@

3.4.2.1 Overview This meta element will loop over each iteration of the set of data and concatenate each created template in the same order. You need to declare a sub-template for this element. You may also declare derived sub-templates for the different possible cases of the loop. For instance, if your main subtemplate for your loop is called "hobby", you may need a different template for the first element, the last element, the Nth element, an element with a value "no" in the sport field, etc. The supported postfixes are: When the array to iterate is empty:
- .none (for example "There is no hobby")
When the array contains elements, it will search, in order, for the following templates and use the first one found:
- templateid.key.[value] where value is the key of the vector line, whether the collection has a named key (string) or is a direct array (0, 1, 2...)
- templateid.first if it is the first element of the array set (new from v1.01.11)
- templateid.last if it is the last element of the array set (new from v1.01.11)
- templateid.even if the line number is even
- templateid in all other cases (odd is contained here if even is defined)
Since v2.1.7, you can also use the pseudo field {{.counter}} in the loop subtemplate to get the loop counter; it is 1-based (the first loop is 1, not 0).

3.4.2.2 When order is a single id (characters a-z0-9.-_), it will make a call to the sub-template id with the subset of data with the same id, and replace the @@...@@ with the result for each iteration of the data.
Example based on the previous array of Fred's data:

3.4.2.3 When order contains 2 parameters separated by a colon :, the first parameter is used to change the level of the data array to the subset with this id, and the second one is the template to use. Example based on the previous array of Fred's data:

3.4.3 Conditional: ??order?? Makes a call to a subtemplate only if the field exists and has a value. This is very useful, for instance, to call a sub-template only when an image or a video is set. When the condition is not met, it will search for the [id].none template. The conditional element does not change the level in the data set.

3.4.3.1 When order is a single id (characters a-z0-9.-_), it will make a call to the sub-template id with the same field in the data and replace the ??...?? with the corresponding template. Example based on the previous array of Fred's data:

3.4.3.2 When order contains 2 parameters separated by a colon :, the second parameter is used to change the level of the data array to the subset with this id. Example based on the previous array of Fred's data: If the asked field is a catalog, true/false or numbered value, you may also use .[value] subtemplates.

3.5 Debug Tools: !!order!! There are two keywords to dump the content of the data set. This is very useful when you don't know the code that calls the template, don't remember some values, or simply for debugging.

3.5.1 !!dump!! Will show the totality of the data set, with ids and values.

3.5.2 !!list!! Will show only the tree of parameters; values are not shown.
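To tie the metalanguage together, here is a small illustrative template using a comment, a language entry, field elements, a loop and nested sub-templates, written as a Go string. The constructor and execution function names (NewXTemplateFromString, Execute), the import path, and the XDatasetCollection literal shape are assumptions based on this documentation, not a verbatim copy of the package's API; the metalanguage syntax itself is the one documented above.

    package main

    import (
        "fmt"

        "github.com/webability-go/xcore" // assumed import path
    )

    // A template with a language entry (##title##), a field ({{name}}),
    // a loop over "hobbies" (@@hobbies@@) and nested sub-templates for the lines.
    const page = `%-- this comment disappears at compilation --%
    ##title##: {{name}}
    @@hobbies@@
    [[hobbies]]- {{name}} ({{.counter}})
    [[]]
    [[hobbies.none]]There is no hobby.
    [[]]`

    func main() {
        // Both function names below are assumptions for illustration only.
        tmpl, err := xcore.NewXTemplateFromString(page)
        if err != nil {
            panic(err)
        }

        data := &xcore.XDataset{
            "name": "Fred",
            "hobbies": &xcore.XDatasetCollection{
                &xcore.XDataset{"name": "Running"},
                &xcore.XDataset{"name": "Chess"},
            },
            // The "#" entry would carry the XLanguage table used for ##...## entries.
        }

        fmt.Println(tmpl.Execute(data))
    }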
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format. This package returns the following types for values from the PostgreSQL backend: All other types are returned directly from the backend as []byte values in text format. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. 
Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at https://godoc.org/github.com/greatfocus/pq/example/listen. If you need support for Kerberos authentication, add the following to your main package: This package is in a separate module so that users who don't need Kerberos don't have to download unnecessary dependencies. When imported, additional connection string parameters are supported:
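Returning to the bulk-import flow described above (the "Usage example:" section), here is a sketch of the typical CopyIn pattern. The table, columns and connection string are illustrative, and the import path shown is the upstream lib/pq one; substitute the fork's path if that is what you use.

    package main

    import (
        "database/sql"
        "log"

        "github.com/lib/pq"
    )

    func main() {
        // Connection string values are illustrative.
        db, err := sql.Open("postgres", "user=pqgotest dbname=pqgotest sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        users := []struct {
            Name string
            Age  int
        }{{"Alice", 30}, {"Bob", 25}}

        // CopyIn must run inside an explicit transaction.
        txn, err := db.Begin()
        if err != nil {
            log.Fatal(err)
        }
        stmt, err := txn.Prepare(pq.CopyIn("users", "name", "age"))
        if err != nil {
            log.Fatal(err)
        }
        for _, u := range users {
            // Each Exec buffers one row to be copied.
            if _, err = stmt.Exec(u.Name, int64(u.Age)); err != nil {
                log.Fatal(err)
            }
        }
        // A final Exec with no arguments flushes all buffered data.
        if _, err = stmt.Exec(); err != nil {
            log.Fatal(err)
        }
        if err = stmt.Close(); err != nil {
            log.Fatal(err)
        }
        if err = txn.Commit(); err != nil {
            log.Fatal(err)
        }
    }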
Package ora implements an Oracle database driver. ### Golang Oracle Database Driver ### #### TL;DR; just use it #### Call stored procedure with OUT parameters: An Oracle database may be accessed through the database/sql (http://golang.org/pkg/database/sql) package or through the ora package directly. database/sql offers connection pooling, thread safety, a consistent API to multiple database technologies and a common set of Go types. The ora package offers additional features including pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. The ora package is written with the Oracle Call Interface (OCI) C-language libraries provided by Oracle. The OCI libraries are a standard for client application communication and driver communication with Oracle databases. The ora package has been verified to work with:
* Oracle Standard 11g (11.2.0.4.0), Linux x86_64 (RHEL6)
* Oracle Enterprise 12c (12.1.0.1.0), Windows 8.1 and AMD64.
---
* [Installation](https://github.com/rana/ora#installation)
* [Data Types](https://github.com/rana/ora#data-types)
* [SQL Placeholder Syntax](https://github.com/rana/ora#sql-placeholder-syntax)
* [Working With The Sql Package](https://github.com/rana/ora#working-with-the-sql-package)
* [Working With The Oracle Package Directly](https://github.com/rana/ora#working-with-the-oracle-package-directly)
* [Logging](https://github.com/rana/ora#logging)
* [Test Database Setup](https://github.com/rana/ora#test-database-setup)
* [Limitations](https://github.com/rana/ora#limitations)
* [License](https://github.com/rana/ora#license)
* [API Reference](http://godoc.org/github.com/rana/ora#pkg-index)
* [Examples](./examples)
---
Minimum requirements are Go 1.3 with CGO enabled, a GCC C compiler, and Oracle 11g (11.2.0.4.0) or Oracle Instant Client (11.2.0.4.0). Install Oracle or Oracle Instant Client. Copy the [oci8.pc](contrib/oci8.pc) from the `contrib` folder (or the one for your system, maybe tailored to your specific locations) to a folder in `$PKG_CONFIG_PATH` or a system folder, such as The ora package has no external Go dependencies and is available on GitHub and gopkg.in: *WARNING*: If you have Oracle Instant Client 11.2, you'll need to add "-lnnz11" to the list of linked libs! Otherwise, you may encounter "undefined reference to `nzosSCSP_SetCertSelectionParams' " errors. Oracle Instant Client 12.1 does not need this. The ora package supports all built-in Oracle data types. The supported Oracle built-in data types are NUMBER, BINARY_DOUBLE, BINARY_FLOAT, FLOAT, DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, CHAR, NCHAR, VARCHAR, VARCHAR2, NVARCHAR2, LONG, CLOB, NCLOB, BLOB, LONG RAW, RAW, ROWID and BFILE. SYS_REFCURSOR is also supported. Oracle does not provide a built-in boolean type. Oracle provides a single-byte character type. A common practice is to define two single-byte characters which represent true and false. The ora package adopts this approach. The ora package associates a Go bool value with a Go rune and sends and receives that rune to and from a CHAR(1 BYTE) or CHAR(1 CHAR) column. The default false rune is zero '0'. The default true rune is one '1'. The bool rune association may be configured or disabled when directly using the ora package but not with the database/sql package.
Within a SQL string a placeholder may be specified to indicate where a Go variable is placed. The SQL placeholder is an Oracle identifier, from 1 to 30 characters, prefixed with a colon (:). For example: Placeholders within a SQL statement are bound by position. The actual name is not used by the ora package driver e.g., placeholder names :c1, :1, or :xyz are treated equally. The `database/sql` package provides a LastInsertId method to return the last inserted row's id. Oracle does not provide such functionality, but if you append `... RETURNING col /*LastInsertId*/` to your SQL, then it will be presented as LastInsertId. Note that you have to mark with a `/*LastInsertId*/` (case insensitive) your `RETURNING` part, to allow ora to return the last column as `LastInsertId()`. That column must fit in `int64`, though! You may access an Oracle database through the database/sql package. The database/sql package offers a consistent API across different databases, connection pooling, thread safety and a set of common Go types. database/sql makes working with Oracle straight-forward. The ora package implements interfaces in the database/sql/driver package enabling database/sql to communicate with an Oracle database. Using database/sql ensures you never have to call the ora package directly. When using database/sql, the mapping between Go types and Oracle types may be changed slightly. The database/sql package has strict expectations on Go return types. The Go-to-Oracle type mapping for database/sql is: The "ora" driver is automatically registered for use with sql.Open, but you can call ora.SetCfg to set the used configuration options including statement configuration and Rset configuration. When configuring the driver for use with database/sql, keep in mind that database/sql has strict Go type-to-Oracle type mapping expectations. The ora package allows programming with pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. When working with the ora package directly, the API is slightly different than database/sql. When using the ora package directly, the mapping between Go types and Oracle types may be changed. The Go-to-Oracle type mapping for the ora package is: An example of using the ora package directly: Pointers may be used to capture out-bound values from a SQL statement such as an insert or stored procedure call. For example, a numeric pointer captures an identity value: A string pointer captures an out parameter from a stored procedure: Slices may be used to insert multiple records with a single insert statement: The ora package provides nullable Go types to support DML operations such as insert and select. The nullable Go types provided by the ora package are Int64, Int32, Int16, Int8, Uint64, Uint32, Uint16, Uint8, Float64, Float32, Time, IntervalYM, IntervalDS, String, Bool, Binary and Bfile. For example, you may insert nullable Strings and select nullable Strings: The `Stmt.Prep` method is variadic accepting zero or more `GoColumnType` which define a Go return type for a select-list column. For example, a Prep call can be configured to return an int64 and a nullable Int64 from the same column: Go numerics of various sizes are supported in DML operations. The ora package supports int64, int32, int16, int8, uint64, uint32, uint16, uint8, float64 and float32. 
For example, you may insert a uint16 and select numerics of various sizes: If a non-nullable type is defined for a nullable column returning null, the Go type's zero value is returned. GoColumnTypes defined by the ora package are: When Stmt.Prep doesn't receive a GoColumnType, or receives an incorrect GoColumnType, the default value defined in RsetCfg is used. EnvCfg, SrvCfg, SesCfg, StmtCfg and RsetCfg are the main configuration structs. EnvCfg configures aspects of an Env. SrvCfg configures aspects of a Srv. SesCfg configures aspects of a Ses. StmtCfg configures aspects of a Stmt. RsetCfg configures aspects of an Rset. StmtCfg and RsetCfg have the most options to configure. RsetCfg defines the default mapping between an Oracle select-list column and a Go type. StmtCfg may be set in an EnvCfg, SrvCfg, SesCfg and StmtCfg. RsetCfg may be set in a Stmt. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg may optionally be specified to configure a statement. If StmtCfg isn't specified, default values are applied. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg cascade to new descendent structs. When ora.OpenEnv() is called, a specified EnvCfg is used or a default EnvCfg is created. Creating a Srv with env.OpenSrv() will use SrvCfg.StmtCfg if it is specified; otherwise, EnvCfg.StmtCfg is copied by value to SrvCfg.StmtCfg. Creating a Ses with srv.OpenSes() will use SesCfg.StmtCfg if it is specified; otherwise, SrvCfg.StmtCfg is copied by value to SesCfg.StmtCfg. Creating a Stmt with ses.Prep() will use SesCfg.StmtCfg if it is specified; otherwise, a new StmtCfg with default values is set on the Stmt. Call Stmt.Cfg() to change a Stmt's configuration. An Env may contain multiple Srv. A Srv may contain multiple Ses. A Ses may contain multiple Stmt. A Stmt may contain multiple Rset. Setting a RsetCfg on a StmtCfg does not cascade through descendent structs. Configuration of Stmt.Cfg takes effect prior to calls to Stmt.Exe and Stmt.Qry; consequently, any updates to Stmt.Cfg after a call to Stmt.Exe or Stmt.Qry are not observed. One configuration scenario may be to set a server's select statements to return nullable Go types by default: Another scenario may be to configure the runes mapped to bool values: Oracle-specific types offered by the ora package are ora.Rset, ora.IntervalYM, ora.IntervalDS, ora.Raw, ora.Lob and ora.Bfile. ora.Rset represents an Oracle SYS_REFCURSOR. ora.IntervalYM represents an Oracle INTERVAL YEAR TO MONTH. ora.IntervalDS represents an Oracle INTERVAL DAY TO SECOND. ora.Raw represents an Oracle RAW or LONG RAW. ora.Lob may represent an Oracle BLOB or Oracle CLOB. And ora.Bfile represents an Oracle BFILE. ROWID columns are returned as strings and don't have a unique Go type.
#### LOBs
The default for SELECTing [BC]LOB columns is a safe Bin or S, which means all the contents of the LOB are slurped into memory and returned as a []byte or string. The DefaultLOBFetchLen means LOBs are prefetched only in a minimal way, to minimize extra memory usage - you can override this using `stmt.SetCfg(stmt.Cfg().SetLOBFetchLen(100))`. If you want more control, you can use ora.L in Prep, Qry or `ses.SetCfg(ses.Cfg().SetBlob(ora.L))`. But keep in mind that Oracle restricts the use of LOBs: it is forbidden to do ANYTHING while reading the LOB! No other query, no exec, no close of the Rset - even *advancing* to the next record in the result set is forbidden! Failing to adhere to these rules results in "Invalid handle" and ORA-03127 errors.
You cannot start reading another LOB until you have finished reading the previous LOB, not even in the same row! Failing to do so results in ORA-24804! For examples, see [z_lob_test.go](z_lob_test.go).
#### Rset
Rset is used to obtain Go values from a SQL select statement. Methods Rset.Next, Rset.NextRow, and Rset.Len are available. Fields Rset.Row, Rset.Err, Rset.Index, and Rset.ColumnNames are also available. The Next method attempts to load data from an Oracle buffer into Row, returning true when successful. When no data is available, or if an error occurs, Next returns false, setting Row to nil. Any error in Next is assigned to Err. Calling Next increments Index, and the method Len returns the total number of rows processed. The NextRow method is convenient for returning a single row. NextRow calls Next and returns Row. ColumnNames returns the names of columns defined by the SQL select statement. Rset has two uses. Rset may be returned from Stmt.Qry when prepared with a SQL select statement: Or, *Rset may be passed to Stmt.Exe when prepared with a stored procedure accepting an OUT SYS_REFCURSOR parameter: Stored procedures with multiple OUT SYS_REFCURSOR parameters enable a single Exe call to obtain multiple Rsets: The types of values assigned to Row may be configured in StmtCfg.Rset. For configuration to take effect, assign StmtCfg.Rset prior to calling Stmt.Qry or Stmt.Exe. Rset prefetching may be controlled by StmtCfg.PrefetchRowCount and StmtCfg.PrefetchMemorySize. PrefetchRowCount works in coordination with PrefetchMemorySize. When PrefetchRowCount is set to zero, only PrefetchMemorySize is used; otherwise, the minimum of PrefetchRowCount and PrefetchMemorySize is used. The default uses a PrefetchMemorySize of 134MB. Opening and closing Rsets is managed internally. Rset does not have an Open method or Close method. IntervalYM may be inserted and selected: IntervalDS may be inserted and selected: Transactions on an Oracle server are supported. DML statements auto-commit unless a transaction has started: Ses.PrepAndExe, Ses.PrepAndQry, Ses.Ins, Ses.Upd, and Ses.Sel are convenient one-line methods. Ses.PrepAndExe offers a convenient one-line call to Ses.Prep and Stmt.Exe. Ses.PrepAndQry offers a convenient one-line call to Ses.Prep and Stmt.Qry. Ses.Ins composes, prepares and executes a SQL INSERT statement. Ses.Ins is useful when you have to create and maintain a simple INSERT statement with a long list of columns. As table columns are added and dropped over the lifetime of a table, Ses.Ins is easy to read and revise. Ses.Upd composes, prepares and executes a SQL UPDATE statement. Ses.Upd is useful when you have to create and maintain a simple UPDATE statement with a long list of columns. As table columns are added and dropped over the lifetime of a table, Ses.Upd is easy to read and revise. Ses.Sel composes, prepares and queries a SQL SELECT statement. Ses.Sel is useful when you have to create and maintain a simple SELECT statement with a long list of columns that have non-default GoColumnTypes. As table columns are added and dropped over the lifetime of a table, Ses.Sel is easy to read and revise. The Ses.Ping method checks whether the client's connection to an Oracle server is valid. A call to Ping requires an open Ses. Ping will return a nil error when the connection is fine: The Srv.Version method is available to obtain the Oracle server version.
A call to Version requires an open Ses: Further code examples are available in the [example file](https://github.com/rana/ora/blob/master/z_example_test.go), test files and [samples folder](https://github.com/rana/ora/tree/master/samples). The ora package provides a simple ora.Logger interface for logging. Logging is disabled by default. Specify one of three optional built-in logging packages to enable logging; or, use your own logging package. ora.Cfg().Log offers various options to enable or disable logging of specific ora driver methods. For example: To use the standard Go log package: which produces a sample log of: Messages are prefixed with 'ORA I' for information or 'ORA E' for an error. The log package is configured to write to os.Stderr by default. Use the ora/lg.Std type to configure an alternative io.Writer. To use the glog package: which produces a sample log of: To use the log15 package: which produces a sample log of: See https://github.com/rana/ora/tree/master/samples/lg15/main.go for sample code which uses the log15 package. Tests are available and require some setup. Setup varies depending on whether the Oracle server is configured as a container database or non-container database. It's simpler to set up a non-container database. An example for each setup is explained. Non-container test database setup steps: Container test database setup steps: Some helpful SQL maintenance statements: Run the tests. The database/sql method Stmt.QueryRow is not supported. Go 1.6 introduced stricter cgo (call C from Go) rules, and introduced runtime checks. This is good, as the possibility of C code corrupting Go code is almost completely eliminated, but it also means a severe growth in call overhead. [Sometimes](https://groups.google.com/forum/#!topic/golang-nuts/ccMkPG6Bi5k) this can be 22x the Go 1.5.3 call time! So if you need performance more than correctness, start your programs with the "GODEBUG=cgocheck=0" environment setting. Copyright 2017 Rana Ian, Tamás Gulácsi. All rights reserved. Use of this source code is governed by The MIT License found in the accompanying LICENSE file.
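As a closing illustration of the database/sql usage and the RETURNING /*LastInsertId*/ convention described earlier, here is a minimal sketch. The import path version, DSN format, table and column names are assumptions for illustration, not copied from the package's documentation; only the "ora" driver name, the colon-prefixed placeholders and the /*LastInsertId*/ marker come from the text above.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "gopkg.in/rana/ora.v4" // registers the "ora" driver; the versioned path is an assumption
    )

    func main() {
        // The DSN format below is an assumption for illustration.
        db, err := sql.Open("ora", "user/password@localhost:1521/orcl")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Mark the RETURNING clause with /*LastInsertId*/ so the driver
        // exposes the returned column through Result.LastInsertId().
        res, err := db.Exec(
            "INSERT INTO items (name) VALUES (:1) RETURNING id /*LastInsertId*/",
            "widget")
        if err != nil {
            log.Fatal(err)
        }
        id, err := res.LastInsertId()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("inserted id:", id)
    }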
Package pgconn is a low-level PostgreSQL database driver. pgconn provides lower level access to a PostgreSQL connection than a database/sql or pgx connection. It operates at nearly the same level as the C library libpq. Use Connect to establish a connection. It accepts a connection string in URL or DSN format and will read the environment for libpq-style environment variables. ExecParams and ExecPrepared execute a single query. They return readers that iterate over each row. The Read method reads all rows into memory. Exec and ExecBatch can execute multiple queries in a single round trip. They return readers that iterate over each query result. The ReadAll method reads all query results into memory. All potentially blocking operations take a context.Context. If a context is canceled while the method is in progress, the method immediately returns. In most circumstances, this will close the underlying connection. The CancelRequest method may be used to request that the PostgreSQL server cancel an in-progress query without forcing the client to abort.
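A minimal sketch of the connect-and-query flow described above, assuming the jackc pgconn import path and an example connection URL taken from the environment:

    package main

    import (
        "context"
        "fmt"
        "log"
        "os"

        "github.com/jackc/pgconn" // assumed import path
    )

    func main() {
        ctx := context.Background()

        // Connect accepts URL or DSN style connection strings and also
        // reads libpq-style environment variables.
        conn, err := pgconn.Connect(ctx, os.Getenv("DATABASE_URL"))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close(ctx)

        // ExecParams runs a single query; Read drains all rows into memory.
        result := conn.ExecParams(ctx, "select generate_series(1, 3)", nil, nil, nil, nil).Read()
        if result.Err != nil {
            log.Fatal(result.Err)
        }
        for _, row := range result.Rows {
            fmt.Println(string(row[0]))
        }
    }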
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example: nzgo defines a simple logger interface. Set logLevel to control logging verbosity and logPath to specify the log file path. By default logging is enabled with logLevel=Info and the current directory as logPath. You can configure logLevel and logPath (i.e. the log file directory) as per your requirement. There is one more logger configuration parameter, "additionalLogFile". This parameter can be used to set an additional log file. additionalLogFile can be used to enable writing logs to stdout; this can be achieved by simply setting "additionalLogFile=stdout". Valid values for 'logLevel' are: "OFF", "DEBUG", "INFO" and "FATAL". logLevel=OFF can be used to turn off logging. It will turn off both internal and additionalLogFile logs. These logger configuration parameters should be mentioned in the connection string. The securityLevel parameter controls the level of security (SSL/TLS) that the driver uses for the connection to the data store:
onlyUnSecured: The driver does not use SSL.
preferredUnSecured: If the server provides a choice, the driver does not use SSL.
preferredSecured: If the server provides a choice, the driver uses SSL.
onlySecured: The driver does not connect unless an SSL connection is available.
Similarly, the Netezza server has the above securityLevel settings. Cases which would fail:
The client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode.
The client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode.
The client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode.
The client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode.
Below are the securityLevel values you can pass in the connection string: Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats): In this case, the application is running on the NPS server itself, so it uses 'localhost'. The Go driver should connect on port 5480 (the postgres port). The user is admin, the password is password, the database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt, with securityLevel set to 'Only Secured session'. When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter marker, i.e. '?', as shown below. The first parameter marker in the query is replaced by the first argument, the second parameter marker by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql.
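A short sketch of the '?' parameter markers and RowsAffected just described. The import path and the driver name passed to sql.Open, as well as the connection parameters and table, are assumptions for illustration:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/IBM/nzgo" // assumed import path; registers the driver
    )

    func main() {
        // Driver name and connection parameters are assumptions/illustrative values.
        db, err := sql.Open("nzgo", "host=localhost port=5480 user=admin password=password dbname=db1 sslmode=require")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // The first '?' is replaced by the first argument, the second '?' by the second.
        res, err := db.Exec("UPDATE t2 SET c1 = ? WHERE c2 = ?", 10, "x")
        if err != nil {
            log.Fatal(err)
        }
        // nzgo supports RowsAffected on the Result type.
        n, _ := res.RowsAffected()
        fmt.Println("rows affected:", n)
    }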
For additional instructions on querying see the documentation for the database/sql package. nzgo also supports transaction queries as specified in the database/sql package (https://github.com/golang/go/wiki/SQLInterface). Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below would create a file 'et1.txt' on the remote system from Netezza table t2 with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza) In most cases clients will use the database/sql package instead of using this package directly. For example: nzgo defines a simple logger interface. Set logLevel to control logging verbosity and logPath to specify log file path. By default logging will be enabled with logLevel=Info and current directory as logPath. You can configure logLevel and logPath (i.e. log file directory) as per your requirement. There is one more configuration parameter with logger "additionalLogFile". This parameter can be used to set additional logger file. additionalLogFile can be used to enable writing logs to stdout, this can be achieved by simply setting "additionalLogFile=stdout" Valid values for 'logLevel' are : "OFF" , "DEBUG", "INFO" and "FATAL". logLevel=OFF can be used to turn off logging. It will turn of both internal and additionalLogFile logs. These logger configuration parameters should be mentinoed in connection string. The level of security (SSL/TLS) that the driver uses for the connection to the data store. onlyUnSecured: The driver does not use SSL. preferredUnSecured: If the server provides a choice, the driver does not use SSL. preferredSecured: If the server provides a choice, the driver uses SSL. onlySecured: The driver does not connect unless an SSL connection is available. Similarly, Netezza server has above securityLevel. Cases which would fail: Client tries to connect with 'Only secured' or 'Preferred secured' mode while server is 'Only Unsecured' mode. Client tries to connect with 'Only secured' or 'Preferred secured' mode while server is 'Preferred Unsecured' mode. Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while server is 'Only Secured' mode. Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while server is 'Preferred Secured' mode. Below are the securityLevel you can pass in connection string : Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats): In this case, application is running from NPS server itself so using 'localhost'. Golang driver should connect on port 5480(postgres port). The user is admin, password is password, database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt with securityLevel as 'Only Secured session' When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below are subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter markers i.e. '?', as shown below. First parameter marker in the query would be replaced by first arguement, second parameter marker in the query would be replaced by second arguement and so on. nzgo supports the RowsAffected() method of the Result type in database/sql. 
For additional instructions on querying see the documentation for the database/sql package. nzgo also supports transaction queries as specified in the database/sql package (https://github.com/golang/go/wiki/SQLInterface). Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below would create a file 'et1.txt' on the remote system from Netezza table t2, with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
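Continuing with the handle opened above, a hedged sketch of the unload; the REMOTESOURCE and DELIMITER options follow the IBM documentation pattern, so verify the exact syntax and file path for your environment.

    // unloadToRemoteFile creates the remote flat file 'et1.txt' from table t2,
    // as described above. The external-table options are illustrative.
    func unloadToRemoteFile(db *sql.DB) error {
        _, err := db.Exec(`CREATE EXTERNAL TABLE 'et1.txt'
            USING (REMOTESOURCE 'golang' DELIMITER '|')
            AS SELECT * FROM t2`)
        return err
    }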
Package kv implements a simple and easy to use persistent key/value (KV) store. Build status is available at https://modern-c.appspot.com/-/builder/?importpath=modernc.org%2fkv 2016-07-11: KV now uses the stable version of lldb (modernc.org/lldb). The stored KV pairs are sorted in the key collation order defined by a user-supplied 'compare' function (passed as a field in Options). Keys, as well as the values associated with them, are opaque []bytes. The maximum size of a "native" key or value is 65787 bytes. Larger keys or values have to be composed of the "native" ones in client code. The maximum DB size kv can handle is 2^60 bytes (1 exabyte). See also [4]: "Block handles". Transactions are resource limited. All changes made by a transaction are held in memory until the top level transaction is committed. ACID[1] implementation notes/details follow. A successfully committed transaction appears (by its effects on the database) to be indivisible ("atomic") iff the transaction is performed in isolation. An aborted (via RollBack) transaction appears like it never happened under the same limitation. Atomic updates to the DB, via functions like Set, Inc, etc., are performed in their own automatic transaction. If the partial progress of any such function fails at any point, the automatic transaction is canceled via Rollback before returning from the function. A non-nil error is returned in that case. All reads, including those made from any other concurrent non-isolated transaction(s), performed during a not yet committed transaction, are dirty reads, i.e. the data returned are consistent with the in-progress state of the open transaction, or all of the open transactions. Obviously, conflicts, data races and inconsistent states can happen, but only if non-isolated transactions are performed. Performing a Rollback at a nested transaction level properly returns the transaction state (and data read from the DB) to what it was before the respective BeginTransaction. Transactions of the atomic updating functions (Set, Put, Delete ...) are always isolated. Transactions controlled by BeginTransaction/Commit/RollBack are isolated iff their execution is serialized. Transactions are committed using the two phase commit protocol (2PC)[2] and a write ahead log (WAL)[3]. DB recovery after a crash is performed automatically using data from the WAL. The last transaction data, either of an in-progress transaction or of a transaction being committed at the moment of the crash, can get lost. No protection from non-readable files, files corrupted by other processes or by memory faults or other HW problems, is provided. Always properly back up your DB data file(s).
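A minimal, hedged sketch of typical use; the Create/Set/Get/BeginTransaction signatures shown are assumptions based on the description above, so verify them against the package reference.

    package main

    import (
        "fmt"
        "log"

        "modernc.org/kv"
    )

    func main() {
        // Create a new DB file; kv.Open would open an existing one.
        db, err := kv.Create("example.db", &kv.Options{})
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Set and Get run in their own automatic transactions, as described above.
        if err := db.Set([]byte("greeting"), []byte("hello")); err != nil {
            log.Fatal(err)
        }
        val, err := db.Get(nil, []byte("greeting")) // nil buffer: let kv allocate
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s\n", val)

        // An explicit transaction (isolated only if execution is serialized).
        if err := db.BeginTransaction(); err != nil {
            log.Fatal(err)
        }
        if err := db.Set([]byte("k"), []byte("v")); err != nil {
            log.Fatal(err)
        }
        if err := db.Commit(); err != nil {
            log.Fatal(err)
        }
    }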
Package lddynamodb provides a DynamoDB-backed persistent data store for the LaunchDarkly Go SDK. For more details about how and why you can use a persistent data store, see: https://docs.launchdarkly.com/sdk/features/storing-data/dynamodb#go To use the DynamoDB data store with the LaunchDarkly client: By default, the data store uses a basic DynamoDB client configuration that is equivalent to doing this: This default configuration will only work if your AWS credentials and region are available from AWS environment variables and/or configuration files. If you want to set those programmatically or modify any other configuration settings, you can use the methods of the lddynamodb.DataStoreBuilder returned by lddynamodb.DataStore(). For example: Note that CacheSeconds() is not a method of lddynamodb.DataStoreBuilder, but rather a method of ldcomponents.PersistentDataStore(), because the caching behavior is provided by the SDK for all database integrations. If you are also using DynamoDB for other purposes, the data store can coexist with other data in the same table as long as you use the Prefix option to make each application use different keys. However, it is advisable to configure separate tables in DynamoDB, for better control over permissions and throughput.
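As a hedged illustration of the configuration described above (the import paths and version suffixes are assumptions; the table name, prefix, and cache TTL are placeholders):

    import (
        "time"

        lddynamodb "github.com/launchdarkly/go-server-sdk-dynamodb"
        ld "gopkg.in/launchdarkly/go-server-sdk.v5"
        "gopkg.in/launchdarkly/go-server-sdk.v5/ldcomponents"
    )

    func newClient(sdkKey string) (*ld.LDClient, error) {
        var config ld.Config
        // Prefix is a method of the lddynamodb builder; CacheSeconds lives on
        // ldcomponents.PersistentDataStore(), as noted above.
        config.DataStore = ldcomponents.PersistentDataStore(
            lddynamodb.DataStore("my-table").Prefix("my-app"),
        ).CacheSeconds(30)
        return ld.MakeCustomClient(sdkKey, config, 5*time.Second)
    }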
Package gomusicbrainz implements a MusicBrainz WS2 client library. MusicBrainz WS2 (Version 2 of the XML Web Service) supports three different requests: With search requests you can search the MusicBrainz database for all entities. GoMusicBrainz implements one search method for every search request in the form: searchTerm follows the Apache Lucene syntax and can either contain multiple fields with logical operators or just a simple search string. Please refer to https://lucene.apache.org/core/4_3_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package_description for more details on the Lucene syntax. limit defines how many entries should be returned (1-100, default 25). offset is used for paging through more than one page of results. To ignore limit and/or offset, set it to -1. You can perform a lookup of an entity when you have the MBID for that entity. GoMusicBrainz provides two ways to perform lookup requests: either the specific lookup method that is implemented for each entity that has a lookup endpoint, in the form, or the common lookup method if you already have an entity (with MBID) that implements the MBLookupEntity interface: With both methods you can include inc params which affect subqueries, e.g. relationships. See http://musicbrainz.org/doc/Development/XML_Web_Service/Version_2#inc.3D_arguments_which_affect_subqueries Not all of them are supported yet.
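A hedged sketch of a search request, assuming the NewWS2Client and SearchArtist signatures follow the package's README; the service URL, application details, and search term are illustrative.

    package main

    import (
        "fmt"
        "log"

        "github.com/michiwend/gomusicbrainz"
    )

    func main() {
        client, err := gomusicbrainz.NewWS2Client(
            "https://musicbrainz.org/ws/2", "my-app", "0.1", "me@example.com")
        if err != nil {
            log.Fatal(err)
        }

        // Lucene search term; -1 keeps the default limit and offset, as described above.
        resp, err := client.SearchArtist(`artist:"Parov Stelar"`, -1, -1)
        if err != nil {
            log.Fatal(err)
        }
        for _, artist := range resp.Artists {
            fmt.Println(artist.Name)
        }
    }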
The Escher HTTP request signing framework is intended to provide a secure way for clients to sign HTTP requests, and for servers to check the integrity of these messages. The goal of the protocol is to introduce an authentication solution for REST API services that is more secure than the currently available protocols. RFC 2617 (HTTP Authentication) defines Basic and Digest Access Authentication. They're widely used, but Basic Access Authentication doesn't encrypt the secret and doesn't add integrity checks to the requests. Digest Access Authentication sends the secret encrypted, but the algorithm, which creates a checksum with a nonce and uses MD5, should not be considered highly secure these days, and as with Basic Access Authentication, there's no way to check the integrity of the message. RFC 6749 (OAuth 2.0 Authorization) enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. This is not helpful for a machine-to-machine communication situation, like REST API authentication, because typically there's no third-party user involved. Additionally, after a token is obtained from the authorization endpoint, it is used with no encryption, and doesn't provide integrity checking or prevent replaying messages. OAuth 2.0 is a stateful protocol which needs a database to store the tokens for client sessions. Amazon and other service providers created protocols addressing these issues; however, there is no public standard with open source implementations available from them. As Escher is based on a publicly documented protocol that is widely used in the wild, the specification does not introduce novel techniques. 2. Signing an HTTP Request Escher defines a stateless signature generation mechanism. The signature is calculated from the key parts of the HTTP request, a service identifier string called the credential scope, and a client key and client secret. The signature generation steps are: canonicalizing the request, creating a string to calculate the signature, and adding the signature to the original request. Escher supports two hash algorithms: SHA256 and SHA512, designed by the NSA (U.S. National Security Agency). 2.1. Canonicalizing the Request In order to calculate a checksum from the key HTTP request parts, the HTTP request method, the request URI, the query parts, the headers, and the request body have to be canonicalized. The output of the canonicalization step will be a string including the request parts separated by LF (line feed, "\n") characters. The string will be used to calculate a checksum for the request. 2.1.1. The HTTP method The HTTP method defined by RFC2616 (Hypertext Transfer Protocol) is case sensitive and must be used in upper case; no transformation has to be applied: POST 2.1.2. The Path The path is the absolute path of the URL. It starts with a slash (/) character and does not include the query part (or the question mark). Escher follows the rules defined by RFC3986 (Uniform Resource Identifier) to normalize the path. Basically it means: Convert relative paths to absolute, remove redundant path components. 
URI-encode each path component: the "reserved characters" defined by RFC3986 (Uniform Resource Identifier) have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (e.g. a%c2%b1b to a%C2%B1b). Normalize empty paths to /. For example: 2.1.3. The Query String RFC3986 (Uniform Resource Identifier) should provide guidance for canonicalization of the query string, but here's the complete list of the rules to be applied: URI-encode each query parameter name and value; the "reserved characters" defined by RFC3986 (Uniform Resource Identifier) have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (e.g. a%c2%b1b to a%C2%B1b). Normalize empty query strings to an empty string. Sort query parameters by the encoded parameter names (ASCII order). Do not shorten parameter values if their parameter name is the same (key=B&key=A is a valid output); the order of parameters in a URL may be significant (this is not defined by the HTTP standard). Separate parameter names and values by = signs, and include = for empty values, too. Separate parameters by &. For example: 2.1.4. The Headers To canonicalize the headers, the following rules have to be followed: Lower case the header names. Separate header names and values by a :, with no spaces. Sort header names in alphabetical order (ASCII). Group headers with the same name into one header, and separate their values by commas, without sorting. Trim header values, keeping all the spaces between quote characters ("). For example: 2.1.5. Signed Headers The list of headers to include when calculating the signature. Lower cased values of header names, separated by ;, like this: date;host 2.1.6. Body Checksum A checksum for the request body, aka the payload, has to be calculated. Escher supports the SHA-256 and SHA-512 algorithms for checksum calculation. If the request contains no body, an empty string has to be used as the input for the hash algorithm. The selected algorithm will be added later to the authorization header, so the server will be able to use the same algorithm for validation. The checksum of the body has to be presented as a lower cased hexadecimal string, for example: 2.1.7. Concatenating the Canonicalized Parts All the steps above produce a row of data, except the headers canonicalization, as it creates one row per header. These have to be concatenated with LF (line feed, "\n") characters into a string. An example: 2.2. Creating the Signature The next step is creating another string which will be directly used to calculate the signature. 2.2.1. Algorithm ID The algorithm ID comes from the algo_prefix (default value is ESR) and the algorithm used to calculate checksums during the signing process. The string algo_prefix, "HMAC", and the algorithm name should be concatenated with dashes, like this: 2.2.2. Long Date The long date is the request date in the ISO 8601 basic format, like YYYYMMDD + T + HHMMSS + Z. Note that the basic format uses no punctuation. An example is: This date has to be added later, too, as a date header (default header name is X-Escher-Date). 2.2.3. 
Date and Credential Scope The next pieces of information are the short date and the credential scope, concatenated with a / character. The short date is the date part of the request date, in ISO 8601 basic formatted representation; the credential scope is defined by the service. Example: This will be added later, too, as part of the authorization header (default header name is X-Escher-Auth). 2.2.4. Checksum of the Canonicalized Request Take the output of step 2.1.7. and create a checksum from the canonicalized request string. This checksum has to be represented as a lower cased hexadecimal string, too. Something like this will be the output: 2.2.5. Concatenating the Signing String Concatenate the outputs of steps 2.2.1. to 2.2.4. with LF characters. Example output: 2.2.6. The Signing Key The signing key is based on the algo_prefix, the client secret, the parts of the credential scope, and the request date. Take the algo_prefix and concatenate the client secret to it. Using this as the key, first apply the HMAC algorithm to the request date, then apply the HMAC algorithm again, with the previous result as the key, to each of the credential scope parts (split at /). The end result is a binary signing key. Pseudo code: 2.2.7. Create the Signature The signature is created from the output of steps 2.2.5. (Signing String) and 2.2.6. (Signing Key). With the selected algorithm, create a checksum. It has to be represented as a lower cased hexadecimal string. Something like this will be the output: 2.3. Adding the Signature to the Request The final step of the Escher signing process is adding the Signature to the request. Escher adds a new header to the request; by default, the header name is X-Escher-Auth. The header value will include the algorithm ID (see 2.2.1.), the client key, the short date and the credential scope (see 2.2.3.), the signed headers string (see 2.1.5.) and finally the signature (see 2.2.7.). The values of these inputs have to be concatenated like this: 3. Presigning a URL The URL presigning process is very similar to the request signing procedure. But for a URL, there are no headers and no request body, so the calculation of the Signature is different. Also, the Signature cannot be added to the headers, but is included as query parameters. A significant difference is that presigning allows defining an expiration time. By default, it is 86400 seconds (24 hours). The current time and the expiration time will be included in the URL, and the server has to check whether the URL is expired. 3.1. Canonicalizing the URL to Presign The canonicalization for URL presigning is the same process as for HTTP requests; this section covers only the differences. 3.1.1. The HTTP method The HTTP method for presigned URLs is fixed to: For example: 3.1.3. The Query String The query comes from the URL, but the algorithm, credentials, date, expiration time, and signed headers have to be added to the query parts. 3.1.4. The Headers A URL has no headers, so Escher creates the Host header based on the URL's domain information and adds it to the canonicalized request. For example: 3.1.5. Signed Headers It will be host, as that's the only header included. Example:
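Stepping back to sections 2.2.6. and 2.2.7., the signing key and signature can be sketched in Go with the standard crypto/hmac package; the helper names below are illustrative and not part of any Escher library.

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/hex"
    )

    func hmacSHA256(key, data []byte) []byte {
        m := hmac.New(sha256.New, key)
        m.Write(data)
        return m.Sum(nil)
    }

    // signingKey derives the binary key of 2.2.6: HMAC the short date with
    // algo_prefix+secret, then fold in each credential-scope part in turn.
    func signingKey(algoPrefix, secret, shortDate string, scopeParts []string) []byte {
        key := hmacSHA256([]byte(algoPrefix+secret), []byte(shortDate))
        for _, part := range scopeParts {
            key = hmacSHA256(key, []byte(part))
        }
        return key
    }

    // signature implements 2.2.7: HMAC the signing string with the signing key
    // and represent the result as a lower cased hexadecimal string.
    func signature(key []byte, stringToSign string) string {
        return hex.EncodeToString(hmacSHA256(key, []byte(stringToSign)))
    }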
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. 
To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at http://godoc.org/github.com/lib/pq/listen_example.
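A minimal sketch of the flow described above; the connection string and channel name are placeholders.

    package main

    import (
        "log"
        "time"

        "github.com/lib/pq"
    )

    func main() {
        listener := pq.NewListener(
            "dbname=mydb sslmode=disable",
            10*time.Second, // minimum reconnect interval
            time.Minute,    // maximum reconnect interval
            func(ev pq.ListenerEventType, err error) {
                if err != nil {
                    log.Println("listener event error:", err)
                }
            },
        )
        if err := listener.Listen("events"); err != nil {
            log.Fatal(err)
        }
        for n := range listener.Notify {
            if n == nil {
                // A nil notification is sent after the connection is re-established;
                // re-check any state you care about here.
                continue
            }
            log.Printf("notification on %q: %s", n.Channel, n.Extra)
        }
    }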
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. To use an emulator with this library, you can set the SPANNER_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Spanner. You can then create and use a client as usual: Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
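A condensed, hedged sketch of the read flow described above (the database path, table, key, and column names are placeholders); column extraction is covered in more detail below.

    import (
        "context"

        "cloud.google.com/go/spanner"
    )

    func readSketch(ctx context.Context) error {
        client, err := spanner.NewClient(ctx, "projects/p/instances/i/databases/d")
        if err != nil {
            return err
        }
        defer client.Close()

        // Single read of one row by key, extracting a column into a Go variable.
        row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"Balance"})
        if err != nil {
            return err
        }
        var balance int64
        if err := row.Column(0, &balance); err != nil {
            return err
        }

        // SQL query with parameters, consumed through RowIterator.Do.
        stmt := spanner.Statement{
            SQL:    "SELECT Id, Balance FROM Accounts WHERE Balance > @min",
            Params: map[string]interface{}{"min": int64(100)},
        }
        iter := client.Single().Query(ctx, stmt)
        defer iter.Stop()
        return iter.Do(func(r *spanner.Row) error {
            var id string
            var b int64
            return r.Columns(&id, &b)
        })
    }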
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may be aborted automatically by the backend and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Cloud Spanner STRUCT (aka STRUCT) values (https://cloud.google.com/spanner/docs/data-types#struct-type) can be represented by a Go struct value. A proto StructType is built from the field types and field tag information of the Go struct. If a field in the struct type definition has a "spanner:<field_name>" tag, then the value of the "spanner" key in the tag is used as the name for that field in the built StructType, otherwise the field name in the struct definition is used. To specify a field with an empty field name in a Cloud Spanner STRUCT type, use the `spanner:""` tag annotation against the corresponding field in the Go struct's type definition. A STRUCT value can contain STRUCT-typed and Array-of-STRUCT typed fields and these can be specified using named struct-typed and []struct-typed fields inside a Go struct. However, embedded struct fields are not allowed. Unexported struct fields are ignored. NULL STRUCT values in Cloud Spanner are typed. A nil pointer to a Go struct value can be used to specify a NULL STRUCT value of the corresponding StructType. Nil and empty slices of a Go STRUCT type can be used to specify NULL and empty array values respectively of the corresponding StructType. A slice of pointers to a Go struct type can be used to specify an array of NULL-able STRUCT values. Spanner supports DML statements like INSERT, UPDATE and DELETE. 
Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also call ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
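A hedged sketch combining a DML Update with a buffered mutation in one read-write transaction; the table and column names are placeholders.

    import (
        "context"
        "log"

        "cloud.google.com/go/spanner"
    )

    func transferSketch(ctx context.Context, client *spanner.Client) error {
        _, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
            rowCount, err := txn.Update(ctx, spanner.Statement{
                SQL:    "UPDATE Accounts SET Balance = Balance - @amt WHERE Id = @id",
                Params: map[string]interface{}{"amt": int64(10), "id": "alice"},
            })
            if err != nil {
                return err
            }
            log.Printf("updated %d rows", rowCount)

            // Mutations buffered here are applied atomically when the transaction commits.
            return txn.BufferWrite([]*spanner.Mutation{
                spanner.Update("Accounts", []string{"Id", "Balance"}, []interface{}{"bob", int64(110)}),
            })
        })
        return err
    }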
**Please use xcore/v2.** **Version 1 is obsolete.** Package xcore is a set of basic objects for programming (XCache for caches, XDataset for data sets, XLanguage for languages and XTemplate for templates). For GO, the actual existing code includes: - XCache: Application Memory Caches for any purpose, with time control and quantity control of objects in the cache, and also checks for changes against the original source. It is a thread safe cache. - XDataset: Basic nested data structures for any purpose (template injection, configuration files, database records, etc). - XLanguage: language dependent text tables for internationalization of code. The sources can be text or XML file definitions. - XTemplate: template system with a meta language to create complex documents (compatible with any text language, HTML, CSS, JS, PDF, XML, etc), heavily used on CMS systems and others. It is already used on sites that serve more than 60 million pages a month (500 pages per second at peak hour) and can be used in a multithreading environment safely. XCache is a library to cache all the data you want in the current application memory for very fast access to the data. The access to the data supports multithreading and concurrency. For the same reason, this type of cache is not persistent (if you exit the application) and cannot grow too much (as memory is the limit). However, you can control a timeout for each cache piece, and optionally a comparison function against a source (file, database, etc) to invalidate the cache. 1. Declare a new XCache with the NewXCache() function: 2. Fill in the cache: Once you have declared the cache, you can fill it with anything you want. The main cache object is an interface{} so you can put here anything you need, from simple variables to complex structures. You need to use the Set function: Note the ID is always a string, so convert a database key to a string if needed. 3. To use the cache, just ask for your entry with the Get function: 4. To maintain the cache: You may need the Del function, to delete a specific entry (maybe because you deleted the record in the database). You may also need the Clean function to delete a percentage of the cache, or Flush to delete it all. The Verify function is used to check cache entries against their sources through the Validator function. Be very careful: if the cache is big or the Validator function is complex (it may, for example, ask a remote server for information), the verification may be VERY slow and use a huge amount of CPU. The Count function gives some stats about the cache. 5. How to use the Verify function: This function is recommended when the source is local and fast to check (for instance a language file or a template file). When the source is distant (another cluster's database, any rpc source on another network, integration of many parts, etc), it is better to create a function that deletes the cache entry when needed (on-demand cache change). The validator function is a func(id string, t time.Time) bool function. The first parameter is the ID of the entry in the cache, the second parameter is the time the entry was created. The validator function returns true if the cache entry is still valid, or false if it needs to be invalidated. The XCache is thread safe. The cache can be limited in quantity of entries and by a timeout for the data. The cache is auto-managed (expired data is invalidated) and can be cleaned partially or totally manually. The XLanguage table of text entries can be loaded from an XML file, an XML string, or a plain text file or string. 
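Before moving on to XLanguage, here is a minimal XCache sketch. The import path and the exact shapes of NewXCache and Get are assumptions drawn from the description above; check the package reference.

    import (
        "fmt"

        "github.com/webability-go/xcore"
    )

    func cacheSketch() {
        // 1. Declare the cache; the (id, max entries, timeout) parameters are
        // assumed from the description above -- zero values for no limits.
        cache := xcore.NewXCache("files", 0, 0)

        // 2. Fill it: the ID is always a string, the value is any interface{}.
        cache.Set("user:123", map[string]string{"name": "Ana"})

        // 3. Read it back; the second return value (assumed) reports whether
        // the entry is present and still valid.
        if data, ok := cache.Get("user:123"); ok {
            fmt.Println(data)
        }

        // 4. Maintain it: Del removes one entry; Clean removes a percentage;
        // Flush removes everything; Verify runs the Validator over the entries.
        cache.Del("user:123")
    }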
An XLanguage table is used to keep a set of id=value entries in any language you need, so it is easy to switch between XLanguage instances based on the required language. Obviously, any XLanguage you load in any language should have the same id entries translated, for the same use. 1. Loading: You can load any file or XML string directly into the object. 1.1 The XML format is: NAMEOFTABLE is the name of your table entry, for example "loginform", "user_report", etc. LG is the ISO 639-1 two-letter language ID, for example "es" for Spanish, "en" for English, "fr" for French, etc. ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text for your entry, for example "Hello", "You are:", "Save" if your table is in English. 1.2 The flat text format is: ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text for your entry, for example "Hello", "You are:", "Save" if your table is in English. There is no name of table or language in this format (you "know" what you are loading). The advantage of using the XML format is having more control over your language, and optionally adding attributes to your entries; for instance you may add attributes translated="yes/no", verified="yes/no", and any other data that your system could insert. XLanguage will ignore those attributes when loading the table. 2. Creation: To create a new, empty XLanguage structure: There are 4 functions to create the language from a file or string, flat text or XML text: Then you can use the set of basic access functions: The SetName/SetLanguage functions are used to set the table name and language of the object (generally to build an object from scratch). The GetName/GetLanguage functions are used to get the table name and language of the object (generally when you load it from some source). The Set/Get/Del functions are used to add or modify an entry, read an entry, or delete an entry in the object. 1. Overview: The XDataset is a set of interfaces and basic classes, ready to use, to build a standard set of data, optionally nested and hierarchical, that can be used for any purpose: - Keep complex data in memory. - Create JSON structures. - Inject data into templates. - Interchange database data (record sets and records). You can store generic supported data in it, as well as any complex interface structures: - Int - Float - String - Time - Bool - []Int - []Float - []Time - []Bool - XDataSetDef (anything extended with this interface) - []String - Anything else ( interface{} ) - XDataSetCollectionDef (anything extended with this interface) The generic supported data comes with a set of functions to get/set that data directly in the XDataset. Example: Note that all references to XDataset and XDatasetCollection are always pointers (to be able to modify their values). 2. XDatasetDef interface: It is the interface that describes a simple set of data mapped as "name": value, where value can be of any type. The interface implements a good number of basic methods to get the value in various formats, such as GetString("name"), GetInt("name"), etc. (see below). If the value is of another type than asked for, the method should convert it if possible. For instance "key":123 requested through GetString("key") should return "123". The XDataset type is a simple map[string]interface{} with all the implemented methods and should be enough to use for almost all required cases. 
However, you can build any complex structure that extends the interface and implements all the required functions to stay compatible with XDatasetDef. 3. XDatasetCollectionDef interface: This is the interface used to extend any type of data as a collection, i.e. an array of XDatasetDef. This is a slice of any XDatasetDef compatible data. The interface implements some methods to work on the array structure, such as Push, Pop, Shift and Unshift, and some methods to search for data in the array. The XDatasetCollection type is a simple []XDatasetDef with all the implemented methods and should be enough to use for almost all required cases. 1. Overview: This is a class to compile and keep a Template that can be injected with an XDataSet structure of data, with a metalanguage to inject the data. The metalanguage is extremely simple and is made to be useful and **really** separate programming from template code (unlike many other generic template systems that just mix code and data). A template is a set of HTML/XML (or any other language) strings with a meta language to inject variables and build a final string. The HTML itself (or any other language) is text code not directly used by the template system, but used to dress the data you want to represent in your preferred language. The variables to inject must be in an XDataSet structure or in a structure extended from the XDataSetDef interface. The injection of data is based on an XDataSet structure of values that can be nested into other XDataSet and XDataSetCollection structures, and so on. The template compiler recognizes nested arrays and automatically makes loops over the information. Templates are made to store reusable HTML code, and above all to be easily changeable by people who do not know how to write programs. A template can be as simple as a single character (no variables to inject) or a very complex set of nested, conditional and looping sub-templates. Yes, this is a template, but a very simple one with no need to inject any data. Let's go more complex: Having an array of data, we want to paint it beautifully: We can create a template to inject this data into it: 2. Create and use XTemplateData: In order to create and use templates, you have all these possible options: Create the XTemplate from a string or a file or any other source: 3. Metalanguage Reference: 3.1 Comments: %-- and --% You may use comments in your template. The comments will be discarded immediately at the compilation of the template and do not interfere with the rest of your code. Example: 3.2 Nested Templates: [[...]] and [[]] You can define new nested templates inside your main template. A nested template is defined by: The templateid is any combination of lowercase letters only (a-z), numbers (0-9), and 3 special chars: . (point), - (dash) and _ (underscore). The template is closed with [[]]. There is no limit to nesting templates. Any nested template will inherit all of its parent's elements and can use those elements too. To call a sub-template, you need to use the &&templateid&& syntax (described below in this document). Example: You may use more than one id for the same template to avoid repetition of the same code. The different ids are separated with a pipe |. Important note: A template will be visible only at the same level as its declaration. 
For example, if you put a subtemplate "b" into a subtemplate "a", it will not be visible as &&b&& from the top level, but only from inside the subtemplate "a". 3.3 Simple Elements: ##...## and {{...}} There are 2 types of simple elements: language elements and data injector elements (also called field elements). We "logically" define the 2 types of elements. The separation is only for human logic and template filling; however, the language information can perfectly well fit into the data to inject (and not use ## entries). 3.3.1 Language elements: ##entry## All the language elements should have the format: ##entry##. A language entry is generally anything written into your code or page that does not come from a database, and should adapt to the language of the client visiting your site. Using the language elements may depend on the internationalization of your page. If your page is going to be in a single language forever, you really don't need to use language entries. The language elements generally carry titles, menu options, table headers, etc. The language entries are set into the "#" entry of the main template XDataset to inject, and it is an XLanguage table. Example: With data to inject: 3.3.2 Field elements: {{fieldname}} Field values should have the format: {{fieldname}}. Your fields' source can be a database or any other preferred repository data source. Example: You can access an element with its path into the data set to inject, separating each field level with a > (greater than). This will take the name of the second hobby in the dataset defined above. (Collections are 0 indexed.) The 1 denotes the second record of the hobbies XDatasetCollection. If the field is not found, it will be replaced with an empty string. Technically your field names can be any string in the dataset. However, do not use { } or > in the names of your fields or the XTemplate may not use them correctly. We recommend using lowercase names with numbers and ._- Accents and UTF8 symbols are also welcome. 3.3.3 Scope: When you use an id to point to a value, the template will first search the available ids of the local level. If no id is found, it will search the upper levels, if any, and so on. Example: At the level of 'data2', using {{appname}} will get back 'DomCore'. At the level of 'key1', using {{appname}} will get back 'Nested App'. At the level of 'key2', using {{appname}} will get back 'DomCore'. At the level of root, 'data1' or 'detail', using {{appname}} will get back an empty string. 3.3.4 Path access: id>id>id>id At any level in the data array, you can access any entry in the subset array. For instance, taking the previous array of data to inject, let's suppose we are in a nested meta element at the 'data1' level. You may want to access the 'Juan' entry directly. The path will be: José's status value from the root will be: 3.4 Meta Elements They consist of an injection of an XDataset, called the "data to inject", into the template. The meta language is directly applied on the structure of the data array. The data to inject is a nested set of variables and values with the structure you want (there are no specific construction rules). You can inject nearly anything into a template's meta elements. Example of a data array to inject: You can directly access any data in the array with its relative path (relative to the level you are at when the meta elements are applied, see below). 
There are 4 structured meta elements in the XTemplate templates to use the data to inject: Reference, Loops, Condition and Debug. The structure of the meta elements in the template must follow the structure of the data to inject. 3.4.1 References to another template: &&order&& 3.4.1.1 When order is a single id (characters a-z0-9.-_), it will make a call to a sub template with the same set of data and replace the &&...&& with the result. The level in the data set is not changed. Example based on the previous array of Fred's data: 3.4.1.2 When order contains 2 parameters separated by a colon :, the second parameter is used to change the level in the data array, to the subset with this id. The level in the data set is changed to this subset. Example based on the previous array of Fred's data: 3.4.1.3 When order contains 3 parameters separated by a colon :, the second and third parameters are used to search the name of the new template based on the data fields to inject. This is an indirect access to the template. The name of the subtemplate is built with parameter3 as a prefix and the content of the parameter2 value. The third parameter must be empty. 3.4.2 Loops: @@order@@ 3.4.2.1 Overview This meta element will loop over each iteration of the set of data and concatenate each created template in the same order. You need to declare a sub template for this element. You may also declare derived sub templates for the different possible cases of the loop: For instance, if your main subtemplate for your loop is called "hobby", you may need a different template for the first element, the last element, the Nth element, an element with a value "no" in the sport field, etc. The supported postfixes are: When the array to iterate is empty: - .none (for example "There is no hobby") When the array contains elements, it will search, in order, for the following templates and use the first one found: - templateid.key.[value] where value is the key of the vector line, if the collection has a named key (string) or is a direct array (0, 1, 2...) - templateid.first if it is the first element of the array set (new from v1.01.11) - templateid.last if it is the last element of the array set (new from v1.01.11) - templateid.even if the line number is even - templateid in all other cases (odd is contained here if even is defined) 3.4.2.2 When order is a single id (characters a-z0-9.-_), it will make a call to the sub template id with the same subset of data with the same id and replace the @@...@@ for each iteration of the data with the result. Example based on the previous array of Fred's data: 3.4.2.3 When order contains 2 parameters separated by a colon :, the first parameter is used to change the level in the data array, to the subset with this id, and the second one is the template to use. Example based on the previous array of Fred's data: 3.4.3 Conditional: ??order?? Makes a call to a subtemplate only if the field exists and has a value. This is very useful to call a sub template, for instance, when an image or a video is set. When the condition is not met, it will search for the [id].none template. The conditional element does not change the level in the data set. 3.4.3.1 When order is a single id (characters a-z0-9.-_), it will make a call to the sub template id with the same field in the data and replace the ??...?? 
with the corresponding template. Example based on the previous array of Fred's data: 3.4.3.2 When order contains 2 parameters separated by a colon :, the second parameter is used to change the level in the data array, to the subset with this id. Example based on the previous array of Fred's data: If the asked field is a catalog (true/false, numbered), you may also use .[value] subtemplates. 3.5 Debug Tools: !!order!! There are two keywords to dump the content of the data set. This is very useful when you don't know the code that calls the template, don't remember some values, or for debugging. 3.5.1 !!dump!! Will show the totality of the data set, with ids and values. 3.5.2 !!list!! Will show only the tree of parameters; values are not shown.
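A very small, hedged sketch tying XDataset and XTemplate together. The constructor and method names (NewXTemplateFromString, Execute) are assumptions based on the description above; verify them against the package reference.

    import (
        "fmt"
        "log"

        "github.com/webability-go/xcore"
    )

    func templateSketch() {
        // The XDataset literal is just a map[string]interface{}, as described above.
        data := xcore.XDataset{"appname": "DomCore"}

        // Compile a template containing one field element, then inject the data.
        tmpl, err := xcore.NewXTemplateFromString("Welcome to {{appname}}")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(tmpl.Execute(&data)) // expected: "Welcome to DomCore"
    }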
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use Open to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name formats): where all parameters must be escaped or use `Config` and `DSN` to construct a DSN string. The following example opens a database handle with the Snowflake account myaccount where the username is jsmith, password is mypassword, database is mydb, schema is testschema, and warehouse is mywh: The following connection parameters are supported: account <string>: Specifies the name of your Snowflake account, where string is the name assigned to your account by Snowflake. In the URL you received from Snowflake, your account name is the first segment in the domain (e.g. abc123 in https://abc123.snowflakecomputing.com). This parameter is optional if your account is specified after the @ character. If you are not on us-west-2 region or AWS deployment, then append the region after the account name, e.g. “<account>.<region>”. If you are not on AWS deployment, then append not only the region, but also the platform, e.g., “<account>.<region>.<platform>”. Account, region, and platform should be separated by a period (“.”), as shown above. If you are using a global url, then append connection group and "global", e.g., "account-<connection_group>.global". Account and connection group are separated by a dash ("-"), as shown above. region <string>: DEPRECATED. You may specify a region, such as “eu-central-1”, with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using MFA for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. 
client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, such that the connection session will never expire. Care should be taken in using this option, as it opens up access for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check run in fail-closed mode. validateDefaultParameters: true by default. Set to false to disable existence and privilege checks for the Database, Schema, Warehouse and Role when setting up the connection. All other parameters are taken as session parameters. For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that AWS S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger is NOP; no output is generated. This is intentional, so that applications that use the same set of logger parameters do not conflict with glog, which is incorporated in the driver logging framework. In order to enable debug logging for the driver, add a build tag sfdebug to the go tool command lines, for example: For tests, run the test command with the tag along with glog parameters. For example, the following command will generate all activity logs in the standard error. Likewise, if you build your application with the tag, you may specify the same set of glog parameters. To get the logs for a specific module, use the -vmodule option. For example, to retrieve the driver.go and connection.go module logs: Note: If your request retrieves no logs, call db.Close() or glog.Flush() to flush the glog buffer. Note: The logger may be changed in the future for better logging. Currently, if the application uses the same parameters as glog, you cannot collect both application and driver logs at the same time. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command with Ctrl+C, add an os.Interrupt trap in context to execute methods that can take the context parameter, e.g., QueryContext, ExecContext. See cmd/selectmany.go for the full example. Queries return SQL column type information in the ColumnType type. The DatabaseTypeName method returns the following strings representing Snowflake data types: Go's database/sql package limits Go's data types to the following for binding and fetching: Fetching data isn't an issue since the database data type is provided along with the data so the Go Snowflake Driver can translate Snowflake data types to Go native data types. When the client binds data to send to the server, however, the driver cannot determine the date/timestamp data types to associate with binding parameters. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type with the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. 
The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake doesn't support the name-based Location types, e.g., America/Los_Angeles. For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. It is required to shift workloads from the Snowflake database to the clients for scale. The download takes place asynchronously in a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note this doesn't help reduce the memory footprint by itself. Consider the Custom JSON Decoder. Experimental: Custom JSON Decoder for parsing Result Set The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on the Travis Ubuntu box show five times less memory footprint while being four times slower. Be cautious when using the option. (Private Preview) JWT authentication ** Not recommended for production use until GA. JWT tokens are supported when compiling with a Go version of 1.10 or higher. A binary compiled with a lower version of Go will return an error at runtime when users try to use the JWT authentication feature. To enable this feature, one can construct a DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use the Config structure, specifying: The <your_private_key> should be a base64 URL encoded PKCS8 rsa private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, one can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, one can run the following commands in a shell: GET and PUT operations are unsupported.
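A hedged sketch of building a DSN from Config and using the binding-flag mechanism described above; the account, credentials, and table are placeholders.

    package main

    import (
        "database/sql"
        "log"
        "time"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func main() {
        dsn, err := sf.DSN(&sf.Config{
            Account:   "myaccount",
            User:      "jsmith",
            Password:  "mypassword",
            Database:  "mydb",
            Schema:    "testschema",
            Warehouse: "mywh",
        })
        if err != nil {
            log.Fatal(err)
        }
        db, err := sql.Open("snowflake", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // The flag value tells the driver to bind the following time.Time
        // as TIMESTAMP_NTZ instead of guessing the date/timestamp type.
        _, err = db.Exec("INSERT INTO t(ntz) VALUES(?)", sf.DataTypeTimestampNtz, time.Now())
        if err != nil {
            log.Fatal(err)
        }
    }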
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `esquery`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object or an array of query objects. `esquery` will generate an array even if there's only one query object.
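A hedged sketch of the chaining style described above; the import paths follow the project's repositories, and the index and field names are placeholders.

    package main

    import (
        "context"
        "log"

        "github.com/aquasecurity/esquery"
        "github.com/elastic/go-elasticsearch/v7"
    )

    func main() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }

        // Build a bool query with the chaining API and run it against the
        // official client; the response is a plain *esapi.Response.
        res, err := esquery.Search().
            Query(
                esquery.Bool().
                    Must(esquery.Term("user", "Kimchy")).
                    Filter(esquery.Match("title", "go client")),
            ).
            Run(
                es,
                es.Search.WithContext(context.Background()),
                es.Search.WithIndex("articles"),
            )
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()
    }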
Package pwnedpasswords implements a client for checking passwords against the "Have I Been Pwned" Pwned Passwords API. The Pwned Passwords API implements a k-Anonymity model that allows you to check your password against the database without providing the API the full password or the full SHA-1 password hash. This works by creating a SHA-1 hash of the password locally, hex-encoding the checksum, and sending only the first five characters (the prefix) to the Pwned Passwords API. The API then returns the suffixes of the hashes it has that start with that prefix. The client then compares the returned hashes locally to look for a match. This prevents the password, hashed or not, from leaving the local system.
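The flow can be illustrated with a standard-library-only sketch (it does not use this package's API); it queries the public Pwned Passwords range endpoint, https://api.pwnedpasswords.com/range/<prefix>.

    package main

    import (
        "bufio"
        "crypto/sha1"
        "fmt"
        "net/http"
        "strings"
    )

    func pwned(password string) (bool, error) {
        // Hash locally and split into the 5-character prefix and the remaining suffix.
        sum := sha1.Sum([]byte(password))
        hash := strings.ToUpper(fmt.Sprintf("%x", sum))
        prefix, suffix := hash[:5], hash[5:]

        // Only the five-character prefix ever leaves the local system.
        resp, err := http.Get("https://api.pwnedpasswords.com/range/" + prefix)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()

        // The response lists "SUFFIX:COUNT" lines; compare suffixes locally.
        scanner := bufio.NewScanner(resp.Body)
        for scanner.Scan() {
            if strings.HasPrefix(scanner.Text(), suffix) {
                return true, nil
            }
        }
        return false, scanner.Err()
    }

    func main() {
        found, err := pwned("password123")
        if err != nil {
            panic(err)
        }
        fmt.Println("compromised:", found)
    }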
Package database is the database client wrapper
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example: nzgo defines a simple logger interface. Set logLevel to control logging verbosity and logPath to specify the log file path. By default, logging is enabled with logLevel=Info and the current directory as logPath. You can configure logLevel and logPath (i.e. the log file directory) as per your requirement. There is one more logger configuration parameter, "additionalLogFile", which can be used to set an additional log file. additionalLogFile can also be used to enable writing logs to stdout by simply setting "additionalLogFile=stdout". Valid values for 'logLevel' are: "OFF", "DEBUG", "INFO" and "FATAL". logLevel=OFF can be used to turn off logging; it turns off both internal and additionalLogFile logs. These logger configuration parameters should be mentioned in the connection string. securityLevel sets the level of security (SSL/TLS) that the driver uses for the connection to the data store. onlyUnSecured: the driver does not use SSL. preferredUnSecured: if the server provides a choice, the driver does not use SSL. preferredSecured: if the server provides a choice, the driver uses SSL. onlySecured: the driver does not connect unless an SSL connection is available. Similarly, the Netezza server has the above security levels. Cases which would fail: the client tries to connect in 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode; the client tries to connect in 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode; the client tries to connect in 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode; the client tries to connect in 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode. Below are the securityLevel values you can pass in the connection string: Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats): In this case, the application is running from the NPS server itself, so it uses 'localhost'. The Go driver should connect on port 5480 (the postgres port). The user is admin, the password is password, the database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt, with securityLevel set to 'Only Secured session'. When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching the same rules as Postgres. It is an error to provide any other value. database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter marker, i.e. '?', as shown below. The first parameter marker in the query is replaced by the first argument, the second parameter marker by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql.
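A minimal usage sketch, assuming the driver registers under the name "nzgo" and the import path github.com/IBM/nzgo; the table and column names are illustrative only, and the DSN keywords follow the connection parameters described above.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/IBM/nzgo"
    )

    func main() {
        dsn := "host=localhost port=5480 user=admin password=password dbname=db1 sslmode=require"
        db, err := sql.Open("nzgo", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // '?' is the Netezza-specific parameter marker; arguments are bound in order.
        rows, err := db.Query("SELECT name FROM employees WHERE dept = ? AND age > ?", "engineering", 30)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var name string
            if err := rows.Scan(&name); err != nil {
                log.Fatal(err)
            }
            fmt.Println(name)
        }
    }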
For additional instructions on querying see the documentation for the database/sql package. nzgo also supports transaction queries as specified in the database/sql package, https://github.com/golang/go/wiki/SQLInterface. Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below creates a file 'et1.txt' on the remote system from the Netezza table t2, with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
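A hedged sketch of issuing such an unload through database/sql; the external table options shown (notably REMOTESOURCE 'golang') and the '/tmp' target directory are assumptions that may differ by driver and server version.

    package main

    import "database/sql"

    // unloadToRemoteFile creates the remote flat file described above from table t2.
    func unloadToRemoteFile(db *sql.DB) error {
        _, err := db.Exec("CREATE EXTERNAL TABLE '/tmp/et1.txt' USING (DELIMITER '|' REMOTESOURCE 'golang') AS SELECT * FROM t2")
        return err
    }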
Package universum provides a Go client for interacting with Universum DB. This client manages connections through a connection pool and allows you to perform operations like Get, Set, Delete, Increment, and others on key-value pairs in the database.
Package rss-processor provides RSS feed processing for clients. It has the search, fetchrss and serverss packages, which let it fetch and store feeds and provide clients with the ability to search stored feeds and receive the results via an HTTP interface. The application uses RSS sources (i.e. links) already stored in the sources table in the database, so for more RSS feeds one only needs to add the source and the application will pick it up on the next wake-up (at the moment every 5 minutes). A MySQL database was chosen as the data backend to leverage its FULL TEXT search capabilities, enabling clients to query for various topics and get results. N.B. the RSS sources added in the DB are not exhaustive; others from the publishers can be added to the table and will be processed on the next run. The application has three main packages. The packages are called and composed in the main function to start up the application. First the application fetches feeds for the RSS sources defined in the database and stores them locally. Thereafter, every 5 minutes, the fetchrss package fetches the RSS feeds and updates the local database with newer feeds. The main program also starts an HTTP server (on port 9000) on a separate goroutine to listen for and process client requests to search the currently stored feeds. The server package uses the search package to search and return matching results from the feeds in the database. The search depends on MySQL FULL TEXT search. The documentation can be read using the godoc tool by running the command and thereafter visiting the address it serves. To enable quick running, a docker-compose file is present in the directory and the app can be started with it. The docker-compose file creates two containers: one for the MySQL db and one for the RSS processor app (this app). Then you can query to get feeds for a given search term, as sketched below. The schema is under the sql/ directory and is automatically used by the docker containers to bootstrap the DB.
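A hedged sketch of querying the search server from Go; the route and query parameter are placeholders chosen for illustration, not taken from the serverss package.

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        // Hypothetical route and parameter; consult the serverss package for the actual endpoint.
        resp, err := http.Get("http://localhost:9000/search?q=technology")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(body))
    }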
Interpreter for a REPL-like feature for gsdb. Functions and variables that are used to interact with the specified database for RUD operations. Proper validation for these commands is extremely important, as other clients can issue erroneous commands directly to a database. Every command passes through either the databaseExecutor or the spaceExecutor. Common utilities and functions used again and again in the code.
Package ppcerrors enhances the standard errors package and absorbs the design of the github.com/pkg/errors package to meet the specific error-handling needs of PPC Studio projects, which makes it easier to identify, append information to, and log errors that are being passed inside and across the systems. By default, the caller information is not printed when an error is created. For example, the following code wraps a "mock mongodb error" error twice: The default output using fmt.Println(err) will be like this: You will see the error chain in the output, ordered from the last wrapped error to the initial error. However, the output does not contain the function name, package path, file path, and line number of each error's occurrence. To print this information, set ppcerrors.Config.Caller to true: Then the output using fmt.Printf("%+v", err) will be like this: Explanation of the above output: Each error in the chain is printed in three lines: Now you can easily locate where the error occurred in the code and see how the error is passed inside and across the systems. See more from the [examples](https://pkg.go.dev/github.com/ppc-games/ppcerrors#pkg-examples). For example, defining errors that occur when performing mongo operations: Note: ErrUpdateOneFailed and ErrNoDocumentWasUpdated are type definition structs, not type error. For example, when an error occurs while saving a user to the database, you can use the predefined definition to Wrap an existing error with additional information, or use New to create a new error. For example, to log MongoDB errors in a middleware: For example, defining error codes that are returned to the client in HTTP responses: Note: ErrInternalServerError and ErrUnauthorized are type errorCode structs, not type error. For example, when an error occurs while one user requests the login API: Note: different kinds of errors returned by SaveUser are normalized to ErrInternalServerError. Refer to the example in the HasDefinition section. For example, wrapping an error returned from a third-party library: Note: there is no need to initialize a definition for each third-party error when there is no specific error handling for that error. For the same reason, there is no need to define an error code if no system cares about that error. But Wrap is always necessary for the purpose of recording where the error occurred.
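The wrapping flow might look roughly like the sketch below. Config.Caller and the %+v formatting come from the text above; the NewDefinition constructor and the Wrap signature are assumptions made for illustration only and may not match the package's actual API.

    package main

    import (
        "errors"
        "fmt"

        "github.com/ppc-games/ppcerrors"
    )

    // Hypothetical predefined definition for a failed mongo update
    // (constructor name assumed for illustration).
    var ErrUpdateOneFailed = ppcerrors.NewDefinition("ErrUpdateOneFailed", "mongo UpdateOne failed")

    func saveUser() error {
        // Pretend the driver returned an error; wrap it with the definition
        // (Wrap signature assumed for illustration).
        mongoErr := errors.New("mock mongodb error")
        return ErrUpdateOneFailed.Wrap(mongoErr, "saving user id=42")
    }

    func main() {
        // Enable caller information (function, package, file, line) in the output.
        ppcerrors.Config.Caller = true

        if err := saveUser(); err != nil {
            fmt.Printf("%+v\n", err)
        }
    }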
Package goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost. goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby. To create a worker, write a function matching the signature and register it using Here is a simple worker that prints its arguments: To create workers that share a database pool or other resources, use a closure to share variables. goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types. For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue: will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to: After building your workers, you will have an executable that you can run which will automatically poll a Redis server and call your workers as jobs arrive. There are several flags which control the operation of the goworker client. -queues="comma,delimited,queues" — This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). Queues are processed in the order they are specified. If you have multiple queues you can assign them weights. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'. -interval=5.0 — Specifies the wait period between polling if no job was in the queue the last time one was requested. -concurrency=25 — Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources. -connections=2 — Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers who limit plans on maxclients. -uri=redis://localhost:6379/ — Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The flag may also be set by the environment variables $($REDIS_PROVIDER) or $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database. -namespace=resque: — Specifies the namespace from which goworker retrieves jobs and stores stats on workers. -exit-on-complete=false — Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations. -use-number=false — Uses json.Number when decoding numbers in the job payloads. This will avoid issues that occur when goworker and the json package decode large numbers as floats, which then get encoded in scientific notation, losing precision. This will default to true soon. You can also configure your own flags for use within your workers.
Be sure to set them before calling goworker.Main(). It is okay to call flags.Parse() before calling goworker.Main() if you need to do additional processing on your flags. To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling. There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished. Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant to loss of the job in the event of failure. If the process is killed with a KILL or by a system failure, there may be one job that is currently in the poller's buffer that will be lost without any representation in either the queue or the worker variable. If you are running goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop and, ten seconds later, sends a KILL to force the process to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database under the worker key as a JSON object with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished, if the process is killed before goworker can flush the update to Redis.
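A minimal worker sketch, assuming the Register helper and the goworker.Main entry point named in this documentation (newer releases expose goworker.Work instead); "MyClass" and "myqueue" mirror the redis-cli example above.

    package main

    import (
        "fmt"

        "github.com/benmanns/goworker"
    )

    // myFunc matches the worker signature: it receives the queue name and the
    // job arguments as a slice of interfaces.
    func myFunc(queue string, args ...interface{}) error {
        fmt.Printf("from %s: %v\n", queue, args)
        return nil
    }

    func init() {
        // Map the Resque class name "MyClass" to the worker function.
        goworker.Register("MyClass", myFunc)
    }

    func main() {
        // Start polling Redis and dispatching jobs (run with -queues=myqueue).
        goworker.Main()
    }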
Package t38c implements a client for working with a Tile38 database.
Package t38c implements a client for working with a Tile38 database.
goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost. goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby. To create a worker, write a function matching the signature and register it using Here is a simple worker that prints its arguments: To create workers that share a database pool or other resources, use a closure to share variables. goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types. For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue: will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to: After building your workers, you will have an executable that you can run which will automatically poll a Redis server and call your workers as jobs arrive. There are several flags which control the operation of the goworker client. -queues="comma,delimited,queues" — This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). Queues are processed in the order they are specified. If you have multiple queues you can assign them weights. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'. -interval=5.0 — Specifies the wait period between polling if no job was in the queue the last time one was requested. -concurrency=25 — Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources. -connections=2 — Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers who limit plans on maxclients. -uri=redis://localhost:6379/ — Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The flag may also be set by the environment variables $($REDIS_PROVIDER) or $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database. -namespace=resque: — Specifies the namespace from which goworker retrieves jobs and stores stats on workers. -exit-on-complete=false — Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations. You can also configure your own flags for use within your workers. Be sure to set them before calling goworker.Main(). It is okay to call flags.Parse() before calling goworker.Main() if you need to do additional processing on your flags. To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling.
There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished. Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant to loss of the job in the event of failure. If the process is killed with a KILL or by a system failure, there may be one job that is currently in the poller's buffer that will be lost without any representation in either the queue or the worker variable. If you are running goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop and, ten seconds later, sends a KILL to force the process to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database under the worker key as a JSON object with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished, if the process is killed before goworker can flush the update to Redis.
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. The officially supported drivers are: The Filesystem and Memory drivers are also available, but are in early stages of development, and so many features do not yet work: The kivik driver system is modeled after the standard library's `sql` and `sql/driver` packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB. CouchDB stores JSON, so Kivik translates Go data structures to and from JSON as necessary. The conversion between Go data types and JSON, and vice versa, is handled automatically according to the rules and behavior described in the documentation for the standard library's `encoding/json` package (https://golang.org/pkg/encoding/json). One would be well-advised to become familiar with using `json` struct field tags (https://golang.org/pkg/encoding/json/#Marshal) when working with JSON documents. Most Kivik methods take `context.Context` as their first argument. This allows the cancellation of blocking operations in the case that the result is no longer needed. A typical use case for a web application would be to cancel a Kivik request if the remote HTTP client has disconnected, rendering the results of the query irrelevant. To learn more about Go's contexts, read the `context` package documentation (https://golang.org/pkg/context/) and read the Go blog post "Go Concurrency Patterns: Context" (https://blog.golang.org/context) for example code. If in doubt, you can pass `context.TODO()` as the context variable. Example: Kivik returns errors that embed an HTTP status code. In most cases, this is the HTTP status code returned by the server. The embedded HTTP status code may be accessed easily using the StatusCode() method, or with a type assertion to `interface { StatusCode() int }`. Example: Any error that does not conform to this interface will be assumed to represent an http.StatusInternalServerError status code. For common usage, authentication should be as simple as including the authentication credentials in the connection DSN. For example: This will connect to `localhost` on port 5984, using the username `admin` and the password `abc123`. When connecting to CouchDB (as in the above example), this will use cookie auth (https://docs.couchdb.org/en/stable/api/server/authn.html?highlight=cookie%20auth#cookie-authentication). Depending on which driver you use, there may be other ways to authenticate as well. At the moment, the CouchDB driver is the only official driver which offers additional authentication methods. Please refer to the CouchDB package documentation for details (https://pkg.go.dev/github.com/IG-Soft/couchdb/v3). With a client handle in hand, you can create a database handle with the DB() method to interact with a specific database.
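A minimal sketch, assuming the upstream go-kivik v3 module together with the CouchDB driver fork referenced above; the database name and document are illustrative only.

    package main

    import (
        "context"
        "fmt"

        _ "github.com/IG-Soft/couchdb/v3" // the CouchDB driver registers itself as "couch"
        "github.com/go-kivik/kivik/v3"
    )

    func main() {
        ctx := context.TODO()

        // Credentials in the DSN are used for cookie auth, as described above.
        client, err := kivik.New("couch", "http://admin:abc123@localhost:5984/")
        if err != nil {
            panic(err)
        }

        // Obtain a database handle and store a document; Go values are marshaled
        // to JSON according to encoding/json rules.
        db := client.DB(ctx, "users")
        doc := map[string]interface{}{"likes": []string{"go", "couchdb"}}
        if _, err := db.Put(ctx, "bob", doc); err != nil {
            // The embedded HTTP status code can be read via the interface described above.
            if sc, ok := err.(interface{ StatusCode() int }); ok {
                fmt.Println("status:", sc.StatusCode())
            }
            fmt.Println("error:", err)
        }
    }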