Package xcore is a set of basic objects for programming (XCache for caches, XDataset for data sets, XLanguage for languages and XTemplate for templates). For Go, the existing code includes:

- XCache: application memory cache for any purpose, with time control and quantity control of the objects in the cache, and optional checks for changes against the original source. It is a thread-safe cache.
- XDataset: basic nested data structures for any purpose (template injection, configuration files, database records, etc.).
- XLanguage: language-dependent text tables for internationalization of code. The sources can be text or XML file definitions.
- XTemplate: template system with a meta language to create complex documents (compatible with any text language: HTML, CSS, JS, PDF, XML, etc.), heavily used in CMS systems and others. It is already used on sites that serve more than 60 million pages a month (500 pages per second at peak hour) and can be used safely in multithreaded environments.

XCache is a library to cache any data you want in the current application memory, for very fast access to the data. Access to the data supports multithreading and concurrency. For the same reason, this type of cache is not persistent (it does not survive an application exit) and cannot grow too much (memory is the limit). However, you can control a timeout for each cache entry, and optionally a comparison function against a source (file, database, etc.) to invalidate the cache.

1. Declare a new XCache with the NewXCache() function:

2. Fill in the cache: once you have declared the cache, you can fill it with anything you want. The main cache object is an interface{}, so you can put anything you need here, from simple variables to complex structures. You need to use the Set function. Note that the ID is always a string, so convert a database key to a string if needed.

3. To use the cache, just ask for your entry with the Get function:

4. To maintain the cache: you may need the Del function, to delete a specific entry (maybe because you deleted the record in the database). You may also need the Clean function to delete a percentage of the cache, or Flush to delete it all. The Verify function is used to check cache entries against their sources through the Validator function. Be very careful: if the cache is big or the Validator function is complex (it may, for example, ask a remote server for information), the verification may be VERY slow and use a lot of CPU. The Count function gives some stats about the cache.

5. How to use the Verify function: this function is recommended when the source is local and fast to check (for instance a language file or a template file). When the source is distant (another cluster's database, an RPC source on another network, an integration of many parts, etc.), it is better to create a function that deletes the cache entry when needed (on-demand cache change). The validator function has the form func(id string, t time.Time) bool. The first parameter is the ID of the entry in the cache, the second parameter is the time the entry was created. The validator function returns true if the cache entry is still valid, or false if it needs to be invalidated.

The XCache is thread safe. The cache can be limited in quantity of entries and timeout for data. The cache is auto-managed (expired data is invalidated) and can be cleaned partially or totally, manually.
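A minimal sketch of the workflow described above. The function names (NewXCache, Set, Get, Del, Clean, Flush) come from this description, but the constructor parameters, return values and import path shown here are assumptions, not verified signatures:

```go
package main

import (
	"fmt"
	"time"

	"github.com/webability-go/xcore" // assumed import path
)

func main() {
	// 1. Declare the cache: an ID, a maximum number of entries and an
	// expiration duration (parameter list assumed).
	cache := xcore.NewXCache("users", 100, 10*time.Minute)

	// 2. Fill in the cache; the ID is always a string.
	cache.Set("user:123", map[string]string{"name": "Fred"})

	// 3. Read an entry back. The second return value (assumed) reports
	// whether the entry is missing or invalid.
	if data, invalid := cache.Get("user:123"); !invalid {
		fmt.Println(data)
	}

	// 4. Maintenance: delete one entry, clean a percentage of the cache,
	// or flush everything.
	cache.Del("user:123")
	cache.Clean(30)
	cache.Flush()
}
```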
The XLanguage table of text entries can be loaded from an XML file, an XML string, or a plain text file or string. It is used to keep a table of id=value entries in any language you need, so it is easy to switch between XLanguage instances based on the required language. Obviously, any XLanguage you load, in any language, should have the same id entries translated, for the same use. The XLanguage object is thread safe.

1. Loading: you can load any file or XML string directly into the object.

1.1 The XML format is: NAMEOFTABLE is the name of your table entry, for example "loginform", "user_report", etc. LG is the ISO 639-1 two-letter language ID, for example "es" for Spanish, "en" for English, "fr" for French, etc. ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text of your entry, for example "Hello", "You are:", "Save" if your table is in English. STATUSVALUE is the status of the entry. You may put any value to control your translation over time and processes.

1.2 The flat text format is: ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text of your entry, for example "Hello", "You are:", "Save" if your table is in English. There is no table name or language in this format (you "know" what you are loading). The advantage of the XML format is that you have more control over your language, and you can eventually add attributes to your entries; for instance you may add attributes translated="yes/no", verified="yes/no", and any other data that your system could insert. The XLanguage will ignore those attributes when loading the table.

2. Creation: to create a new, empty XLanguage structure: There are 4 functions to create the language from a file or string, flat text or XML text: Then you can use the set of basic access functions: SetName/SetLanguage are used to set the table name and language of the object (generally to build an object from scratch). GetName/GetLanguage are used to get the table name and language of the object (generally when you load it from some source). Set/Get/Del are used to add or modify an entry, read an entry, or delete an entry in the object. SetStatus/GetStatus are used to set or get a status for an entry in the object. To create an XML file from the object, you can use the GetXML() function.
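A minimal sketch of the creation and access functions listed above. The function names come from this description; the constructor parameters and the import path are assumptions:

```go
package main

import (
	"fmt"

	"github.com/webability-go/xcore" // assumed import path
)

func main() {
	// Build an empty table from scratch (constructor parameters assumed:
	// table name and language code).
	lang := xcore.NewXLanguage("loginform", "en")

	// Add, read and delete entries, with an optional status per entry.
	lang.Set("greeting", "Hello")
	lang.SetStatus("greeting", "translated")
	fmt.Println(lang.Get("greeting")) // "Hello"
	fmt.Println(lang.GetName(), lang.GetLanguage())
	lang.Del("greeting")

	// Export the table back to XML.
	fmt.Println(lang.GetXML())
}
```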
For instance "key":123 required through GetString("key") should return "123". The XDataset type is a simple map[string]interface{} with all the implemented methods and should be enough to use for almost all required cases. However, you can build any complex structure that extends the interface and implements all the required functions to stay compatible with the XDatasetDef. 3. XDatasetCollectionDef Interface: This is the interface used to extend any type of data as a Collection, i-e an array of XDatasetDef. This is a slice of any XDatasetDef compatible data. The interface implements some methods to work on array structure such as Push, Pop, Shift, Unshift and some methods to search data into the array. The XDatasetCollection type is a simple []DatasetDef with all the implemented methods and should be enough to use for almost all required cases. 1. Overview: The XDataSetTS is a DatasetDef structure, thread safe. It is build on the XDataset with the same properties, but is thread safe to protect Read/Write accesses from different thread. Example: You may also build a XDatasetTS to encapsulate a XDatasetDef that is not thread safe, to use it safely Note that all references to XDatasetTS are pointers, always (to be able to modify the values of them). The DatasetTS meet the XDatasetDef interface 1. Overview: This is a class to compile and keep a Template that can be injected with an XDataSet structure of data, with a metalanguage to inject the data. The metalanguage is extremely simple and is made to be useful and **really** separate programation from template code (not like other many generic template systems that just mix code and data). A template is a set of HTML/XML (or any other language) string with a meta language to inject variables and build a final string. The XCore XTemplate system is based on the injection of parameters, language translation strings and data fields directly into the HTML (Or any other language you need) template. The HTML itself (or any other language) is a text code not directly used by the template system, but used to dress the data you want to represent in your preferred language. The variables to inject must be into a XDataSet structure or into a structure extended from XDataSetDef interface. The injection of data is based on a XDataSet structure of values that can be nested into another XDataSet and XDataSetConnection and so on. The template compiler recognize nested arrays to automatically make loops on the information. Templates are made to store reusable HTML code, and overall easily changeable by people that do not know how to write programs. A template can be as simple as a single character (no variables to inject) to a very complex nested, conditional and loops sub-templates. Yes. this is a template, but a very simple one without need to inject any data. Let's go more complex: Having an array of data, we want to paint it beautifull: We can create a template to inject this data into it: 2. Create and use XTemplateData: In sight to create and use templates, you have all those possible options to use: Creates the XTemplate from a string or a file or any other source: Clone the XTemplate: 3. Metalanguage Reference: 3.1 Comments: %-- and --% You may use comments into your template. The comments will be discarded immediately at the compilation of the template and do not interfere with the rest of your code. 
Example:

3.2 Nested Templates: [[...]] and [[]] You can define new nested templates inside your main template. A nested template is defined by: The templateid is any combination of lowercase letters (a-z), numbers (0-9), and 3 special chars: . (point), - (dash) and _ (underline). The template is closed with [[]]. There is no limit to nesting templates. Any nested template will inherit all the parent elements and can use the parent elements too. To call a sub-template, you need to use the &&templateid&& syntax (described below in this document). Example: You may use more than one id for the same template to avoid repeating the same code. The different ids are separated with a pipe |. Important note: a template will be visible only at the same level as its declaration. For example, if you put a subtemplate "b" into a subtemplate "a", it will not be visible as &&b&& from the top level, but only from inside the subtemplate "a".

3.3 Simple Elements: ##...## and {{...}} There are 2 types of simple elements: language elements and data injector elements (also called field elements). We "logically" define the 2 types of elements. The separation is only for human logic and template filling; the language information can perfectly well be part of the data to inject (and not use ## entries).

3.3.1 Language elements: ##entry## All the language elements should have the format ##entry##. A language entry is generally anything written in your code or page that does not come from a database, and should adapt to the language of the client visiting your site. Using language elements depends on the internationalization of your page. If your page is going to be in a single language forever, you really don't need to use language entries. The language elements generally carry titles, menu options, table headers, etc. The language entries are set into the "#" entry of the main template XDataset to inject, and it is an XLanguage table. Example: With data to inject:

3.3.2 Field elements: {{fieldname}} Field values should have the format {{fieldname}}. Your field source can be a database or any other preferred data repository. Example: You can access an element by its path in the data set to inject, separating each field level with a > (greater than). This will take the name of the second hobby in the dataset defined above (collections are 0-indexed). The 1 denotes the second record of the hobbies XDatasetCollection. If the field is not found, it will be replaced with an empty string. Technically your field names can be any string in the dataset; however, do not use { } or > in the names of your fields or the XTemplate may not use them correctly. We recommend using lowercase names with numbers and ._- Accents and UTF-8 symbols are also welcome.

3.3.3 Scope: When you use an id to point to a value, the template will first search among the available ids of the local level. If no id is found, it will then search in the upper levels, if any, and so on. Example: At the level of 'data2', using {{appname}} will get back 'DomCore'. At the level of 'key1', using {{appname}} will get back 'Nested App'. At the level of 'key2', using {{appname}} will get back 'DomCore'. At the level of root, 'data1' or 'detail', using {{appname}} will get back an empty string.

3.3.4 Path access: id>id>id>id At any level in the data array, you can access any entry in the subset array. For instance, taking the previous array of data to inject, let's suppose we are in a nested meta element at the 'data1' level.
You may want to access the 'Juan' entry directly. The path will be: The José's status value from the root will be:

3.4 Meta Elements They consist of an injection of an XDataset, called the "data to inject", into the template. The meta language is directly applied to the structure of the data array. The data to inject is a nested set of variables and values with the structure you want (there are no specific construction rules). You can inject nearly anything into a template's meta elements. Example of a data array to inject: You can directly access any data in the array with its relative path (relative to the level you are at when the meta elements are applied, see below). There are 4 structured meta elements in XTemplate templates to use the data to inject: Reference, Loops, Condition and Debug. The structure of the meta elements in the template must follow the structure of the data to inject.

3.4.1 References to another template: &&order&&

3.4.1.1 When order is a single id (characters a-z0-9.-_), it will make a call to a sub-template with the same set of data and replace the &&...&& with the result. The level in the data set is not changed. Example based on the previous array of Fred's data:

3.4.1.2 When order contains 2 parameters separated by a colon (:), the second parameter is used to change the level of the data array to the subset with this id. The level in the data set is changed to this subset. Example based on the previous array of Fred's data:

3.4.1.3 When order contains 3 parameters separated by a colon (:), the second and third parameters are used to search the name of the new template based on the data fields to inject. This is an indirect access to the template. The name of the subtemplate is built with parameter3 as prefix and the content of the parameter2 value. The third parameter must be empty.

3.4.2 Loops: @@order@@

3.4.2.1 Overview This meta element will loop over each iteration of the set of data and concatenate each created template in the same order. You need to declare a sub-template for this element. You may also declare derived sub-templates for the different possible cases of the loop. For instance, if your main subtemplate for your loop is called "hobby", you may need a different template for the first element, the last element, the Nth element, an element with the value "no" in the sport field, etc. The supported postfixes are: When the array to iterate is empty: - .none (for example "There is no hobby"). When the array contains elements, it will search, in order, for the following templates and use the first one found: - templateid.key.[value] where value is the key of the vector line, whether the collection has a named key (string) or is a direct array (0, 1, 2...) - templateid.first if it is the first element of the array set (new from v1.01.11) - templateid.last if it is the last element of the array set (new from v1.01.11) - templateid.even if the line number is even - templateid in all other cases (odd is included here if even is defined). Since v2.1.7, you can also use the pseudo field {{.counter}} in the loop subtemplate to get the counter of the loop; it is 1-based (the first loop is 1, not 0).

3.4.2.2 When order is a single id (characters a-z0-9.-_), it will make a call to the sub-template id with the subset of data with the same id, and replace the @@...@@ with the result for each iteration of the data.
Example based on the previous array of Fred's data:

3.4.2.3 When order contains 2 parameters separated by a colon (:), the first parameter is used to change the level of the data array to the subset with this id, and the second one is the template to use. Example based on the previous array of Fred's data:

3.4.3 Conditional: ??order?? Makes a call to a subtemplate only if the field exists and has a value. This is very useful to call a sub-template, for instance, when an image or a video is set. When the condition is not met, it will search for the [id].none template. The conditional element does not change the level in the data set.

3.4.3.1 When order is a single id (characters a-z0-9.-_), it will make a call to the sub-template id with the same field in the data and replace the ??...?? with the corresponding template. Example based on the previous array of Fred's data:

3.4.3.2 When order contains 2 parameters separated by a colon (:), the second parameter is used to change the level of the data array to the subset with this id. Example based on the previous array of Fred's data: If the requested field is a catalog (true/false, numbered, etc.), you may also use .[value] subtemplates.

3.5 Debug Tools: !!order!! There are two keywords to dump the content of the data set. This is very useful when you don't know the code that calls the template, don't remember some values, or for debugging purposes.

3.5.1 !!dump!! Will show the totality of the data set, with ids and values.

3.5.2 !!list!! Will show only the tree of parameters; values are not shown.
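A minimal sketch tying XDataset and XTemplate together with the metalanguage described above (a field element, a nested sub-template and a loop). The constructor and execution names (NewXTemplateFromString, Execute) and the import path are assumptions based on this description, not verified signatures:

```go
package main

import (
	"fmt"

	"github.com/webability-go/xcore" // assumed import path
)

func main() {
	// The data to inject: a nested XDataset with a collection for the loop.
	data := xcore.XDataset{
		"appname": "DomCore",
		"hobbies": &xcore.XDatasetCollection{
			&xcore.XDataset{"name": "tennis"},
			&xcore.XDataset{"name": "chess"},
		},
	}

	// {{appname}} is a field element, [[hobbies]]...[[]] defines a nested
	// template, and @@hobbies@@ loops over the "hobbies" collection.
	source := `App: {{appname}}
@@hobbies@@
[[hobbies]]- {{name}}
[[]]`

	tpl, err := xcore.NewXTemplateFromString(source) // constructor name assumed
	if err != nil {
		panic(err)
	}

	// Inject the data and print the resulting string (Execute name assumed).
	fmt.Println(tpl.Execute(&data))
}
```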
Package ora implements an Oracle database driver.

### Golang Oracle Database Driver ###

#### TL;DR; just use it ####

Call stored procedure with OUT parameters:

An Oracle database may be accessed through the database/sql(http://golang.org/pkg/database/sql) package or through the ora package directly. database/sql offers connection pooling, thread safety, a consistent API to multiple database technologies and a common set of Go types. The ora package offers additional features including pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. The ora package is written with the Oracle Call Interface (OCI) C-language libraries provided by Oracle. The OCI libraries are a standard for client application communication and driver communication with Oracle databases.

The ora package has been verified to work with:

* Oracle Standard 11g (11.2.0.4.0), Linux x86_64 (RHEL6)
* Oracle Enterprise 12c (12.1.0.1.0), Windows 8.1 and AMD64.

---

* [Installation](https://github.com/rana/ora#installation)
* [Data Types](https://github.com/rana/ora#data-types)
* [SQL Placeholder Syntax](https://github.com/rana/ora#sql-placeholder-syntax)
* [Working With The Sql Package](https://github.com/rana/ora#working-with-the-sql-package)
* [Working With The Oracle Package Directly](https://github.com/rana/ora#working-with-the-oracle-package-directly)
* [Logging](https://github.com/rana/ora#logging)
* [Test Database Setup](https://github.com/rana/ora#test-database-setup)
* [Limitations](https://github.com/rana/ora#limitations)
* [License](https://github.com/rana/ora#license)
* [API Reference](http://godoc.org/github.com/rana/ora#pkg-index)
* [Examples](./examples)

---

Minimum requirements are Go 1.3 with CGO enabled, a GCC C compiler, and Oracle 11g (11.2.0.4.0) or Oracle Instant Client (11.2.0.4.0). Install Oracle or Oracle Instant Client. Copy the [oci8.pc](contrib/oci8.pc) from the `contrib` folder (or the one for your system, maybe tailored to your specific locations) to a folder in `$PKG_CONFIG_PATH` or a system folder, such as

The ora package has no external Go dependencies and is available on GitHub and gopkg.in:

*WARNING*: If you have Oracle Instant Client 11.2, you'll need to add "-lnnz11" to the list of linked libs! Otherwise, you may encounter "undefined reference to `nzosSCSP_SetCertSelectionParams' " errors. Oracle Instant Client 12.1 does not need this.

The ora package supports all built-in Oracle data types. The supported Oracle built-in data types are NUMBER, BINARY_DOUBLE, BINARY_FLOAT, FLOAT, DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, CHAR, NCHAR, VARCHAR, VARCHAR2, NVARCHAR2, LONG, CLOB, NCLOB, BLOB, LONG RAW, RAW, ROWID and BFILE. SYS_REFCURSOR is also supported.

Oracle does not provide a built-in boolean type. Oracle provides a single-byte character type. A common practice is to define two single-byte characters which represent true and false. The ora package adopts this approach. The ora package associates a Go bool value with a Go rune and sends and receives the rune to/from a CHAR(1 BYTE) or CHAR(1 CHAR) column. The default false rune is zero '0'. The default true rune is one '1'. The bool rune association may be configured or disabled when directly using the ora package, but not with the database/sql package.
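A minimal sketch of the database/sql route described above. The driver name "ora" is confirmed later in this document; the import path and the data source name format shown here are assumptions:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "gopkg.in/rana/ora.v4" // assumed import path; registers the "ora" driver
)

func main() {
	// The connection string format is an assumption, not part of this document.
	db, err := sql.Open("ora", "scott/tiger@localhost:1521/orcl")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A simple round trip through database/sql; no ora-specific API is needed.
	rows, err := db.Query("SELECT TO_CHAR(SYSDATE) FROM dual")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var now string
		if err := rows.Scan(&now); err != nil {
			log.Fatal(err)
		}
		fmt.Println("server time:", now)
	}
}
```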
Within a SQL string a placeholder may be specified to indicate where a Go variable is placed. The SQL placeholder is an Oracle identifier, from 1 to 30 characters, prefixed with a colon (:). For example: Placeholders within a SQL statement are bound by position. The actual name is not used by the ora package driver e.g., placeholder names :c1, :1, or :xyz are treated equally. The `database/sql` package provides a LastInsertId method to return the last inserted row's id. Oracle does not provide such functionality, but if you append `... RETURNING col /*LastInsertId*/` to your SQL, then it will be presented as LastInsertId. Note that you have to mark with a `/*LastInsertId*/` (case insensitive) your `RETURNING` part, to allow ora to return the last column as `LastInsertId()`. That column must fit in `int64`, though! You may access an Oracle database through the database/sql package. The database/sql package offers a consistent API across different databases, connection pooling, thread safety and a set of common Go types. database/sql makes working with Oracle straight-forward. The ora package implements interfaces in the database/sql/driver package enabling database/sql to communicate with an Oracle database. Using database/sql ensures you never have to call the ora package directly. When using database/sql, the mapping between Go types and Oracle types may be changed slightly. The database/sql package has strict expectations on Go return types. The Go-to-Oracle type mapping for database/sql is: The "ora" driver is automatically registered for use with sql.Open, but you can call ora.SetCfg to set the used configuration options including statement configuration and Rset configuration. When configuring the driver for use with database/sql, keep in mind that database/sql has strict Go type-to-Oracle type mapping expectations. The ora package allows programming with pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. When working with the ora package directly, the API is slightly different than database/sql. When using the ora package directly, the mapping between Go types and Oracle types may be changed. The Go-to-Oracle type mapping for the ora package is: An example of using the ora package directly: Pointers may be used to capture out-bound values from a SQL statement such as an insert or stored procedure call. For example, a numeric pointer captures an identity value: A string pointer captures an out parameter from a stored procedure: Slices may be used to insert multiple records with a single insert statement: The ora package provides nullable Go types to support DML operations such as insert and select. The nullable Go types provided by the ora package are Int64, Int32, Int16, Int8, Uint64, Uint32, Uint16, Uint8, Float64, Float32, Time, IntervalYM, IntervalDS, String, Bool, Binary and Bfile. For example, you may insert nullable Strings and select nullable Strings: The `Stmt.Prep` method is variadic accepting zero or more `GoColumnType` which define a Go return type for a select-list column. For example, a Prep call can be configured to return an int64 and a nullable Int64 from the same column: Go numerics of various sizes are supported in DML operations. The ora package supports int64, int32, int16, int8, uint64, uint32, uint16, uint8, float64 and float32. 
For example, you may insert a uint16 and select numerics of various sizes: If a non-nullable type is defined for a nullable column returning null, the Go type's zero value is returned. GoColumnTypes defined by the ora package are: When Stmt.Prep doesn't receive a GoColumnType, or receives an incorrect GoColumnType, the default value defined in RsetCfg is used.

EnvCfg, SrvCfg, SesCfg, StmtCfg and RsetCfg are the main configuration structs. EnvCfg configures aspects of an Env. SrvCfg configures aspects of a Srv. SesCfg configures aspects of a Ses. StmtCfg configures aspects of a Stmt. RsetCfg configures aspects of a Rset. StmtCfg and RsetCfg have the most options to configure. RsetCfg defines the default mapping between an Oracle select-list column and a Go type. StmtCfg may be set in an EnvCfg, SrvCfg, SesCfg and StmtCfg. RsetCfg may be set in a Stmt. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg may optionally be specified to configure a statement. If StmtCfg isn't specified, default values are applied. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg cascade to new descendent structs. When ora.OpenEnv() is called, a specified EnvCfg is used or a default EnvCfg is created. Creating a Srv with env.OpenSrv() will use SrvCfg.StmtCfg if it is specified; otherwise, EnvCfg.StmtCfg is copied by value to SrvCfg.StmtCfg. Creating a Ses with srv.OpenSes() will use SesCfg.StmtCfg if it is specified; otherwise, SrvCfg.StmtCfg is copied by value to SesCfg.StmtCfg. Creating a Stmt with ses.Prep() will use SesCfg.StmtCfg if it is specified; otherwise, a new StmtCfg with default values is set on the Stmt. Call Stmt.Cfg() to change a Stmt's configuration. An Env may contain multiple Srv. A Srv may contain multiple Ses. A Ses may contain multiple Stmt. A Stmt may contain multiple Rset. Setting a RsetCfg on a StmtCfg does not cascade through descendent structs. Configuration of Stmt.Cfg takes effect prior to calls to Stmt.Exe and Stmt.Qry; consequently, any updates to Stmt.Cfg after a call to Stmt.Exe or Stmt.Qry are not observed. One configuration scenario may be to set a server's select statements to return nullable Go types by default: Another scenario may be to configure the runes mapped to bool values:

Oracle-specific types offered by the ora package are ora.Rset, ora.IntervalYM, ora.IntervalDS, ora.Raw, ora.Lob and ora.Bfile. ora.Rset represents an Oracle SYS_REFCURSOR. ora.IntervalYM represents an Oracle INTERVAL YEAR TO MONTH. ora.IntervalDS represents an Oracle INTERVAL DAY TO SECOND. ora.Raw represents an Oracle RAW or LONG RAW. ora.Lob may represent an Oracle BLOB or Oracle CLOB. And ora.Bfile represents an Oracle BFILE. ROWID columns are returned as strings and don't have a unique Go type.

#### LOBs

The default for SELECTing [BC]LOB columns is a safe Bin or S, which means all the contents of the LOB are slurped into memory and returned as a []byte or string. The DefaultLOBFetchLen means LOBs are prefetched only in a minimal way, to minimize extra memory usage - you can override this using `stmt.SetCfg(stmt.Cfg().SetLOBFetchLen(100))`. If you want more control, you can use ora.L in Prep, Qry or `ses.SetCfg(ses.Cfg().SetBlob(ora.L))`. But keep in mind that Oracle restricts the use of LOBs: it is forbidden to do ANYTHING while reading the LOB! No other query, no exec, no close of the Rset - even *advancing* to the next record in the result set is forbidden! Failing to adhere to these rules results in "Invalid handle" and ORA-03127 errors.
You cannot start reading another LOB until you have finished reading the previous LOB, not even in the same row! Failing this results in ORA-24804! For examples, see [z_lob_test.go](z_lob_test.go).

#### Rset

Rset is used to obtain Go values from a SQL select statement. Methods Rset.Next, Rset.NextRow, and Rset.Len are available. Fields Rset.Row, Rset.Err, Rset.Index, and Rset.ColumnNames are also available. The Next method attempts to load data from an Oracle buffer into Row, returning true when successful. When no data is available, or if an error occurs, Next returns false, setting Row to nil. Any error in Next is assigned to Err. Calling Next increments Index, and method Len returns the total number of rows processed. The NextRow method is convenient for returning a single row. NextRow calls Next and returns Row. ColumnNames returns the names of columns defined by the SQL select statement.

Rset has two usages. Rset may be returned from Stmt.Qry when prepared with a SQL select statement: Or, *Rset may be passed to Stmt.Exe when prepared with a stored procedure accepting an OUT SYS_REFCURSOR parameter: Stored procedures with multiple OUT SYS_REFCURSOR parameters enable a single Exe call to obtain multiple Rsets: The types of values assigned to Row may be configured in StmtCfg.Rset. For configuration to take effect, assign StmtCfg.Rset prior to calling Stmt.Qry or Stmt.Exe. Rset prefetching may be controlled by StmtCfg.PrefetchRowCount and StmtCfg.PrefetchMemorySize. PrefetchRowCount works in coordination with PrefetchMemorySize. When PrefetchRowCount is set to zero, only PrefetchMemorySize is used; otherwise, the minimum of PrefetchRowCount and PrefetchMemorySize is used. The default uses a PrefetchMemorySize of 134MB. Opening and closing Rsets is managed internally. Rset does not have an Open method or Close method.

IntervalYM may be inserted and selected: IntervalDS may be inserted and selected:

Transactions on an Oracle server are supported. DML statements auto-commit unless a transaction has started:

Ses.PrepAndExe, Ses.PrepAndQry, Ses.Ins, Ses.Upd, and Ses.Sel are convenient one-line methods. Ses.PrepAndExe offers a convenient one-line call to Ses.Prep and Stmt.Exe. Ses.PrepAndQry offers a convenient one-line call to Ses.Prep and Stmt.Qry. Ses.Ins composes, prepares and executes a SQL INSERT statement. Ses.Ins is useful when you have to create and maintain a simple INSERT statement with a long list of columns. As table columns are added and dropped over the lifetime of a table, Ses.Ins is easy to read and revise. Ses.Upd composes, prepares and executes a SQL UPDATE statement. Ses.Upd is useful when you have to create and maintain a simple UPDATE statement with a long list of columns. As table columns are added and dropped over the lifetime of a table, Ses.Upd is easy to read and revise. Ses.Sel composes, prepares and queries a SQL SELECT statement. Ses.Sel is useful when you have to create and maintain a simple SELECT statement with a long list of columns that have non-default GoColumnTypes. As table columns are added and dropped over the lifetime of a table, Ses.Sel is easy to read and revise.

The Ses.Ping method checks whether the client's connection to an Oracle server is valid. A call to Ping requires an open Ses. Ping will return a nil error when the connection is fine:

The Srv.Version method is available to obtain the Oracle server version.
A call to Version requires an open Ses: Further code examples are available in the [example file](https://github.com/rana/ora/blob/master/z_example_test.go), test files and [samples folder](https://github.com/rana/ora/tree/master/samples).

The ora package provides a simple ora.Logger interface for logging. Logging is disabled by default. Specify one of three optional built-in logging packages to enable logging, or use your own logging package. ora.Cfg().Log offers various options to enable or disable logging of specific ora driver methods. For example: To use the standard Go log package: which produces a sample log of: Messages are prefixed with 'ORA I' for information or 'ORA E' for an error. The log package is configured to write to os.Stderr by default. Use the ora/lg.Std type to configure an alternative io.Writer. To use the glog package: which produces a sample log of: To use the log15 package: which produces a sample log of: See https://github.com/rana/ora/tree/master/samples/lg15/main.go for sample code which uses the log15 package.

Tests are available and require some setup. Setup varies depending on whether the Oracle server is configured as a container database or a non-container database. It's simpler to set up a non-container database. An example for each setup is explained. Non-container test database setup steps: Container test database setup steps: Some helpful SQL maintenance statements: Run the tests.

database/sql method Stmt.QueryRow is not supported.

Go 1.6 introduced stricter cgo (call C from Go) rules and runtime checks. This is good, as the possibility of C code corrupting Go code is almost completely eliminated, but it also means a severe increase in call overhead. [Sometimes](https://groups.google.com/forum/#!topic/golang-nuts/ccMkPG6Bi5k) this can be 22x the Go 1.5.3 call time! So if you need performance more than correctness, start your programs with the "GODEBUG=cgocheck=0" environment setting.

Copyright 2017 Rana Ian, Tamás Gulácsi. All rights reserved. Use of this source code is governed by The MIT License found in the accompanying LICENSE file.
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format. This package returns the following types for values from the PostgreSQL backend: All other types are returned directly from the backend as []byte values in text format. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. 
Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at https://godoc.org/github.com/lib/pq/example/listen.
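A sketch of the bulk-import flow described above, using pq.CopyIn inside an explicit transaction; the connection string, table and column names are illustrative:

```go
package main

import (
	"database/sql"
	"log"

	"github.com/lib/pq" // registers the "postgres" driver and provides CopyIn
)

func main() {
	db, err := sql.Open("postgres", "dbname=mydb sslmode=disable") // illustrative connection string
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// COPY must run inside an explicit transaction.
	txn, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}

	stmt, err := txn.Prepare(pq.CopyIn("users", "name", "age")) // illustrative table/columns
	if err != nil {
		log.Fatal(err)
	}
	// Each Exec with arguments buffers one row.
	if _, err = stmt.Exec("Alice", 30); err != nil {
		log.Fatal(err)
	}
	if _, err = stmt.Exec("Bob", 25); err != nil {
		log.Fatal(err)
	}
	// A final Exec with no arguments flushes the buffered data.
	if _, err = stmt.Exec(); err != nil {
		log.Fatal(err)
	}
	if err = stmt.Close(); err != nil {
		log.Fatal(err)
	}
	if err = txn.Commit(); err != nil {
		log.Fatal(err)
	}
}
```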
Package pgconn is a low-level PostgreSQL database driver. pgconn provides lower level access to a PostgreSQL connection than a database/sql or pgx connection. It operates at nearly the same level as the C library libpq. Use Connect to establish a connection. It accepts a connection string in URL or DSN format and will read the environment for libpq-style environment variables. ExecParams and ExecPrepared execute a single query. They return readers that iterate over each row. The Read method reads all rows into memory. Exec and ExecBatch can execute multiple queries in a single round trip. They return readers that iterate over each query result. The ReadAll method reads all query results into memory. All potentially blocking operations take a context.Context. If a context is canceled while the method is in progress, the method immediately returns. In most circumstances, this will close the underlying connection. The CancelRequest method may be used to request that the PostgreSQL server cancel an in-progress query without forcing the client to abort.
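A minimal sketch of the Connect / ExecParams / Read flow described above; the import path and exact method signatures are assumptions based on the pgconn v1 API:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/jackc/pgconn" // assumed import path
)

func main() {
	ctx := context.Background()

	// Connect accepts a URL or DSN connection string and also reads
	// libpq-style environment variables.
	conn, err := pgconn.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// ExecParams runs a single query; Read loads all rows into memory.
	result := conn.ExecParams(ctx, "select generate_series(1, $1)",
		[][]byte{[]byte("3")}, nil, nil, nil).Read()
	if result.Err != nil {
		log.Fatal(result.Err)
	}
	for _, row := range result.Rows {
		fmt.Println(string(row[0]))
	}
}
```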
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format. This package returns the following types for values from the PostgreSQL backend: All other types are returned directly from the backend as []byte values in text format. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. 
Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at https://godoc.org/github.com/greatfocus/pq/example/listen. If you need support for Kerberos authentication, add the following to your main package: This package is in a separate module so that users who don't need Kerberos don't have to download unnecessary dependencies. When imported, additional connection string parameters are supported:
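Returning to the LISTEN / NOTIFY mechanism described above, a minimal Listener sketch; the connection string, channel name and reconnect intervals are illustrative, and the upstream lib/pq import path is used here on the assumption that the fork documented above exposes the same API:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/lib/pq"
)

func main() {
	conninfo := "dbname=mydb sslmode=disable" // illustrative

	// The event callback reports problems with the underlying connection.
	reportProblem := func(ev pq.ListenerEventType, err error) {
		if err != nil {
			log.Println(err)
		}
	}

	// A dedicated connection is opened; it can only be used for LISTEN / NOTIFY.
	listener := pq.NewListener(conninfo, 10*time.Second, time.Minute, reportProblem)
	if err := listener.Listen("events"); err != nil {
		log.Fatal(err)
	}

	for n := range listener.Notify {
		if n == nil {
			// A nil pointer is sent after the connection is re-established;
			// some notifications may have been lost in the meantime.
			continue
		}
		fmt.Println("channel:", n.Channel, "payload:", n.Extra)
	}
}
```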
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example:

nzgo defines a simple logger interface. Set logLevel to control logging verbosity and logPath to specify the log file path. By default logging is enabled with logLevel=Info and the current directory as logPath. You can configure logLevel and logPath (i.e. the log file directory) as per your requirements. There is one more logger configuration parameter, "additionalLogFile". This parameter can be used to set an additional log file. additionalLogFile can also be used to enable writing logs to stdout; this is achieved by simply setting "additionalLogFile=stdout". Valid values for 'logLevel' are: "OFF", "DEBUG", "INFO" and "FATAL". logLevel=OFF can be used to turn off logging; it turns off both internal and additionalLogFile logs. These logger configuration parameters should be mentioned in the connection string.

The level of security (SSL/TLS) that the driver uses for the connection to the data store: onlyUnSecured: the driver does not use SSL. preferredUnSecured: if the server provides a choice, the driver does not use SSL. preferredSecured: if the server provides a choice, the driver uses SSL. onlySecured: the driver does not connect unless an SSL connection is available. Similarly, the Netezza server has the above security levels. Cases which would fail: the client tries to connect in 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode; the client tries to connect in 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode; the client tries to connect in 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode; the client tries to connect in 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode. Below are the securityLevel values you can pass in the connection string:

Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats): In this case, the application is running from the NPS server itself, so 'localhost' is used. The Go driver should connect on port 5480 (the Postgres port). The user is admin, the password is password, the database is db1, sslmode is require, the location of the root certificate file is C:/Users/root31.crt, and the securityLevel is 'Only Secured session'.

When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching the same rules as Postgres. It is an error to provide any other value.

database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter marker '?', as shown below. The first parameter marker in the query will be replaced by the first argument, the second parameter marker by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql.
For additional instructions on querying see the documentation for the database/sql package. nzgo also supports transaction queries as specified in the database/sql package, https://github.com/golang/go/wiki/SQLInterface. Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below would create a file 'et1.txt' on the remote system from the Netezza table t2, with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
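A minimal sketch of connecting and querying through database/sql as described above; the import path, the "nzgo" driver name and the parameter values are assumptions or illustrative:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/IBM/nzgo" // assumed import path; registers the "nzgo" driver
)

func main() {
	// Connection parameters follow the keyword=value syntax described above.
	connStr := "host=localhost port=5480 user=admin password=password dbname=db1 sslmode=require"
	db, err := sql.Open("nzgo", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// nzgo uses '?' parameter markers, replaced positionally by the arguments.
	rows, err := db.Query("SELECT name FROM t2 WHERE id > ?", 10)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			log.Fatal(err)
		}
		fmt.Println(name)
	}
}
```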
Package kv implements a simple and easy to use persistent key/value (KV) store, available at https://modern-c.appspot.com/-/builder/?importpath=modernc.org%2fkv. 2016-07-11: KV now uses the stable version of lldb (modernc.org/lldb). The stored KV pairs are sorted in the key collation order defined by a user-supplied 'compare' function (passed as a field in Options). Keys, as well as the values associated with them, are opaque []bytes. The maximum size of a "native" key or value is 65787 bytes. Larger keys or values have to be composed of the "native" ones in client code. The maximum DB size kv can handle is 2^60 bytes (1 exabyte). See also [4]: "Block handles". Transactions are resource limited. All changes made by a transaction are held in memory until the top level transaction is committed. ACID[1] implementation notes/details follow. A successfully committed transaction appears (by its effects on the database) to be indivisible ("atomic") iff the transaction is performed in isolation. An aborted (via Rollback) transaction appears like it never happened, under the same limitation. Atomic updates to the DB, via functions like Set, Inc, etc., are performed in their own automatic transaction. If the partial progress of any such function fails at any point, the automatic transaction is canceled via Rollback before returning from the function. A non-nil error is returned in that case. All reads, including those made from any other concurrent non-isolated transaction(s), performed during a not yet committed transaction, are dirty reads, i.e. the data returned are consistent with the in-progress state of the open transaction, or all of the open transactions. Obviously, conflicts, data races and inconsistent states can happen, but only if non-isolated transactions are performed. Performing a Rollback at a nested transaction level properly returns the transaction state (and data read from the DB) to what it was before the respective BeginTransaction. Transactions of the atomic updating functions (Set, Put, Delete, ...) are always isolated. Transactions controlled by BeginTransaction/Commit/Rollback are isolated only if their execution is serialized. Transactions are committed using the two-phase commit protocol (2PC)[2] and a write ahead log (WAL)[3]. DB recovery after a crash is performed automatically using data from the WAL. The data of the last transaction, either one in progress or one being committed at the moment of the crash, can get lost. No protection from non-readable files, files corrupted by other processes or by memory faults or other HW problems, is provided. Always properly back up your DB data file(s).
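A minimal sketch of the atomic updates and explicit transactions described above; the file name and key/value contents are illustrative, and the exact Options usage is an assumption:

```go
package main

import (
	"fmt"
	"log"

	"modernc.org/kv"
)

func main() {
	// Create a new DB file; a zero-value Options uses the default key collation.
	db, err := kv.Create("example.db", &kv.Options{})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Atomic updates like Set run in their own automatic transaction.
	if err := db.Set([]byte("greeting"), []byte("hello")); err != nil {
		log.Fatal(err)
	}

	// Group several updates in an explicit transaction.
	if err := db.BeginTransaction(); err != nil {
		log.Fatal(err)
	}
	if err := db.Set([]byte("a"), []byte("1")); err != nil {
		log.Fatal(err)
	}
	if err := db.Commit(); err != nil {
		log.Fatal(err)
	}

	// Get copies the value into the supplied buffer (or allocates when it is nil).
	v, err := db.Get(nil, []byte("greeting"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(v))
}
```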
Package lddynamodb provides a DynamoDB-backed persistent data store for the LaunchDarkly Go SDK. For more details about how and why you can use a persistent data store, see: https://docs.launchdarkly.com/sdk/features/storing-data/dynamodb#go To use the DynamoDB data store with the LaunchDarkly client: By default, the data store uses a basic DynamoDB client configuration that is equivalent to doing this: This default configuration will only work if your AWS credentials and region are available from AWS environment variables and/or configuration files. If you want to set those programmatically or modify any other configuration settings, you can use the methods of the lddynamodb.DataStoreBuilder returned by lddynamodb.DataStore(). For example: Note that CacheSeconds() is not a method of lddynamodb.DataStoreBuilder, but rather a method of ldcomponents.PersistentDataStore(), because the caching behavior is provided by the SDK for all database integrations. If you are also using DynamoDB for other purposes, the data store can coexist with other data in the same table as long as you use the Prefix option to make each application use different keys. However, it is advisable to configure separate tables in DynamoDB, for better control over permissions and throughput.
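For orientation, here is a hedged sketch of wiring the store into the SDK configuration; the import paths, the MakeCustomClient constructor, the table name, the prefix and the cache TTL are assumptions based on the v5 SDK layout, so adapt them to your project.

    package ldsetup

    import (
        "time"

        lddynamodb "github.com/launchdarkly/go-server-sdk-dynamodb"
        ld "gopkg.in/launchdarkly/go-server-sdk.v5"
        "gopkg.in/launchdarkly/go-server-sdk.v5/ldcomponents"
    )

    // newLDClient is a sketch only: table name, key prefix and cache TTL are placeholders.
    func newLDClient(sdkKey string) (*ld.LDClient, error) {
        config := ld.Config{
            DataStore: ldcomponents.PersistentDataStore(
                lddynamodb.DataStore("my-ld-table").Prefix("appA"),
            ).CacheSeconds(30), // caching is provided by the SDK, not by lddynamodb
        }
        return ld.MakeCustomClient(sdkKey, config, 5*time.Second)
    }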
Package gomusicbrainz implements a MusicBrainz WS2 client library. MusicBrainz WS2 (Version 2 of the XML Web Service) supports three different requests: With search requests you can search MusicBrainz's database for all entities. GoMusicBrainz implements one search method for every search request in the form: searchTerm follows the Apache Lucene syntax and can either contain multiple fields with logical operators or just a simple search string. Please refer to https://lucene.apache.org/core/4_3_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package_description for more details on the Lucene syntax. limit defines how many entries should be returned (1-100, default 25). offset is used for paging through more than one page of results. To ignore limit and/or offset, set it to -1. You can perform a lookup of an entity when you have the MBID for that entity. GoMusicBrainz provides two ways to perform lookup requests: either the specific lookup method that is implemented for each entity that has a lookup endpoint in the form, or the common lookup method if you already have an entity (with MBID) that implements the MBLookupEntity interface: With both methods you can include inc params which affect subqueries, e.g. relationships. See http://musicbrainz.org/doc/Development/XML_Web_Service/Version_2#inc.3D_arguments_which_affect_subqueries Not all of them are supported yet.
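A minimal search sketch follows; the import path, constructor arguments, and the response field names are assumptions about the GoMusicBrainz API, so verify them against the package before use.

    package main

    import (
        "fmt"
        "log"

        "github.com/michiwend/gomusicbrainz"
    )

    func main() {
        // Client identification (application name, version, contact) is required by the WS2 etiquette.
        client, err := gomusicbrainz.NewWS2Client(
            "https://musicbrainz.org/ws/2", "my app", "0.1", "me@example.com")
        if err != nil {
            log.Fatal(err)
        }

        // Lucene-style search term; -1 keeps the default limit and offset.
        resp, err := client.SearchArtist(`artist:"Parov Stelar"`, -1, -1)
        if err != nil {
            log.Fatal(err)
        }
        for _, artist := range resp.Artists {
            fmt.Println(artist.Name)
        }
    }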
The Escher HTTP request signing framework is intended to provide a secure way for clients to sign HTTP requests, and servers to check the integrity of these messages. The goal of the protocol is to introduce an authentication solution for REST API services that is more secure than the currently available protocols. RFC 2617 (HTTP Authentication) defines Basic and Digest Access Authentication. They're widely used, but Basic Access Authentication doesn't encrypt the secret and doesn't add integrity checks to the requests. Digest Access Authentication sends the secret encrypted, but the algorithm (creating a checksum with a nonce and using MD5) should not be considered highly secure these days, and as with Basic Access Authentication, there's no way to check the integrity of the message. RFC 6749 (OAuth 2.0 Authorization) enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. This is not helpful for a machine-to-machine communication situation, like REST API authentication, because typically there's no third-party user involved. Additionally, after a token is obtained from the authorization endpoint, it is used with no encryption, and doesn't provide integrity checking or prevent message replay. OAuth 2.0 is a stateful protocol which needs a database to store the tokens for client sessions. Amazon and other service providers created protocols addressing these issues, however there is no public standard with open source implementations available from them. As Escher is based on a publicly documented protocol that is widely used in the wild, the specification does not include novelty techniques. 2. Signing an HTTP Request Escher defines a stateless signature generation mechanism. The signature is calculated from the key parts of the HTTP request, a service identifier string called the credential scope, and a client key and client secret. The signature generation steps are: canonicalizing the request, creating a string to calculate the signature, and adding the signature to the original request. Escher supports two hash algorithms: SHA256 and SHA512, designed by the NSA (U.S. National Security Agency). 2.1. Canonicalizing the Request In order to calculate a checksum from the key HTTP request parts, the HTTP request method, the request URI, the query parts, the headers, and the request body have to be canonicalized. The output of the canonicalization step will be a string including the request parts separated by LF (line feed, "\n") characters. The string will be used to calculate a checksum for the request. 2.1.1. The HTTP method The HTTP method defined by RFC2616 (Hypertext Transfer Protocol) is case sensitive, and must be available in upper case; no transformation has to be applied: POST 2.1.2. The Path The path is the absolute path of the URL. It starts with a slash (/) character, and does not include the query part (and the question mark). Escher follows the rules defined by RFC3986 (Uniform Resource Identifier) to normalize the path. Basically it means: Convert relative paths to absolute, remove redundant path components.
URI-encode each path component: the "reserved characters" defined by RFC3986 (Uniform Resource Identifier) have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (e.g. a%c2%b1b to a%C2%B1b). Normalize empty paths to /. For example: 2.1.3. The Query String RFC3986 (Uniform Resource Identifier) should provide guidance for canonicalization of the query string, but here's the complete list of the rules to be applied: URI-encode each query parameter name and value; the "reserved characters" defined by RFC3986 (Uniform Resource Identifier) have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (e.g. a%c2%b1b to a%C2%B1b). Normalize empty query strings to an empty string. Sort query parameters by the encoded parameter names (ASCII order). Do not shorten parameter values if their parameter name is the same (key=B&key=A is a valid output); the order of parameters in a URL may be significant (this is not defined by the HTTP standard). Separate parameter names and values by = signs; include = for empty values, too. Separate parameters by &. For example: 2.1.4. The Headers To canonicalize the headers, the following rules have to be followed: Lower case the header names. Separate header names and values by a :, with no spaces. Sort header names in alphabetical order (ASCII). Group headers with the same name into one header, and separate their values by commas, without sorting. Trim header values, keeping all the spaces between quote characters ("). For example: 2.1.5. Signed Headers The list of headers to include when calculating the signature. Lower cased values of header names, separated by ;, like this: date;host 2.1.6. Body Checksum A checksum for the request body, a.k.a. the payload, has to be calculated. Escher supports SHA-256 and SHA-512 algorithms for checksum calculation. If the request contains no body, an empty string has to be used as the input for the hash algorithm. The selected algorithm will be added later to the authorization header, so the server will be able to use the same algorithm for validation. The checksum of the body has to be presented as a lower cased hexadecimal string, for example: 2.1.7. Concatenating the Canonicalized Parts All the steps above produce a row of data, except the headers canonicalization, as it creates one row per header. These have to be concatenated with LF (line feed, "\n") characters into a string. An example: 2.2. Creating the Signature The next step is creating another string which will be directly used to calculate the signature. 2.2.1. Algorithm ID The algorithm ID comes from the algo_prefix (default value is ESR) and the algorithm used to calculate checksums during the signing process. The string algo_prefix, "HMAC", and the algorithm name should be concatenated with dashes, like this: 2.2.2. Long Date The long date is the request date in the ISO 8601 basic format, like YYYYMMDD + T + HHMMSS + Z. Note that the basic format uses no punctuation. An example is: This date has to be added later, too, as a date header (default header name is X-Escher-Date). 2.2.3.
Date and Credential Scope The next piece of information is the short date and the credential scope, concatenated with a / character. The short date is the date part of the request date, in ISO 8601 basic format; the credential scope is defined by the service. Example: This will be added later, too, as part of the authorization header (default header name is X-Escher-Auth). 2.2.4. Checksum of the Canonicalized Request Take the output of step 2.1.7. and create a checksum from the canonicalized request string. This checksum has to be represented as a lower cased hexadecimal string, too. Something like this will be an output: 2.2.5. Concatenating the Signing String Concatenate the outputs of steps 2.2.1. to 2.2.4. with LF characters. Example output: 2.2.6. The Signing Key The signing key is based on the algo_prefix, the client secret, the parts of the credential scope, and the request date. Take the algo_prefix and concatenate the client secret to it. First apply the HMAC algorithm to the request date, then apply the actual value on each of the credential scope parts (split at /). The end result is a binary signing key. Pseudo code: 2.2.7. Create the Signature The signature is created from the output of steps 2.2.5. (Signing String) and 2.2.6. (Signing Key). With the selected algorithm, create a checksum. It has to be represented as a lower cased hexadecimal string. Something like this will be an output: 2.3. Adding the Signature to the Request The final step of the Escher signing process is adding the Signature to the request. Escher adds a new header to the request; by default, the header name is X-Escher-Auth. The header value will include the algorithm ID (see 2.2.1.), the client key, the short date and the credential scope (see 2.2.3.), the signed headers string (see 2.1.5.) and finally the signature (see 2.2.7.). The values of these inputs have to be concatenated like this: 3. Presigning a URL The URL presigning process is very similar to the request signing procedure. But for a URL, there are no headers and no request body, so the calculation of the Signature is different. Also, the Signature cannot be added to the headers, but is included as query parameters. A significant difference is that presigning allows defining an expiration time. By default, it is 86400 seconds (24 hours). The current time and the expiration time will be included in the URL, and the server has to check if the URL is expired. 3.1. Canonicalizing the URL to Presign The canonicalization for URL presigning is the same process as for HTTP requests; in this section we will cover the differences only. 3.1.1. The HTTP method The HTTP method for presigned URLs is fixed to: For example: 3.1.3. The Query String The query is coming from the URL, but the algorithm, credentials, date, expiration time, and signed headers have to be added to the query parts. 3.1.4. The Headers A URL has no headers, so Escher creates the Host header based on the URL's domain information, and adds it to the canonicalized request. For example: 3.1.5. Signed Headers It will be host, as that's the only header included. Example:
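Stepping back to section 2.2.6., here is a small, self-contained sketch of the signing-key derivation described there, using only the Go standard library; the algo_prefix, client secret, short date and credential scope values are placeholders, and SHA256 is assumed as the selected algorithm.

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "strings"
    )

    func hmacSHA256(key, data []byte) []byte {
        m := hmac.New(sha256.New, key)
        m.Write(data)
        return m.Sum(nil)
    }

    func main() {
        // Placeholder values; the credential scope parts are split at '/'.
        algoPrefix := "ESR"
        clientSecret := "THE-CLIENT-SECRET"
        shortDate := "20181010"
        credentialScope := "eu/suite/escher_request"

        // Start from algo_prefix + client secret, HMAC the short date,
        // then chain the HMAC over each credential scope part.
        key := []byte(algoPrefix + clientSecret)
        key = hmacSHA256(key, []byte(shortDate))
        for _, part := range strings.Split(credentialScope, "/") {
            key = hmacSHA256(key, []byte(part))
        }

        // The result is a binary signing key; hex-dumped here for display only.
        fmt.Println("signing key:", hex.EncodeToString(key))
    }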
**Please use xcore/v2** **The version 1 is obsolete.** Package xcore is a set of basic objects for programming (XCache for caches, XDataset for data sets, XLanguage for languages and XTemplate for templates). For GO, the actual existing code includes: - XCache: Application Memory Caches for any purpose, with time control and quantity control of objects in the cache, and also checks for changes against the original source. It is a thread safe cache. - XDataset: Basic nested data structures for any purpose (template injection, configuration files, database records, etc). - XLanguage: language dependent text tables for internationalization of code. The sources can be text or XML file definitions. - XTemplate: template system with a meta language to create complex documents (compatible with any text language, HTML, CSS, JS, PDF, XML, etc), heavily used on CMS systems and others. It is already used on sites that serve more than 60 million pages a month (500 pages per second at peak hour) and can be used in multithreading environments safely. XCache is a library to cache all the data you want into current application memory for very fast access to the data. The access to the data supports multithreading and concurrency. For the same reason, this type of cache is not persistent (it disappears if you exit the application) and cannot grow too much (as memory is the limit). However, you can control a timeout for each cache piece, and eventually a comparison function against a source (file, database, etc) to invalidate the cache. 1. Declare a new XCache with the NewXCache() function: 2. Fill in the cache: Once you have declared the cache, you can fill it with anything you want. The main cache object is an interface{} so you can put here anything you need, from simple variables to complex structures. You need to use the Set function: Note the ID is always a string, so convert a database key to string if needed. 3. To use the cache, just ask for your entry with the Get function: 4. To maintain the cache: You may need the Del function, to delete a specific entry (maybe because you deleted the record in the database). You may also need the Clean function to delete a percentage of the cache, or Flush to delete it all. The Verify function is used to check cache entries against their sources through the Validator function. Be very careful: if the cache is big or the Validator function is complex (it may, for example, ask a remote server for information), the verification may be VERY slow and use a huge amount of CPU. The Count function gives some stats about the cache. 5. How to use the Verify function: This function is recommended when the source is local and fast to check (for instance a language file or a template file). When the source is distant (other cluster database, any rpc source on another network, integration of many parts, etc), it is more recommended to create a function that will delete the cache when needed (on-demand cache change). The validator function is a func(id string, t time.Time) bool function. The first parameter is the ID of the entry in the cache, the second parameter is the time the entry was created. The validator function returns true if the cache entry is still valid, or false if it needs to be invalidated. The XCache is thread safe. The cache can be limited in quantity of entries and timeout for data. The cache is automanaged (invalid expired data is removed) and can be cleaned partially or totally manually.
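A minimal sketch of the XCache flow described above follows; the cache id, size limit, duration and cached value are arbitrary, and the exact signatures are assumptions based on the xcore v1 API.

    package main

    import (
        "fmt"
        "time"

        "github.com/webability-go/xcore"
    )

    func main() {
        // id, maximum number of entries (0 = unlimited), expiration (0 = never).
        cache := xcore.NewXCache("items", 0, 2*time.Hour)

        // The id is always a string; convert database keys if needed.
        cache.Set("123", map[string]string{"name": "example"})

        if data, ok := cache.Get("123"); ok {
            fmt.Println("hit:", data)
        }

        // Remove a single entry, e.g. because the record was deleted in the database.
        cache.Del("123")
    }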
The XLanguage table of text entries can be loaded from an XML file, XML string or normal text file or string. It is used to keep a table of id=value entries in any languages you need, so it is easy to switch between XLanguage instances based on the required language. Obviously, any XLanguage you load in any language should have the same id entries translated, for the same use. 1. loading: You can load any file or XML string directly into the object. 1.1 The XML format is: NAMEOFTABLE is the name of your table entry, for example "loginform", "user_report", etc. LG is the ISO 639-1 2-letter language ID, for example "es" for Spanish, "en" for English, "fr" for French, etc. ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text for your entry, for example "Hello", "You are:", "Save" if your table is in English. 1.2 The flat text format is: ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text for your entry, for example "Hello", "You are:", "Save" if your table is in English. There is no name of table or language in this format (you "know" what you are loading). The advantage of using the XML format is to have more control over your language, and eventually add attributes to your entries; for instance you may add attributes translated="yes/no", verified="yes/no", and any other data that your system could insert. The XLanguage will ignore those attributes when loading the table. 2. creation: To create a new empty XLanguage structure: There are 4 functions to create the language from a file or string, flat text or XML text: Then you can use the set of basic access functions: The SetName/SetLanguage functions are used to set the table name and language of the object (generally to build an object from scratch). The GetName/GetLanguage functions are used to get the table name and language of the object (generally when you load it from some source). The Set/Get/Del functions are used to add or modify an entry, read an entry, or delete an entry in the object. 1. Overview: The XDataSet is a set of interfaces and basic classes, ready to use, to build a standard set of data, optionally nested and hierarchical, that can be used for any purpose: - Keep complex data in memory. - Create JSON structures. - Inject data into templates. - Interchange database data (record sets and records). You can store into it generic supported data, as well as any complex interface structures: - Int - Float - String - Time - Bool - []Int - []Float - []Time - []Bool - XDataSetDef (anything extended with this interface) - []String - Anything else ( interface{} ) - XDataSetCollectionDef (anything extended with this interface) The generic supported data comes with a set of functions to get/set those data directly into the XDataset. Example: Note that all references to XDataset and XDatasetCollection are always pointers (to be able to modify their values). 2. XDatasetDef interface: It is the interface to describe a simple set of data mapped as "name": value, where value can be of any type. The interface implements a good amount of basic methods to get the value in various formats, such as GetString("name"), GetInt("name"), etc (see below). If the value is of another type than asked, the method should convert it if possible. For instance "key":123 requested through GetString("key") should return "123". The XDataset type is a simple map[string]interface{} with all the implemented methods and should be enough to use for almost all required cases.
However, you can build any complex structure that extends the interface and implements all the required functions to stay compatible with the XDatasetDef. 3. XDatasetCollectionDef interface: This is the interface used to extend any type of data as a collection, i.e. an array of XDatasetDef. This is a slice of any XDatasetDef-compatible data. The interface implements some methods to work on the array structure, such as Push, Pop, Shift, Unshift, and some methods to search data in the array. The XDatasetCollection type is a simple []XDatasetDef with all the implemented methods and should be enough to use for almost all required cases. 1. Overview: This is a class to compile and keep a Template that can be injected with an XDataSet structure of data, with a metalanguage to inject the data. The metalanguage is extremely simple and is made to be useful and **really** separate programming from template code (unlike many other generic template systems that just mix code and data). A template is a set of HTML/XML (or any other language) strings with a meta language to inject variables and build a final string. The XCore XTemplate system is based on the injection of parameters, language translation strings and data fields directly into the HTML (or any other language you need) template. The HTML itself (or any other language) is text code not directly used by the template system, but used to dress the data you want to represent in your preferred language. The variables to inject must be in an XDataSet structure or in a structure extended from the XDataSetDef interface. The injection of data is based on an XDataSet structure of values that can be nested into other XDataSets and XDataSetCollections and so on. The template compiler recognizes nested arrays to automatically make loops on the information. Templates are made to store reusable HTML code, and overall to be easily changeable by people that do not know how to write programs. A template can be as simple as a single character (no variables to inject) or a very complex nested, conditional and looping set of sub-templates. Yes, this is a template, but a very simple one without the need to inject any data. Let's go more complex: Having an array of data, we want to paint it beautifully: We can create a template to inject this data into: 2. Create and use XTemplateData: In order to create and use templates, you have all these possible options: Create the XTemplate from a string or a file or any other source: 3. Metalanguage Reference: 3.1 Comments: %-- and --% You may use comments in your template. The comments will be discarded immediately at the compilation of the template and do not interfere with the rest of your code. Example: 3.2 Nested Templates: [[...]] and [[]] You can define new nested templates inside your main template. A nested template is defined by: The templateid is any combination of lower case letters only (a-z), numbers (0-9), and 3 special chars: . (point), - (dash) and _ (underscore). The template is closed with [[]]. There is no limit on nesting templates. Any nested template will inherit all the father's elements and can use the father's elements too. To call a sub-template, you need to use the &&templateid&& syntax (described below in this document). Example: You may use more than one id for the same template to avoid repetition of the same code. The different ids are separated with a pipe | Important note: A template will be visible only on the same level as its declaration.
For example, if you put a subtemplate "b" into a subtemplate "a", it will not be visible by &&b&& from the top level, but only from within the subtemplate "a". 3.3 Simple Elements: ##...## and {{...}} There are 2 types of simple elements: language elements and data injector elements (also called field elements). We "logically" define the 2 types of elements. The separation is only for human logic and template filling; the language information can perfectly fit into the data to inject (and not use ## entries). 3.3.1 Language elements: ##entry## All the language elements should have the format: ##entry##. A language entry is generally anything written in your code or page that does not come from a database, and should adapt to the language of the client visiting your site. Using the language elements may depend on the internationalization of your page. If your page is going to be in a single language forever, you really don't need to use language entries. The language elements generally carry titles, menu options, table headers, etc. The language entries are set into the "#" entry of the main template XDataset to inject, and it is an XLanguage table. Example: With data to inject: 3.3.2 Field elements: {{fieldname}} Field values should have the format: {{fieldname}}. Your field source can be a database or any other preferred data repository. Example: You can access an element by its path into the data set to inject, separating each field level with a > (greater than). This will take the name of the second hobby in the dataset defined above (collections are 0-indexed). The 1 denotes the second record of the hobbies XDatasetCollection. If the field is not found, it will be replaced with an empty string. Technically your field names can be any string in the dataset. However do not use { } or > in the names of your fields or the XTemplate may not use them correctly. We recommend using lowercase names with numbers and ._- Accents and UTF8 symbols are also welcome. 3.3.3 Scope: When you use an id to reference a value, the template will first search the available ids of the local level. If no id is found, then it will search the upper levels, if any, and so on. Example: At the level of 'data2', using {{appname}} will get back 'DomCore'. At the level of 'key1', using {{appname}} will get back 'Nested App'. At the level of 'key2', using {{appname}} will get back 'DomCore'. At the level of root, 'data1' or 'detail', using {{appname}} will get back an empty string. 3.3.4 Path access: id>id>id>id At any level in the data array, you can access any entry in the subset array. For instance, taking the previous array of data to inject, let's suppose we are in a nested meta element at the 'data1' level. You may want to access directly the 'Juan' entry. The path will be: José's status value from the root will be: 3.4 Meta Elements They consist of the injection of an XDataset, called the "data to inject", into the template. The meta language is directly applied on the structure of the data array. The data to inject is a nested set of variables and values with the structure you want (there are no specific construction rules). You can inject nearly anything into a template's meta elements. Example of a data array to inject: You can access directly any data in the array with its relative path (relative to the level you are at when the meta elements are applied, see below).
There are 4 structured meta elements in the XTemplate templates to use the data to inject: Reference, Loops, Condition and Debug. The structure of the meta elements in the template must follow the structure of the data to inject. 3.4.1 References to another template: &&order&& 3.4.1.1 When order is a single id (characters a-z0-9.-_), it will make a call to a sub-template with the same set of data and replace the &&...&& with the result. The level in the data set is not changed. Example based on the previous array of Fred's data: 3.4.1.2 When order contains 2 parameters separated by a colon :, the second parameter is used to change the level of the data array, to the subset with this id. The level in the data set is changed to this subset. Example based on the previous array of Fred's data: 3.4.1.3 When order contains 3 parameters separated by a colon :, the second and third parameters are used to search the name of the new template based on the data fields to inject. This is an indirect access to the template. The name of the subtemplate is built with parameter3 as prefix and the content of the parameter2 value. The third parameter must be empty. 3.4.2 Loops: @@order@@ 3.4.2.1 Overview This meta element will loop over each iteration of the set of data and concatenate each created template in the same order. You need to declare a sub-template for this element. You may also declare derived sub-templates for the different possible cases of the loop: For instance, if your main subtemplate for your loop is called "hobby", you may need a different template for the first element, the last element, the Nth element, an element with the value "no" in the sport field, etc. The supported postfixes are: When the array to iterate is empty: - .none (for example "There is no hobby") When the array contains elements, it will search, in order, for the following templates and use the first found: - templateid.key.[value] value is the key of the vector line, if the collection has a named key (string) or is a direct array (0, 1, 2...) - templateid.first if it is the first element of the array set (new from v1.01.11) - templateid.last if it is the last element of the array set (new from v1.01.11) - templateid.even if the line number is even - templateid in all other cases (odd is contained here if even is defined) 3.4.2.2 When order is a single id (characters a-z0-9.-_), it will make a call to the sub-template id with the subset of data with the same id and replace the @@...@@ for each iteration of the data with the result. Example based on the previous array of Fred's data: 3.4.2.3 When order contains 2 parameters separated by a colon :, the first parameter is used to change the level of the data array, to the subset with this id, and the second one is the template to use. Example based on the previous array of Fred's data: 3.4.3 Conditional: ??order?? Makes a call to a subtemplate only if the field exists and has a value. This is very useful to call a sub-template, for instance, when an image or a video is set. When the condition is not met, it will search for the [id].none template. The conditional element does not change the level in the data set. 3.4.3.1 When order is a single id (characters a-z0-9.-_), it will make a call to the sub-template id with the same field in the data and replace the ??...??
with the corresponding template. Example based on the previous array of Fred's data: 3.4.3.2 When order contains 2 parameters separated by a colon :, the second parameter is used to change the level of the data array, to the subset with this id. Example based on the previous array of Fred's data: If the asked field is a catalog, true/false or numbered value, you may also use .[value] subtemplates 3.5 Debug Tools: !!order!! There are two keywords to dump the content of the data set. This is very useful when you don't know the code that calls the template, don't remember some values, or for debugging purposes. 3.5.1 !!dump!! Will show the totality of the data set, with ids and values. 3.5.2 !!list!! Will show only the tree of parameters; values are not shown.
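To close the xcore overview, here is a rough sketch of compiling a template and injecting an XDataset that carries a language table and a nested collection; the constructor names, the Execute signature, and the "#" language entry follow the description above, but treat them as assumptions about the v1 API rather than a definitive reference.

    package main

    import (
        "fmt"

        "github.com/webability-go/xcore"
    )

    func main() {
        // ##title## is a language element, {{...}} are field elements, and the
        // path hobbies>0>name reaches the first entry of a nested collection.
        tmpl, err := xcore.NewXTemplateFromString(
            "##title##\nHello {{name}}, your first hobby is {{hobbies>0>name}}.\n")
        if err != nil {
            fmt.Println(err)
            return
        }

        lang := xcore.NewXLanguage("page", "en")
        lang.Set("title", "My hobbies")

        data := &xcore.XDataset{
            "#":    lang, // language table used by ##...## entries
            "name": "Fred",
            "hobbies": &xcore.XDatasetCollection{
                &xcore.XDataset{"name": "football"},
            },
        }

        fmt.Println(tmpl.Execute(data))
    }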
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use Open to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name formats): where all parameters must be escaped or use `Config` and `DSN` to construct a DSN string. The following example opens a database handle with the Snowflake account myaccount where the username is jsmith, password is mypassword, database is mydb, schema is testschema, and warehouse is mywh: The following connection parameters are supported: account <string>: Specifies the name of your Snowflake account, where string is the name assigned to your account by Snowflake. In the URL you received from Snowflake, your account name is the first segment in the domain (e.g. abc123 in https://abc123.snowflakecomputing.com). This parameter is optional if your account is specified after the @ character. If you are not on us-west-2 region or AWS deployment, then append the region after the account name, e.g. “<account>.<region>”. If you are not on AWS deployment, then append not only the region, but also the platform, e.g., “<account>.<region>.<platform>”. Account, region, and platform should be separated by a period (“.”), as shown above. If you are using a global url, then append connection group and "global", e.g., "account-<connection_group>.global". Account and connection group are separated by a dash ("-"), as shown above. region <string>: DEPRECATED. You may specify a region, such as “eu-central-1”, with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using MFA for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. 
client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, such that the connection session will never expire. Care should be taken in using this option as it opens up access for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check fail-closed. validateDefaultParameters: true by default. Set to false to disable the existence and privilege checks for the Database, Schema, Warehouse and Role when setting up the connection. All other parameters are taken as session parameters. For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that AWS S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger is NOP; no output is generated. This is intentional, so that applications that use the same set of logger parameters do not conflict with glog, which is incorporated in the driver's logging framework. In order to enable debug logging for the driver, add a build tag sfdebug to the go tool command lines, for example: For tests, run the test command with the tag along with glog parameters. For example, the following command will generate all activity logs on standard error. Likewise, if you build your application with the tag, you may specify the same set of glog parameters. To get the logs for a specific module, use the -vmodule option. For example, to retrieve the driver.go and connection.go module logs: Note: If your request retrieves no logs, call db.Close() or glog.flush() to flush the glog buffer. Note: The logger may be changed in the future for better logging. Currently, if the application uses the same parameters as glog, you cannot collect both application and driver logs at the same time. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command with Ctrl+C, add an os.Interrupt trap in the context passed to methods that take a context parameter, e.g., QueryContext, ExecContext. See cmd/selectmany.go for the full example. Queries return SQL column type information in the ColumnType type. The DatabaseTypeName method returns the following strings representing Snowflake data types: Go's database/sql package limits Go's data types to the following for binding and fetching: Fetching data isn't an issue since the database data type is provided along with the data, so the Go Snowflake Driver can translate Snowflake data types to Go native data types. When the client binds data to send to the server, however, the driver cannot determine the date/timestamp data types to associate with binding parameters. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type.
The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake doesn't support the name-based Location types, e.g., America/Los_Angeles. For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. This is required to shift workloads from the Snowflake database to the clients, for scale. The download takes place in a goroutine named "Chunk Downloader", asynchronously, so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note this doesn't help reduce the memory footprint by itself. Consider the Custom JSON Decoder. Experimental: Custom JSON Decoder for parsing the Result Set The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show five times less memory footprint while being four times slower. Be cautious when using this option. (Private Preview) JWT authentication ** Not recommended for production use until GA. JWT tokens are supported when compiling with a Go version of 1.10 or higher. A binary compiled with a lower version of Go will return an error at runtime when users try to use the JWT authentication feature. To enable this feature, one can construct a DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use the Config structure, specifying: The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice to URL base64 format is through the base64.URLEncoding.EncodeToString() function. On the server side, one can alter the public key with the SQL command: The <your_public_key> should be a base64 standard-encoded PKI public key string. One way to encode a byte slice to base64 standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, one can run the following commands in a shell: GET and PUT operations are unsupported.
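Tying the connection and binding discussions together, here is a hedged sketch that opens a handle with the example DSN form and then binds a time.Time value as TIMESTAMP_NTZ via a binding parameter flag; the account, credentials and table name are placeholders.

    package main

    import (
        "database/sql"
        "log"
        "time"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func main() {
        // user:password@account/database/schema plus a warehouse, per the DSN formats above.
        db, err := sql.Open("snowflake",
            "jsmith:mypassword@myaccount/mydb/testschema?warehouse=mywh")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // The flag tells the driver to bind the following time.Time as
        // TIMESTAMP_NTZ rather than the default; the table name is a placeholder.
        if _, err := db.Exec(
            "INSERT INTO tz_test(created_at) VALUES(?)",
            sf.DataTypeTimestampNtz,
            time.Now(),
        ); err != nil {
            log.Fatal(err)
        }
    }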
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client, nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `esquery`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `esquery` will generate an array even if there's only one query object.
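A small sketch of the chaining style follows; the import path, index name, field values and client setup are assumptions, so adjust them to your environment.

    package main

    import (
        "log"

        "github.com/aquasecurity/esquery"
        "github.com/elastic/go-elasticsearch/v7"
    )

    func main() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }

        // Build the query with esquery and execute it through the official client.
        res, err := esquery.Search().
            Query(esquery.Term("user", "Kimchy")).
            Size(10).
            Run(es, es.Search.WithIndex("users"))
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()

        // res is a plain *esapi.Response from the official client.
        log.Println(res.StatusCode)
    }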
Package database is the database client wrapper
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example: nzgo defines a simple logger interface. Set logLevel to control logging verbosity and logPath to specify the log file path. By default logging will be enabled with logLevel=Info and the current directory as logPath. You can configure logLevel and logPath (i.e. the log file directory) as per your requirement. There is one more configuration parameter for the logger, "additionalLogFile". This parameter can be used to set an additional log file. additionalLogFile can be used to enable writing logs to stdout; this can be achieved by simply setting "additionalLogFile=stdout". Valid values for 'logLevel' are: "OFF", "DEBUG", "INFO" and "FATAL". logLevel=OFF can be used to turn off logging. It will turn off both internal and additionalLogFile logs. These logger configuration parameters should be mentioned in the connection string. The level of security (SSL/TLS) that the driver uses for the connection to the data store: onlyUnSecured: The driver does not use SSL. preferredUnSecured: If the server provides a choice, the driver does not use SSL. preferredSecured: If the server provides a choice, the driver uses SSL. onlySecured: The driver does not connect unless an SSL connection is available. Similarly, the Netezza server has the above securityLevel. Cases which would fail: Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode. Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode. Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode. Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode. Below are the securityLevel values you can pass in the connection string: Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats): In this case, the application is running from the NPS server itself, so it uses 'localhost'. The Golang driver should connect on port 5480 (postgres port). The user is admin, the password is password, the database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt, with securityLevel 'Only Secured session'. When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter markers, i.e. '?', as shown below. The first parameter marker in the query will be replaced by the first argument, the second parameter marker in the query will be replaced by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql.
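The sketch below opens a handle with a keyword/value connection string matching the example described above (admin/password/db1, port 5480, a root certificate, secured session) and runs a query with '?' parameter markers; the numeric securityLevel value and the import path are assumptions, so verify them against the driver documentation.

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/IBM/nzgo"
    )

    func main() {
        // securityLevel=3 is assumed here to mean 'Only Secured session'; check the driver docs.
        dsn := "host=localhost port=5480 user=admin password=password dbname=db1 " +
            "sslmode=require sslrootcert=C:/Users/root31.crt securityLevel=3"

        db, err := sql.Open("nzgo", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // nzgo uses '?' parameter markers; table t1 is a placeholder.
        var name string
        if err := db.QueryRow("SELECT name FROM t1 WHERE id = ?", 1).Scan(&name); err != nil {
            log.Fatal(err)
        }
        log.Println(name)
    }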
For additional instructions on querying see the documentation for the database/sql package. nzgo also supports transaction queries as specified in the database/sql package, https://github.com/golang/go/wiki/SQLInterface. Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below creates a file 'et1.txt' on the remote system from the Netezza table t2, with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables
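Going back to transactions, which are started by calling Begin as noted above, the following sketch shows the usual database/sql flow; db is assumed to be an open *sql.DB on the nzgo driver and the table and values are placeholders.

    package nztx

    import "database/sql"

    // insertOne sketches the Begin/Exec/Commit flow; table t1 is a placeholder.
    func insertOne(db *sql.DB) error {
        tx, err := db.Begin()
        if err != nil {
            return err
        }
        if _, err := tx.Exec("INSERT INTO t1 VALUES (?, ?)", 1, "one"); err != nil {
            tx.Rollback() // undo the partial work before reporting the error
            return err
        }
        return tx.Commit()
    }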
Package goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost. goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby. To create a worker, write a function matching the signature and register it using Here is a simple worker that prints its arguments: To create workers that share a database pool or other resources, use a closure to share variables. goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types. For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue: will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to: After building your workers, you will have an executable that you can run which will automatically poll a Redis server and call your workers as jobs arrive. There are several flags which control the operation of the goworker client. -queues="comma,delimited,queues" — This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). Queues are processed in the order they are specified. If you have multiple queues you can assign them weights. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'. -interval=5.0 — Specifies the wait period between polling if no job was in the queue the last time one was requested. -concurrency=25 — Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources. -connections=2 — Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers who limit plans on maxclients. -uri=redis://localhost:6379/ — Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The flag may also be set by the environment variable $($REDIS_PROVIDER) or $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database. -namespace=resque: — Specifies the namespace from which goworker retrieves jobs and stores stats on workers. -exit-on-complete=false — Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations. -use-number=false — Uses json.Number when decoding numbers in the job payloads. This will avoid issues that occur when goworker and the json package decode large numbers as floats, which then get encoded in scientific notation, losing precision. This will default to true soon. You can also configure your own flags for use within your workers.
Be sure to set them before calling goworker.Main(). It is okay to call flags.Parse() before calling goworker.Main() if you need to do additional processing on your flags. To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling. There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished. Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant to loss of the job in the event of failure. If the process is killed with a KILL or by a system failure, there may be one job that is currently in the poller's buffer that will be lost without any representation in either the queue or the worker variable. If you are running Goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop, ten seconds later sends a KILL to force the process to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database under as a JSON object with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished, if the process is killed before goworker can flush the update to Redis.
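A minimal worker, close to the shape described above, is sketched below; the import path is an assumption, and the goworker.Main entry point is the one this text references (newer releases expose goworker.Work instead).

    package main

    import (
        "fmt"

        "github.com/benmanns/goworker"
    )

    // myFunc receives the queue name and the job arguments decoded from the payload.
    func myFunc(queue string, args ...interface{}) error {
        fmt.Printf("From %s: %v\n", queue, args)
        return nil
    }

    func init() {
        // Register the worker under the Resque class name used when enqueuing jobs.
        goworker.Register("MyClass", myFunc)
    }

    func main() {
        // Poll Redis and dispatch jobs until stopped by a QUIT, TERM, or INT signal.
        goworker.Main()
    }

Run the resulting binary with the flags described above, for example with -queues=myqueue, so the process knows which queues to poll.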
Package t38c implements a client for working with a Tile38 database.
goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost. goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby. To create a worker, write a function matching the signature and register it using Here is a simple worker that prints its arguments: To create workers that share a database pool or other resources, use a closure to share variables. goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types. For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue: will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to: After building your workers, you will have an executable that you can run which will automatically poll a Redis server and call your workers as jobs arrive. There are several flags which control the operation of the goworker client. -queues="comma,delimited,queues" — This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). Queues are processed in the order they are specified. If you have multiple queues you can assign them weights. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'. -interval=5.0 — Specifies the wait period between polling if no job was in the queue the last time one was requested. -concurrency=25 — Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources. -connections=2 — Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers who limit plans on maxclients. -uri=redis://localhost:6379/ — Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The flag may also be set by the environment variable $($REDIS_PROVIDER) or $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database. -namespace=resque: — Specifies the namespace from which goworker retrieves jobs and stores stats on workers. -exit-on-complete=false — Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations. You can also configure your own flags for use within your workers. Be sure to set them before calling goworker.Main(). It is okay to call flags.Parse() before calling goworker.Main() if you need to do additional processing on your flags. To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling.
There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished. Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant to loss of the job in the event of failure. If the process is killed with a KILL or by a system failure, there may be one job that is currently in the poller's buffer that will be lost without any representation in either the queue or the worker variable. If you are running Goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop, ten seconds later sends a KILL to force the process to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database under as a JSON object with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished, if the process is killed before goworker can flush the update to Redis.
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. The officially supported drivers are: The Filesystem and Memory drivers are also available, but in early stages of development, and so many features do not yet work: The kivik driver system is modeled after the standard library's `sql` and `sql/driver` packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB. CouchDB stores JSON, so Kivik translates Go data structures to and from JSON as necessary. The conversion between Go data types and JSON, and vice versa, is handled automatically according to the rules and behavior described in the documentation for the standard library's `encoding/json` package (https://golang.org/pkg/encoding/json). One would be well-advised to become familiar with using `json` struct field tags (https://golang.org/pkg/encoding/json/#Marshal) when working with JSON documents. Most Kivik methods take `context.Context` as their first argument. This allows the cancellation of blocking operations in the case that the result is no longer needed. A typical use case for a web application would be to cancel a Kivik request if the remote HTTP client has disconnected, rendering the results of the query irrelevant. To learn more about Go's contexts, read the `context` package documentation (https://golang.org/pkg/context/) and read the Go blog post "Go Concurrency Patterns: Context" (https://blog.golang.org/context) for example code. If in doubt, you can pass `context.TODO()` as the context variable. Example: Kivik returns errors that embed an HTTP status code. In most cases, this is the HTTP status code returned by the server. The embedded HTTP status code may be accessed easily using the StatusCode() method, or with a type assertion to `interface { StatusCode() int }`. Example: Any error that does not conform to this interface will be assumed to represent an http.StatusInternalServerError status code. For common usage, authentication should be as simple as including the authentication credentials in the connection DSN. For example: This will connect to `localhost` on port 5984, using the username `admin` and the password `abc123`. When connecting to CouchDB (as in the above example), this will use cookie auth (https://docs.couchdb.org/en/stable/api/server/authn.html?highlight=cookie%20auth#cookie-authentication). Depending on which driver you use, there may be other ways to authenticate, as well. At the moment, the CouchDB driver is the only official driver which offers additional authentication methods. Please refer to the CouchDB package documentation for details (https://pkg.go.dev/github.com/IG-Soft/couchdb/v3). With a client handle in hand, you can create a database handle with the DB() method to interact with a specific database.
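The small sketch below connects to a local CouchDB with credentials in the DSN, obtains a database handle, and stores a document; the driver import path and the document shape are assumptions, and the signatures shown follow the v3 API, where DB() takes a context.

    package main

    import (
        "context"
        "log"

        _ "github.com/go-kivik/couchdb/v3" // the CouchDB driver
        kivik "github.com/go-kivik/kivik/v3"
    )

    func main() {
        ctx := context.TODO()

        // Credentials in the DSN; cookie auth is used against CouchDB.
        client, err := kivik.New("couch", "http://admin:abc123@localhost:5984/")
        if err != nil {
            log.Fatal(err)
        }

        db := client.DB(ctx, "mydb")

        // Go data structures are marshaled to JSON automatically.
        rev, err := db.Put(ctx, "greeting", map[string]interface{}{"text": "hello"})
        if err != nil {
            log.Fatal(err)
        }
        log.Println("stored revision", rev)
    }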
Package firebasedb implements a REST client for the Firebase Realtime Database (https://firebase.google.com/docs/database/). The API is as close as possible to the official JavaScript API. Similar / related project: Reference / documentation: This package uses the "Advanced Go Concurrency Patterns" presented by Sameer Ajmani:
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. 
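The following sketch illustrates the usage described above with database/sql and the pq driver: a connection string, the Postgres-native $1 ordinal markers, and the RETURNING clause in place of LastInsertId(). The connection string, table, and columns are illustrative only:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/lib/pq"
    )

    func main() {
        db, err := sql.Open("postgres", "user=pqgotest dbname=pqgotest sslmode=verify-full")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Postgres-native ordinal markers; the same marker may be reused for
        // the same parameter.
        rows, err := db.Query("SELECT name FROM users WHERE favorite_fruit = $1 OR age = $2", "banana", 42)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
        for rows.Next() {
            var name string
            if err := rows.Scan(&name); err != nil {
                log.Fatal(err)
            }
            fmt.Println(name)
        }

        // pq does not support LastInsertId(); use RETURNING with QueryRow instead.
        var id int64
        err = db.QueryRow("INSERT INTO users(name, age) VALUES($1, $2) RETURNING id", "gopher", 7).Scan(&id)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("inserted id:", id)
    }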
To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at http://godoc.org/github.com/lib/pq/listen_example.
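Below is a sketch of the Listener workflow described above; the DSN, channel name, and reconnect intervals are placeholders, and the complete example linked above remains the authoritative reference:

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/lib/pq"
    )

    func main() {
        dsn := "dbname=pqgotest sslmode=disable"

        // The event callback reports state changes of the dedicated
        // LISTEN/NOTIFY connection.
        reportEvent := func(ev pq.ListenerEventType, err error) {
            if err != nil {
                log.Println("listener event error:", err)
            }
        }

        listener := pq.NewListener(dsn, 10*time.Second, time.Minute, reportEvent)
        if err := listener.Listen("events"); err != nil {
            log.Fatal(err)
        }

        for {
            select {
            case n := <-listener.Notify:
                if n == nil {
                    // The connection was re-established after a loss;
                    // notifications may have been missed in the meantime.
                    continue
                }
                fmt.Printf("notification on %q: %s\n", n.Channel, n.Extra)
            case <-time.After(90 * time.Second):
                // Check the connection health while idle.
                if err := listener.Ping(); err != nil {
                    log.Println("ping error:", err)
                }
            }
        }
    }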
Open Source Business Management Framework Nervatura is a business management framework. It can handle any type of business-related information, starting from customer details, up to shipping, stock or payment information. Developed as an open-source project, it can be used freely under the scope of the LGPLv3 License. The framework is based on the Nervatura Object Model (https://nervatura.github.io/nervatura/docs/model) specification. It is a general open-data model, which can store all information generated in the operation of a usual corporation. The Nervatura service is small and fast. A single ~6 MB file contains all the necessary dependencies. The framework includes:

• CLI API (https://nervatura.github.io/nervatura/docs/service/cli#cli-api) (command line)

• CGO API (https://nervatura.github.io/nervatura/docs/service/cli#cgo-api) (C shared library)

• standard HTTP OPEN API (https://nervatura.github.io/nervatura/docs/service/api) for client communication

• HTTP/2-based gRPC API (https://nervatura.github.io/nervatura/docs/service/grpc) for server-side communication

• JWT generation, external token validation, SSL/TLS support and other HTTP security settings (https://github.com/nervatura/nervatura-service/blob/master/.env.example)

• built-in database drivers for postgres, mysql, mssql, sqlite databases

• a basic report generation library for creating simple PDF documents (e.g. order, invoice, etc.) or CSV data files

• sample report templates and REPORT EDITOR (https://nervatura.github.io/nervatura/docs/client/program/editor) GUI

• CLIENT (https://nervatura.github.io/nervatura/docs/client) Web Component application and a basic **ADMIN** interface

The client and report interface supports multilingualism (https://nervatura.github.io/nervatura/docs/start/customization#customize-the-appearance). The framework can be easily extended with additional interfaces and functions in any language. https://nervatura.github.io/nervatura/docs/install https://nervatura.github.io/nervatura/docs/start For more info, see http://www.nervatura.com.
Command goat provides an implementation of a BitTorrent tracker, written in Go. goat can be built using Go 1.1+. It can be downloaded, built, and installed, simply by running: In addition, goat depends on a MySQL server for data storage. After creating a database and user for goat, its database schema may be imported from the SQL files located in 'res/'. goat will not run unless MySQL is installed, and a database and user are properly configured for its use. Optionally, goat can be built to use ql (https://github.com/cznic/ql) as its storage backend. This is done by supplying the 'ql' tag in the go get command: A blank ql database file is located under 'res/ql/goat.db', and will be copied to '~/.config/goat/goat.db' on UNIX systems. goat is now able to use ql as its storage backend, for those who do not wish to use an external MySQL backend. goat is capable of listening for torrent traffic in three modes: HTTP, HTTPS, and UDP. HTTP/HTTPS are the recommended methods, and are required in order for goat to serve its API, and to allow use of private tracker passkeys. HTTP is considered the standard mode of operation for goat. HTTP allows gathering a great number of metrics, use of passkeys, use of a client whitelist, and access to goat's RESTful API, when configured. For most trackers, this will be the only listener which is necessary in order for goat to function properly. The HTTPS listener provides a method to encrypt traffic to the tracker, but must be used with caution. Unless the SSL certificate in use is signed by a proper certificate authority, it will distress most clients, and they may outright refuse to announce to it. If you are in possession of a certificate signed by a certificate authority, this mode may be more ideal, as it provides added security for your clients. The UDP listener is the most unusual method of the three, and should only be used for public trackers. The BitTorrent UDP tracker protocol specifies a very specific packet format, meaning that additional information or parameters cannot be packed into a UDP datagram in a standard way. The UDP tracker may be the fastest and least bandwidth-intensive, but as stated, should only be used for public trackers. A new feature added to goat in order to allow better interoperability with many languages is a RESTful API, which is served using the HTTP or HTTPS listeners. This API enables easy retrieval of tracker statistics, while allowing goat to run as a completely independent process. It should be noted that the API is only enabled when configured, and when an HTTP or HTTPS listener is enabled. Without a transport mechanism, the API will be inaccessible. Currently, the API is read-only, and only allows use of the HTTP GET method. This may change in the future, but as of now, it doesn't make any sense to modify tracker parameters without doing a proper announce or scrape via a BitTorrent client. The API will feature several modes of authentication, including HTTP Basic and HMAC-SHA1. For the time being, only HTTP Basic is implemented. This method makes use of a username/password pair using the user's username, and an API key as the password. This list contains all API calls currently recognized by goat. Each call must be authenticated using the aforementioned methods. Retrieve a list of all files tracked by goat. Some extended attributes are not added to reduce strain on the database, and to provide a more general overview. Retrieve extended attributes about a specific file with matching ID.
This provides counts for the number of completions, seeders, and leechers, and a list of fileUser relationships associated with a given file. Retrieve a variety of metrics about the current status of goat, including its PID, hostname, memory usage, number of HTTP/UDP hits, etc. goat is configured using a JSON file, which will be created under '~/.config/goat/config.json' on UNIX systems. Here is an example configuration, describing the settings available to the user.
goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost. goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby. To create a worker, write a function matching the signature and register it using Here is a simple worker that prints its arguments: To create workers that share a database pool or other resources, use a closure to share variables. goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types. For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue: will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to: After building your workers, you will have an executable that you can run which will automatically poll a Redis server and call your workers as jobs arrive. There are several flags which control the operation of the goworker client.

-queues="comma,delimited,queues" — This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). Queues are processed in the order they are specified. If you have multiple queues you can assign them weights. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'.

-interval=5.0 — Specifies the wait period between polling if no job was in the queue the last time one was requested.

-concurrency=25 — Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources.

-connections=2 — Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers who limit plans on maxclients.

-uri=redis://localhost:6379/ — Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The flag may also be set by the environment variable $($REDIS_PROVIDER) or $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database.

-namespace=resque: — Specifies the namespace from which goworker retrieves jobs and stores stats on workers.

-exit-on-complete=false — Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations.

You can also configure your own flags for use within your workers. Be sure to set them before calling goworker.Main(). It is okay to call flag.Parse() before calling goworker.Main() if you need to do additional processing on your flags. To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling.
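A minimal sketch of the simple worker described above, registered under a Resque class name; it assumes registration via goworker.Register, and note that the text refers to the entry point as goworker.Main() while recent goworker releases expose it as goworker.Work(), so verify the name against the version in use:

    package main

    import (
        "fmt"

        "github.com/benmanns/goworker"
    )

    // myFunc matches the worker signature: it receives the queue being
    // served and the job arguments as a slice of interfaces.
    func myFunc(queue string, args ...interface{}) error {
        fmt.Printf("From %s, %v\n", queue, args)
        return nil
    }

    func init() {
        // Register the function under the Resque class name "MyClass".
        goworker.Register("MyClass", myFunc)
    }

    func main() {
        // Start polling Redis and dispatching jobs. The documentation above
        // calls this entry point goworker.Main(); recent releases name it Work().
        if err := goworker.Work(); err != nil {
            fmt.Println("Error:", err)
        }
    }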
There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished. Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant to loss of the job in the event of failure. If the process is killed with a KILL or by a system failure, there may be one job that is currently in the poller's buffer that will be lost without any representation in either the queue or the worker variable. If you are running goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop and, ten seconds later, sends a KILL to force the process to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database under the worker key as a JSON object with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished, if the process is killed before goworker can flush the update to Redis.
Package kv implements a simple and easy to use persistent key/value (KV) store. 2016-07-11: KV now uses the stable version of lldb. (modernc.org/lldb). The stored KV pairs are sorted in the key collation order defined by a user-supplied 'compare' function (passed as a field in Options). Keys, as well as the values associated with them, are opaque []bytes. Maximum size of a "native" key or value is 65787 bytes. Larger keys or values have to be composed of the "native" ones in client code. The maximum DB size kv can handle is 2^60 bytes (1 exabyte). See also [4]: "Block handles". Transactions are resource limited. All changes made by a transaction are held in memory until the top level transaction is committed. ACID[1] implementation notes/details follow. A successfully committed transaction appears (by its effects on the database) to be indivisible ("atomic") iff the transaction is performed in isolation. An aborted (via RollBack) transaction appears like it never happened under the same limitation. Atomic updates to the DB, via functions like Set, Inc, etc., are performed in their own automatic transaction. If the partial progress of any such function fails at any point, the automatic transaction is canceled via Rollback before returning from the function. A non nil error is returned in that case. All reads, including those made from any other concurrent non isolated transaction(s), performed during a not yet committed transaction, are dirty reads, i.e. the data returned are consistent with the in-progress state of the open transaction, or all of the open transactions. Obviously, conflicts, data races and inconsistent states can happen, but iff non isolated transactions are performed. Performing a Rollback at a nested transaction level properly returns the transaction state (and data read from the DB) to what it was before the respective BeginTransaction. Transactions of the atomic updating functions (Set, Put, Delete ...) are always isolated. Transactions controlled by BeginTransaction/Commit/RollBack are isolated iff their execution is serialized. Transactions are committed using the two phase commit protocol (2PC)[2] and a write ahead log (WAL)[3]. DB recovery after a crash is performed automatically using data from the WAL. Last transaction data, either of an in progress transaction or a transaction being committed at the moment of the crash, can get lost. No protection from non readable files, files corrupted by other processes or by memory faults or other HW problems, is provided. Always properly backup your DB data file(s).
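A rough sketch of basic usage, assuming the Create/Set/Get API of the kv package; the file name and keys are placeholders and the signatures should be checked against the package documentation:

    package main

    import (
        "fmt"
        "log"

        "modernc.org/kv"
    )

    func main() {
        // Create a new DB file. Options may carry a user-supplied 'compare'
        // function defining the key collation order; the zero value uses the
        // default ordering.
        db, err := kv.Create("example.db", &kv.Options{})
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Keys and values are opaque []byte. Set runs in its own automatic
        // transaction.
        if err := db.Set([]byte("hello"), []byte("world")); err != nil {
            log.Fatal(err)
        }

        // Get reuses the supplied buffer when it is large enough; passing nil
        // lets kv allocate one.
        val, err := db.Get(nil, []byte("hello"))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s\n", val)
    }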
MongoTUI is a MongoDB TUI client which allows you to connect to multiple MongoDB instances. Press <Ctrl>-<c> to connect to a MongoDB instance; you can enter the connection parameters individually or the connection URI as well. Note that the connection URI always wins if both the individual fields and the connection URI are filled. The open database connections, accessed by <Ctrl>-<d>, their databases and collections are displayed as a tree view in the left application panel. You can navigate through the nodes with the arrow keys or left-click on them. The command editor is accessible by <Ctrl>-<e>. The commands are fired on the database which is selected in the tree view by pressing <Enter> or <Return> in the command editor. The command result is shown in the result panel as a tree view. You can access it with <Ctrl>-<r> and navigate through the nodes with the arrow keys. <Ctrl>-<t> disconnects (terminates) the selected connection. <Ctrl>-<q> disconnects the open connections and quits the application.
Package msgqueue implements a task/job queue with in-memory, SQS, and IronMQ backends. go-msgqueue is a thin wrapper for SQS and IronMQ clients that uses Redis to implement rate limiting and call-once semantics. go-msgqueue consists of the following components: Rate limiting is implemented in the processor package using https://github.com/go-redis/redis_rate. Call-once is implemented in clients by checking if the message name exists in the Redis database.
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be). 2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change. 2016-03-05: The current version has been tagged v1.0.0. 2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument. 2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`. 2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful while having a UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed. 2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default. 2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN. 2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. 2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom) 2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section. 2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards". 2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details.
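Tying into the batch-update advice above, here is a hedged sketch of inserting many rows inside one transaction through the ql Go API; the OpenFile/NewRWCtx/Run calls and the import path are assumptions to verify against the package documentation, and the schema and data are placeholders:

    package main

    import (
        "fmt"
        "log"

        "modernc.org/ql"
    )

    func main() {
        db, err := ql.OpenFile("example.db", &ql.Options{CanCreate: true})
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // A single read-write context keeps every statement below inside one
        // transaction, which is much faster than one transaction per row.
        ctx := ql.NewRWCtx()
        if _, _, err := db.Run(ctx, "BEGIN TRANSACTION; CREATE TABLE IF NOT EXISTS t (c string);"); err != nil {
            log.Fatal(err)
        }
        for i := 0; i < 1000; i++ {
            // QL parameters ($1, ...) are bound from the argument list.
            if _, _, err := db.Run(ctx, "INSERT INTO t VALUES ($1);", fmt.Sprintf("row %d", i)); err != nil {
                log.Fatal(err)
            }
        }
        if _, _, err := db.Run(ctx, "COMMIT;"); err != nil {
            log.Fatal(err)
        }
    }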
Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction. 2014-09-21: id() now optionally accepts a single argument - a table name. 2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate. 2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota) 2014-06-05: RecordSet interface extended by new methods FirstRow and Rows. 2014-06-02: Indices on id() are now used by SELECT statements. 2014-05-07: Introduction of Marshal, Schema, Unmarshal. 2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX. 2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry. 2014-04-11: Introduction of LIMIT, OFFSET. 2014-04-10: Introduction of query rewriting. 2014-04-07: Introduction of indices. QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF) Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements. The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter. Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest.
Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example No identifiers are predeclared, however note that no keyword can be used as an identifier. Identifiers starting with two underscores are used for meta data virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive. The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base.
Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years.
A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply - The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range. - A constant index must be non-negative and representable by a value of type int. - A constant index must be in range if the string a is a literal. - If x is out of range at run time, a run-time error occurs. - s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal. For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants.
For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR) Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; +,- also applies to times. All other arithmetic operators apply to integers only.

+    sum                     integers, rationals, floats, complex values, strings
-    difference              integers, rationals, floats, complex values, times
*    product                 integers, rationals, floats, complex values
/    quotient                integers, rationals, floats, complex values
%    remainder               integers
&    bitwise AND             integers
|    bitwise OR              integers
^    bitwise XOR             integers
&^   bit clear (AND NOT)     integers
<<   left shift              integer << unsigned integer
>>   right shift             integer >> unsigned integer

Strings can be concatenated using the + operator String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other producing a value of type duration.
For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity. For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of same type as is the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows - Boolean values are comparable. Two boolean values are equal if they are either both true or both false. - Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v). - Integer values are comparable and ordered, in the usual way. Note that durations are integers. - Floating point values are comparable and ordered, as defined by the IEEE-754 standard. - Rational values are comparable and ordered, in the usual way. - String values are comparable and ordered, lexically byte-wise. - Time values are comparable and ordered. Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string.
Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases: - x is representable by a value of type T. - x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value. - x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case. Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases: - x has type T. - x's type and T are both integer or floating point types. - x's type and T are both complex types. - x is an integer, except bigint or duration, and T is a string type. Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply 1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow. 2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero). 3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not. In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent. 1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD". 2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob. 3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string. 4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer. 5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format. 6.
Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1). 7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format. 8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit. 9. Converting a string value to a duration yields a duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". 10. Converting a time value to a string returns the time formatted using the format string When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. IOW, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted. Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. Index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. Index name must not be the same as any of the existing tables and it also cannot be the same as any column name of the table the index is on.
For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows: 1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2. 2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value. 3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4. 4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made. Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement.
In the latter case the select statement is fully evaluated before the insertion of any rows is performed, allowing the inserted values to be calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as are columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as is the column's type or the value must be NULL. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause. For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed.
The optional JOIN clause, for example a LEFT JOIN of record sets a and b on a boolean expression expr, is mostly equal to the corresponding cross join of a and b filtered by expr, except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all NULLs, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10].

Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal).

The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type.

The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example:

Using the GROUP BY clause without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same result sets.

The optional OFFSET clause allows one to ignore the first N records. For example:

The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non negative integer, but not bigint or duration.

The optional LIMIT clause allows one to ignore all but the first N records. For example:

The above will return at most the first 10 records of the record set. The value of the expression must be a non negative integer, but not bigint or duration.

The LIMIT and OFFSET clauses can be combined. For example:

Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed. A sketch combining these clauses with the aggregate functions appears further below.

The clauses of a SELECT statement are evaluated in the following order:

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).

2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)

3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.

4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).

5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).

6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).

7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).

8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.
9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example:

Update statements change values of fields in rows of a table. For example:

Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table.

The table __Table lists all tables in the DB. The schema is:

The Schema column returns the statement to (re)create table Name. This table is virtual.

The table __Column lists all columns of all tables in the DB. The schema is:

The Ordinal column defines the 1-based index of the column in the record. This table is virtual.

The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is:

It's possible to obtain a consolidated recordset for all properties of all DB columns using:

The Name column is the column name in TableName.

The table __Index lists all indices in the DB. The schema is:

The IsUnique column reflects if the index was created using the optional UNIQUE clause. This table is virtual.

Built-in functions are predeclared.

The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type.

The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.

The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example:

Date returns the time corresponding to the given year, month, day, hour, min, sec, and nsec values in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL.
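A query combining the clauses above with the aggregate functions just described might look like the following; the tables and columns remain hypothetical, and the statement is a sketch rather than output copied from a QL session.

    SELECT d.DepartmentName, count() AS Employees, avg(e.Salary) AS AvgSalary
    FROM Departments AS d
    LEFT JOIN Employees AS e ON e.DepartmentName == d.DepartmentName
    WHERE d.DepartmentName != ""
    GROUP BY d.DepartmentName
    ORDER BY d.DepartmentName
    LIMIT 10 OFFSET 10;

    -- SET is optional in QL UPDATE statements
    UPDATE Departments SET Head = "Grace" WHERE DepartmentName == "Engineering";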
The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.

The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL.

NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns and in fact formatTime directly calls t.String(). The same call may therefore return one string on a machine in the CET time zone, but another string on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparing is correct. Only the display value can differ.

The built-in functions formatFloat and formatInt format numbers to strings using Go's number format functions in the `strconv` package. For these functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned.

The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL.

The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL.

The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL.

The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL.

The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example:

If id() without arguments is called for a row which is not a table record then the result value is NULL. For example:

If id() has one argument it must be a table name of a table in a cross join. For example:

The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL.

The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example:

The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example:

The column values must be of an ordered type.

The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL.

The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.
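A few of these functions in use, again against hypothetical tables and columns:

    SELECT id() AS EmployeeID, Name, len(Name) AS NameBytes
    FROM Employees
    WHERE hasPrefix(Name, "A") && contains(Name, "nn");

    SELECT min(Salary), max(Salary), avg(Salary), count() AS Headcount
    FROM Employees;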
The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL.

The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL.

The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL.

The built-in function now returns the current local time.

The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored.

In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset.

When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL.

The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.

The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL.

The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL.

The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type.

The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL.

The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL.

The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.
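An illustrative query using some of the time-related functions above; the Events table, its columns and the date() argument order (assumed here to mirror Go's time.Date: year, month, day, hour, min, sec, nsec, location) are hypothetical.

    SELECT Name, year(Created) AS Year, weekday(Created) AS Weekday,
        seconds(since(Created)) AS AgeSeconds
    FROM Events
    WHERE Created > date(2024, 1, 1, 0, 0, 0, 0, "UTC")
    ORDER BY Created;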
The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL.

Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, imag functions is NULL the result is NULL.

For the numeric types, the following sizes are guaranteed:

Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5].

Links from the above documentation:

This section is not part of the specification.

WARNING: The implementation of indices is new and it surely needs more time to become mature.

Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use:

The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.).

Consider tables t and u, both with an indexed field f, and a WHERE expression that doesn't comply with the above simple detected cases (a sketch follows below). However, such a query is now automatically rewritten to an equivalent form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering.

Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore.
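A sketch of WHERE forms that can trigger index use, under the assumption that an index exists on the columns involved; the tables, columns and values are hypothetical.

    -- column relOp constant, and column relOp parameter
    SELECT * FROM t WHERE f == 42;
    SELECT * FROM t WHERE f <= $1;

    -- a cross join filtered on two indexed columns; QL rewrites this
    -- internally so that both indices can be used
    SELECT * FROM t, u WHERE t.f < 42 && u.f > 10;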
Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example:

Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).

Package sessions provides sessions support for net/http and valyala/fasthttp, with automatic GC of expired sessions. You can register an unlimited number of databases to Load and Update/Save the sessions to an external server or to an external (SQL and/or NoSQL) database.

Usage net/http:

    // init a new sessions manager (if you use only one web framework inside your app
    // then you can use the package-level functions like sessions.Start/sessions.Destroy)
    manager := sessions.New(sessions.Config{})

    // start a session for a particular client
    manager.Start(http.ResponseWriter, *http.Request)

    // destroy a session from the server and client
    manager.Destroy(http.ResponseWriter, *http.Request)

Usage valyala/fasthttp:

    // init a new sessions manager (if you use only one web framework inside your app
    // then you can use the package-level functions like sessions.Start/sessions.Destroy)
    manager := sessions.New(sessions.Config{})

    // start a session for a particular client
    manager.StartFasthttp(*fasthttp.RequestCtx)

    // destroy a session from the server and client
    manager.DestroyFasthttp(*fasthttp.RequestCtx)

Note that you can now use both fasthttp and net/http within the same sessions manager (.New) instance, so you can share sessions between a net/http app and a valyala/fasthttp app.
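A minimal net/http sketch of the flow above. The import path and the Set/GetString session accessors are assumptions based on common go-sessions usage, not verbatim from this documentation.

    package main

    import (
        "fmt"
        "log"
        "net/http"

        sessions "github.com/kataras/go-sessions" // import path assumed
    )

    func main() {
        // One manager shared by all handlers.
        manager := sessions.New(sessions.Config{})

        http.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
            sess := manager.Start(w, r) // start (or resume) this client's session
            sess.Set("name", "gopher")  // Set/GetString are assumed session accessors
            fmt.Fprintln(w, "session started")
        })

        http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
            sess := manager.Start(w, r)
            fmt.Fprintln(w, "hello,", sess.GetString("name"))
        })

        http.HandleFunc("/logout", func(w http.ResponseWriter, r *http.Request) {
            manager.Destroy(w, r) // remove the session from the server and the client
            fmt.Fprintln(w, "session destroyed")
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }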
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project.

To start working with this package, create a client with a project ID:

In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic.

Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State:

Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll.

For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document.

You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction.

To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically.

You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include `<`, `<=`, `>`, `>=`, `==`, and 'array-contains'. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query.

Use a transaction to execute reads and writes atomically. All reads must happen before any writes. Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it.
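A condensed sketch of the calls mentioned above; the project ID, the "States" collection and the State struct are illustrative, and error handling is abbreviated.

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/firestore"
        "google.golang.org/api/iterator"
    )

    // State mirrors the document layout used in the examples above.
    type State struct {
        Capital    string  `firestore:"capital"`
        Population float64 `firestore:"pop"` // in millions
    }

    func main() {
        ctx := context.Background()
        client, err := firestore.NewClient(ctx, "my-project-id") // hypothetical project ID
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ca := client.Doc("States/California")

        // Create fails if the document already exists; Set would replace or create it.
        if _, err := ca.Create(ctx, State{Capital: "Sacramento", Population: 39.14}); err != nil {
            log.Fatal(err)
        }

        // Read the document back and extract its data into a struct.
        snap, err := ca.Get(ctx)
        if err != nil {
            log.Fatal(err)
        }
        var s State
        if err := snap.DataTo(&s); err != nil {
            log.Fatal(err)
        }

        // Update a single field by path.
        if _, err := ca.Update(ctx, []firestore.Update{{Path: "pop", Value: 39.56}}); err != nil {
            log.Fatal(err)
        }

        // Query the collection and iterate over the matching documents.
        iter := client.Collection("States").Where("pop", ">", 10).Documents(ctx)
        defer iter.Stop()
        for {
            doc, err := iter.Next()
            if err == iterator.Done {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            log.Println(doc.Data())
        }
    }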
Package goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost. goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby.

To create a worker, write a function matching the worker signature and register it with goworker. Here is a simple worker that prints its arguments (a fuller sketch appears at the end of these notes):

To create workers that share a database pool or other resources, use a closure to share variables. goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types.

For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue:

will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to:

After building your workers, you will have an executable that you can run which will automatically poll a Redis server and call your workers as jobs arrive. There are several flags which control the operation of the goworker client.

-queues="comma,delimited,queues" — This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). Queues are processed in the order they are specified. If you have multiple queues you can assign them weights. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'.

-interval=5.0 — Specifies the wait period between polling if no job was in the queue the last time one was requested.

-concurrency=25 — Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources.

-connections=2 — Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers who limit plans on maxclients.

-uri=redis://localhost:6379/ — Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The flag may also be set by the environment variables $($REDIS_PROVIDER) or $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database.

-namespace=resque: — Specifies the namespace from which goworker retrieves jobs and stores stats on workers.

-exit-on-complete=false — Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations.

-use-number=false — Uses json.Number when decoding numbers in the job payloads. This will avoid issues that occur when goworker and the json package decode large numbers as floats, which then get encoded in scientific notation, losing precision. This will default to true soon.

You can also configure your own flags for use within your workers.
Be sure to set them before calling goworker.Main(). It is okay to call flags.Parse() before calling goworker.Main() if you need to do additional processing on your flags.

To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling. There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished. Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant to loss of the job in the event of failure.

If the process is killed with a KILL or by a system failure, there may be one job that is currently in the poller's buffer that will be lost without any representation in either the queue or the worker variable. If you are running goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop and ten seconds later sends a KILL to force the process to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database under the worker key as a JSON object with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished if the process is killed before goworker can flush the update to Redis.
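Putting the pieces above together, a minimal worker might look like the following. The Register call and the "MyClass" job class follow common goworker usage and are assumptions here; the documentation above refers to goworker.Main() as the entry point, while newer releases of the library expose it as goworker.Work().

    package main

    import (
        "fmt"

        "github.com/benmanns/goworker"
    )

    // myFunc matches the worker signature described above: it receives the queue
    // name it is serving and the job arguments as a slice of interface{} values.
    func myFunc(queue string, args ...interface{}) error {
        fmt.Printf("from %s: %v\n", queue, args)
        return nil
    }

    func init() {
        // Map the Resque job class "MyClass" to the Go worker function (assumed API).
        goworker.Register("MyClass", myFunc)
    }

    func main() {
        // Parses the -queues, -uri and other flags, then polls Redis and runs jobs
        // until a QUIT, TERM, or INT signal is received.
        goworker.Main()
    }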