text is a repository of text-related packages related to internationalization (i18n) and localization (l10n), such as character encodings, text transformations, and locale-specific text handling. There is a 30 minute video, recorded on 2017-11-30, on the "State of golang.org/x/text" at https://www.youtube.com/watch?v=uYrDrMEGu58
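As a brief illustration of the kind of functionality the repository provides, the sketch below uses two of its packages, golang.org/x/text/language and golang.org/x/text/message, to print a number with locale-aware formatting (the chosen locales and format string are arbitrary examples):

    package main

    import (
        "golang.org/x/text/language"
        "golang.org/x/text/message"
    )

    func main() {
        // A message.Printer formats values according to the conventions of its locale.
        for _, tag := range []language.Tag{language.English, language.German} {
            p := message.NewPrinter(tag)
            p.Printf("There were %d visitors.\n", 1234567) // e.g. 1,234,567 vs. 1.234.567
        }
    }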
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons. We built an internationalized application and needed to know the field and what validation failed so we could provide a localized error. Doing things this way is actually how the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-field validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the default separator of 'or' validation tags. If you wish to have a pipe included within the parameter (i.e. excludesall=|) you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered, and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as the structonly tag except that any struct level validations will not run. Is a special tag without a validation function attached. It is used when a field is a Pointer, Interface or Invalid and you wish to validate that it exists. Example: if you want to ensure a bool exists, define the bool as a pointer and use exists; it will ensure there is a value. You couldn't use required, as it would fail when the bool was false.
exists will fail if the value is a Pointer, Interface or Invalid and is nil. Allows conditional validation, for example if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported; each level you wish to dive will require another dive tag. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For numbers, len will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now().UTC(). Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now().UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now().UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now().UTC(). This will validate the field value against another field's value either within a struct or a passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or a passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed in field.
Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This validates that a string value contains alpha characters only. This validates that a string value contains alphanumeric characters only. This validates that a string value contains a basic numeric value; basic excludes exponents etc. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color, including the hashtag (#). This validates that a string value contains a valid rgb color. This validates that a string value contains a valid rgba color. This validates that a string value contains a valid hsl color. This validates that a string value contains a valid hsla color. This validates that a string value contains a valid email. This may not conform to all possibilities of any RFC standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid url. This will accept any url the golang request uri accepts, but it must contain a scheme, for example http:// or rtmp://. This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts. This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. This validates that a string value contains a valid version 3 UUID. This validates that a string value contains a valid version 4 UUID. This validates that a string value contains a valid version 5 UUID.
This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64. This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the case above will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
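To make the tag-based usage described above concrete, here is a minimal sketch. It assumes the v8-era API (gopkg.in/go-playground/validator.v8 with the Config-based constructor; other major versions differ), and the User type, its tags, and the field values are invented for illustration:

    package main

    import (
        "fmt"

        "gopkg.in/go-playground/validator.v8"
    )

    type User struct {
        Email           string `validate:"required,email"`
        Age             uint8  `validate:"gte=0,lte=130"`
        Password        string `validate:"required,min=8"`
        ConfirmPassword string `validate:"eqfield=Password"` // cross-field validation within the same struct
    }

    func main() {
        validate := validator.New(&validator.Config{TagName: "validate"})

        u := User{Email: "not-an-email", Age: 200, Password: "secret123", ConfirmPassword: "secret124"}
        if err := validate.Struct(u); err != nil {
            // Validator only returns nil or ValidationErrors as type error.
            for _, fieldErr := range err.(validator.ValidationErrors) {
                fmt.Println(fieldErr.Field, "failed on the", fieldErr.Tag, "tag")
            }
        }
    }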
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field and Cross Struct validation for nested structs. Validate A simple example usage: The error can be used like so. Both StructErrors and FieldError implement the Error interface, but their intended use is for development + debugging, not a production error message. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for us, building an internationalized application, I needed to know the field and what validation failed so that I could provide an error in the user's specific language. The hierarchical error structure is hard to work with sometimes... Agreed, the Flatten function to the rescue! Flatten will return a map of FieldError's, but the field name will be namespaced. Custom functions can be added. Cross Field Validation can be implemented, for example Start & End Date range validation. Multiple validators on a field will process in the order defined. Bad Validator definitions are not handled by the library. NOTE: Baked In Cross field validation only compares fields on the same struct; if cross field + cross struct validation is needed, your own custom validator should be implemented. NOTE2: comma is the default separator of validation tags; if you wish to have a comma included within the parameter, i.e. excludesall=, you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Here is a list of the current built in validators: Validator notes: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
text is a repository of text-related packages related to internationalization (i18n) and localization (l10n), such as character encodings, text transformations, and locale-specific text handling.
This is a GSSAPI provider for Go, which expects to be initialized with the name of a dynamically loadable module which can be dlopen'd to get at a C language binding GSSAPI library. The GSSAPI concepts are explained in RFC 2743, "Generic Security Service Application Program Interface Version 2, Update 1". The API calls for C, together with a number of values for constants, come from RFC 2744, "Generic Security Service API Version 2 : C-bindings". Note that the basic GSSAPI bindings for C use the Latin-1 character set. UTF-8 interfaces are specified in RFC 5178, "Generic Security Service Application Program Interface (GSS-API) Internationalization and Domain-Based Service Names and Name Type", in 2008. Looking in 2013, this API does not appear to be provided by either MIT or Heimdal. This API applies solely to hostnames though, which can also be supplied in ACE encoding, bypassing the issue. For now, we assume that hostnames and usercodes are all ASCII-ish and pass UTF-8 into the library. Patches for more comprehensive support welcome.
Package monday is a minimalistic translator for month and day of week names in time.Date objects. Monday is not an alternative to the standard time package. It is a temporary solution to use while the internationalization features are not ready. That's why monday doesn't create any additional parsing algorithms or layout identifiers. It is just a wrapper for time.Format and time.ParseInLocation and uses all the same layout IDs, constants, etc. Format usage: Parse usage: Monday initializes all its data once in the init func and then uses only func calls and local vars. Thus, it's thread-safe and doesn't need any mutexes to be used.
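A small sketch of the Format and Parse usage mentioned above. The layout, locale constant, and values are illustrative; consult the package for the full list of supported locales and exact signatures:

    package main

    import (
        "fmt"
        "time"

        "github.com/goodsign/monday"
    )

    func main() {
        layout := "2 January 2006"
        t := time.Date(2016, time.April, 3, 0, 0, 0, 0, time.UTC)

        // Format with localized month names, using the same layout IDs as time.Format.
        fmt.Println(monday.Format(t, layout, monday.LocaleFrFR))

        // ParseInLocation mirrors time.ParseInLocation, adding a locale argument.
        parsed, err := monday.ParseInLocation(layout, "3 avril 2016", time.UTC, monday.LocaleFrFR)
        fmt.Println(parsed, err)
    }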
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field and Cross Struct validation for nested structs and has the ability to dive into arrays and maps of any type. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for us, building an internationalized application, I needed to know the field and what validation failed so that I could provide an error in the user's specific language. Custom functions can be added. Cross Field Validation can be implemented, for example Start & End Date range validation. Multiple validators on a field will process in the order defined. Bad Validator definitions are not handled by the library. NOTE: Baked In Cross field validation only compares fields on the same struct; if cross field + cross struct validation is needed, your own custom validator should be implemented. NOTE2: comma is the default separator of validation tags; if you wish to have a comma included within the parameter, i.e. excludesall=, you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. NOTE3: pipe is the default separator of 'or' validation tags; if you wish to have a pipe included within the parameter, i.e. excludesall=|, you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Validator notes: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field validation and even Cross Field Cross Struct validation for nested structs. Built In Validator A simple example usage: The error can be used like so. Both StructValidationErrors and FieldValidationError implement the Error interface, but their intended use is for development + debugging, not a production error message. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for us, building an internationalized application, I needed to know the field and what validation failed so that I could provide an error in the user's specific language. The hierarchical error structure is hard to work with sometimes... Agreed, the Flatten function to the rescue! Flatten will return a map of FieldValidationError's, but the field name will be namespaced. Custom functions can be added. Cross Field Validation can be implemented, for example Start & End Date range validation. Multiple validators on a field will process in the order defined. Bad Validator definitions are not handled by the library. NOTE: Baked In Cross field validation only compares fields on the same struct; if cross field + cross struct validation is needed, your own custom validator should be implemented. Here is a list of the current built in validators: Validator notes: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
Package i18n is for app Internationalization and Localization.
Package neuron is the cloud-native, distributed ORM implementation. Its design allows using a separate repository for each model, with the possibility of having different relationship types between them. Neuron-core consists of the following packages: neuron - (Neuron Core) the root package that gives easy access to all subpackages. controller - the neuron core that registers and stores the models and contains configurations required by other packages. config - contains the configurations for all packages. query - used to create queries, filters, sorting and pagination on the basis of mapped models. mapping - contains the information about the mapped models, their fields and settings. class - contains the error classification system for the neuron packages. log - the logging interface for neuron-based applications. i18n - internationalization support for neuron-based applications. repository - a package used to store and register the repositories. It is also used to get the repository/factory per model. A modular design allows using and compiling only the required repositories.
Package i18n provides an Internationalization and Localization middleware for Macaron applications.
Package getoptions - Go option parser inspired by the flexibility of Perl's GetOpt::Long. It will operate on any given slice of strings and return the remaining (unused) command line arguments. This makes it easy to implement subcommands. See https://github.com/DavidGamba/go-getoptions for extra documentation details. • Allow passing options and non-options in any order. • Support for `--long` options. • Support for short (`-s`) options with flexible behaviour (see https://github.com/DavidGamba/go-getoptions#operation_modes for details): • `Called()` method indicates if the option was passed on the command line. • Multiple aliases for the same option. e.g. `help`, `man`. • `CalledAs()` method indicates what alias was used to call the option on the command line. • Simple synopsis and option list automated help. • Boolean, String, Int and Float64 type options. • Options with Array arguments. The same option can be used multiple times with different arguments. The list of arguments will be saved into an Array like structure inside the program. • Options with array arguments and multiple entries. For example: `color --rgb 10 20 30 --next-option` • When using integer array options with multiple arguments, positive integer ranges are allowed. For example: `1..3` to indicate `1 2 3`. • Options with key value arguments and multiple entries. • Options with Key Value arguments. This allows the same option to be used multiple times with arguments of key value type. For example: `rpmbuild --define name=myrpm --define version=123`. • Supports passing `--` to stop parsing arguments (everything after will be left in the `remaining []string`). • Supports command line options with '='. For example: You can use `--string=mystring` and `--string mystring`. • Allows passing arguments to options that start with dash `-` when passed after equal. For example: `--string=--hello` and `--int=-123`. • Options with optional arguments. If the argument is not passed, the default is set. For example: You can call `--int 123` which yields `123` or `--int` which yields the given default. • Allows abbreviations when the provided option is not ambiguous. For example: An option called `build` can be called with `--b`, `--bu`, `--bui`, `--buil` and `--build` as long as there is no ambiguity. In the case of ambiguity, the shortest non ambiguous combination is required. • Support for the lonesome dash "-". To indicate, for example, when to read input from STDIN. • Incremental options. Allows the same option to be called multiple times to increment a counter. • Supports case sensitive options. For example, you can use `v` to define `verbose` and `V` to define `Version`. • Support indicating if an option is required and allows overriding the default error message. • Errors exposed as public variables to allow overriding them for internationalization. • Supports subcommands (stop parsing arguments when a non-option is passed). • Multiple ways of managing unknown options: • Require order: Allows for subcommands. Stop parsing arguments when the first non-option is found. When mixed with Pass through, it also stops parsing arguments when the first unmatched option is found. The library will panic if it finds that the programmer (not end user): • Defined the same alias twice. • Defined wrong min and max values for SliceMulti methods.
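A minimal sketch of defining and parsing options with this package. The option names and defaults are arbitrary, and exact method signatures may vary between versions; see the project README for the authoritative examples:

    package main

    import (
        "fmt"
        "os"

        "github.com/DavidGamba/go-getoptions"
    )

    func main() {
        opt := getoptions.New()
        debug := opt.Bool("debug", false)      // --debug flag
        greet := opt.String("greet", "hello")  // --greet with a default value

        remaining, err := opt.Parse(os.Args[1:])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        if opt.Called("greet") {
            fmt.Println(*greet)
        }
        fmt.Println("debug:", *debug, "remaining args:", remaining)
    }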
Package validator implements value validations for structs and individual fields based on tags. Built In Validator A simple example usage: The error can be used like so. Both StructValidationErrors and FieldValidationError implement the Error interface, but their intended use is for development + debugging, not a production error message. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for me, building an internationalized application, I needed to know the field and what validation failed so that I could provide an error in the user's specific language. The hierarchical error structure is hard to work with sometimes... Agreed, the Flatten function to the rescue! Flatten will return a map of FieldValidationError's, but the field name will be namespaced. Custom functions can be added. Cross Field Validation can be implemented, for example Start & End Date range validation. Multiple validators on a field will process in the order defined. Bad Validator definitions are not handled by the library. Here is a list of the current built in validators: Validator notes: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
Package goproperties implements read operations on .properties files. .properties is a file extension for files mainly used in Java related technologies to store the configurable parameters of an application. They can also be used for storing strings for Internationalization and localization; these are known as Property Resource Bundles. Each parameter is stored as a pair of strings, one storing the name of the parameter (called the key), and the other storing the value. Each line in a .properties file normally stores a single property. Several formats are possible for each line, including key=value, key = value, key:value, and key value. .properties files can use the number sign (#) or the exclamation mark (!) as the first non-blank character in a line to denote that all text following it is a comment. The backslash is used to escape a character. An example of a properties file is provided below. In the example below, website would be a key, and its corresponding value would be http://en.wikipedia.org/. While the number sign and the exclamation mark mark text as comments, they have no effect when they are part of a property value. Thus, the key message has the value Welcome to Wikipedia! and not Welcome to Wikipedia. Note also that all of the whitespace in front of Wikipedia! is excluded completely. The encoding of a .properties file is ISO-8859-1, also known as Latin-1. All non-Latin-1 characters must be entered by using Unicode escape characters, e.g. \uHHHH where HHHH is a hexadecimal index of the character in the Unicode character set. This allows for using .properties files as resource bundles for localization. A non-Latin-1 text file can be converted to a correct .properties file by using the native2ascii tool that is shipped with the JDK or by using a tool, such as po2prop, that manages the transformation from a bilingual localization format into .properties escaping. From Wikipedia, the free encyclopedia http://en.wikipedia.org/wiki/.properties
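The example properties file referred to above appears to have been dropped during extraction. The reconstruction below is consistent with the keys discussed in the text (website with the value http://en.wikipedia.org/, and message with a value continued across lines); it is illustrative rather than the package's original example:

    # This line is a comment.
    ! This is also a comment.
    website = http://en.wikipedia.org/
    language = English
    # The backslash below tells the application to
    # continue reading the value onto the next line.
    message = Welcome to \
              Wikipedia!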
Package i18n offers the following basic internationalization functionality: There's more we'd like to add in the future, including: In order to interact with this package, you must first get a TranslatorFactory instance. Through the TranslatorFactory, you can get a Translator instance. Almost everything in this package is accessed through methods on the Translator struct. About the rules and messages paths: This package ships with built-in rules, and you are welcome to use those directly. However, if there are locales or rules that are missing from what ships directly with this package, or if you desire to use different rules than those that ship with this package, then you can specify additional rules paths. At this time, this package does not ship with built-in messages, other than a few used for the unit tests. You will need to specify your own messages path(s). For both rules and messages paths, you can specify multiple. Paths later in the slice take precedence over paths earlier in the slice. For a basic example of getting a TranslatorFactory instance: For simple message translation, use the Translate function, and send an empty map as the second argument (we'll explain that argument in the next section). You can also pass placeholder values to the translate function. That's what the second argument is for. In this example, we will inject a username into the translation. You can also translate strings with plurals. However, any one message can contain at most one plural. If you want to translate "I need 5 apples and 3 oranges" you are out of luck. The Pluralize method takes 3 arguments. The first is the message key - just like the Translate method. The second argument is a float which is used to determine which plural form to use. The third is a string representation of the number. Why two arguments for the number instead of one? This allows you ultimate flexibility in number formatting to use in the translation while eliminating the need for string number parsing. You can use the "FormatNumber", "FormatCurrency" and "FormatPercent" methods to do locale-based number formatting for numbers, currencies and percentages. If you need to sort a list of strings alphabetically, then you should not use a simple string comparison to do so - this will often result in incorrect results. "ȧ" would normally evaluate as greater than "z", which is not correct in any Latin writing system alphabet. You can use the Sort method on the Translator struct to do an alphabetic sorting that is correct for that locale. Alternatively, you can access the SortUniversal and the SortLocale functions directly without a Translator instance. SortUniversal does not take a specific locale into account when doing the alphabetic sorting, which means it might be slightly less accurate than the SortLocale function. However, there are cases in which the collation rules for a specific locale are unknown, or the sorting needs to be done in a locale-agnostic way. For these cases, the SortUniversal function performs a Unicode normalization in order to best sort the strings. In order to be flexible, these functions take a generic interface slice and a function for retrieving the value on which to perform the sorting. For example: When getting a Translator instance, the TranslatorFactory will automatically attempt to determine an appropriate fallback Translator for the locale you specify. For locales with specific "flavors", like "en-au" or "zh-hans", the "vanilla" version of that locale will be used if it exists.
In these cases that would be "en" and "zh". When creating a TranslatorFactory instance, you can optionally specify a final fallback locale. This will be used if it exists. When determining a fallback, the factory first checks the less specific versions of the specified locale, if they exist, and will ultimately fall back to the global fallback if specified. All of the examples above conveniently ignore errors. We recommend that you DO handle errors. The system is designed to give you a valid result if at all possible, even if errors occur in the process. However, the errors are still returned and may provide helpful information you might otherwise miss - like missing files, file permission problems, yaml format problems, missing translations, etc. We recommend that you do some sort of logging of these errors.
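The factory and translator examples referenced above also appear to have been dropped during extraction. The rough sketch below mirrors the flow the text describes; only the argument shapes of Translate and Pluralize come from the text, while the constructor name, its parameters, and the return shapes are assumptions for illustration only:

    // NOTE: names and signatures below are assumed for illustration; check the
    // package documentation for the real constructor and return values.
    factory, _ := i18n.NewTranslatorFactory(
        []string{"data/rules"},    // rules paths - later entries take precedence
        []string{"data/messages"}, // messages paths
        "en",                      // final fallback locale
    )
    translator, _ := factory.GetTranslator("fr")

    // Simple translation: the second argument is an empty map because no
    // placeholder values are needed.
    welcome, _ := translator.Translate("welcome_message", map[string]string{})

    // Translation with a placeholder value injected.
    hello, _ := translator.Translate("hello_user", map[string]string{"username": "Alice"})

    // Pluralization: the float chooses the plural form, the string is the
    // formatted number that ends up in the output.
    apples, _ := translator.Pluralize("apples_count", 5, "5")

    fmt.Println(welcome, hello, apples)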
Package users implements common web user workflows. Most of the provided functions are regular net/http handler functions. The following functionality is provided: Special emphasis is placed on reducing the risk of someone hijacking user accounts. This is achieved by enforcing a certain user structure and following certain procedures: If your application does not follow these principles, you may not be able to use this package as is. However, the code may serve as a starting point if you apply its principles to your own use case. Note also the following: The users.Main() function registers all handlers and starts an HTTP server: Any other handlers can be added to the http.DefaultServeMux before calling users.Main(). Alternatively, you can start your own HTTP server. See the implementation of users.Main() for how to add the package's handlers. See the package example for a most basic way to use the package. In addition, the global Config struct contains all the variables that need to be adjusted for your specific application. It provides sensible defaults out of the box which you can see in its documentation. The fields are as follows: The following fields control how templates are handled: If your application supports internationalization, you can set the Internationalization field to true. If set to true, this package's code checks for the "lang" cookie and appends its value to the HTMLTemplateDir and MailTemplateDir directories to search for template files. Cookie values must be of the format "xx" or "xx-XX" (e.g. "en-US"). If they don't have this format or if the corresponding subdirectory does not exist, the search falls back to the HTMLTemplateDir and MailTemplateDir directories. It is up to the application to set the "lang" cookie. Emails are sent if the SendEmails field is set to true. You can provide your own email function by implementing the SendEmail field. Alternatively, the net/smtp package is used to send emails. The following fields need to be specified (fields starting with "SMTP" are only needed when you don't provide your own SendEmail implementation): A number of functions serve as the interface to your database: Anyone using this package must define a type which implements this package's User interface. A user is in one of three possible states: Users have an ID which must be unique (e.g. generated by CUID() in the package github.com/rivo/sessions). But this package may access users based on their unique email address, their verification ID, or their password reset token. You must implement the Config.NewUser function. There are basic HTML templates (in the "html" subdirectory) and email templates (in the "mail" subdirectory). All HTML templates starting with "error_" are templates that will generate error messages which are then embedded in another HTML template. When starting to work with this package, you will want to make a copy of these two subdirectories and modify the templates to your needs. This package implements some functions to render templates which are also public so you may use them in other places, too. The function RenderPage() takes a template filename and a data object (to which the template will be bound), renders the template, and sends it to the browser. Instead of calling this function, however, RenderPageBasic() is used more often. It calls RenderPage() but populates the data object with the Config object and the User object (if one was provided). If an error message needs to be shown to the user, RenderPageError() can be used.
This actually involves two templates, one to generate only the error message (these template files start with "error_"), and the other to generate the HTML file which shows the error message. Config and User will also be bound to the latter as well as any data sent to the error message template. There is another function for errors, RenderProgramError(), which is used to show program errors. These are unexpected errors, for example database connection issues, and should always be followed up on. While the user usually only sees a basic error message, more detailed information about the error is sent to the logger for further inspection. The SendMail() function renders mail templates (based on text/template) to send them to the user's email address. When writing your own templates, it is helpful to make a copy of the existing example templates and modify them to your needs. All templates include a header and a footer file. If you include more files, you will need to set the Config.HTMLTemplateIncludes and Config.MailTemplateIncludes fields accordingly.
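A minimal sketch of the setup described above: register your own handlers on http.DefaultServeMux, adjust the global Config, then hand control to users.Main(). The import path and the specific Config values chosen here are assumptions for illustration; the field names come from the description above:

    package main

    import (
        "net/http"

        "github.com/rivo/users" // import path assumed for illustration
    )

    func main() {
        // Application-specific pages go on the default mux before starting.
        http.HandleFunc("/dashboard", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("dashboard"))
        })

        // Adjust the global configuration (fields described in the text above).
        users.Config.Internationalization = true // look up "lang"-cookie template subdirectories
        users.Config.SendEmails = false          // skip outgoing mail while developing

        // Registers the package's handlers and starts the HTTP server.
        users.Main()
    }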
Package gohg is a Go client library for using the Mercurial dvcs via its Command Server. For Mercurial see: http://mercurial.selenic.com. For the Hg Command Server see: http://mercurial.selenic.com/wiki/CommandServer. ▪ Mercurial For Mercurial any version starting from 1.9 should be ok, because that's the one where the Command Server was introduced. If you send wrong options to it through gohg, or commands or options not yet supported (or obsolete) in your Hg version, you'll simply get back an error from Hg itself, as gohg does not check them. But on the other hand gohg allows issuing new commands, not yet implemented by gohg; see further. ▪ Go gohg is currently developed with Go1.2.1. Though I started with the Go1.0 versions, I can't remember having had to change more than one or two minor things when moving to Go1.1.1. Updating to Go1.1.2 required no changes at all. I had an issue though with Go1.2, on Windows only, causing some tests using os.exec.Command to fail. I'll have to look into that further, to find out if I should report a bug. ▪ Platform I'm developing and testing both on Windows 7 and Ubuntu 12.04/13.04/13.10. But I suppose it should work on any other platform that supports Hg and Go. Only Go and its standard library. And Mercurial should be installed of course. At the commandline type: to have gohg available in your GOPATH. Start with importing the gohg package. Examples: All interaction with the Mercurial Command Server (Hg CS from now on) happens through the HgClient type, of which you have to create an instance: Then you can connect the Hg CS as follows: 1. The Hg executable: The first parameter is the Mercurial command to use (which is 'hg'). You can leave it blank to let the gohg tool use the default Mercurial command on the system. Having a parameter for the Hg command allows for using a different Hg version, for testing purposes for instance. 2. The repository path: The second parameter is the path to the repository you want to work on. You can leave it blank to have gohg use the repository it can find for the current path you are running the program in (searching upward in the folder tree eventually). 3. The config for the session: The third parameter allows you to provide extra configuration for the session. This is currently not implemented yet. 4. Should gohg create a new repo before connecting? This fourth parameter allows you to indicate that you want gohg to first create a new Mercurial repo if it does not already exist in the path given by the second parameter. See the documentation for more detailed info. 5. The returnvalue: The HgClient.Connect() method returns an error, so you can check if the connection succeeded and if it is safe to go on. Once the work is done, you can disconnect the Hg CS using a typical Go idiom: The gohg tool sets some environment variables for the Hg CS session, to ensure it works correctly: Once we have a connection to a Hg CS we can do some work with the repository. This is done with commands, and gohg offers 3 ways to use them. 1. The command methods of the HgClient type. 2. The HgCmd type. 3. The ExecCmd() method of the HgClient type. Each of these has its own reason for existence. Commands return a byte slice containing the resulting data, and possibly an error. But there are a few exceptions (see api docs).
If a command fails, the returned error contains 5 elements: 1) the name of the internal routine where the error was trapped, 2) the name of the HgClient command that was run, 3) the return code from Mercurial, 4) the full command that was passed to the Hg CS, and 5) any error message returned by Mercurial. So the command could return something like the following in the err variable when it fails: The command aliases (like 'id' for 'identify') are not implemented. But there are examples in identify.go and showconfig.go of how you can easily implement them yourself. This is the easiest way, a kind of convenience. And the most readable too. A con is that, as a user, you cannot know the exact command that was passed to Hg without some extra mechanics. Each command has the same name as the corresponding Hg command, except it starts with a capital letter of course. An example (also see examples/example1.go): Note that these methods all use the HgCmd type internally. As such they are convenience wrappers around that type. You could also consider them as a kind of syntactic sugar. If you just want to simply issue a command, nothing more, they are the way to go. The only way to obtain the command string sent to Hg when using these command methods is by calling the HgClient.ShowLastCmd() method afterwards, before issuing any other commands: Using the HgCmd type is kind of the standard way. It is a struct that you can instantiate for any command, and for which you can set the elements Name, Options and Params (see the api docs for more details). It allows for building the command step by step, and also for querying the exact command that will be sent to the Hg CS. A pro of this method is that it allows you to obtain the exact command string that will be passed to Mercurial before it is performed, by calling the CmdLine() method of HgCmd. This could be handy for logging, or for showing feedback to the user in a GUI program. (You could even call CmdLine() several times, and show the building of the command step by step.) An example (also see examples/example2.go): As you can see, this way requires some more coding. The source code will also show you that the HgCmd type is indeed used as the underlying type for the convenience HgClient commands, in all the New<hg-command>Cmd() constructors. The HgClient type has an extra method ExecCmd(), allowing you to pass a fully custom built command to Hg. It accepts a string slice that is supposed to contain all the elements of the complete command, as you would type it at the command line. It could be a convenient way for performing commands that are not yet implemented in gohg, or to make use of extensions to Hg (for which gohg offers no support (yet?)). An example (also see examples/example3.go): Just like on the command line, options come before parameters. Options to commands use the same name as the long form of the Mercurial option they represent, but start with the necessary capital letter. An option's value can be of type bool, int or string. You just pass the value as the parameter to the option (= type conversion of the value to the option type). You can pass any number of options, as the elements of a slice. Options can occur more than once if appropriate (see the ones marked with '[+]' in the Mercurial help). Parameters are used to provide any arguments for a command that are not options. They are passed in as a string or a slice of strings, depending on the command. These parameters typically contain revisions, paths or filenames and so on.
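Continuing the connection sketch above, this is roughly how the third way, ExecCmd(), can be used; it is a hedged example, as only the string-slice parameter and the byte-slice/error results are taken from the description above.

    // Inside the same main function as the Connect() sketch:
    // options come before parameters, exactly as on the command line.
    out, err := hc.ExecCmd([]string{"status", "--modified", "some/path"})
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("%s", out)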
The gohg tool only checks if the options the caller gives are valid for that command. It does not check if the values are valid for the combination of that command and that option, as that is done by Mercurial. No need to implement that again. If an option is not valid for a command, it is silently ignored, so it is not passed to the Hg CS. A few options are not implemented, as they seemed not relevant for use with this tool (for instance: the global --color option, or the --print0 option for status). The gohg tool only returns errors, with as clear a message as possible, and never uses log.Fatal() nor panics, even if those may seem appropriate. It leaves it up to the caller to do that if needed. It's not up to this library to decide whether to do a retry or to abort the complete application. ▪ The following config settings are fixed in the code (at least for now): ▪ As mentioned earlier, passing config info is not implemented yet. ▪ Currently the only support for extensions to Mercurial is through the ExecCmd method. ▪ If multiple Hg CSs are used against the same repo, it is up to Mercurial to handle this correctly. ▪ Mercurial is always run in English. Internationalization is not necessary here, as the conversation with Hg is internal to the application. Please note that this tool is still in its very early stages. If you have suggestions or requests, or experience any problems, please use the issue tracker at https://bitbucket.org/gohg/gohg/issues?status=new&status=open. Or you could send a patch or a pull request. Copyright 2012-2014, The gohg Authors. All rights reserved. Use of this source code is governed by a BSD style license that can be found in the LICENSE.md file.
Mercurius gives you a head start when creating 'Go' applications. It lets you stay focused on your business logic. Get a Go web application template with: Internationalization, Routers, Logging, Cache, Database, Jade Template Render Engine, JWT, oAuth 2.0. Built upon Macaron, all those items are already configured and ready to use.
Package getoptions - Go option parser inspired by the flexibility of Perl’s GetOpt::Long. It will operate on any given slice of strings and return the remaining (unused) command line arguments. This makes it easy to implement subcommands. The following is a basic example (see the sketch after the feature list below): • Allow passing options and non-options in any order. • Support for `--long` options. • Support for short (`-s`) options with flexible behaviour (see https://github.com/DavidGamba/go-getoptions#operation_modes for details): • Boolean, String, Int and Float64 type options. • Multiple aliases for the same option. e.g. `help`, `man`. • Negatable Boolean options. For example: `--verbose`, `--no-verbose` or `--noverbose`. • Options with Array arguments. The same option can be used multiple times with different arguments. The list of arguments will be saved into an Array like structure inside the program. • Options with array arguments and multiple entries. • When using integer array options with multiple arguments, positive integer ranges are allowed. For example: `1..3` to indicate `1 2 3`. • Options with key value arguments and multiple entries. • Options with Key Value arguments. This allows the same option to be used multiple times with arguments of key value type. For example: `rpmbuild --define name=myrpm --define version=123`. • Supports passing `--` to stop parsing arguments (everything after will be left in the `remaining []string`). • Supports subcommands (stop parsing arguments when a non-option is passed). • Supports command line options with '='. For example: You can use `--string=mystring` and `--string mystring`. • Allows passing arguments to options that start with dash `-` when passed after equal. For example: `--string=--hello` and `--int=-123`. • Options with optional arguments. If the default argument is not passed the default is set. • Allows abbreviations when the provided option is not ambiguous. • The Called method indicates whether the option was passed on the command line. • Errors exposed as public variables to allow overriding them for internationalization. • Multiple ways of managing unknown options: • Require order: Allows for subcommands. Stops parsing arguments when the first non-option is found. When mixed with Pass through, it also stops parsing arguments when the first unmatched option is found. • Support for the lonesome dash "-". To indicate, for example, when to read input from STDIN. • Incremental options. Allows the same option to be called multiple times to increment a counter. • Supports case sensitive options. For example, you can use `v` to define `verbose` and `V` to define `Version`. The library will panic if it finds that the programmer (not end user): • Defined the same alias twice. • Defined invalid min and max values for SliceMulti methods.
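Below is a hedged sketch of the basic usage mentioned above; the option definitions are illustrative only, and the project README remains the authoritative reference.

    package main

    import (
        "fmt"
        "os"

        "github.com/DavidGamba/go-getoptions"
    )

    func main() {
        opt := getoptions.New()
        debug := opt.Bool("debug", false)        // boolean flag: --debug
        name := opt.String("name", "world")      // --name=value or --name value
        remaining, err := opt.Parse(os.Args[1:]) // returns the non-option arguments
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if opt.Called("name") {
            fmt.Println("--name was passed on the command line")
        }
        fmt.Println(*debug, *name, remaining)
    }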
Package i18n provides an Internationalization and Localization middleware for Macaron applications.
Package monday is a minimalistic translator for month and day-of-week names in time.Time values. Monday is not an alternative to the standard time package. It is a temporary solution to use while the internationalization features are not ready. That's why monday doesn't create any additional parsing algorithms or layout identifiers. It is just a wrapper for time.Format and time.ParseInLocation and uses all the same layout IDs, constants, etc. Format usage: Parse usage: Monday initializes all its data once in the init func and then uses only func calls and local vars. Thus, it's thread-safe and doesn't need any mutexes to be used.
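A short sketch of the Format and Parse usage mentioned above; the layout is a standard time package layout, and the locale constants follow monday's naming (e.g. LocaleFrFR for French):

    package main

    import (
        "fmt"
        "time"

        "github.com/goodsign/monday"
    )

    func main() {
        t := time.Date(2013, 4, 12, 0, 0, 0, 0, time.UTC)
        // Format with a standard layout; monday only translates the month
        // and day-of-week names.
        fmt.Println(monday.Format(t, "Monday 2 January 2006", monday.LocaleFrFR))

        // Parse a localized date back, mirroring time.ParseInLocation.
        parsed, err := monday.ParseInLocation("2 January 2006", "12 avril 2013", time.UTC, monday.LocaleFrFR)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(parsed)
    }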
Package xcore is a set of basic objects for programming (XCache for caches, XDataset for data sets, XLanguage for languages and XTemplate for templates). For GO, the actual existing code includes: - XCache: Application Memory Caches for any purpose, with time control and quantity control of objects in the cache, and also checks for changes against the original source. It is a thread safe cache. - XDataset: Basic nested data structures for any purpose (template injection, configuration files, database records, etc). - XLanguage: language dependent text tables for internationalization of code. The sources can be text or XML file definitions. - XTemplate: template system with a meta language to create complex documents (compatible with any text language, HTML, CSS, JS, PDF, XML, etc), heavily used on CMS systems and others. It is already used on sites that serve more than 60 million pages a month (500 pages per second at peak hour) and can be used in multithreaded environments safely. XCache is a library to cache all the data you want in the current application memory for very fast access to the data. The access to the data supports multithreading and concurrency. For the same reason, this type of cache is not persistent (it is gone if you exit the application) and cannot grow too much (as memory is the limit). However, you can control a timeout for each cache entry, and optionally a comparison function against a source (file, database, etc) to invalidate the cache. 1. Declare a new XCache with the NewXCache() function: 2. Fill in the cache: Once you have declared the cache, you can fill it with anything you want. The main cache object is an interface{} so you can put here anything you need, from simple variables to complex structures. You need to use the Set function: Note the ID is always a string, so convert a database key to string if needed. 3. To use the cache, just ask for your entry with the Get function: 4. To maintain the cache: You may need the Del function, to delete a specific entry (maybe because you deleted the record in the database). You may also need the Clean function to delete a percentage of the cache, or Flush to delete it all. The Verify function is used to check cache entries against their sources through the Validator function. Be very careful: if the cache is big or the Validator function is complex (it may, for example, ask a remote server for information), the verification may be VERY slow and use a lot of CPU. The Count function gives some stats about the cache. 5. How to use the Verify function: This function is recommended when the source is local and fast to check (for instance a language file or a template file). When the source is distant (another cluster database, any rpc source on another network, integration of many parts, etc), it is more recommended to create a function that will delete the cache entry when needed (on-demand cache change). The validator function is a func(id string, t time.Time) bool function. The first parameter is the ID of the entry in the cache, the second parameter is the time the entry was created. The validator function returns true if the cache entry is still valid, or false if it needs to be invalidated. The XCache is thread safe. The cache can be limited in quantity of entries and timeout for data. The cache is automanaged (expired data is discarded) and can be cleaned partially or totally manually. The XLanguage table of text entries can be loaded from an XML file, XML string or normal text file or string.
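As a quick orientation, here is a hedged sketch of the cache workflow just described (declare, fill, read, maintain); the import path and exact signatures are assumptions based on this documentation, so verify them against the package before use.

    package main

    import (
        "fmt"
        "time"

        "github.com/webability-go/xcore/v2" // import path assumed
    )

    func main() {
        // Cache id, maximum number of entries, and timeout per entry.
        cache := xcore.NewXCache("products", 100, 10*time.Minute)

        // The cached value is an interface{}, so anything fits; IDs are strings.
        cache.Set("123", map[string]string{"name": "A product"})

        if data, ok := cache.Get("123"); ok {
            fmt.Println("cache hit:", data)
        }

        cache.Del("123") // remove one entry
        cache.Clean(50)  // free roughly 50% of the entries
        cache.Flush()    // remove everything
    }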
It is used to keep a table of id=value entries in any languages you need, so it is easy to switch between XLanguage instances based on the required language. Obviously, any XLanguage you load in any language should have the same id entries translated, for the same use. The XLanguage object is thread safe. 1. Loading: You can load any file or XML string directly into the object. 1.1 The XML format is: NAMEOFTABLE is the name of your table entry, for example "loginform", "user_report", etc. LG is the ISO 639-1 two-letter language ID, for example "es" for Spanish, "en" for English, "fr" for French, etc. ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text for your entry, for example "Hello", "You are:", "Save" if your table is in English. STATUSVALUE is the status of the entry. You may put any value to control your translation over time and processes. 1.2 The flat text format is: ENTRYNAME is the ID of the entry, for example "greeting", "yourname", "submitbutton". ENTRYVALUE is the text for your entry, for example "Hello", "You are:", "Save" if your table is in English. There is no name of table or language in this format (you "know" what you are loading). The advantage of using the XML format is to have more control over your language, and optionally add attributes to your entries; for instance you may add attributes translated="yes/no", verified="yes/no", and any other data that your system could insert. The XLanguage will ignore those attributes when loading the table. 2. Creation: To create a new, empty XLanguage structure: There are 4 functions to create the language from a file or string, flat text or XML text: Then you can use the set of basic access functions: The SetName/SetLanguage functions are used to set the table name and language of the object (generally to build an object from scratch). The GetName/GetLanguage functions are used to get the table name and language of the object (generally when you load it from some source). The Set/Get/Del functions are used to add or modify an entry, read an entry, or delete an entry in the object. The SetStatus/GetStatus functions are used to set or get a status for an entry in the object. To create an XML file from the object, you can use the GetXML() function. 1. Overview: The XDataSet is a set of interfaces and basic classes, ready to use, to build a standard set of data, optionally nested and hierarchical, that can be used for any purpose: - Keep complex data in memory. - Create JSON structures. - Inject data into templates. - Interchange database data (record sets and records). You can store generic supported data in it, as well as any complex interface structures: - Int - Float - String - Time - Bool - []Int - []Float - []Time - []Bool - XDataSetDef (anything extended with this interface) - []String - Anything else ( interface{} ) - XDataSetCollectionDef (anything extended with this interface) The generic supported data comes with a set of functions to get/set that data directly in the XDataset. Example: Note that all references to XDataset and XDatasetCollection are always pointers (to be able to modify their values). 2. XDatasetDef interface: It is the interface that describes a simple set of data mapped as "name": value, where value can be of any type. The interface implements a good amount of basic methods to get the value in various formats, such as GetString("name"), GetInt("name"), etc (see below). If the value is of another type than asked, the method should convert it if possible.
For instance, "key":123 requested through GetString("key") should return "123". The XDataset type is a simple map[string]interface{} with all the methods implemented, and should be enough for almost all required cases. However, you can build any complex structure that extends the interface and implements all the required functions to stay compatible with the XDatasetDef. 3. XDatasetCollectionDef interface: This is the interface used to extend any type of data as a collection, i.e. an array of XDatasetDef. This is a slice of any XDatasetDef compatible data. The interface implements some methods to work on the array structure, such as Push, Pop, Shift and Unshift, and some methods to search data in the array. The XDatasetCollection type is a simple []XDatasetDef with all the methods implemented, and should be enough for almost all required cases. 1. Overview: The XDataSetTS is an XDatasetDef structure, thread safe. It is built on the XDataset with the same properties, but is thread safe to protect read/write accesses from different threads. Example: You may also build an XDatasetTS to encapsulate an XDatasetDef that is not thread safe, to use it safely. Note that all references to XDatasetTS are always pointers (to be able to modify their values). The XDatasetTS meets the XDatasetDef interface. 1. Overview: This is a class to compile and keep a template that can be injected with an XDataSet structure of data, with a metalanguage to inject the data. The metalanguage is extremely simple and is made to be useful and **really** separate programming from template code (not like many other generic template systems that just mix code and data). A template is a set of HTML/XML (or any other language) strings with a meta language to inject variables and build a final string. The XCore XTemplate system is based on the injection of parameters, language translation strings and data fields directly into the HTML (or any other language you need) template. The HTML itself (or any other language) is text not directly used by the template system, but used to dress the data you want to represent in your preferred language. The variables to inject must be in an XDataSet structure or in a structure extended from the XDataSetDef interface. The injection of data is based on an XDataSet structure of values that can be nested into another XDataSet and XDataSetCollection and so on. The template compiler recognizes nested arrays to automatically make loops on the information. Templates are made to store reusable HTML code, and above all to be easily changeable by people that do not know how to write programs. A template can be as simple as a single character (no variables to inject) or a very complex set of nested, conditional and looping sub-templates. Yes, this is a template, but a very simple one without the need to inject any data. Let's go more complex: Having an array of data, we want to render it nicely: We can create a template to inject this data into: 2. Create and use XTemplateData: In order to create and use templates, you have the following options: Create the XTemplate from a string, a file or any other source: Clone the XTemplate: 3. Metalanguage Reference: 3.1 Comments: %-- and --% You may use comments in your template. The comments will be discarded immediately at the compilation of the template and do not interfere with the rest of your code.
Example: 3.2 Nested Templates: [[...]] and [[]] You can define new nested templates inside your main template. A nested template is defined by: The templateid is any combination of lowercase letters only (a-z), numbers (0-9), and 3 special chars: . (point), - (dash) and _ (underline). The template is closed with [[]]. There is no limit to nesting templates. Any nested template will inherit all the parent's elements and can use them too. To call a sub-template, you need to use the &&templateid&& syntax (described below in this document). Example: You may use more than one id for the same template to avoid repetition of the same code. The different ids are separated with a pipe |. Important note: A template will be visible only on the same level as its declaration. For example, if you put a subtemplate "b" into a subtemplate "a", it will not be visible by &&b&& from the top level, but only inside the subtemplate "a". 3.3 Simple Elements: ##...## and {{...}} There are 2 types of simple elements: language elements and data injector elements (also called field elements). We "logically" define the 2 types of elements. The separation is only for human logic and template filling; the language information can perfectly well fit into the data to inject (and not use ## entries). 3.3.1 Language elements: ##entry## All the language elements should have the format: ##entry##. A language entry is generally anything written in your code or page that does not come from a database, and should adapt to the language of the client visiting your site. Using the language elements may depend on the internationalization of your page. If your page is going to be in a single language forever, you really don't need to use language entries. The language elements generally carry titles, menu options, table headers, etc. The language entries are set in the "#" entry of the main template XDataset to inject, and it is an XLanguage table. Example: With data to inject: 3.3.2 Field elements: {{fieldname}} Field values should have the format: {{fieldname}}. Your field source can be a database or any other preferred data repository. Example: You can access an element with its path into the data set to inject, separating each field level with a > (greater than). This will take the name of the second hobby in the dataset defined above (collections are 0 indexed). The 1 denotes the second record of the hobbies XDatasetCollection. If the field is not found, it will be replaced with an empty string. Technically your field names can be any string in the dataset. However, do not use { } or > in the names of your fields or the XTemplate may not use them correctly. We recommend using lowercase names with numbers and ._- Accents and UTF8 symbols are also welcome. 3.3.3 Scope: When you use an id to point to a value, the template will first search the available ids of the local level. If no id is found, then it will search the upper levels if any, and so on. Example: At the level of 'data2', using {{appname}} will get back 'DomCore'. At the level of 'key1', using {{appname}} will get back 'Nested App'. At the level of 'key2', using {{appname}} will get back 'DomCore'. At the level of root, 'data1' or 'detail', using {{appname}} will get back an empty string. 3.3.4 Path access: id>id>id>id At any level in the data array, you can access any entry in the subset array. For instance, taking the previous array of data to inject, let's suppose we are in a nested meta element at the 'data1' level.
You may want to access the 'Juan' entry directly. The path will be: The José's status value from the root will be: 3.4 Meta Elements They consist of an injection of an XDataset, called the "data to inject", into the template. The meta language is directly applied on the structure of the data array. The data to inject is a nested set of variables and values with the structure you want (there are no specific construction rules). You can inject nearly anything into a template's meta elements. Example of a data array to inject: You can directly access any data in the array with its relative path (relative to the level you are at when the meta elements are applied, see below). There are 4 structured meta elements in the XTemplate templates to use the data to inject: Reference, Loops, Condition and Debug. The structure of the meta elements in the template must follow the structure of the data to inject. 3.4.1 References to another template: &&order&& 3.4.1.1 When order is a single id (characters a-z0-9.-_), it will make a call to a sub template with the same set of data and replace the &&...&& with the result. The level in the data set is not changed. Example based on the previous array of Fred's data: 3.4.1.2 When order contains 2 parameters separated by a semicolon :, the second parameter is used to change the level of the data array, to the subset with this id. The level in the data set is changed to this subset. Example based on the previous array of Fred's data: 3.4.1.3 When order contains 3 parameters separated by a semicolon :, the second and third parameters are used to search the name of the new template based on the data fields to inject. This is an indirect access to the template. The name of the subtemplate is built with parameter3 as prefix and the content of the parameter2 value. The third parameter must be empty. 3.4.2 Loops: @@order@@ 3.4.2.1 Overview This meta element will loop over each entry of the set of data and concatenate each created template in the same order. You need to declare a sub template for this element. You may also declare derived sub templates for the different possible cases of the loop: for instance, if your main subtemplate for your loop is called "hobby", you may need a different template for the first element, the last element, the Nth element, an element with a value "no" in the sport field, etc. The supported postfixes are: When the array to iterate is empty: - .none (for example "There is no hobby") When the array contains elements, it will search, in order, for the following templates and use the first found: - templateid.key.[value] value is the key of the vector line, if the collection has a named key (string) or is a direct array (0, 1, 2...) - templateid.first if it is the first element of the array set (new from v1.01.11) - templateid.last if it is the last element of the array set (new from v1.01.11) - templateid.even if the line number is even - templateid in all other cases (odd is contained here if even is defined) Since v2.1.7, you can also use the pseudo field {{.counter}} in the loop subtemplate, to get the counter of the loop; it is 1-based (the first loop is 1, not 0). 3.4.2.2 When order is a single id (characters a-z0-9.-_), it will make a call to the sub template id with the subset of data with the same id and replace the @@...@@ for each iteration of the data with the result.
Example based on the previous array of Fred's data: 3.4.2.3 When order contains 2 parameters separated by a semicolon :, the first parameter is used to change the level of the data array, to the subset with this id, and the second one is the template to use. Example based on the previous array of Fred's data: 3.4.3 Conditional: ??order?? Makes a call to a subtemplate only if the field exists and has a value. This is very useful to call a sub template, for instance, when an image or a video is set. When the condition is not met, it will search for the [id].none template. The conditional element does not change the level in the data set. 3.4.3.1 When order is a single id (characters a-z0-9.-_), it will make a call to the sub template id with the same field in the data and replace the ??...?? with the corresponding template. Example based on the previous array of Fred's data: 3.4.3.2 When order contains 2 parameters separated by a semicolon :, the second parameter is used to change the level of the data array, to the subset with this id. Example based on the previous array of Fred's data: If the asked field is a catalog (true/false, numbered, etc.), you may also use .[value] subtemplates. 3.5 Debug Tools: !!order!! There are two keywords to dump the content of the data set. This is very useful when you don't know the code that calls the template, don't remember some values, or for debugging. 3.5.1 !!dump!! Will show the totality of the data set, with ids and values. 3.5.2 !!list!! Will show only the tree of parameters; values are not shown.
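To make the metalanguage concrete, here is a small, hedged, end-to-end sketch: it compiles a template containing a nested template, a sub-template call and field elements, then injects an XDataset. The constructor and Execute names, as well as the import path, are assumptions based on this documentation, so check the package for the exact API.

    package main

    import (
        "fmt"
        "log"

        "github.com/webability-go/xcore/v2" // import path assumed
    )

    func main() {
        // [[greeting]]...[[]] declares a nested template, &&greeting&& calls it,
        // and {{name}} / {{appname}} are field elements.
        src := "[[greeting]]Hello {{name}}! [[]]&&greeting&& Welcome to {{appname}}."

        tmpl, err := xcore.NewXTemplateFromString(src) // assumed constructor
        if err != nil {
            log.Fatal(err)
        }

        data := &xcore.XDataset{}
        data.Set("name", "Fred")
        data.Set("appname", "DomCore")

        fmt.Println(tmpl.Execute(data)) // assumed to return the final string
    }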
Package g11n is an internationalization library that offers: I. Initialization Create a new instance of g11n. Each instance handles messages and locales separately. Define a struct with messages. Initialize an instance of the struct through the g11n object. Invoke messages on that instance. II. Choosing a locale Load a locale into the g11n instance. Different locale loaders can be registered by implementing the locale.Loader interface. Specify the locale for every message struct initialized by this g11n instance. III. Format parameters The parameters of a message call can be formatted by declaring a special type that implements the G11nParam format method, which is invoked before substituting a parameter in the message. IV. Format result The result of a message call can be further formatted by declaring a special result type that implements the G11nResult format method, which is invoked after all parameters have been substituted in the message.
Package datesi18n provides internationalization for programs needing language support for weekdays and months. The package is intended to be used with the dates package. The following code is an example of using the datesi18n package: The output of the program is:
Package i18n offers the following basic internationalization functionality: There's more we'd like to add in the future, including: In order to interact with this package, you must first get a TranslatorFactory instance. Through the TranslatorFactory, you can get a Translator instance. Almost everything in this package is accessed through methods on the Translator struct. About the rules and messages paths: This package ships with built-in rules, and you are welcome to use those directly. However, if there are locales or rules that are missing from what ships directly with this package, or if you desire to use different rules than those that ship with this package, then you can specify additional rules paths. At this time, this package does not ship with built-in messages, other than a few used for the unit tests. You will need to specify your own messages path(s). For both rules and messages paths, you can specify multiple. Paths later in the slice take precedence over paths earlier in the slice. For a basic example of getting a TranslatorFactory instance: For simple message translation, use the Translate function, and send an empty map as the second argument (we'll explain that argument in the next section). You can also pass placeholder values to the Translate function. That's what the second argument is for. In this example, we will inject a username into the translation. You can also translate strings with plurals. However, any one message can contain at most one plural. If you want to translate "I need 5 apples and 3 oranges" you are out of luck. The Pluralize method takes 3 arguments. The first is the message key - just like the Translate method. The second argument is a float which is used to determine which plural form to use. The third is a string representation of the number. Why two arguments for the number instead of one? This gives you ultimate flexibility in the number formatting used in the translation, while eliminating the need to parse number strings. You can use the FormatNumber, FormatCurrency and FormatPercent methods to do locale-based number formatting for numbers, currencies and percentages. If you need to sort a list of strings alphabetically, then you should not use a simple string comparison to do so - this will often give incorrect results. "ȧ" would normally evaluate as greater than "z", which is not correct in any Latin writing system alphabet. You can use the Sort method on the Translator struct to do an alphabetic sorting that is correct for that locale. Alternatively, you can access the SortUniversal and SortLocale functions directly without a Translator instance. SortUniversal does not take a specific locale into account when doing the alphabetic sorting, which means it might be slightly less accurate than the SortLocale function. However, there are cases in which the collation rules for a specific locale are unknown, or the sorting needs to be done in a locale-agnostic way. For these cases, the SortUniversal function performs a Unicode normalization in order to best sort the strings. In order to be flexible, these functions take a generic interface slice and a function for retrieving the value on which to perform the sorting. For example: When getting a Translator instance, the TranslatorFactory will automatically attempt to determine an appropriate fallback Translator for the locale you specify. For locales with specific "flavors", like "en-au" or "zh-hans", the "vanilla" version of that locale will be used if it exists.
In these cases that would be "en" and "zh". When creating a TranslatorFactory instance, you can optionally specify a final fallback locale. This will be used if it exists. When determining a fallback, the factory first checks the less specific versions of the specified locale, if they exist, and will ultimately fall back to the global fallback if specified. All of the examples above conveniently ignore errors. We recommend that you DO handle errors. The system is designed to give you a valid result if at all possible, even if errors occur in the process. However, the errors are still returned and may provide you with helpful information you might otherwise miss - like missing files, file permission problems, yaml format problems, missing translations, etc. We recommend that you do some sort of logging of these errors.
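Putting the pieces above together, the flow looks roughly like the following hedged sketch; the constructor name, the []error return values and the import path are assumptions drawn from this documentation rather than a verified API, so adjust them to the real package.

    package main

    import (
        "fmt"

        "github.com/vube/i18n" // import path assumed
    )

    func main() {
        // Rules ship with the package; messages paths are your own.
        // Later paths take precedence over earlier ones.
        factory, errs := i18n.NewTranslatorFactory(
            []string{"data/rules"},
            []string{"data/messages"},
            "en", // final fallback locale
        )
        logErrs(errs)

        t, errs := factory.GetTranslator("en-au") // falls back to "en" when needed
        logErrs(errs)

        // Simple translation: empty substitutions map.
        hello, _ := t.Translate("welcome", map[string]string{})
        // Placeholder substitution.
        greet, _ := t.Translate("greeting", map[string]string{"username": "Ana"})
        // Pluralization: a float to pick the plural form, a string for display.
        apples, _ := t.Pluralize("apples", 5, "5")

        fmt.Println(hello, greet, apples)
    }

    // The package is described as returning errors rather than failing hard,
    // so log them instead of ignoring them.
    func logErrs(errs []error) {
        for _, e := range errs {
            fmt.Println("i18n:", e)
        }
    }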
Package i18n is a handler and helper for the core package (https://godoc.org/github.com/volatile/core). It provides internationalization functions following the client's preferences. A translation is associated with a key, which is associated with a language tag, which is part of the Locales map. All translations can be stored like this: decimalMark and thousandsMark are special keys that define the digit separators for decimals and thousands when using Tn or Fmtn. With these translations, you need to Init this package (the second argument is the default locale): When a client makes a request, the best locale must be matched to their preferences. To achieve this, you need to Use the handler with one or more matchers: The client locale is set as soon as a matcher is confident. A matcher is a function that returns the locale parsed from core.Context with its level of confidence. The following are currently available: MatcherAcceptLanguageHeader and MatcherFormValue. A translation can be accessed with T, receiving the core.Context (which contains the matched locale), the translation key, and optional arguments (if the translation contains formatting verbs): If a translation has pluralized forms, you can use Tn and the most appropriate form will be used according to the quantity: will result in "You have 333,000.333 bucks in your basement.". If you use templates, TemplatesFuncs provides a map of all usable functions. Example with the response package (https://godoc.org/github.com/volatile/response):
This is a GSSAPI provider for Go, which expects to be initialized with the name of a dynamically loadable module which can be dlopen'd to get at a C language binding GSSAPI library. The GSSAPI concepts are explained in RFC 2743, "Generic Security Service Application Program Interface Version 2, Update 1". The API calls for C, together with a number of values for constants, come from RFC 2744, "Generic Security Service API Version 2 : C-bindings". Note that the basic GSSAPI bindings for C use the Latin-1 character set. UTF-8 interfaces are specified in RFC 5178, "Generic Security Service Application Program Interface (GSS-API) Internationalization and Domain-Based Service Names and Name Type", in 2008. Looking in 2013, this API does not appear to be provided by either MIT or Heimdal. This API applies solely to hostnames though, which can also be supplied in ACE encoding, bypassing the issue. For now, we assume that hostnames and usercodes are all ASCII-ish and pass UTF-8 into the library. Patches for more comprehensive support welcome.
text is a repository of text-related packages related to internationalization (i18n) and localization (l10n), such as character encodings, text transformations, and locale-specific text handling.
Package i18n is for app Internationalization and Localization.
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field and Cross Struct validation for nested structs and has the ability to dive into arrays and maps of any type. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for us, building an internationalized application, I needed to know the field and what validation failed so that I could provide an error in the user's specific language. Doing things this way is actually the way the standard library does it, see the file.Open method here: https://golang.org/pkg/os/#Open. They return type error to avoid the issue discussed in the following, where err is always != nil: http://stackoverflow.com/a/29138676/3158232 https://github.com/bluesuncorp/validator/issues/134 validator only returns nil or ValidationErrors as type error; so in your code all you need to do is check if the error returned is not nil, and if it's not, type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom functions can be added. Cross Field Validation can be done via the following tags: eqfield, nefield, gtfield, gtefield, ltfield, ltefield, eqcsfield, necsfield, gtcsfield, gtecsfield, ltcsfield and ltecsfield. If, however, some custom cross field validation is required, it can be done using a custom validation. Why not just have cross field validation tags, i.e. only eqcsfield and not eqfield? The reason is efficiency: if you want to check a field within the same struct, eqfield only has to find the field on the same struct (1 level); but if we used eqcsfield it could be multiple levels down. Multiple validators on a field will process in the order defined. Bad validator definitions are not handled by the library. NOTE: Baked In Cross field validation only compares fields on the same struct; if cross field + cross struct validation is needed, your own custom validator should be implemented. NOTE2: comma is the default separator of validation tags; if you wish to have a comma included within the parameter, i.e. excludesall=,, you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. NOTE3: pipe is the default separator of 'or' validation tags; if you wish to have a pipe included within the parameter, i.e. excludesall=|, you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: NOTE: when returning an error, the tag returned in FieldError will be the alias tag unless the dive tag is part of the alias; everything after the dive tag is not reported as the alias tag. Also, the ActualTag in the former case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: This package panics when bad input is provided; this is by design, bad code like that should not make it to production.
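As a hedged illustration of the points above (struct tags, a cross-field tag, and type-asserting the returned error to ValidationErrors), here is a small sketch; the import path and the v9-style constructor and FieldError methods shown are assumptions and differ between validator versions.

    package main

    import (
        "fmt"

        validator "gopkg.in/go-playground/validator.v9" // version-dependent import path
    )

    type User struct {
        Email    string `validate:"required,email"`
        Password string `validate:"required,min=8"`
        Confirm  string `validate:"eqfield=Password"` // cross-field: must equal Password
    }

    func main() {
        validate := validator.New()
        err := validate.Struct(User{Email: "not-an-email"})
        if err != nil {
            // The library only returns nil or ValidationErrors, so the type
            // assertion is safe here.
            for _, fe := range err.(validator.ValidationErrors) {
                // Build your own (possibly localized) message from the parts.
                fmt.Println(fe.Field(), fe.Tag())
            }
        }
    }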