Package gohg is a Go client library for using the Mercurial dvcs via its Command Server. For Mercurial see: http://mercurial.selenic.com. For the Hg Command Server see: http://mercurial.selenic.com/wiki/CommandServer. ▪ Mercurial For Mercurial any version starting from 1.9 should be fine, since that is the version where the Command Server was introduced. If you send wrong options to it through gohg, or commands or options not yet supported (or obsolete) in your Hg version, you'll simply get back an error from Hg itself, as gohg does not check them. On the other hand gohg allows issuing new commands not yet implemented by gohg; see further. ▪ Go gohg is currently developed with Go1.2.1. Though I started with the Go1.0 versions, I can't remember having to change more than one or two minor things when moving to Go1.1.1. Updating to Go1.1.2 required no changes at all. I had an issue though with Go1.2, on Windows only, causing some tests using os.exec.Command to fail. I'll have to look into that further, to find out if I should report a bug. ▪ Platform I'm developing and testing both on Windows 7 and Ubuntu 12.04/13.04/13.10. But I suppose it should work on any other platform that supports Hg and Go. Only Go and its standard library are required. And Mercurial should be installed of course. At the commandline type: to have gohg available in your GOPATH. Start with importing the gohg package. Examples: All interaction with the Mercurial Command Server (Hg CS from now on) happens through the HgClient type, of which you have to create an instance: Then you can connect to the Hg CS as follows (a sketch is shown below): 1. The Hg executable: The first parameter is the Mercurial command to use (typically 'hg'). You can leave it blank to let the gohg tool use the default Mercurial command on the system. Having a parameter for the Hg command allows for using a different Hg version, for testing purposes for instance. 2. The repository path: The second parameter is the path to the repository you want to work on. You can leave it blank to have gohg use the repository it can find for the current path you are running the program in (searching upward in the folder tree if necessary). 3. The config for the session: The third parameter allows you to provide extra configuration for the session. This is not implemented yet. 4. Should gohg create a new repo before connecting? This fourth parameter allows you to indicate that you want gohg to first create a new Mercurial repo if it does not already exist in the path given by the second parameter. See the documentation for more detailed info. 5. The return value: The HgClient.Connect() method returns an error, so you can check if the connection succeeded and if it is safe to go on. Once the work is done, you can disconnect from the Hg CS using a typical Go idiom. The gohg tool sets some environment variables for the Hg CS session, to ensure it works correctly. Once we have a connection to a Hg CS we can do some work with the repository. This is done with commands, and gohg offers 3 ways to use them. 1. The command methods of the HgClient type. 2. The HgCmd type. 3. The ExecCmd() method of the HgClient type. Each has its own reason for existing. Commands return a byte slice containing the resulting data, and possibly an error. But there are a few exceptions (see api docs).
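Assuming the bitbucket.org/gohg/gohg import path from the issue-tracker URL below, a minimal connect/disconnect sketch could look like this (the exact Connect signature and parameter types are assumptions based on the description above):

    package main

    import (
        "log"

        "bitbucket.org/gohg/gohg"
    )

    func main() {
        hc := gohg.NewHgClient()
        // "" = use the default hg command on the system;
        // nil = no extra config (not implemented yet anyway);
        // false = do not create a new repo first.
        if err := hc.Connect("", "/path/to/repo", nil, false); err != nil {
            log.Fatal(err)
        }
        defer hc.Disconnect()
    }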
If a command fails, the returned error contains 5 elements: 1) the name of the internal routine where the error was trapped, 2) the name of the HgClient command that was run, 3) the return code from Mercurial, 4) the full command that was passed to the Hg CS, and 5) any error message returned by Mercurial. So the command could return something like the following in the err variable when it fails: The command aliases (like 'id' for 'identify') are not implemented. But there are examples in identify.go and showconfig.go of how you can easily implement them yourself. This is the easiest way, a kind of convenience, and the most readable too. A con is that as a user you cannot know the exact command that was passed to Hg without some extra mechanics. Each command method has the same name as the corresponding Hg command, except it starts with a capital letter of course. An example (also see examples/example1.go): Note that these methods all use the HgCmd type internally. As such they are convenience wrappers around that type. You could also consider them a kind of syntactic sugar. If you just want to issue a command, nothing more, they are the way to go. The only way to obtain the command string sent to Hg when using these command methods is by calling the HgClient.ShowLastCmd() method afterwards, before issuing any other command: Using the HgCmd type is kind of the standard way. It is a struct that you can instantiate for any command, and for which you can set the elements Name, Options and Params (see the api docs for more details). It allows for building the command step by step, and also for querying the exact command that will be sent to the Hg CS. A pro of this method is that it allows you to obtain the exact command string that will be passed to Mercurial before it is run, by calling the CmdLine() method of HgCmd. This could be handy for logging, or for showing feedback to the user in a GUI program. (You could even call CmdLine() several times, and show the building of the command step by step.) An example (also see examples/example2.go): As you can see, this way requires some more coding. The source code will also show you that the HgCmd type is indeed used as the underlying type for the convenience HgClient commands, in all the New<hg-command>Cmd() constructors. The HgClient type has an extra method ExecCmd(), allowing you to pass a fully custom-built command to Hg. It accepts a string slice that is supposed to contain all the elements of the complete command, as you would type it at the command line. It is a convenient way to perform commands that are not yet implemented in gohg, or to make use of extensions to Hg (for which gohg offers no support yet). An example (also see examples/example3.go): Just like on the command line, options come before parameters. Options to commands use the same name as the long form of the Mercurial option they represent, but start with the necessary capital letter. An option's value can be of type bool, int or string. You just pass the value as the parameter to the option (= type conversion of the value to the option type). You can pass any number of options, as the elements of a slice. Options can occur more than once if appropriate (see the ones marked with '[+]' in the Mercurial help). Parameters are used to provide any arguments for a command that are not options. They are passed in as a string or a slice of strings, depending on the command. These parameters typically contain revisions, paths or filenames and so on. A combined sketch of the first and third styles follows below.
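A hedged sketch of the command-method and ExecCmd styles, reusing the hc client from the earlier sketch (the Identify signature is an assumption; assuming fmt is imported):

    func demo(hc *gohg.HgClient) {
        // 1. Convenience command method: same name as the hg command,
        //    starting with a capital letter (options, then parameters).
        if out, err := hc.Identify(nil, nil); err == nil {
            fmt.Printf("%s", out)
        }

        // 3. ExecCmd: a fully custom command as a string slice,
        //    exactly as you would type it at the command line.
        if out, err := hc.ExecCmd([]string{"log", "--limit", "2"}); err == nil {
            fmt.Printf("%s", out)
        }
    }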
The gohg tool only checks if the options the caller gives are valid for that command. It does not check if the values are valid for the combination of that command and that option, as that is done by Mercurial. No need to implement that again. If an option is not valid for a command, it is silently ignored, so it is not passed to the Hg CS. A few options are not implemented, as they seemed not relevant for use with this tool (for instance: the global --color option, or the --print0 option for status). The gohg tool only returns errors, with an as clear as possible message, and never uses log.Fatal() nor panics, even if those may seem appropriate. It leaves it up to the caller to do that eventually. It's not up to this library to decide whether to do a retry or to abort the complete application. ▪ The following config settings are fixated in the code (at least for now): ▪ As mentioned earlier, passing config info is not implemented yet. ▪ Currently the only support for extensions to Mercurial is through the ExecCmd method. ▪ If multiple Hg CSs are used against the same repo, it is up to Mercurial to handle this correctly. ▪ Mercurial is always run in english. Internationalization is not necessary here, as the conversation with Hg is internal to the application. Please note that this tool is still in it's very early stages. If you have suggestions or requests, or experience any problems, please use the issue tracker at https://bitbucket.org/gohg/gohg/issues?status=new&status=open. Or you could send a patch or a pull request. Copyright 2012-2014, The gohg Authors. All rights reserved. Use of this source code is governed by a BSD style license that can be found in the LICENSE.md file.
Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts, are also available. Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values. enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/zond/enmime
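A short sketch of the Envelope layer described above, assuming the import path given at the end of this doc:

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/zond/enmime"
    )

    func main() {
        f, err := os.Open("message.eml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        env, err := enmime.ReadEnvelope(f)
        if err != nil {
            log.Fatal(err) // hard error: parsing was not possible
        }
        fmt.Println("Subject:", env.GetHeader("Subject"))
        fmt.Println(env.Text) // plain text body (down-converted from HTML if absent)
        for _, e := range env.Errors {
            fmt.Println("soft error:", e) // parsing continued despite these
        }
    }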
Use hypothesis.NewClient to create a hypothesis.Client that searches for annotations. NewClient's token param is optional. If it's set to your Hypothesis token, you'll search both public and private annotations. If it's the empty string, you'll only search public annotations. NewClient's hypothesis.SearchParams is likewise optional. If empty, your search will be unfiltered. NewClient's maxSearchResults param determines how many annotations to fetch. If 0, the limit defaults to 400. To search for the most recent 10 public annotations: To search for the most recent 10 public or private annotations, if your token is in an env var called H_TOKEN: To search for the most recent 10 annotations in a private group whose id is in an env var called H_GROUP: To search for at most 10 public or private annotations from user 'judell', with the tag 'social media': The Hypothesis search API returns at most 200 annotations. hypothesis.Search encapsulates that API call, and returns an array of hypothesis.Row. Each Row represents one annotation. To fetch more than 200 annotations, use hypothesis.SearchAll. This test should find 2000 recent public or private annotations. For more search examples, see the Steampipe Hypothesis plugin. If you authenticate with your token you can call hypothesis.Client.GetProfile to list your private groups. Here, 'profile' is a hypothesis.Profile which includes hypothesis.Profile.Groups, an array of structs that include the names and ids of your private groups. An annotation may include an array of hypothesis.Selector. These structures define how the annotation "anchors" to the segment it refers to. See anchoring.
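A hedged sketch of a search using the parameters described above (the import path, SearchParams field names, and Row field names are assumptions):

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/judell/hypothesis-go" // import path is an assumption
    )

    func main() {
        // Token is optional: the empty string searches only public annotations.
        client := hypothesis.NewClient(
            os.Getenv("H_TOKEN"),
            hypothesis.SearchParams{User: "judell", Tags: []string{"social media"}},
            10, // maxSearchResults; 0 defaults to 400
        )
        rows, err := client.SearchAll()
        if err != nil {
            log.Fatal(err)
        }
        for _, row := range rows {
            fmt.Println(row.URI)
        }
    }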
Package dataset includes the operations needed for processing collections of JSON documents and their attachments. Authors R. S. Doiel, <rsdoiel@library.caltech.edu> and Tom Morrel, <tmorrell@library.caltech.edu> Copyright (c) 2021, Caltech All rights not granted herein are expressly reserved by Caltech. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Package dataset provides a common approach for storing JSON object documents on local disc.
It is intended as a single-user system for intermediate processing of JSON content for analysis or batch processing. It is not a database management system (if you need a JSON database system I would suggest looking at CouchDB, Mongo and Redis as a starting point). The approach dataset takes is to store JSON documents in a pairtree structure under the collection folder. The keys are the JSON document names. JSON documents (and possibly their attachments) are then stored based on that assignment in the pairtree. Conversely the collection.json document is used to find and retrieve documents from the collection. The layout of the metadata is as follows: + Collection - a directory A key feature of dataset is to be POSIX shell friendly. This has led to storing the JSON documents in a directory structure that standard POSIX tooling can traverse. It has also meant that the JSON documents themselves remain on "disc" as plain text. This has facilitated integration with many other applications, programming languages and systems. Attachments are non-JSON documents explicitly "attached" that share the same pairtree path but are placed in a subdirectory called "_". If the document name is "Jane.Doe.json" and the attachment is photo.jpg, the JSON document is "pairtree/Ja/ne/.D/oe/Jane.Doe.json" and the photo is in "pairtree/Ja/ne/.D/oe/_/photo.jpg". Additional operations besides storing and reading JSON documents are also supported. These include creating lists (arrays) of JSON documents from a list of keys, listing keys in the collection, counting documents in the collection, and indexing and searching by indexes. The primary use case driving the development of dataset is harvesting API content for library systems (e.g. EPrints, Invenio, ArchivesSpace, ORCID, CrossRef, OCLC). The harvesting needed to be done in such a way as to leverage existing POSIX tooling (e.g. grep, sed, etc.) for processing and analysis. Initial use case: Caltech Library has many repository, catalog and record management systems (e.g. EPrints, Invenio, ArchivesSpace, Islandora). It is common practice to harvest data from these systems for analysis or processing. Harvested records typically come in XML or JSON format. JSON has proven a flexible way to work with the data, and in our more modern tools it is the common format we use to move data around. We needed a way to standardize how we stored these JSON records for intermediate processing to allow us to use the growing ecosystem of JSON-related tooling available under POSIX/Unix-compatible systems.
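For illustration only, a small standalone helper that computes the pairtree path described above (this is not part of the dataset API):

    package main

    import (
        "fmt"
        "strings"
    )

    // pairtreePath splits a key into two-character pairs to form the
    // directory path, e.g. "Jane.Doe" -> "Ja/ne/.D/oe".
    func pairtreePath(key string) string {
        var parts []string
        for i := 0; i < len(key); i += 2 {
            end := i + 2
            if end > len(key) {
                end = len(key)
            }
            parts = append(parts, key[i:end])
        }
        return strings.Join(parts, "/")
    }

    func main() {
        fmt.Println("pairtree/" + pairtreePath("Jane.Doe") + "/Jane.Doe.json")
    }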
Package foursquarego provides a Client for the Foursquare API. Here are some example requests. There is a parameters struct whenever an endpoint takes more than one parameter. If there are strict options for a parameter, there will be a struct for it, as seen in the search above. For authentication, just send either the client secret or the user's access token. If you send both to the client it will send both to Foursquare. Foursquare expects that if you're making a request for a user you will send the access token. More information can be found on their auth page, https://developer.foursquare.com/docs/api/configuration/authentication
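A hedged sketch of a venue search (the import path, constructor signature, and method and field names here are all assumptions based on the description above):

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/peppage/foursquarego" // import path is an assumption
    )

    func main() {
        // Send either the client secret or the user's access token
        // (here: secret only, so the access token is empty).
        client := foursquarego.NewClient(http.DefaultClient,
            "foursquare", "CLIENT_ID", "CLIENT_SECRET", "")

        // Parameter structs are used when an endpoint takes more
        // than one parameter.
        venues, _, err := client.Venues.Search(&foursquarego.VenueSearchParams{
            LatLong: "40.7,-74.0",
            Query:   "coffee",
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(venues)
    }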
Package marketplaceagreement provides the API client, operations, and parameter types for AWS Marketplace Agreement Service. AWS Marketplace is a curated digital catalog that customers can use to find, buy, deploy, and manage third-party software, data, and services to build solutions and run their businesses. The AWS Marketplace Agreement Service provides an API interface that helps AWS Marketplace sellers manage their product-related agreements, including listing, searching, and filtering agreements. To manage agreements in AWS Marketplace, you must ensure that your AWS Identity and Access Management (IAM) policies and roles are set up. The user must have the required policies/permissions that allow them to carry out the actions in AWS: DescribeAgreement – Grants permission to users to obtain detailed metadata about any of their agreements. GetAgreementTerms – Grants permission to users to obtain details about the terms of an agreement. SearchAgreements – Grants permission to users to search through all their agreements.
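A minimal sketch following AWS SDK for Go v2 conventions (the SearchAgreementsInput fields are omitted; see the service documentation for filters and required parameters):

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/marketplaceagreement"
    )

    func main() {
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := marketplaceagreement.NewFromConfig(cfg)

        // Requires the SearchAgreements IAM permission described above.
        out, err := client.SearchAgreements(context.TODO(),
            &marketplaceagreement.SearchAgreementsInput{})
        if err != nil {
            log.Fatal(err)
        }
        log.Println(out)
    }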
Package order enables easier ordering and comparison tasks. This package provides functionality to easily define and apply order on values. It works out of the box for most primitive types and their pointer versions, and enables ordering of any object using three-way comparison (https://en.wikipedia.org/wiki/Three-way_comparison) with a given `func(T, T) int` function, or by implementing the generic interface: `func (T) Compare(T) int`. Supported tasks: * [x] `Sort` / `SortStable` - sort a slice. * [x] `Search` - binary search for a value in a slice. * [x] `MinMax` - get indices of minimal and maximal values of a slice. * [x] `Is` - get a comparable object for more readable code. * [x] `Select` - get the K'th greatest value of a slice. * [x] `IsSorted` / `IsStrictSorted` - check if a slice is sorted. Order between values can be more forgiving than strict comparison. This library allows sensible type conversions. A type `U` can be used in an order function of type `T` in the following cases: * `U` is a pointer (or pointer chain) to a `T`. * `T` is a pointer (or pointer chain) to a `U`. * `T` and `U` are of the same kind. * `T` and `U` are of the same number kind group (int?, uint?, float?, complex?) and `U`'s bit count is less than or equal to `T`'s. * `U` and `T` are assignable structs. Using this library might be less type safe - because of the interface-based API - and less efficient - because of the use of reflection. On the other hand, this library reduces the chance of errors by providing well-tested and more readable code. See below how some order tasks can be translated to be used by this library. A simple example that shows how to use the order library with different basic types follows. A type may implement a `func (t T) Compare(other T) int` function. In this case it can just be used with the order package functions. An example of ordering a struct with multiple fields with different priorities.
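A hedged sketch based on the task list above (the import path and exact function signatures are assumptions):

    package main

    import (
        "fmt"

        "github.com/posener/order" // import path is an assumption
    )

    func main() {
        names := []string{"banana", "apple", "cherry"}

        // Sort works out of the box for primitive types.
        order.Sort(names)
        fmt.Println(names) // [apple banana cherry]

        // Is gives a comparable object for more readable code
        // (method name is an assumption).
        fmt.Println(order.Is("apple").Less("banana")) // true
    }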
Package journal implements WAL-like append-only journals. A journal is split into segments; the last segment is the one being written to. Intended use cases: Features: Suitable for a large number of very short records. Per-record overhead can be as low as 2 bytes. Suitable for very large records, too. (In the future, it will be possible to write records in chunks.) Fault-resistant. Self-healing. Verifies the checksums and truncates corrupted data when opening the journal. Performant. Automatically rotates the files when they reach a certain size. TODO: Trigger rotation based on time (say, each day gets a new segment); basically limit how old in-progress segments can be. Allow rotating a file without writing a new record. (Otherwise rarely-used journals will never get archived.) Give work-in-progress files a prefixed name (W*). Auto-commit every N seconds, after K bytes, after M records. Option for millisecond timestamp precision? Reading API. (Search based on time and record ordinals.) Segment files: We always set bit 0 of commit checksums, and we use size*2 when encoding records; so bit 0 of the first byte of an item indicates whether it's a record or a commit. Timestamps are 32-bit Unix times with 1-second precision. (The rationale is that the primary use of timestamps is to search logs by time, and that does not require higher precision. For high-frequency logs, with 1-second precision, timestamp deltas will typically fit within 1 byte.)
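For illustration, how the bit-0 convention described above distinguishes the two item kinds (this is not the package's actual encoder):

    // Records store size*2 (so bit 0 is always clear), while commit
    // checksums always have bit 0 set; the first byte of an item is
    // therefore enough to tell them apart.
    func encodeRecordSize(size uint64) uint64 { return size * 2 }

    func commitChecksum(sum uint64) uint64 { return sum | 1 }

    func isCommit(firstByte byte) bool { return firstByte&1 == 1 }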
Package turtle is a library for working with emojis. The API can be used to retrieve emojis by name, category or keyword. You can also search emojis if you do not know the exact name of an emoji.
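A hedged sketch (the import path and the Emojis/Search names are assumptions based on the description above):

    package main

    import (
        "fmt"

        "github.com/hackebrot/turtle" // import path is an assumption
    )

    func main() {
        // Retrieve an emoji by name.
        if e, ok := turtle.Emojis["turtle"]; ok {
            fmt.Println(e.Char)
        }

        // Search when you don't know the exact name.
        for _, e := range turtle.Search("comput") {
            fmt.Println(e.Name, e.Char)
        }
    }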
openfigi: a client for the OpenFIGI API. There are 3 types of queries: Search, Filter, and Mapping. Instructions: Construct a builder. - Search and Filter use BaseItemBuilder, then construct a BaseItem. - Mapping uses MappingItemBuilder, then construct a MappingItem. A MappingRequest is a []MappingItem. Set the properties through setters (".Set[...](...)"). Build the item: BaseItemBuilder.Build, MappingItemBuilder.Build. The package will validate the content of the item, reducing bad API calls. [optional] Set an API key with SetAPIKey. Use the client to make the request. - BaseItem.Search and BaseItem.Filter return a SearchResponse or FilterResponse. - MappingRequest.Fetch returns []SingleMappingResponse. - SearchResponse.Next and FilterResponse.Next fetch the next page. A sketch of the mapping flow follows.
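A hedged sketch of the mapping flow (builder, Build and Fetch come from the steps above; the setter names are assumptions; assuming fmt and log are imported):

    // Build a mapping item; Build validates it, reducing bad API calls.
    item, err := openfigi.NewMappingItemBuilder().
        SetIDType("TICKER"). // setter names are assumptions
        SetIDValue("IBM").
        Build()
    if err != nil {
        log.Fatal(err)
    }

    // A MappingRequest is a []MappingItem; Fetch makes the API call.
    request := openfigi.MappingRequest{item}
    responses, err := request.Fetch()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(responses)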
Package splunksearchapireceiver contains the Splunk Search API receiver.
Package backupsearch provides the API client, operations, and parameter types for AWS Backup Search. Backup Search is the recovery point and item level search for Backup. For additional information, see: Backup API Reference Backup Developer Guide
Package freegeoip provides an API for searching the geolocation of IP addresses. It uses a database that can be either a local file or a remote resource from a URL. Local databases are monitored by fsnotify and reloaded when the file is either updated or overwritten. Remote databases are automatically downloaded and updated in the background so you can focus on using the API and not managing the database.
Package resourcegroupstaggingapi provides the client and types for making API requests to AWS Resource Groups Tagging API. This guide describes the API operations for resource groups tagging. A tag is a label that you assign to an AWS resource. A tag consists of a key and a value, both of which you define. For example, if you have two Amazon EC2 instances, you might assign both a tag key of "Stack." But the value of "Stack" might be "Testing" for one and "Production" for the other. Tagging can help you organize your resources and enables you to simplify resource management, access management and cost allocation. For more information about tagging, see Working with Tag Editor (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/tag-editor.html) and Working with Resource Groups (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/resource-groups.html). For more information about permissions you need to use the resource groups tagging APIs, see Obtaining Permissions for Resource Groups (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/obtaining-permissions-for-resource-groups.html) and Obtaining Permissions for Tagging (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/obtaining-permissions-for-tagging.html). You can use the resource groups tagging APIs to complete the following tasks: Tag and untag supported resources located in the specified region for the AWS account Use tag-based filters to search for resources located in the specified region for the AWS account List all existing tag keys in the specified region for the AWS account List all existing values for the specified key in the specified region for the AWS account Not all resources can have tags. For a list of resources that you can tag, see Supported Resources (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/supported-resources.html) in the AWS Resource Groups and Tag Editor User Guide. To make full use of the resource groups tagging APIs, you might need additional IAM permissions, including permission to access the resources of individual services as well as permission to view and apply tags to those resources. For more information, see Obtaining Permissions for Tagging (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/obtaining-permissions-for-tagging.html) in the AWS Resource Groups and Tag Editor User Guide. See https://docs.aws.amazon.com/goto/WebAPI/resourcegroupstaggingapi-2017-01-26 for more information on this service. See resourcegroupstaggingapi package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/resourcegroupstaggingapi/ To use the AWS Resource Groups Tagging API with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Resource Groups Tagging API client ResourceGroupsTaggingAPI for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/resourcegroupstaggingapi/#New
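A short sketch of a tag-based resource search using the SDK, following aws-sdk-go (v1) conventions:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi"
    )

    func main() {
        sess := session.Must(session.NewSession())
        svc := resourcegroupstaggingapi.New(sess)

        // Find resources tagged Stack=Testing in the session's region.
        out, err := svc.GetResources(&resourcegroupstaggingapi.GetResourcesInput{
            TagFilters: []*resourcegroupstaggingapi.TagFilter{{
                Key:    aws.String("Stack"),
                Values: []*string{aws.String("Testing")},
            }},
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, m := range out.ResourceTagMappingList {
            fmt.Println(aws.StringValue(m.ResourceARN))
        }
    }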
Package aw is a utility library/framework for Alfred 3 workflows https://www.alfredapp.com/ It provides APIs for interacting with Alfred (e.g. Script Filter feedback) and the workflow environment (variables, caches, settings). NOTE: AwGo is currently in development. The API *will* change and should not be considered stable until v1.0. Until then, vendoring AwGo (e.g. with dep or vgo) is strongly recommended. As of AwGo 0.14, all applicable features of Alfred 3.6 are supported. The main features are: Typically, you'd call your program's main entry point via Run(). This way, the library will rescue any panic, log the stack trace and show an error message to the user in Alfred. In the Script box (Language = "/bin/bash"): To generate results for Alfred to show in a Script Filter, use the feedback API of Workflow: You can set workflow variables (via feedback) with Workflow.Var, Item.Var and Modifier.Var. See Workflow.SendFeedback for more documentation. Alfred requires a different JSON format if you wish to set workflow variables. Use the ArgVars (named for its equivalent element in Alfred) struct to generate output from Run Script actions. Be sure to set TextErrors to true to prevent Workflow from generating Alfred JSON if it catches a panic: See ArgVars for more information. New() creates a *Workflow using the default values and workflow settings read from environment variables set by Alfred. You can change defaults by passing one or more Options to New(). If you do not want to use Alfred's environment variables, or they aren't set (i.e. you're not running the code in Alfred), you must pass an Env as the first Option to New() using CustomEnv(). A Workflow can be re-configured later using its Configure() method. Check out the _examples/ subdirectory for some simple, but complete, workflows which you can copy to get started. See the documentation for Option for more information on configuring a Workflow. AwGo can filter Script Filter feedback using a Sublime Text-like fuzzy matching algorithm. Workflow.Filter() sorts feedback Items against the provided query, removing those that do not match. Sorting is performed by subpackage fuzzy via the fuzzy.Sortable interface. See _examples/fuzzy for a basic demonstration. See _examples/bookmarks for a demonstration of implementing fuzzy.Sortable on your own structs and customising the fuzzy sort settings. AwGo automatically configures the default log package to write to STDERR (Alfred's debugger) and a log file in the workflow's cache directory. The log file is necessary because background processes aren't connected to Alfred, so their output is only visible in the log. It is rotated when it exceeds 1 MiB in size. One previous log is kept. AwGo detects when Alfred's debugger is open (Workflow.Debug() returns true) and in this case prepends filename:linenumber: to log messages. The Config struct (which is included in Workflow as Workflow.Config) provides an interface to the workflow's settings from the Workflow Environment Variables panel. https://www.alfredapp.com/help/workflows/advanced/variables/#environment Alfred exports these settings as environment variables, and you can read them ad-hoc with the Config.Get*() methods, and save values back to Alfred with Config.Set(). 
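A minimal Script Filter sketch using the feedback API described above:

    package main

    import aw "github.com/deanishe/awgo"

    var wf *aw.Workflow

    func run() {
        // Create a feedback Item and send the results to Alfred.
        wf.NewItem("Hello, Alfred!").
            Subtitle("An AwGo feedback item").
            Arg("hello").
            Valid(true)
        wf.SendFeedback()
    }

    func main() {
        wf = aw.New()
        // Run() rescues panics, logs the stack trace and shows the
        // error to the user in Alfred.
        wf.Run(run)
    }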
Using Config.To() and Config.From(), you can "bind" your own structs to the settings in Alfred: And to save a struct's fields to the workflow's settings in Alfred: See the documentation for Config.To and Config.From for more information, and _examples/settings for a demo workflow based on the API. The Alfred struct provides methods for the rest of Alfred's AppleScript API. Amongst other things, you can use it to tell Alfred to open, to search for a query, or to browse/action files & directories. See documentation of the Alfred struct for more information. AwGo provides a basic, but useful, API for loading and saving data. In addition to reading/writing bytes and marshalling/unmarshalling to/from JSON, the API can auto-refresh expired cache data. See Cache and Session for the API documentation. Workflow has three caches tied to different directories: These all share the same API. The difference is in when the data go away. Data saved with Session are deleted after the user closes Alfred or starts using a different workflow. The Cache directory is in a system cache directory, so may be deleted by the system or "System Maintenance" tools. The Data directory lives with Alfred's application data and would not normally be deleted. Subpackage util provides several functions for running script files and snippets of AppleScript/JavaScript code. See util for documentation and examples. AwGo offers a simple API to start/stop background processes via Workflow's RunInBackground(), IsRunning() and Kill() methods. This is useful for running checks for updates and other jobs that hit the network or take a significant amount of time to complete, allowing you to keep your Script Filters extremely responsive. See _examples/update and _examples/workflows for demonstrations of this API.
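A hedged sketch of the background-jobs API, reusing the wf variable from the sketch above (method names are taken from the description; exact signatures may differ; assuming os and os/exec are imported):

    // Start a long-running update check without blocking the
    // Script Filter, keeping it responsive.
    if !wf.IsRunning("update") {
        cmd := exec.Command(os.Args[0], "-update")
        if err := wf.RunInBackground("update", cmd); err != nil {
            wf.FatalError(err)
        }
    }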
Package gophersauce is an API wrapper for the SauceNAO.com reverse image search engine. Initializing a client (without additional options): Initializing a client with options: Any of the options can be omitted. By default, MaxResults will be 6 and APIKey will be an empty string. You can also change these properties after instantiating the client: There are three ways in which you can consume the SauceNAO API: URL, file, and reader. Reverse searching an image using a URL: Reverse searching an image using a file path: Reverse searching an image using a reader: API responses have helpful methods, such as First(), which returns the first result (likely the one most similar to your image), if any: Some of the response fields are, by default, declared as interfaces because of the way the SauceNAO API works. You will have to either check the type of the field yourself and parse it that way, or use a helper function such as GetUserID() or GetAccountType() (on type SaucenaoResponse) or GetCreatorString() (on type SearchResult). Example: This will not work: This will work:
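A hedged sketch of a URL-based reverse search (apart from First(), MaxResults and APIKey, which are named above, the import path and the constructor and method names here are assumptions):

    package main

    import (
        "fmt"
        "log"

        "github.com/nint8835/gophersauce" // import path is an assumption
    )

    func main() {
        client, err := gophersauce.NewClient(&gophersauce.Settings{
            APIKey:     "", // optional; empty string by default
            MaxResults: 6,  // the default
        })
        if err != nil {
            log.Fatal(err)
        }

        // Reverse search by URL; file and reader variants also exist.
        resp, err := client.FromURL("https://example.com/image.jpg")
        if err != nil {
            log.Fatal(err)
        }
        first := resp.First() // likely the most similar result
        fmt.Printf("%+v\n", first)
    }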
Package geoplaces provides the API client, operations, and parameter types for Amazon Location Service Places V2. Places V2 provides location search and geocoding capabilities for your applications, offering global coverage with rich, detailed information. Key features include: Forward and reverse geocoding for addresses and coordinates Comprehensive place searches with detailed information, including: Business names and addresses Contact information Hours of operation POI (Points of Interest) categories Food types for restaurants Chain affiliation for relevant businesses Global data coverage with a wide range of POI categories Regular data updates to ensure accuracy and relevance
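A hedged sketch of a forward geocode following AWS SDK for Go v2 conventions (the operation and field names are assumptions based on the feature list above):

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/geoplaces"
    )

    func main() {
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := geoplaces.NewFromConfig(cfg)

        // Forward geocode an address (operation name is an assumption).
        out, err := client.Geocode(context.TODO(), &geoplaces.GeocodeInput{
            QueryText: aws.String("1600 Pennsylvania Ave NW, Washington, DC"),
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Println(out)
    }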