Package gohg is a Go client library for using the Mercurial dvcs via its Command Server. For Mercurial see: http://mercurial.selenic.com. For the Hg Command Server see: http://mercurial.selenic.com/wiki/CommandServer.

▪ Mercurial

For Mercurial any version starting from 1.9 should be fine, as that is the version where the Command Server was introduced. If you send wrong options to it through gohg, or commands or options not yet supported (or obsolete) in your Hg version, you'll simply get back an error from Hg itself, as gohg does not check them. On the other hand, gohg allows issuing new commands not yet implemented by gohg; see further.

▪ Go

gohg is currently developed with Go1.2.1. Though I started with the Go1.0 versions, I only remember having had to change one or two minor things when moving to Go1.1.1. Updating to Go1.1.2 required no changes at all. I did have an issue with Go1.2, on Windows only, causing some tests using os/exec.Command to fail. I'll have to look into that further, to find out if I should report a bug.

▪ Platform

I'm developing and testing both on Windows 7 and Ubuntu 12.04/13.04/13.10, but I suppose it should work on any other platform that supports Hg and Go. Only Go and its standard library are required. And Mercurial should be installed, of course.

At the command line, type: to have gohg available in your GOPATH. Start with importing the gohg package. Examples: All interaction with the Mercurial Command Server (Hg CS from now on) happens through the HgClient type, of which you have to create an instance: Then you can connect to the Hg CS as follows:

1. The Hg executable: The first parameter is the Mercurial command to use (which is 'hg'). You can leave it blank to let the gohg tool use the default Mercurial command on the system. Having a parameter for the Hg command allows for using a different Hg version, for testing purposes for instance.

2. The repository path: The second parameter is the path to the repository you want to work on. You can leave it blank to have gohg use the repository it can find for the current path you are running the program in (searching upward in the folder tree if necessary).

3. The config for the session: The third parameter allows providing extra configuration for the session. This is not implemented yet.

4. Should gohg create a new repo before connecting? This fourth parameter allows you to indicate that you want gohg to first create a new Mercurial repo if it does not already exist in the path given by the second parameter. See the documentation for more detailed info.

5. The return value: The HgClient.Connect() method returns an error, so you can check whether the connection succeeded and whether it is safe to go on.

Once the work is done, you can disconnect from the Hg CS using a typical Go idiom (see the sketch after this section). The gohg tool sets some environment variables for the Hg CS session, to ensure its correct working.

Once we have a connection to a Hg CS we can do some work with the repository. This is done with commands, and gohg offers 3 ways to use them:

1. The command methods of the HgClient type.

2. The HgCmd type.

3. The ExecCmd() method of the HgClient type.

Each of these has its own reason for existing. Commands return a byte slice containing the resulting data, and possibly an error. But there are a few exceptions (see the api docs).
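To make the connection sequence concrete, here is a minimal sketch based on the five points above. The import path, the constructor name and the exact argument lists of Connect() and Identify() are assumptions on my part; see examples/example1.go and the api docs for the authoritative forms.

	package main

	import (
		"log"

		"bitbucket.org/gohg/gohg" // import path assumed
	)

	func main() {
		// All interaction goes through an HgClient instance
		// (constructor name assumed).
		hc := gohg.NewHgClient()

		// The four Connect parameters described above: Hg executable
		// ("" = system default), repo path ("" = search upward from the
		// current path), session config (not implemented yet), and the
		// create-new-repo flag.
		if err := hc.Connect("", "", nil, false); err != nil {
			log.Fatal(err)
		}
		// The typical Go idiom for disconnecting once the work is done.
		defer hc.Disconnect()

		// A convenience command method; most return ([]byte, error).
		res, err := hc.Identify(nil, nil)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("identify: %s", res)
	}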
If a command fails, the returned error contains 5 elements: 1) the name of the internal routine where the error was trapped, 2) the name of the HgClient command that was run, 3) the return code from Mercurial, 4) the full command that was passed to the Hg CS, and 5) any error message returned by Mercurial. So the command could return something like the following in the err variable when it fails: The command aliases (like 'id' for 'identify') are not implemented. But there are examples in identify.go and showconfig.go of how you can easily implement them yourself.

The command methods of the HgClient type are the easiest way, a kind of convenience, and the most readable too. A con is that, as a user, you cannot know the exact command that was passed to Hg without some extra mechanics. Each command method has the same name as the corresponding Hg command, except that it starts with a capital letter, of course. An example (also see examples/example1.go): Note that these methods all use the HgCmd type internally. As such they are convenience wrappers around that type. You could also consider them a kind of syntactic sugar. If you just want to issue a command, nothing more, they are the way to go. The only way to obtain the command string sent to Hg when using these command methods is by calling the HgClient.ShowLastCmd() method afterwards, before issuing any other commands:

Using the HgCmd type is the standard way, so to speak. It is a struct that you can instantiate for any command, and for which you can set the elements Name, Options and Params (see the api docs for more details). It allows for building the command step by step, and also for querying the exact command that will be sent to the Hg CS. A pro of this method is that it allows you to obtain the exact command string that will be passed to Mercurial before it is performed, by calling the CmdLine() method of HgCmd. This can be handy for logging, or for showing feedback to the user in a GUI program. (You could even call CmdLine() several times, and show the building of the command step by step.) An example (also see examples/example2.go): As you can see, this way requires some more coding. The source code will also show you that the HgCmd type is indeed used as the underlying type for the convenience HgClient commands, in all the New<hg-command>Cmd() constructors.

The HgClient type has an extra method, ExecCmd(), allowing you to pass a fully custom-built command to Hg. It accepts a string slice that is supposed to contain all the elements of the complete command, as you would type it at the command line. It can be a convenient way to perform commands that are not yet implemented in gohg, or to make use of extensions to Hg (for which gohg offers no support (yet?)). An example (also see examples/example3.go, and the sketch after this section):

Just like on the command line, options come before parameters. Options to commands use the same name as the long form of the Mercurial option they represent, but start with the necessary capital letter. An option's value can be of type bool, int or string. You just pass the value as the parameter to the option (= type conversion of the value to the option type). You can pass any number of options, as the elements of a slice. Options can occur more than once if appropriate (see the ones marked with '[+]' in the Mercurial help). Parameters are used to provide any arguments for a command that are not options. They are passed in as a string or a slice of strings, depending on the command. These parameters typically contain revisions, paths or filenames and so on.
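As an illustration of the third way, here is a hedged sketch of an ExecCmd call; it assumes a connected client as in the earlier sketch (plus an fmt import), and the log command elements are just an example.

	// showLog passes a fully custom-built command to Hg via ExecCmd.
	// hc is a connected *gohg.HgClient as in the previous sketch.
	func showLog(hc *gohg.HgClient) error {
		// Options come before parameters, exactly as on the command line.
		out, err := hc.ExecCmd([]string{"log", "--limit", "2"})
		if err != nil {
			return err // carries the five error elements described above
		}
		fmt.Printf("%s", out)
		return nil
	}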
The gohg tool only checks whether the options the caller gives are valid for that command. It does not check whether the values are valid for the combination of that command and that option, as that is done by Mercurial; no need to implement that again. If an option is not valid for a command, it is silently ignored, so it is not passed to the Hg CS. A few options are not implemented, as they seemed not relevant for use with this tool (for instance the global --color option, or the --print0 option for status).

The gohg tool only returns errors, with as clear a message as possible, and never calls log.Fatal() nor panics, even if those may seem appropriate. It leaves that up to the caller: it's not up to this library to decide whether to do a retry or to abort the complete application.

▪ The following config settings are fixed in the code (at least for now):

▪ As mentioned earlier, passing config info is not implemented yet.

▪ Currently the only support for extensions to Mercurial is through the ExecCmd method.

▪ If multiple Hg CSs are used against the same repo, it is up to Mercurial to handle this correctly.

▪ Mercurial is always run in English. Internationalization is not necessary here, as the conversation with Hg is internal to the application.

Please note that this tool is still in its very early stages. If you have suggestions or requests, or experience any problems, please use the issue tracker at https://bitbucket.org/gohg/gohg/issues?status=new&status=open. Or you could send a patch or a pull request.

Copyright 2012-2014, The gohg Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE.md file.
Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails.

The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called Envelope, provides an intuitive way to interact with a MIME message.

Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria.

ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts, are also available.

Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values.

enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error.

Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments.

enmime is open source software released under the MIT License. The latest version can be found at https://github.com/zond/enmime
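A minimal sketch of the Envelope layer described above; the GetHeader, Text and Errors names follow this description, but check the api docs for the exact types.

	package main

	import (
		"fmt"
		"os"

		"github.com/zond/enmime"
	)

	func main() {
		f, err := os.Open("message.eml")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		// ReadEnvelope builds the Part tree and sorts it into an Envelope.
		env, err := enmime.ReadEnvelope(f)
		if err != nil {
			fmt.Println("hard error:", err) // parsing was not possible
			return
		}
		fmt.Println("Subject:", env.GetHeader("Subject")) // RFC 2047 decoded
		fmt.Println(env.Text)                             // plain text portion
		for _, e := range env.Errors {
			fmt.Println("parse issue:", e) // non-fatal problems
		}
	}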
Use hypothesis.NewClient to create a hypothesis.Client that searches for annotations. NewClient's token param is optional. If it's set to your Hypothesis token, you'll search both public and private annotations. If it's the empty string, you'll only search public annotations. NewClient's hypothesis.SearchParams is likewise optional. If empty, your search will be unfiltered. NewClient's maxSearchResults param determines how many annotations to fetch. If 0, the limit defaults to 400.

To search for the most recent 10 public annotations: To search for the most recent 10 public or private annotations, if your token is in an env var called H_TOKEN: To search for the most recent 10 annotations in a private group whose id is in an env var called H_GROUP: To search for at most 10 public or private annotations from user 'judell', with the tag 'social media':

The Hypothesis search API returns at most 200 annotations. hypothesis.Search encapsulates that API call, and returns an array of hypothesis.Row. Each Row represents one annotation. To fetch more than 200 annotations, use hypothesis.SearchAll. This test should find 2000 recent public or private annotations. For more search examples, see the Steampipe Hypothesis plugin.

If you authenticate with your token you can call hypothesis.Client.GetProfile to list your private groups. Here, 'profile' is a hypothesis.Profile which includes hypothesis.Profile.Groups, an array of structs that include the names and ids of your private groups.

An annotation may include an array of hypothesis.Selector. These structures define how the annotation "anchors" to the segment it refers to. See anchoring.
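For instance, the search for the most recent 10 public or private annotations (token in H_TOKEN) might look like this sketch; the import path and the exact signatures of NewClient and SearchAll are assumptions, so check the package docs.

	package main

	import (
		"fmt"
		"os"

		"github.com/judell/hypothesis-go" // import path assumed
	)

	func main() {
		// Empty token = public annotations only; empty SearchParams =
		// unfiltered; maxSearchResults 10 = fetch the 10 most recent.
		client := hypothesis.NewClient(
			os.Getenv("H_TOKEN"),
			hypothesis.SearchParams{},
			10,
		)

		rows, err := client.SearchAll()
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, row := range rows {
			fmt.Println(row.URI) // each Row is one annotation; field name assumed
		}
	}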
Package dataset includes the operations needed for processing collections of JSON documents and their attachments.

Authors R. S. Doiel, <rsdoiel@library.caltech.edu> and Tom Morrel, <tmorrell@library.caltech.edu>

Copyright (c) 2021, Caltech All rights not granted herein are expressly reserved by Caltech.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Package dataset provides a common approach for storing JSON object documents on local disc.
It is intended as a single-user system for intermediate processing of JSON content for analysis or batch processing. It is not a database management system (if you need a JSON database system I would suggest looking at CouchDB, Mongo and Redis as a starting point).

The approach dataset takes is to store JSON documents in a pairtree structure under the collection folder. The keys are the JSON document names. JSON documents (and possibly their attachments) are then stored based on that assignment in the pairtree. Conversely, the collection.json document is used to find and retrieve documents from the collection. The layout of the metadata is as follows:

+ Collection - a directory

A key feature of dataset is to be POSIX shell friendly. This has led to storing the JSON documents in a directory structure that standard POSIX tooling can traverse. It has also meant that the JSON documents themselves remain on "disc" as plain text. This has facilitated integration with many other applications, programming languages and systems.

Attachments are non-JSON documents explicitly "attached" that share the same pairtree path but are placed in a subdirectory called "_". If the document name is "Jane.Doe.json" and the attachment is photo.jpg, the JSON document is "pairtree/Ja/ne/.D/oe/Jane.Doe.json" and the photo is in "pairtree/Ja/ne/.D/oe/_/photo.jpg".

Additional operations besides storing and reading JSON documents are also supported. These include creating lists (arrays) of JSON documents from a list of keys, listing keys in the collection, counting documents in the collection, and indexing and searching by indexes.

The primary use case driving the development of dataset is harvesting API content for library systems (e.g. EPrints, Invenio, ArchivesSpace, ORCID, CrossRef, OCLC). The harvesting needed to be done in such a way as to leverage existing POSIX tooling (e.g. grep, sed, etc.) for processing and analysis.

Initial use case: Caltech Library has many repository, catalog and record management systems (e.g. EPrints, Invenio, ArchivesSpace, Islandora). It is common practice to harvest data from these systems for analysis or processing. Harvested records typically come in XML or JSON format. JSON has proven a flexible way of working with the data, and in our more modern tools it is the common format we use to move data around. We needed a way to standardize how we stored these JSON records for intermediate processing, to allow us to use the growing ecosystem of JSON-related tooling available under POSIX/Unix-compatible systems.
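To make the pairtree layout concrete, here is a small illustrative helper (not part of the dataset API) that derives the storage paths for a key the way the example above describes:

	package main

	import (
		"fmt"
		"path"
	)

	// pairtreePath splits a key into two-character pairs to build the
	// directory path, e.g. "Jane.Doe" -> "pairtree/Ja/ne/.D/oe".
	func pairtreePath(key string) string {
		parts := []string{"pairtree"}
		for i := 0; i < len(key); i += 2 {
			end := i + 2
			if end > len(key) {
				end = len(key)
			}
			parts = append(parts, key[i:end])
		}
		return path.Join(parts...)
	}

	func main() {
		dir := pairtreePath("Jane.Doe")
		fmt.Println(path.Join(dir, "Jane.Doe.json"))  // the JSON document
		fmt.Println(path.Join(dir, "_", "photo.jpg")) // an attachment
	}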
Package langchaingo provides a Go implementation of LangChain, a framework for building applications with Large Language Models (LLMs) through composability. LangchainGo enables developers to create powerful AI-driven applications by providing a unified interface to various LLM providers, vector databases, and other AI services. The framework emphasizes modularity, extensibility, and ease of use. The framework is organized around several key packages: Basic text generation with OpenAI: Creating embeddings and using vector search: Building a chain for question answering: LangchainGo supports numerous providers: LLM Providers: Embedding Providers: Vector Stores: Create agents that can use tools to accomplish complex tasks: Maintain conversation context across multiple interactions: Streaming responses: Function calling: Multi-modal inputs (text and images): Most providers require API keys set as environment variables: LangchainGo provides standardized error handling: LangchainGo includes comprehensive testing utilities including HTTP record/replay for internal tests. The httprr package provides deterministic testing of HTTP interactions: See the examples/ directory for complete working examples including: LangchainGo welcomes contributions! The project follows Go best practices and includes comprehensive testing, linting, and documentation standards. See CONTRIBUTING.md for detailed guidelines.
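For instance, the "basic text generation with OpenAI" flow mentioned above can look like the following sketch (API surface as found in recent langchaingo versions; see the examples/ directory for the current form):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/tmc/langchaingo/llms"
		"github.com/tmc/langchaingo/llms/openai"
	)

	func main() {
		ctx := context.Background()

		// Reads the OPENAI_API_KEY environment variable.
		llm, err := openai.New()
		if err != nil {
			log.Fatal(err)
		}

		// Convenience helper for one-shot, single-prompt generation.
		completion, err := llms.GenerateFromSinglePrompt(ctx, llm, "Why is the sky blue?")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(completion)
	}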
Package s3vectors provides the API client, operations, and parameter types for Amazon S3 Vectors. Amazon S3 vector buckets are a bucket type for storing and searching vectors with sub-second search times. They provide dedicated API operations for interacting with vectors to perform similarity search. Within a vector bucket, you use a vector index to organize and logically group your vector data. When you make a write or read request, you direct it to a single vector index. You store your vector data as vectors. A vector contains a key (a name that you assign), a multi-dimensional vector, and, optionally, metadata that describes the vector. The key uniquely identifies the vector in a vector index.
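A minimal sketch of constructing the client with the standard AWS SDK for Go v2 pattern; specific vector operations are omitted here, so consult the service reference for the operation names and their inputs.

	package main

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/s3vectors"
	)

	func main() {
		// Default credential and region resolution chain.
		cfg, err := config.LoadDefaultConfig(context.Background())
		if err != nil {
			log.Fatal(err)
		}

		// The client exposes the vector-bucket and vector-index operations;
		// write and read requests are directed at a single vector index.
		client := s3vectors.NewFromConfig(cfg)
		_ = client
	}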
Package fm provides a pure Go wrapper around macOS Foundation Models framework. Foundation Models is Apple's on-device large language model framework introduced in macOS 26 Tahoe, providing privacy-focused AI capabilities without requiring internet connectivity. • Streaming-first text generation with LanguageModelSession • Simulated real-time response streaming with word/sentence chunks • Dynamic tool calling with custom Go tools and input validation • Structured output generation with JSON formatting • Context window management (4096 token limit) • Context cancellation and timeout support • Session lifecycle management with proper memory handling • System instructions support • Generation options for temperature, max tokens, and other parameters • Structured logging with Go slog integration for comprehensive debugging • macOS 26 Tahoe or later • Apple Intelligence enabled • Compatible Apple Silicon device Create a session and generate text: Control output with GenerationOptions: Create a session with specific behavior: Foundation Models has a strict 4096 token context window. Monitor usage: Define custom tools that the model can call: Add validation to your tools for better error handling: Register and use tools: Generate structured JSON responses: Cancel long-running requests with context support: Generate responses with simulated real-time streaming output: Note: Current streaming implementation is simulated (breaks complete response into chunks). Native streaming will be implemented when Foundation Models provides streaming APIs. Check if Foundation Models is available: The package provides comprehensive error handling: Always release sessions to prevent memory leaks: • Foundation Models runs entirely on-device • No internet connection required • Processing time depends on prompt complexity and device capabilities • Context window is limited to 4096 tokens • Token estimation is approximate (4 chars per token) • Use context cancellation for long-running requests • Input validation prevents runtime errors and improves performance The package is not thread-safe. Use appropriate synchronization when accessing sessions from multiple goroutines. Context cancellation is goroutine-safe and can be used from any goroutine. This package automatically manages the Swift shim library (libFMShim.dylib) that bridges Foundation Models APIs to C functions callable from Go via purego. The library search strategy: 1. Look for existing libFMShim.dylib in current directory and common paths 2. If not found, automatically extract embedded library to temp directory 3. Load the library and initialize the Foundation Models interface No manual setup required - the package is fully self-contained! • Foundation Models API is still evolving • Some advanced GenerationOptions may not be fully supported yet • Foundation Models tool invocation can be inconsistent due to safety restrictions • Context cancellation cannot interrupt actual model computation • Streaming is currently simulated (post-processing) - native streaming pending Apple API support • macOS 26 Tahoe only ✅ **What Works:** • Tool registration and parameter definition • Swift ↔ Go callback mechanism • Real data fetching (weather, calculations, etc.) 
• Error handling and validation • Structured logging with Go slog integration ⚠️ **Foundation Models Behavior:** • Tool calling works but can be inconsistent • Some queries may be blocked by safety guardrails • Success rate varies by tool complexity and phrasing The package provides comprehensive debug logging through Go's slog package: Debug logs include: • Session creation and configuration details • Tool registration and parameter validation • Request/response processing with timing • Context usage and memory management • Swift shim layer interaction details See LICENSE file for details. Package fm provides a pure Go wrapper around macOS Foundation Models framework using purego to call a Swift shim library that exports C functions. Foundation Models (macOS 26 Tahoe) provides on-device LLM capabilities including: - Text generation with LanguageModelSession - Streaming responses via delegates or async sequences - Tool calling with requestToolInvocation:with: - Structured outputs with LanguageModelRequestOptions IMPORTANT: Foundation Models has a strict 4096 token context window limit. This package automatically tracks context usage and validates requests to prevent exceeding the limit. Use GetContextSize(), IsContextNearLimit(), and RefreshSession() to manage long conversations. This implementation uses a Swift shim (libFMShim.dylib) that exports C functions using @_cdecl to bridge Swift async methods to synchronous C calls.
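To ground the session workflow described above, here is a hedged sketch. GetContextSize and IsContextNearLimit are named in this documentation; the import path and the NewSession, Respond and Release names are assumptions for illustration only, so consult the package docs for the real surface.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"example.com/fm" // placeholder import path
	)

	func main() {
		// Hypothetical constructor; the docs describe creating a session
		// with optional system instructions.
		session, err := fm.NewSession("You are a concise assistant.")
		if err != nil {
			log.Fatal(err)
		}
		// The docs say to always release sessions to avoid memory leaks.
		defer session.Release()

		// Context cancellation/timeout is supported for long requests.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		// Hypothetical generation call.
		resp, err := session.Respond(ctx, "Summarize the plot of Hamlet.")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(resp)

		// Stay inside the strict 4096-token window (methods named above).
		if session.IsContextNearLimit() {
			fmt.Println("context used:", session.GetContextSize())
		}
	}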
Package foursquarego provides a Client for the Foursquare API. Here are some example requests. There is a parameters struct whenever there is more than one parameter. If there are strict options for a parameter, then there will be a struct for it, as seen in the search above. For authentication, just send either the Client Secret or the user's Access Token. If you send both to the client, it will send both to Foursquare. Foursquare expects that if you're making a request for a user you will send the Access Token. More information can be found on their auth page, https://developer.foursquare.com/docs/api/configuration/authentication
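A hedged sketch of a request; the import path, the NewClient parameter list, and the Venues.Details service call are assumptions based on the description above, so verify against the package docs.

	package main

	import (
		"fmt"
		"log"
		"net/http"

		"github.com/peppage/foursquarego" // import path assumed
	)

	func main() {
		// Per the docs above, send either the Client Secret or the user's
		// Access Token (the Access Token when acting for a user). The
		// exact parameter list here is an assumption.
		client := foursquarego.NewClient(http.DefaultClient, "foursquare",
			"CLIENT_ID", "CLIENT_SECRET", "" /* access token */)

		// A venue details request; the service/method name is assumed.
		venue, _, err := client.Venues.Details("57d1efb5498e018d15de8ba0")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(venue.Name)
	}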
Package marketplaceagreement provides the API client, operations, and parameter types for AWS Marketplace Agreement Service.

AWS Marketplace is a curated digital catalog that customers can use to find, buy, deploy, and manage third-party software, data, and services to build solutions and run their businesses. The AWS Marketplace Agreement Service provides an API interface that helps AWS Marketplace sellers manage their product-related agreements, including listing, searching, and filtering agreements.

To manage agreements in AWS Marketplace, you must ensure that your AWS Identity and Access Management (IAM) policies and roles are set up. The user must have the required policies/permissions that allow them to carry out the actions in AWS:

DescribeAgreement – Grants permission to users to obtain detailed metadata about any of their agreements.

GetAgreementTerms – Grants permission to users to obtain details about the terms of an agreement.

SearchAgreements – Grants permission to users to search through all their agreements.
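A hedged sketch of calling SearchAgreements with the SDK v2 client; the input/output field names (Catalog, AgreementViewSummaries) are assumptions here, so check the generated API reference.

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/marketplaceagreement"
	)

	func main() {
		ctx := context.Background()

		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := marketplaceagreement.NewFromConfig(cfg)

		// Requires the SearchAgreements IAM permission described above.
		out, err := client.SearchAgreements(ctx, &marketplaceagreement.SearchAgreementsInput{
			Catalog:    aws.String("AWSMarketplace"), // value assumed
			MaxResults: aws.Int32(10),
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, ag := range out.AgreementViewSummaries {
			fmt.Println(aws.ToString(ag.AgreementId))
		}
	}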
Package order enables easier ordering and comparison tasks. This package provides functionality to easily define and apply order on values. It works out of the box for most primitive types and their pointer versions, and enables ordering of any object using (three-way comparison) https://en.wikipedia.org/wiki/Three-way_comparison with a given `func(T, T) int` function, or by implementing the generic interface: `func (T) Compare(T) int`.

Supported Tasks:

* [x] `Sort` / `SortStable` - sort a slice.

* [x] `Search` - binary search for a value in a slice.

* [x] `MinMax` - get indices of minimal and maximal values of a slice.

* [x] `Is` - get a comparable object for more readable code.

* [x] `Select` - get the K'th greatest value of a slice.

* [x] `IsSorted` / `IsStrictSorted` - check if a slice is sorted.

Order between values can be more forgiving than strict comparison. This library allows sensible type conversions. A type `U` can be used in an order function of type `T` in the following cases:

* `U` is a pointer (or pointers chain) to a `T`.

* `T` is a pointer (or pointers chain) to a `U`.

* `T` and `U` are of the same kind.

* `T` and `U` are of the same number kind group (int?, uint?, float?, complex?) and `U`'s bits number is less or equal to `T`'s bits number.

* `U` and `T` are assignable structs.

Using this library might be less type-safe - because of the interface-based API - and less efficient - because of the use of reflection. On the other hand, this library reduces the chance of errors by providing well-tested and more readable code. See below how some order tasks can be translated to be used by this library.

A simple example that shows how to use the order library with different basic types. A type may implement a `func (t T) Compare(other T) int` function; in this case it can just be used with the order package functions. An example of ordering a struct with multiple fields with different priorities (see the sketch after this section).
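A hedged sketch of the multiple-priorities case: the three-way comparison function is exactly what the package describes, while the `By(...).Sort(...)` chaining and import path are assumptions drawn from the task list above.

	package main

	import (
		"fmt"
		"strings"

		"github.com/posener/order" // import path assumed
	)

	type person struct {
		name string
		age  int
	}

	func main() {
		people := []person{{"bob", 30}, {"alice", 25}, {"carol", 30}}

		// Three-way comparison: order by age first, then by name.
		cmp := func(a, b person) int {
			if d := a.age - b.age; d != 0 {
				return d
			}
			return strings.Compare(a.name, b.name)
		}

		// By(...).Sort(...) is assumed from the Sort task listed above.
		order.By(cmp).Sort(people)
		fmt.Println(people) // [{alice 25} {bob 30} {carol 30}]
	}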
Package journal implements WAL-like append-only journals. A journal is split into segments; the last segment is the one being written to.

Intended use cases: Features: Suitable for a large number of very short records. Per-record overhead can be as low as 2 bytes. Suitable for very large records, too. (In the future, it will be possible to write records in chunks.) Fault-resistant. Self-healing: verifies the checksums and truncates corrupted data when opening the journal. Performant. Automatically rotates the files when they reach a certain size.

TODO:

- Trigger rotation based on time (say, each day gets a new segment); basically, limit how old in-progress segments can be.

- Allow rotating a file without writing a new record. (Otherwise rarely-used journals will never get archived.)

- Give the work-in-progress file a prefixed name (W*).

- Auto-commit every N seconds, after K bytes, after M records.

- Option for millisecond timestamp precision?

- Reading API. (Search based on time and record ordinals.)

Segment files: We always set bit 0 of commit checksums, and we use size*2 when encoding records; so bit 0 of the first byte of an item indicates whether it's a record or a commit (see the sketch after this section). Timestamps are 32-bit unix times and have 1 second precision. (The rationale is that the primary use of timestamps is to search logs by time, and that does not require a higher precision. For high-frequency logs, with 1-second precision, timestamp deltas will typically fit within 1 byte.)
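A small illustrative encoder (not the package's actual code) for the record-vs-commit discrimination just described, assuming both values are written as unsigned varints:

	package main

	import (
		"encoding/binary"
		"fmt"
	)

	// Record sizes are written as size*2, so bit 0 is always clear;
	// commit checksums have bit 0 forced on. The low bit of an item's
	// first byte therefore tells records and commits apart.

	func putRecordSize(buf []byte, size uint64) int {
		return binary.PutUvarint(buf, size*2) // bit 0 = 0
	}

	func putCommitChecksum(buf []byte, sum uint64) int {
		return binary.PutUvarint(buf, sum|1) // bit 0 = 1
	}

	func main() {
		buf := make([]byte, binary.MaxVarintLen64)

		putRecordSize(buf, 5)
		fmt.Printf("record item starts with 0x%02x (bit 0 = %d)\n", buf[0], buf[0]&1)

		putCommitChecksum(buf, 0xCAFEBABE)
		fmt.Printf("commit item starts with 0x%02x (bit 0 = %d)\n", buf[0], buf[0]&1)
	}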
Package turtle is a library for working with emojis. The API can be used to retrieve emojis by name, category or keyword. You can also search emojis if you do not know the name of an emoji.
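A brief sketch of both lookup styles; the Emojis map and Search function names are assumptions based on the description above.

	package main

	import (
		"fmt"

		"github.com/hackebrot/turtle" // import path assumed
	)

	func main() {
		// Retrieve by name (map access assumed).
		if e, ok := turtle.Emojis["turtle"]; ok {
			fmt.Println(e.Char)
		}

		// Search when you do not know the exact name.
		for _, e := range turtle.Search("computer") {
			fmt.Println(e.Name, e.Char)
		}
	}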
openfigi: a client for the OpenFIGI API. There are 3 types of queries: Mapping, Search and Filter.

Instructions:

1. Construct a builder. Search and Filter use BaseItemBuilder, then construct a BaseItem. Mapping uses MappingItemBuilder, then construct a MappingItem. A MappingRequest is a []MappingItem.

2. Set the properties through setters (".Set[...](...)").

3. Build the item: BaseItemBuilder.Build, MappingItemBuilder.Build. The package will validate the content of the item, reducing bad API calls.

4. [optional] Set an API key with SetAPIKey.

5. Use the client to make the request: BaseItem.Search and BaseItem.Filter return a SearchResponse or FilterResponse; MappingRequest.Fetch returns []SingleMappingResponse. Use SearchResponse.Next and FilterResponse.Next to fetch the next page.
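A hedged sketch of the Search flow following the five steps above; the import path, the constructor name and the setter shown are assumptions based on the ".Set[...](...)" convention.

	package main

	import (
		"fmt"
		"log"

		"example.com/openfigi" // placeholder import path
	)

	func main() {
		// Steps 1-3: construct the builder, set properties, build the
		// item (the package validates it before any API call is made).
		item, err := openfigi.NewBaseItemBuilder().
			SetQuery("IBM"). // setter name assumed
			Build()
		if err != nil {
			log.Fatal(err)
		}

		// Step 5: make the request.
		resp, err := item.Search()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(resp)

		// Paging: SearchResponse.Next fetches the next page.
		if next, err := resp.Next(); err == nil {
			fmt.Println(next)
		}
	}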
Package splunksearchapireceiver contains the Splunk Search API receiver.
Package backupsearch provides the API client, operations, and parameter types for AWS Backup Search. Backup Search provides recovery-point and item-level search for AWS Backup. For additional information, see the Backup API Reference and the Backup Developer Guide.
Package freegeoip provides an API for searching the geolocation of IP addresses. It uses a database that can be either a local file or a remote resource from a URL. Local databases are monitored by fsnotify and reloaded when the file is either updated or overwritten. Remote databases are automatically downloaded and updated in background so you can focus on using the API and not managing the database.
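A hedged sketch of the remote-database mode described above; OpenURL, MaxMindDB and the maxminddb-tagged result layout are assumptions, so verify against the package docs.

	package main

	import (
		"fmt"
		"log"
		"net"
		"time"

		"github.com/fiorix/freegeoip"
	)

	func main() {
		// Remote database: downloaded and kept up to date in background.
		// Open(path) would be the local-file counterpart (fsnotify-watched).
		db, err := freegeoip.OpenURL(freegeoip.MaxMindDB, 24*time.Hour, time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// The first download happens in the background; real code should
		// wait until the database is ready before looking up addresses.
		var result struct {
			Country struct {
				ISOCode string `maxminddb:"iso_code"`
			} `maxminddb:"country"`
		}
		if err := db.Lookup(net.ParseIP("8.8.8.8"), &result); err != nil {
			log.Fatal(err)
		}
		fmt.Println(result.Country.ISOCode)
	}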
Package resourcegroupstaggingapi provides the client and types for making API requests to AWS Resource Groups Tagging API. This guide describes the API operations for resource groups tagging.

A tag is a label that you assign to an AWS resource. A tag consists of a key and a value, both of which you define. For example, if you have two Amazon EC2 instances, you might assign both a tag with the key "Stack". But the value of "Stack" might be "Testing" for one and "Production" for the other. Tagging can help you organize your resources and enables you to simplify resource management, access management and cost allocation. For more information about tagging, see Working with Tag Editor (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/tag-editor.html) and Working with Resource Groups (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/resource-groups.html). For more information about permissions you need to use the resource groups tagging APIs, see Obtaining Permissions for Resource Groups (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/obtaining-permissions-for-resource-groups.html) and Obtaining Permissions for Tagging (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/obtaining-permissions-for-tagging.html).

You can use the resource groups tagging APIs to complete the following tasks: Tag and untag supported resources located in the specified region for the AWS account. Use tag-based filters to search for resources located in the specified region for the AWS account. List all existing tag keys in the specified region for the AWS account. List all existing values for the specified key in the specified region for the AWS account.

Not all resources can have tags. For a list of resources that you can tag, see Supported Resources (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/supported-resources.html) in the AWS Resource Groups and Tag Editor User Guide. To make full use of the resource groups tagging APIs, you might need additional IAM permissions, including permission to access the resources of individual services as well as permission to view and apply tags to those resources. For more information, see Obtaining Permissions for Tagging (http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/obtaining-permissions-for-tagging.html) in the AWS Resource Groups and Tag Editor User Guide.

See https://docs.aws.amazon.com/goto/WebAPI/resourcegroupstaggingapi-2017-01-26 for more information on this service. See resourcegroupstaggingapi package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/resourcegroupstaggingapi/

To use the AWS Resource Groups Tagging API with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Resource Groups Tagging API client ResourceGroupsTaggingAPI for more information on creating client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/resourcegroupstaggingapi/#New
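A minimal sketch of the tag-based filter search listed above, using the SDK v1 New-function pattern this documentation describes; the filter key and value are examples only.

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi"
	)

	func main() {
		// Create the service client from a shared session (SDK v1 pattern);
		// these clients are safe to use concurrently.
		sess := session.Must(session.NewSession())
		svc := resourcegroupstaggingapi.New(sess)

		// Search for resources in this region tagged Stack=Production.
		out, err := svc.GetResources(&resourcegroupstaggingapi.GetResourcesInput{
			TagFilters: []*resourcegroupstaggingapi.TagFilter{{
				Key:    aws.String("Stack"),
				Values: []*string{aws.String("Production")},
			}},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range out.ResourceTagMappingList {
			fmt.Println(aws.StringValue(m.ResourceARN))
		}
	}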