Package gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. Do differentiation with them just as easily. Autodiff showcases automatic differentiation. Basic example of representing mathematical equations as graphs. In this example, we want to represent the following equation Gorgonia provides an API that is fairly idiomatic - most of the functions in the API return (T, error). This is useful for many cases, such as an interactive shell for deep learning. However, it must also be acknowledged that this makes composing functions together a bit cumbersome. To that end, Gorgonia provides two alternative methods: first, the `Lift`-based functions; second, the `Must` function. Linear Regression Example The formula for a straight line is We want to find an `m` and a `c` that fit the equation well. We'll do it in both float32 and float64 to showcase the extensibility of Gorgonia. This example showcases the reasons for the more confusing functions. This example showcases dealing with errors. This is part 2 of the raison d'être of the more complicated functions - dealing with errors. SymbolicDiff showcases symbolic differentiation
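As a rough, minimal sketch of the two styles described above (the error-returning API and the `Must` helper), assuming the current gorgonia.org/gorgonia package layout:

	package main

	import (
		"fmt"
		"log"

		G "gorgonia.org/gorgonia"
	)

	func main() {
		g := G.NewGraph()

		// Represent z = x + y symbolically.
		x := G.NewScalar(g, G.Float64, G.WithName("x"))
		y := G.NewScalar(g, G.Float64, G.WithName("y"))

		// Idiomatic style: handle the error explicitly.
		z, err := G.Add(x, y)
		if err != nil {
			log.Fatal(err)
		}
		// Composable style: Must panics on error, so calls can be nested.
		_ = G.Must(G.Add(x, y))

		// Bind values and evaluate the graph.
		G.Let(x, 2.0)
		G.Let(y, 2.5)
		vm := G.NewTapeMachine(g)
		defer vm.Close()
		if err := vm.RunAll(); err != nil {
			log.Fatal(err)
		}
		fmt.Println(z.Value()) // 4.5
	}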
Example_audioTranscription demonstrates how to transcribe speech to text using Azure OpenAI's Whisper model. This example shows how to: - Create an Azure OpenAI client with token credentials - Read an audio file and send it to the API - Convert spoken language to written text using the Whisper model - Process the transcription response The example uses environment variables for configuration: - AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_WHISPER_MODEL: The deployment name of your Whisper model Audio transcription is useful for accessibility features, creating searchable archives of audio content, generating captions or subtitles, and enabling voice commands in applications. Example_audioTranslation demonstrates how to translate speech from one language to English text. This example shows how to: - Create an Azure OpenAI client with token credentials - Read a non-English audio file - Translate the spoken content to English text - Process the translation response The example uses environment variables for configuration: - AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_WHISPER_MODEL: The deployment name of your Whisper model Speech translation is essential for cross-language communication, creating multilingual content, and building applications that break down language barriers. Example_chatCompletionStream demonstrates streaming responses from the Chat Completions API. This example shows how to: - Create an Azure OpenAI client with token credentials - Set up a streaming chat completion request - Process incremental response chunks - Handle streaming errors and completion The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL Streaming is useful for: - Real-time response display - Improved perceived latency - Interactive chat interfaces - Long-form content generation Example_chatCompletionsFunctions demonstrates how to use Azure OpenAI's function calling feature. This example shows how to: - Create an Azure OpenAI client with token credentials - Define a function schema for weather information - Request function execution through the chat API - Parse and handle function call responses The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL Function calling is useful for: - Integrating external APIs and services - Structured data extraction from natural language - Task automation and workflow integration - Building context-aware applications Example_chatCompletionsLegacyFunctions demonstrates using the legacy function calling format. This example shows how to: - Create an Azure OpenAI client with token credentials - Define a function schema using the legacy format - Use tools API for backward compatibility - Handle function calling responses The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_ENDPOINT: Your Azure OpenAI endpoint URL Legacy function support ensures: - Compatibility with older implementations - Smooth transition to new tools API - Support for existing function-based workflows Example_chatCompletionsStructuredOutputs demonstrates using structured outputs with function calling. 
This example shows how to: - Create an Azure OpenAI client with token credentials - Define complex JSON schemas for structured output - Request specific data structures through function calls - Parse and validate structured responses The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL Structured outputs are useful for: - Database query generation - Data extraction and transformation - API request formatting - Consistent response formatting Example_completions demonstrates how to use Azure OpenAI's legacy Completions API. This example shows how to: - Create an Azure OpenAI client with token credentials - Send a simple text completion request - Handle the completion response - Process the generated text output The example uses environment variables for configuration: - AOAI_COMPLETIONS_MODEL: The deployment name of your completions model - AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL Legacy completions are useful for: - Simple text generation tasks - Completing partial text - Single-turn interactions - Basic language generation scenarios Example_createImage demonstrates how to generate images using Azure OpenAI's DALL-E model. This example shows how to: - Create an Azure OpenAI client with token credentials - Configure image generation parameters including size and format - Generate an image from a text prompt - Verify the generated image URL is accessible The example uses environment variables for configuration: - AOAI_DALLE_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_DALLE_MODEL: The deployment name of your DALL-E model Image generation is useful for: - Creating custom illustrations and artwork - Generating visual content for applications - Prototyping design concepts - Producing visual aids for documentation Example_embeddings demonstrates how to generate text embeddings using Azure OpenAI's embedding models. This example shows how to: - Create an Azure OpenAI client with token credentials - Convert text input into numerical vector representations - Process the embedding vectors from the response - Handle embedding results for semantic analysis The example uses environment variables for configuration: - AOAI_EMBEDDINGS_MODEL: The deployment name of your embedding model (e.g., text-embedding-ada-002) - AOAI_EMBEDDINGS_ENDPOINT: Your Azure OpenAI endpoint URL Text embeddings are useful for: - Semantic search and information retrieval - Text classification and clustering - Content recommendation systems - Document similarity analysis - Natural language understanding tasks Example_generateSpeechFromText demonstrates how to convert text to speech using Azure OpenAI's text-to-speech service. This example shows how to: - Create an Azure OpenAI client with token credentials - Send text to be converted to speech - Specify voice and audio format parameters - Handle the audio response stream The example uses environment variables for configuration: - AOAI_TTS_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_TTS_MODEL: The deployment name of your text-to-speech model Text-to-speech conversion is valuable for creating audiobooks, virtual assistants, accessibility tools, and adding voice interfaces to applications. Example_getChatCompletions demonstrates how to use Azure OpenAI's Chat Completions API. 
This example shows how to: - Create an Azure OpenAI client with token credentials - Structure a multi-turn conversation with different message roles - Send a chat completion request and handle the response - Process multiple response choices and finish reasons The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL Chat completions are useful for: - Building conversational AI interfaces - Creating chatbots with personality - Maintaining context across multiple interactions - Generating human-like text responses Example_responsesApiChaining demonstrates how to chain multiple responses together in a conversation flow using the Azure OpenAI Responses API. This example shows how to: - Create an initial response - Chain a follow-up response using the previous response ID - Process both responses - Delete both responses to clean up The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Example_responsesApiFunctionCalling demonstrates how to use the Azure OpenAI Responses API with function calling. This example shows how to: - Create an Azure OpenAI client with token credentials - Define tools (functions) that the model can call - Process the response containing function calls - Provide function outputs back to the model - Delete the responses to clean up The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Example_responsesApiImageInput demonstrates how to use the Azure OpenAI Responses API with image input. This example shows how to: - Create an Azure OpenAI client with token credentials - Fetch an image from a URL and encode it to Base64 - Send a query with both text and a Base64-encoded image - Process the response The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Note: This example fetches and encodes an image from a URL because there is a known issue with image-URL-based input; currently only Base64-encoded images are supported. Example_responsesApiReasoning demonstrates how to use the Azure OpenAI Responses API with reasoning. This example shows how to: - Create an Azure OpenAI client with token credentials - Send a complex problem-solving request that requires reasoning - Enable the reasoning parameter to get a step-by-step thought process - Process the response The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Example_responsesApiStreaming demonstrates how to use streaming with the Azure OpenAI Responses API. This example shows how to: - Create a streaming response - Process the stream events as they arrive - Clean up by deleting the response The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Example_responsesApiTextGeneration demonstrates how to use the Azure OpenAI Responses API for text generation.
This example shows how to: - Create an Azure OpenAI client with token credentials - Send a simple text prompt - Process the response - Delete the response to clean up The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") The Responses API is a new stateful API from Azure OpenAI that brings together capabilities from chat completions and assistants APIs in a unified experience. Example_streamCompletions demonstrates streaming responses from the legacy Completions API. This example shows how to: - Create an Azure OpenAI client with token credentials - Set up a streaming completion request - Process incremental text chunks - Handle streaming errors and completion The example uses environment variables for configuration: - AOAI_COMPLETIONS_MODEL: The deployment name of your completions model - AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL Streaming completions are useful for: - Real-time text generation display - Reduced latency in responses - Interactive text generation - Long-form content creation Example_structuredOutputsResponseFormat demonstrates using JSON response formatting. This example shows how to: - Create an Azure OpenAI client with token credentials - Define JSON schema for response formatting - Request structured mathematical solutions - Parse and process formatted JSON responses The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL Response formatting is useful for: - Mathematical problem solving - Step-by-step explanations - Structured data generation - Consistent output formatting Example_usingAzureContentFiltering demonstrates how to use Azure OpenAI's content filtering capabilities. This example shows how to: - Create an Azure OpenAI client with token credentials - Make a chat completion request - Extract and handle content filter results - Process content filter errors - Access Azure-specific content filter information from responses The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model Content filtering is essential for: - Maintaining content safety and compliance - Monitoring content severity levels - Implementing content moderation policies - Handling filtered content gracefully Example_usingAzureOnYourData demonstrates how to use Azure OpenAI's Azure-On-Your-Data feature. This example shows how to: - Create an Azure OpenAI client with token credentials - Configure an Azure Cognitive Search data source - Send a chat completion request with data source integration - Process Azure-specific response data including citations and content filtering results The example uses environment variables for configuration: - AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_OYD_MODEL: The deployment name of your model - COGNITIVE_SEARCH_API_ENDPOINT: Your Azure Cognitive Search endpoint - COGNITIVE_SEARCH_API_INDEX: The name of your search index Azure-On-Your-Data enables you to enhance chat completions with information from your own data sources, allowing for more contextual and accurate responses based on your content. Example_usingAzurePromptFilteringWithStreaming demonstrates how to use Azure OpenAI's prompt filtering with streaming responses. 
This example shows how to: - Create an Azure OpenAI client with token credentials - Set up a streaming chat completion request - Handle streaming responses with Azure extensions - Monitor prompt filter results in real-time - Accumulate and process streamed content The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model Streaming with prompt filtering is useful for: - Real-time content moderation - Progressive content delivery - Monitoring content safety during generation - Building responsive applications with content safety checks Example_usingDefaultAzureCredential demonstrates how to authenticate with Azure OpenAI using Azure Active Directory credentials. This example shows how to: - Create an Azure OpenAI client using DefaultAzureCredential - Configure authentication options with tenant ID - Make a simple request to test the authentication The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model - AZURE_TENANT_ID: Your Azure tenant ID - AZURE_CLIENT_ID: (Optional) Your Azure client ID - AZURE_CLIENT_SECRET: (Optional) Your Azure client secret DefaultAzureCredential supports multiple authentication methods including: - Environment variables - Managed Identity - Azure CLI credentials Example_usingEnhancements demonstrates how to use Azure OpenAI's enhanced features. This example shows how to: - Create an Azure OpenAI client with token credentials - Configure chat completion enhancements like grounding - Process Azure-specific response data including content filtering - Handle message context and citations The example uses environment variables for configuration: - AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_OYD_MODEL: The deployment name of your model Azure OpenAI enhancements provide additional capabilities beyond standard OpenAI features, such as improved grounding and content filtering for more accurate and controlled responses. Example_vision demonstrates how to use Azure OpenAI's Vision capabilities for image analysis. This example shows how to: - Create an Azure OpenAI client with token credentials - Send an image URL to the model for analysis - Configure the chat completion request with image content - Process the model's description of the image The example uses environment variables for configuration: - AOAI_VISION_MODEL: The deployment name of your vision-capable model (e.g., gpt-4-vision) - AOAI_VISION_ENDPOINT: Your Azure OpenAI endpoint URL Vision capabilities are useful for: - Image description and analysis - Visual question answering - Content moderation - Accessibility features - Image-based search and retrieval
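To make the setup shared by the chat examples above concrete, here is a minimal sketch of a non-streaming chat completion call. It assumes the azopenai package from the Azure SDK for Go (github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai) and the environment variables used throughout these examples; field and type names should be checked against the SDK version in use.

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"

		"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
		"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	)

	func main() {
		endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_ENDPOINT")
		model := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL")

		// Token credential (DefaultAzureCredential covers environment
		// variables, managed identity, and Azure CLI logins).
		cred, err := azidentity.NewDefaultAzureCredential(nil)
		if err != nil {
			log.Fatal(err)
		}
		client, err := azopenai.NewClient(endpoint, cred, nil)
		if err != nil {
			log.Fatal(err)
		}

		resp, err := client.GetChatCompletions(context.TODO(), azopenai.ChatCompletionsOptions{
			DeploymentName: &model,
			Messages: []azopenai.ChatRequestMessageClassification{
				&azopenai.ChatRequestUserMessage{
					Content: azopenai.NewChatRequestUserMessageContent("What does a lighthouse keeper do?"),
				},
			},
		}, nil)
		if err != nil {
			log.Fatal(err)
		}
		for _, choice := range resp.Choices {
			if choice.Message != nil && choice.Message.Content != nil {
				fmt.Println(*choice.Message.Content)
			}
		}
	}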
Package levenshtein implements distance and similarity metrics for strings, based on the Levenshtein measure. The Levenshtein `Distance` between two strings is the minimum total cost of edits that would convert the first string into the second. The allowed edit operations are insertions, deletions, and substitutions, all at character (one UTF-8 code point) level. Each operation has a default cost of 1, but each can be assigned its own cost equal to or greater than 0. A `Distance` of 0 means the two strings are identical, and the higher the value the more different the strings. Since in practice we are interested in finding if the two strings are "close enough", it often does not make sense to continue the calculation once the result is mathematically guaranteed to exceed a desired threshold. Providing this value to the `Distance` function allows it to take a shortcut and return a lower bound instead of an exact cost when the threshold is exceeded. The `Similarity` function calculates the distance, then converts it into a normalized metric within the range 0..1, with 1 meaning the strings are identical, and 0 that they have nothing in common. A minimum similarity threshold can be provided to speed up the calculation of the metric for strings that are far too dissimilar for the purpose at hand. All values under this threshold are rounded down to 0. The `Match` function provides a similarity metric, with the same range and meaning as `Similarity`, but with a bonus for string pairs that share a common prefix and have a similarity above a "bonus threshold". It uses the same method as proposed by Winkler for the Jaro distance, and the reasoning behind it is that these string pairs are very likely spelling variations or errors, and they are more closely linked than the edit distance alone would suggest. The underlying `Calculate` function is also exported, to allow the building of other derivative metrics, if needed.
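The relationship between the edit distance and the normalized similarity can be illustrated with a small self-contained sketch. This is not the package's API, just the underlying idea with default unit costs and no threshold shortcuts.

	package main

	import "fmt"

	// editDistance is a plain Levenshtein distance with unit costs over runes.
	func editDistance(a, b string) int {
		ra, rb := []rune(a), []rune(b)
		prev := make([]int, len(rb)+1)
		curr := make([]int, len(rb)+1)
		for j := range prev {
			prev[j] = j
		}
		for i := 1; i <= len(ra); i++ {
			curr[0] = i
			for j := 1; j <= len(rb); j++ {
				sub := prev[j-1] // substitution (free if the runes match)
				if ra[i-1] != rb[j-1] {
					sub++
				}
				// Take the cheapest of deletion, insertion, substitution.
				curr[j] = minInt(prev[j]+1, minInt(curr[j-1]+1, sub))
			}
			prev, curr = curr, prev
		}
		return prev[len(rb)]
	}

	func minInt(a, b int) int {
		if a < b {
			return a
		}
		return b
	}

	// similarity normalizes the distance into 0..1, where 1 means identical.
	func similarity(a, b string) float64 {
		la, lb := len([]rune(a)), len([]rune(b))
		if la == 0 && lb == 0 {
			return 1
		}
		max := la
		if lb > max {
			max = lb
		}
		return 1 - float64(editDistance(a, b))/float64(max)
	}

	func main() {
		fmt.Println(editDistance("kitten", "sitting"))         // 3
		fmt.Printf("%.3f\n", similarity("kitten", "sitting"))  // 0.571
	}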
Package pbc provides structures for building pairing-based cryptosystems. It is a wrapper around the Pairing-Based Cryptography (PBC) Library authored by Ben Lynn (https://crypto.stanford.edu/pbc/). This wrapper provides access to all PBC functions. It supports generation of various types of elliptic curves and pairings, element initialization, I/O, and arithmetic. These features can be used to quickly build pairing-based or conventional cryptosystems. The PBC library is designed to be extremely fast. Internally, it uses GMP for arbitrary-precision arithmetic. It also includes a wide variety of optimizations that make pairing-based cryptography highly efficient. To improve performance, PBC does not perform type checking to ensure that operations actually make sense. The Go wrapper provides the ability to add compatibility checks to most operations, or to use unchecked elements to maximize performance. Since this library provides low-level access to pairing primitives, it is very easy to accidentally construct insecure systems. This library is intended to be used by cryptographers or to implement well-analyzed cryptosystems. Cryptographic pairings are defined over three mathematical groups: G1, G2, and GT, where each group is typically of the same order r. Additionally, a bilinear map e maps a pair of elements — one from G1 and another from G2 — to an element in GT. This map e has the following additional property: If G1 == G2, then a pairing is said to be symmetric. Otherwise, it is asymmetric. Pairings can be used to construct a variety of efficient cryptosystems. The PBC library currently supports 5 different types of pairings, each with configurable parameters. These types are designated alphabetically, roughly in chronological order of introduction. Type A, D, E, F, and G pairings are implemented in the library. Each type has different time and space requirements. For more information about the types, see the documentation for the corresponding generator calls, or the PBC manual page at https://crypto.stanford.edu/pbc/manual/ch05s01.html. This package must be compiled using cgo. It also requires the installation of GMP and PBC. During the build process, this package will attempt to include <gmp.h> and <pbc/pbc.h>, and then dynamically link to GMP and PBC. Most systems include a package for GMP. To install GMP in Debian / Ubuntu: For an RPM installation with YUM: For installation with Fink (http://www.finkproject.org/) on Mac OS X: For more information or to compile from source, visit https://gmplib.org/ To install the PBC library, download the appropriate files for your system from https://crypto.stanford.edu/pbc/download.html. PBC has three dependencies: the gcc compiler, flex (http://flex.sourceforge.net/), and bison (https://www.gnu.org/software/bison/). See the respective sites for installation instructions. Most distributions include packages for these libraries. For example, in Debian / Ubuntu: The PBC source can be compiled and installed using the usual GNU Build System: After installing, you may need to rebuild the search path for libraries: It is possible to install the package on Windows through the use of MinGW and MSYS. MSYS is required for installing PBC, while GMP can be installed through a package. Based on your MinGW installation, you may need to add "-I/usr/local/include" to CPPFLAGS and "-L/usr/local/lib" to LDFLAGS when building PBC. Likewise, you may need to add these options to CGO_CPPFLAGS and CGO_LDFLAGS when installing this package. 
This package is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional details, see the COPYING and COPYING.LESSER files. This example generates a pairing and some random group elements, then applies the pairing operation. This example computes and verifies a Boneh-Lynn-Shacham signature in a simulated conversation between Alice and Bob.
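A condensed sketch of the Boneh-Lynn-Shacham flow mentioned above, assuming the wrapper is imported as github.com/Nik-U/pbc. Verification works because of bilinearity: e(H(m), g^x) = e(H(m)^x, g).

	package main

	import (
		"crypto/sha256"
		"fmt"

		"github.com/Nik-U/pbc"
	)

	func main() {
		// Generate a type A pairing and a shared system parameter g.
		pairing := pbc.GenerateA(160, 512).NewPairing()
		g := pairing.NewG2().Rand()

		// Alice's key pair: private x in Zr, public g^x in G2.
		privKey := pairing.NewZr().Rand()
		pubKey := pairing.NewG2().PowZn(g, privKey)

		// Sign: hash the message into G1, then exponentiate by the private key.
		h := pairing.NewG1().SetFromStringHash("some signed message", sha256.New())
		signature := pairing.NewG1().PowZn(h, privKey)

		// Verify: e(h, g^x) must equal e(h^x, g).
		lhs := pairing.NewGT().Pair(h, pubKey)
		rhs := pairing.NewGT().Pair(signature, g)
		fmt.Println("signature valid:", lhs.Equals(rhs))
	}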
Package iotanalytics provides the API client, operations, and parameter types for AWS IoT Analytics. IoT Analytics allows you to collect large amounts of device data, process messages, and store them. You can then query the data and run sophisticated analytics on it. IoT Analytics enables advanced data exploration through integration with Jupyter Notebooks and data visualization through integration with Amazon QuickSight. Traditional analytics and business intelligence tools are designed to process structured data. IoT data often comes from devices that record noisy processes (such as temperature, motion, or sound). As a result the data from these devices can have significant gaps, corrupted messages, and false readings that must be cleaned up before analysis can occur. Also, IoT data is often only meaningful in the context of other data from external sources. IoT Analytics automates the steps required to analyze data from IoT devices. IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. You can set up the service to collect only the data you need from your devices, apply mathematical transforms to process the data, and enrich the data with device-specific metadata such as device type and location before storing it. Then, you can analyze your data by running queries using the built-in SQL query engine, or perform more complex analytics and machine learning inference. IoT Analytics includes pre-built models for common IoT use cases so you can answer questions like which devices are about to fail or which customers are at risk of abandoning their wearable devices.
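For orientation, a minimal sketch of constructing the client and issuing a simple call with the AWS SDK for Go v2; operation and field names should be checked against the current SDK.

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/iotanalytics"
	)

	func main() {
		ctx := context.Background()

		// Load region and credentials from the default chain (environment
		// variables, shared config, IAM role, and so on).
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := iotanalytics.NewFromConfig(cfg)

		// List the channels that collect raw device data.
		out, err := client.ListChannels(ctx, &iotanalytics.ListChannelsInput{})
		if err != nil {
			log.Fatal(err)
		}
		for _, ch := range out.ChannelSummaries {
			fmt.Println(*ch.ChannelName)
		}
	}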
Package math32 provides basic constants and mathematical functions for float32 types. At its core, it's mostly just a wrapper in the form of float32(math.XXX). This applies to the following functions: Everything else is a float32 implementation. The implementation schedule is sporadic and uncertain, but eventually all functions will be replaced with native float32 implementations.
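For example, the wrapper approach described above is equivalent to converting to and from float64 around the standard library call (import path assumed to be github.com/chewxy/math32):

	package main

	import (
		"fmt"
		"math"

		"github.com/chewxy/math32"
	)

	func main() {
		x := float32(2)

		// By hand: convert up to float64, call math, convert back down.
		byHand := float32(math.Sqrt(float64(x)))

		// Using math32 keeps the caller's code in float32 throughout.
		direct := math32.Sqrt(x)

		fmt.Println(byHand, direct)
	}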
Package math provides helper functions for mathematical operations over all integer Go types. Almost all files in this package are automatically generated. To regenerate this package This package relies on github.com/davecheney/godoc2md.
Package srp Secure Remote Password protocol The principal interface provided by this package is the SRP type. The end aim of the caller is to have an SRP server and SRP client arrive at the same key. See the documentation for the SRP structure and its methods for the nitty-gritty of use. BUG(jpg): This does not use the same padding and hashing scheme as in RFC 5054, and therefore is not interoperable with those clients and servers. Perhaps someday we'll add an RFC 5054 mode that does that, but today is not that day. It would be nice if this package could be used without having some understanding of the SRP protocol, but too much of the language and naming depends on at least some familiarity. Here is a summary. The Secure Remote Password protocol involves a server and a client proving to each other that they know (or can derive) their long term secrets. The client's long term secret is known as "x" and the corresponding server secret, the verifier, is known as "v". The verifier is mathematically related to x and is computed by the client on first enrollment and transmitted to the server. Typically the server will store the verifier and the client will derive x from a user secret such as a password. Because the verifier can be used like a password hash with respect to cracking, the derivation of x should be designed to resist password cracking if the verifier is compromised. The client and the server must both use the same Diffie-Hellman group to perform their computations. The server and the client each send an ephemeral public key to each other. (The client sends A; the server sends B.) With their private knowledge of their own ephemeral secrets (a or b) and their private knowledge of x (for the client) and v (for the server) along with public knowledge they are able to prove to each other that they know their respective secrets and can generate a session key, K, which may be used for further encryption during the session. Quoting from http://srp.stanford.edu/design.html (with some modification for KDF and checks) This package does not address the actual communication between client and server. But through the SRP type it not only performs the calculations needed, it also performs safety and sanity checks on its input, and it hides everything from the caller except what the caller absolutely needs to provide. The key derivation function, KDF() 1. Both client and server: Checking whether methods have returned without error. This is particularly true of SRP.Key() and SetOthersPublic() 2. Client: Using an appropriate key derivation function for deriving x from the user's password (and nudging the user toward a good password) 3. Server: Storing the v securely (sent by the client on first enrollment). A captured v can be used to impersonate the server. The verifier, v, can also be used like a password hash in a password cracking attempt 4. Both: Proving to each other that both have the same key. The package includes methods that can assist with that. ExampleServerClientKey is an example.
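To make the x/v relationship concrete, here is a toy sketch using math/big with deliberately tiny, insecure parameters; real use goes through the SRP type and a standard group.

	package main

	import (
		"crypto/rand"
		"fmt"
		"math/big"
	)

	func main() {
		// Toy group parameters, far too small for real use.
		N := big.NewInt(2267) // prime modulus
		g := big.NewInt(2)    // generator

		// x is the client's long-term secret, normally derived from the
		// user's password with a strong KDF.
		x, err := rand.Int(rand.Reader, N)
		if err != nil {
			panic(err)
		}

		// v = g^x mod N is the verifier the client sends to the server on
		// first enrollment; the server stores v, never x.
		v := new(big.Int).Exp(g, x, N)

		fmt.Printf("x = %v\nv = %v\n", x, v)
	}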
Package vecf64 provides common functions and methods for slices of float64. In the days of yore, scientists who computed with computers would use arrays to represent vectors, each value representing magnitude and/or direction. Then came the C++ Standard Template Library, which sought to provide this data type in the standard library. Now, everyone conflates the term "vector" with dynamic arrays. In the C++ book, Bjarne Stroustrup has this to say: Go has a better name for representing dynamically allocated arrays of any type - "slice". However, "slice" is both a noun and a verb, and many libraries that I use already use "slice"-as-a-verb as a name, so I had to settle for the second best name: "vector". It should be noted that while the names used in this package were definitely mathematically inspired, they bear only little resemblance to the actual mathematical operations performed. The names of the operations assume you're working with slices of float64s. Hence `Add` performs elementwise addition between two []float64. Operations between []float64 and float64 are also supported; however, they are named differently. Here are the equivalents: You may note that for the []float64 - float64 binary operations, the scalar (float64) is always the first operand. In operations that are not commutative, an additional function is provided, suffixed with "R" (for reverse). This package does not provide range checking. If indices are out of range, the functions will panic. This package should play well with BCE. TODO: provide SIMD vectorization for Incr and []float32-float64 functions. Pull requests accepted.
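A self-contained sketch of the conventions described above; the lowercase names here are illustrative stand-ins, not the package's exported functions.

	package main

	import "fmt"

	// add: elementwise a[i] += b[i], analogous to an Add over two slices.
	func add(a, b []float64) {
		for i := range a {
			a[i] += b[i]
		}
	}

	// A scalar operation with the scalar as the first operand: a[i] = s * a[i].
	func scale(s float64, a []float64) {
		for i := range a {
			a[i] = s * a[i]
		}
	}

	// A reverse ("R"-suffixed) variant of a non-commutative operation:
	// a[i] = a[i] - s instead of s - a[i].
	func transR(s float64, a []float64) {
		for i := range a {
			a[i] -= s
		}
	}

	func main() {
		a := []float64{1, 2, 3}
		add(a, []float64{10, 20, 30}) // [11 22 33]
		scale(2, a)                   // [22 44 66]
		transR(1, a)                  // [21 43 65]
		fmt.Println(a)
	}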
Package ecc provides mathematical operations over elliptic curves.
Package decimal provides a high-performance, arbitrary precision, floating-point decimal library. This package provides floating-point decimal numbers, useful for financial programming or calculations where a larger, more accurate representation of a number is required. In addition to basic arithmetic operations (addition, subtraction, multiplication, and division), this package offers various mathematical functions, including the exponential function, various logarithms, and the ability to compute continued fractions. While lean, this package is full of features. It implements interfaces like “fmt.Formatter” and intuitively utilizes verbs and flags as described in the “fmt” package. (Also included: “fmt.Scanner”, “fmt.Stringer”, “encoding.TextUnmarshaler”, and “encoding.TextMarshaler”.) It allows users to specify explicit contexts for arithmetic operations, but doesn't require it. It provides access to NaN payloads and is more lenient when parsing a decimal from a string than the GDA specification requires. API interfaces have been changed slightly to work more seamlessly with existing Go programs. For example, many “Quantize” implementations require a decimal as both the receiver and argument, which isn't very user-friendly. Instead, this library accepts a simple “int” which can be derived from an existing decimal if required. It contains two modes of operation designed to make transitioning to various GDA "quirks" (like always rounding lossless operations) easier. There are three primary goals of this library: By adhering to the General Decimal Arithmetic specification, this package has a well-defined structure for its arithmetic operations. Decimal libraries are inherently slow; this library works diligently to minimize memory allocations and utilize efficient algorithms. Performance regularly benchmarks as fast or faster than many other popular decimal libraries. Libraries should be intuitive and work out of the box without having to configure too many settings; however, precise settings should still be available. The following type is supported: The zero value for a Big corresponds with 0, meaning all the following are valid: Method naming is the same as math/big's, meaning: In general, its conventions mirror math/big's. It is suggested to read the math/big package comments to gain an understanding of this package's conventions. Arguments to Binary and Unary methods are allowed to alias, so the following is valid: Unless otherwise specified, the only argument that will be modified is the result (“z”). This means the following is valid and race-free: But this is not:
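A brief sketch of the zero-value and receiver-as-result conventions, assuming the package is imported as github.com/ericlagergren/decimal:

	package main

	import (
		"fmt"

		"github.com/ericlagergren/decimal"
	)

	func main() {
		// The zero value is ready to use and corresponds to 0.
		var total decimal.Big

		price := decimal.New(1999, 2) // 19.99
		tax := decimal.New(160, 2)    // 1.60

		// z.Add(x, y): the receiver z holds the result; arguments may alias it.
		total.Add(price, tax)      // 21.59
		total.Add(&total, &total)  // aliasing the receiver is allowed: 43.18

		fmt.Println(total.String()) // 43.18
	}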
Package srp Secure Remote Password protocol The principal interface provided by this package is the SRP type. The end aim of the caller is to have an SRP server and SRP client arrive at the same Key. See the documentation for the SRP structure and its methods for the nitty-gritty of use. BUG(jpg): This does not use the same padding and hashing scheme as in RFC 5054, and therefore is not interoperable with those clients and servers. Perhaps someday we'll add an RFC 5054 mode that does that, but today is not that day. It would be nice if this package could be used without having some understanding of the SRP protocol, but too much of the language and naming depends on at least some familiarity. Here is a summary. The Secure Remote Password protocol involves a server and a client proving to each other that they know (or can derive) their long term secrets. The client's long term secret is known as "x" and the corresponding server secret, the verifier, is known as "v". The verifier is mathematically related to x and is computed by the client on first enrollment and transmitted to the server. Typically the server will store the verifier and the client will derive x from a user secret such as a password. Because the verifier can be used like a password hash with respect to cracking, the derivation of x should be designed to resist password cracking if the verifier is compromised. The client and the server must both use the same Diffie-Hellman group to perform their computations. The server and the client each send an ephemeral public key to each other. (The client sends A; the server sends B.) With their private knowledge of their own ephemeral secrets (a or b) and their private knowledge of x (for the client) and v (for the server) along with public knowledge they are able to prove to each other that they know their respective secrets and can generate a session key, K, which may be used for further encryption during the session. Quoting from http://srp.stanford.edu/design.html (with some modification for KDF) This package does not address the actual communication between client and server. But through the SRP type it not only performs the calculations needed, it also performs safety and sanity checks on its input, and it hides everything from the caller except what the caller absolutely needs to provide. The key derivation function, KDF() 1. Both client and server: Checking whether methods have returned without error. This is particularly true of SRP.Key() and SetOthersPublic() 2. Client: Using an appropriate key derivation function for deriving x from the user's password (and nudging the user toward a good password) 3. Server: Storing the v (sent by the client on first enrollment) securely. A captured v can be used to masquerade as the server and be used like a password hash in a password cracking attempt 4. Both: Proving to each other that both have the same key. The package includes methods that can assist with that. ExampleServerClientKey is an example.
SubOver is a tool for discovering subdomain takeovers
Package xirho implements an iterated function system fractal art renderer. An iterated function system is a collection of functions from points to points. Starting with a randomly selected point, we choose a function at random, apply that function to the point, and plot its new location, then repeat ad infinitum. With some additional steps, the resulting images can be stunning. The mathematical terminology used in xirho's documentation and API is as follows. A point is an element of R³ × [0, 1], i.e. a 3D point plus a color coordinate. A function, sometimes function type, is a procedure which maps points to points, possibly using additional fixed parameters to control the exact mapping. (Other IFS implementations typically refer to functions in this sense as variations.) A node is a particular instance of a function and its fixed parameters. An iterated function system, or just system, is a non-empty list of nodes, a Markov chain giving the probability of the algorithm transitioning from each node in the list to each other node in the list, an additional node applied to each output point to serve as a possibly nonlinear camera, and a mapping of color coordinates to colors. The Markov chain of a system may also be called the weights graph, or just the graph. Xirho does not include a designer to produce systems to render. Existing parameters can be loaded through the encoding and encoding/flame subpackages, or programmed by hand. To use xirho to render a system, create a Render containing the System and a Hist to plot points, then call its Render method with a non-trivial context. (The context closing is the only way that Render returns.) Alternatively, the RenderAsync method provides an API to manage rendering concurrently, e.g. to support a UI. For fine-grained control of the rendering process, the System.Iter method can be used directly.
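The core iteration loop ("choose a node, apply it, plot the point") can be sketched independently of xirho's actual types as follows; the two affine functions and uniform node choice here are purely illustrative.

	package main

	import (
		"fmt"
		"math/rand"
	)

	// point mirrors the idea of an element of R³ × [0, 1]: a 3D location
	// plus a color coordinate.
	type point struct{ x, y, z, c float64 }

	func main() {
		// Two toy "nodes"; a real system would also carry per-node
		// parameters, a Markov chain of transition weights, a camera
		// node, and a palette.
		nodes := []func(point) point{
			func(p point) point { return point{p.x / 2, p.y / 2, p.z, p.c * 0.5} },
			func(p point) point { return point{p.x/2 + 0.5, p.y/2 + 0.5, p.z, p.c*0.5 + 0.5} },
		}

		// Start from a random point, then repeatedly pick a node, apply
		// it, and plot the result.
		p := point{rand.Float64(), rand.Float64(), 0, rand.Float64()}
		for i := 0; i < 10; i++ {
			p = nodes[rand.Intn(len(nodes))](p)
			fmt.Printf("plot (%.3f, %.3f) color %.2f\n", p.x, p.y, p.c)
		}
	}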
Package vecf32 provides common functions and methods for slices of float32. In the days of yore, scientists who computed with computers would use arrays to represent vectors, each value representing magnitude and/or direction. Then came the C++ Standard Template Library, which sought to provide this data type in the standard library. Now, everyone conflates the term "vector" with dynamic arrays. In the C++ book, Bjarne Stroustrup has this to say: Go has a better name for representing dynamically allocated arrays of any type - "slice". However, "slice" is both a noun and a verb, and many libraries that I use already use "slice"-as-a-verb as a name, so I had to settle for the second best name: "vector". It should be noted that while the names used in this package were definitely mathematically inspired, they bear only little resemblance to the actual mathematical operations performed. The names of the operations assume you're working with slices of float32s. Hence `Add` performs elementwise addition between two []float32. Operations between []float32 and float32 are also supported; however, they are named differently. Here are the equivalents: You may note that for the []float32 - float32 binary operations, the scalar (float32) is always the first operand. In operations that are not commutative, an additional function is provided, suffixed with "R" (for reverse). This package does not provide range checking. If indices are out of range, the functions will panic. This package should play well with BCE. TODO(anyone): provide SIMD vectorization for Incr and []float32-float64 functions. Pull requests accepted.