Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets. More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a Client: The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines. You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads. If you only wish to access public data, you can create an unauthenticated client with: To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual: Please note that there is no official emulator for Cloud Storage. A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle: A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create: Note that although buckets are associated with projects, bucket names are global across all projects. Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs: An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data: Objects also have attributes, which you can fetch with ObjectHandle.Attrs: Listing objects in a bucket is done with the BucketHandle.Objects method: Objects are listed lexicographically by name. To filter objects lexicographically, Query.StartOffset and/or Query.EndOffset can be used: If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process: Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs). To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List: You can also set and delete ACLs. Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations. For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it.
Here is how to express that: You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL. You can also sign a URL without creating a client. See the documentation of SignedURL for details. A POST policy is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised by a user. For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4. If the GoogleAccessID and PrivateKey option fields are not provided, they will be automatically detected by BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 if any of the following are true: Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient. In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account. To generate the signature, you must have: Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the richer googleapi.Error type. For example: Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation. The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information. Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry). For example: You can add custom headers to any API call made by this package by using callctx.SetHeaders on the context which is passed to the method. For example, to add a custom audit logging header: This package includes support for the Cloud Storage gRPC API, which is currently in preview. This implementation uses gRPC rather than the current JSON & XML APIs to make requests to Cloud Storage. If you would like to try the API, please contact your GCP account rep for more information. The gRPC API is not yet generally available, so it may be subject to breaking changes. To create a client which will use gRPC, use the alternate constructor: If the application is running within GCP, users may get better performance by enabling DirectPath (enabling requests to skip some proxy steps). To enable, set the environment variable `GOOGLE_CLOUD_ENABLE_DIRECT_PATH_XDS=true` and add the following side-effect imports to your application:
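A minimal sketch of the generation-match condition and the per-handle retry configuration described above; the bucket name, object name and backoff values are placeholders:

	package main

	import (
		"context"
		"fmt"
		"io"
		"time"

		"cloud.google.com/go/storage"
		gax "github.com/googleapis/gax-go/v2"
	)

	func main() {
		ctx := context.Background()

		// Create a client with default application credentials.
		client, err := storage.NewClient(ctx)
		if err != nil {
			// handle error
		}
		defer client.Close()

		// Bucket and object names here are placeholders.
		obj := client.Bucket("my-bucket").Object("notes.txt")

		// Read the current metadata.
		attrs, err := obj.Attrs(ctx)
		if err != nil {
			// handle error
		}

		// Write only if the content has not changed since attrs was read.
		w := obj.If(storage.Conditions{GenerationMatch: attrs.Generation}).NewWriter(ctx)
		if _, err := io.WriteString(w, "updated contents"); err != nil {
			// handle error
		}
		if err := w.Close(); err != nil {
			// A googleapi.Error with code 412 indicates the precondition failed.
			fmt.Println("conditional write failed:", err)
		}

		// Optionally customize retries for calls made through this handle.
		retried := obj.Retryer(
			storage.WithBackoff(gax.Backoff{Initial: 2 * time.Second}),
			storage.WithPolicy(storage.RetryAlways),
		)
		_ = retried
	}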
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-gomail/gomail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
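A minimal sketch of composing and sending a message with gomail; the SMTP host, credentials, addresses and attachment path are placeholders, and the import path assumes the gopkg.in/gomail.v2 distribution:

	package main

	import "gopkg.in/gomail.v2"

	func main() {
		m := gomail.NewMessage()
		m.SetHeader("From", "sender@example.com")
		m.SetHeader("To", "bob@example.com", "cora@example.com")
		m.SetHeader("Subject", "Hello!")
		m.SetBody("text/plain", "Hello Bob and Cora!")
		m.AddAlternative("text/html", "<p>Hello <b>Bob</b> and <i>Cora</i>!</p>")
		m.Attach("/path/to/report.pdf") // attachment path is a placeholder

		// SMTP host, port and credentials are placeholders.
		d := gomail.NewDialer("smtp.example.com", 587, "user", "password")
		if err := d.DialAndSend(m); err != nil {
			panic(err)
		}
	}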
Package ses provides the API client, operations, and parameter types for Amazon Simple Email Service. Amazon Simple Email Service This document contains reference information for the Amazon Simple Email Service (https://aws.amazon.com/ses/) (Amazon SES) API, version 2010-12-01. This document is best used in conjunction with the Amazon SES Developer Guide (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html) . For a list of Amazon SES endpoints to use in service requests, see Regions and Amazon SES (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/regions.html) in the Amazon SES Developer Guide (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html) . This documentation contains reference information related to the following:
Package sesv2 provides the API client, operations, and parameter types for Amazon Simple Email Service. Amazon SES API v2 Amazon SES (http://aws.amazon.com/ses) is an Amazon Web Services service that you can use to send email messages to your customers. If you're new to Amazon SES API v2, you might find it helpful to review the Amazon Simple Email Service Developer Guide (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/) . The Amazon SES Developer Guide provides information and code samples that demonstrate how to use Amazon SES API v2 features programmatically.
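A hedged sketch of sending a simple message with this package; the addresses are placeholders and the sender must already be a verified SES identity:

	package main

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/sesv2"
		"github.com/aws/aws-sdk-go-v2/service/sesv2/types"
	)

	func main() {
		ctx := context.Background()

		// Load credentials and region from the default sources.
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := sesv2.NewFromConfig(cfg)

		// Addresses are placeholders.
		_, err = client.SendEmail(ctx, &sesv2.SendEmailInput{
			FromEmailAddress: aws.String("sender@example.com"),
			Destination: &types.Destination{
				ToAddresses: []string{"recipient@example.com"},
			},
			Content: &types.EmailContent{
				Simple: &types.Message{
					Subject: &types.Content{Data: aws.String("Hello from SES")},
					Body: &types.Body{
						Text: &types.Content{Data: aws.String("This is the message body.")},
					},
				},
			},
		})
		if err != nil {
			log.Fatal(err)
		}
	}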
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-mail/mail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called an Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts, are also available. Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values. enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/jhillyerd/enmime
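A small sketch of the Envelope layer described above, assuming a raw RFC 822 message stored in a local file named message.eml (a placeholder):

	package main

	import (
		"fmt"
		"os"

		"github.com/jhillyerd/enmime"
	)

	func main() {
		// Open a raw message; the file name is a placeholder.
		f, err := os.Open("message.eml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Parse the message into an Envelope.
		env, err := enmime.ReadEnvelope(f)
		if err != nil {
			panic(err) // hard error: parsing could not continue
		}

		// Decoded headers and bodies.
		fmt.Println("Subject:", env.GetHeader("Subject"))
		fmt.Println("Text body:", env.Text)
		fmt.Println("HTML body:", env.HTML)

		// Attachments are Parts with decoded Content.
		for _, att := range env.Attachments {
			fmt.Printf("attachment %q (%d bytes)\n", att.FileName, len(att.Content))
		}

		// Non-fatal parse problems are collected here.
		for _, e := range env.Errors {
			fmt.Printf("parse warning: %s: %s\n", e.Name, e.Detail)
		}
	}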
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-mail/mail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package mailyak provides a simple interface for generating MIME compliant emails, and optionally sending them over SMTP. Both plain-text and HTML email body content is supported, and their types implement io.Writer allowing easy composition directly from templating engines, etc. Attachments are fully supported including inline attachments, with anything that implements io.Reader suitable as a source (like files on disk, in-memory buffers, etc). The raw MIME content can be retrieved using MimeBuf(), typically used with an API service such as Amazon SES that does not require using an SMTP interface. MailYak supports both plain-text SMTP (which is automatically upgraded to a secure connection with STARTTLS if supported by the SMTP server) and explicit TLS connections.
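A brief sketch of composing and sending with mailyak, assuming the github.com/domodwyer/mailyak/v3 module path and placeholder SMTP settings and addresses:

	package main

	import (
		"net/smtp"

		"github.com/domodwyer/mailyak/v3"
	)

	func main() {
		// SMTP host and credentials are placeholders.
		mail := mailyak.New("mail.example.com:25",
			smtp.PlainAuth("", "user", "pass", "mail.example.com"))

		mail.To("recipient@example.com")
		mail.From("sender@example.com")
		mail.FromName("Example Sender")
		mail.Subject("Business proposition")

		// Plain() and HTML() return writable body parts, so templates can
		// render into them directly via io.Writer.
		mail.Plain().Set("Plain text body")
		mail.HTML().Set("<h1>HTML body</h1>")

		if err := mail.Send(); err != nil {
			panic(err)
		}
	}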
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-mail/mail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package mailyak provides a simple interface for generating MIME compliant emails, and optionally sending them over SMTP. Both plain-text and HTML email body content is supported, and their types implement io.Writer allowing easy composition directly from templating engines, etc. Attachments are fully supported (attach anything that implements io.Reader). The raw MIME content can be retrieved using MimeBuf(), typically used with an API service such as Amazon SES that does not require using an SMTP interface.
Package worklink provides the API client, operations, and parameter types for Amazon WorkLink. Amazon WorkLink is a cloud-based service that provides secure access to internal websites and web apps from iOS and Android phones. In a single step, your users, such as employees, can access internal websites as efficiently as they access any other public website. They enter a URL in their web browser, or choose a link to an internal website in an email. Amazon WorkLink authenticates the user's access and securely renders authorized internal web content in a secure rendering service in the AWS cloud. Amazon WorkLink doesn't download or store any internal web content on mobile devices.
Package pinpointemail provides the API client, operations, and parameter types for Amazon Pinpoint Email Service. Amazon Pinpoint Email Service Welcome to the Amazon Pinpoint Email API Reference. This guide provides information about the Amazon Pinpoint Email API (version 1.0), including supported operations, data types, parameters, and schemas. Amazon Pinpoint (https://aws.amazon.com/pinpoint) is an AWS service that you can use to engage with your customers across multiple messaging channels. You can use Amazon Pinpoint to send email, SMS text messages, voice messages, and push notifications. The Amazon Pinpoint Email API provides programmatic access to options that are unique to the email channel and supplement the options provided by the Amazon Pinpoint API. If you're new to Amazon Pinpoint, you might find it helpful to also review the Amazon Pinpoint Developer Guide (https://docs.aws.amazon.com/pinpoint/latest/developerguide/welcome.html) . The Amazon Pinpoint Developer Guide provides tutorials, code samples, and procedures that demonstrate how to use Amazon Pinpoint features programmatically and how to integrate Amazon Pinpoint functionality into mobile apps and other types of applications. The guide also provides information about key topics such as Amazon Pinpoint integration with other AWS services and the limits that apply to using the service. The Amazon Pinpoint Email API is available in several AWS Regions and it provides an endpoint for each of these Regions. For a list of all the Regions and endpoints where the API is currently available, see AWS Service Endpoints (https://docs.aws.amazon.com/general/latest/gr/rande.html#pinpoint_region) in the Amazon Web Services General Reference. To learn more about AWS Regions, see Managing AWS Regions (https://docs.aws.amazon.com/general/latest/gr/rande-manage.html) in the Amazon Web Services General Reference. In each Region, AWS maintains multiple Availability Zones. These Availability Zones are physically isolated from each other, but are united by private, low-latency, high-throughput, and highly redundant network connections. These Availability Zones enable us to provide very high levels of availability and redundancy, while also minimizing latency. To learn more about the number of Availability Zones that are available in each Region, see AWS Global Infrastructure (http://aws.amazon.com/about-aws/global-infrastructure/) .
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-mail/mail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package workmail provides the API client, operations, and parameter types for Amazon WorkMail. WorkMail is a secure, managed business email and calendaring service with support for existing desktop and mobile email clients. You can access your email, contacts, and calendars using Microsoft Outlook, your browser, or other native iOS and Android email applications. You can integrate WorkMail with your existing corporate directory and control both the keys that encrypt your data and the location in which your data is stored. The WorkMail API is designed for the following scenarios: listing and describing organizations; managing users; managing groups; and managing resources. All WorkMail API operations are Amazon-authenticated and certificate-signed. They not only require the use of the AWS SDK, but also allow for the exclusive use of AWS Identity and Access Management users and roles to help facilitate access, trust, and permission policies. By creating a role and allowing an IAM user to access the WorkMail site, the IAM user gains full administrative visibility into the entire WorkMail organization (or as set in the IAM policy). This includes, but is not limited to, the ability to create, update, and delete users, groups, and resources. This allows developers to perform the scenarios listed above, as well as give users the ability to grant access on a selective basis using the IAM model.
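A hedged sketch of the "managing users" scenario above; the organization ID is a placeholder and the printed fields are illustrative:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/workmail"
	)

	func main() {
		ctx := context.Background()

		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := workmail.NewFromConfig(cfg)

		// The organization ID is a placeholder.
		out, err := client.ListUsers(ctx, &workmail.ListUsersInput{
			OrganizationId: aws.String("m-0123456789abcdef"),
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, u := range out.Users {
			fmt.Println(aws.ToString(u.Name), u.State)
		}
	}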
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-gomail/gomail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package gorest - Go RESTful API starter kit with Gin, JWT, GORM (MySQL, PostgreSQL, SQLite), Redis, Mongo, 2FA, email verification, password recovery
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-mail/mail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package goa provides the runtime support for goa microservices. goa service development begins with writing the *design* of a service. The design is described using the goa language implemented by the github.com/goadesign/goa/design/apidsl package. The `goagen` tool consumes the metadata produced from executing the design language to generate service specific code that glues the underlying HTTP server with action specific code and data structures. The goa package contains supporting functionality for the generated code including basic request and response state management through the RequestData and ResponseData structs, error handling via error classes, middleware support via the Middleware data structure as well as decoding and encoding algorithms. The RequestData and ResponseData structs provide access to the request and response state. goa request handlers also accept a context.Context interface as their first parameter so that deadlines and cancelation signals may easily be implemented. The request state exposes the underlying http.Request object as well as the deserialized payload (request body) and parameters (both path and querystring parameters). Generated action specific contexts wrap the context.Context, ResponseData and RequestData data structures. They expose properly typed fields that correspond to the request parameters and body data structure descriptions appearing in the design. The response state exposes the response status and body length as well as the underlying ResponseWriter. Action contexts provide action specific helper methods that write the responses as described in the design, optionally taking an instance of the media type for responses that contain a body. Here is an example showing an "update" action corresponding to the following design (extract): The action signature generated by goagen is: where UpdateBottleContext is: and implements: The definitions of the Bottle and UpdateBottlePayload data structures are omitted for brevity. There is one controller interface generated per resource defined via the design language. The interface exposes the controller actions. User code must provide data structures that implement these interfaces when mounting a controller onto a service. The controller data structure should include an anonymous field of type *goa.Controller which takes care of implementing the middleware handling. A goa middleware is a function that takes and returns a Handler. A Handler is the low-level function which handles incoming HTTP requests. goagen generates the handler code so that each handler creates the action specific context and calls the controller action with it. Middleware can be added to a goa service or a specific controller using the corresponding Use methods. goa comes with a few stock middleware that handle common needs such as logging, panic recovery or using the RequestID header to trace requests across multiple services. The controller action methods generated by goagen such as the Update method of the BottleController interface shown above all return an error value. goa defines an Error struct that action implementations can use to describe the content of the corresponding HTTP response. Errors can be created using error classes which are functions created via NewErrorClass. The ErrorHandler middleware maps errors to HTTP responses. Errors that are instances of the Error struct are mapped using the struct fields while other types of errors return responses with status code 500 and the error message in the body.
The goa design language documented in the dsl package makes it possible to attach validations to data structure definitions. One specific type of validation consists of defining the format that a data structure string field must follow. Examples of formats include email, date-time, hostnames, etc. The ValidateFormat function provides the implementation for the format validation invoked from the code generated by goagen. The goa design language makes it possible to specify the encodings supported by the API both as input (Consumes) and output (Produces). goagen uses that information to register the corresponding packages with the service encoders and decoders via their Register methods. The service exposes DecodeRequest and EncodeResponse, which implement a simple content type negotiation algorithm for picking the right encoder for the "Content-Type" (decoder) or "Accept" (encoder) request header. Package goa standardizes on structured error responses: a request that fails because of an invalid input or an unexpected condition produces a response that contains a structured error. The error data structures returned to clients contain five fields: an ID, a code, a status, a detail and metadata. The ID is unique for the occurrence of the error; it helps correlate the content of the response with the content of the service logs. The code defines the class of error (e.g. "invalid_parameter_type") and the status gives the corresponding HTTP status (e.g. 400). The detail contains a message specific to the error occurrence. The metadata contains key/value pairs that provide contextual information (name of parameters, value of invalid parameter etc.). Instances of Error can be created via Error Class functions. See http://goa.design/implement/error_handling.html All instances of errors created via an error class implement the ServiceError interface. This interface is leveraged by the error handler middleware to produce the error responses. The code generated by goagen calls the helper functions exposed in this file when it encounters invalid data (wrong type, validation errors etc.) such as InvalidParamTypeError, InvalidAttributeTypeError etc. These methods return errors that get merged with any previously encountered error via the Error Merge method. The helper functions are error classes stored in global variables. This means your code can override their values to produce arbitrary error responses. goa includes an error handler middleware that takes care of mapping any error returned by previously called middleware or action handlers back into HTTP responses. If the error was created via an error class then the corresponding content including the HTTP status is used, otherwise an internal error is returned. Errors that bubble up all the way to the top (i.e. not handled by the error middleware) also generate an internal error response.
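A hedged sketch of the format-validation and error-class helpers mentioned above, assuming the github.com/goadesign/goa import path; the error class name and status code are illustrative, not part of goa:

	package main

	import (
		"fmt"

		"github.com/goadesign/goa"
	)

	// ErrInvalidEmail is an error class; the code and HTTP status are illustrative.
	var ErrInvalidEmail = goa.NewErrorClass("invalid_email", 400)

	func checkEmail(value string) error {
		// ValidateFormat is the same helper invoked by goagen-generated validations.
		if err := goa.ValidateFormat(goa.FormatEmail, value); err != nil {
			// Wrap the validation failure in a structured error response.
			return ErrInvalidEmail("not a valid email address", "value", value)
		}
		return nil
	}

	func main() {
		fmt.Println(checkEmail("not-an-email"))
		fmt.Println(checkEmail("user@example.com"))
	}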
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-gomail/gomail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package workmailmessageflow provides the API client, operations, and parameter types for Amazon WorkMail Message Flow. The WorkMail Message Flow API provides access to email messages as they are being sent and received by a WorkMail organization.
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-mail/mail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called an Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts, are also available. Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values. enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/zond/enmime
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-gomail/gomail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package mailyak provides a simple interface for generating MIME compliant emails, and optionally sending them over SMTP. Both plain-text and HTML email body content is supported, and their types implement io.Writer allowing easy composition directly from templating engines, etc. Attachments are fully supported (attach anything that implements io.Reader). The raw MIME content can be retrieved using MimeBuf(), typically used with an API service such as Amazon SES that does not require using an SMTP interface.
Package entityresolution provides the API client, operations, and parameter types for AWS EntityResolution. Welcome to the Entity Resolution API Reference. Entity Resolution is an Amazon Web Services service that provides pre-configured entity resolution capabilities that enable developers and analysts at advertising and marketing companies to build an accurate and complete view of their consumers. With Entity Resolution, you can match source records containing consumer identifiers, such as name, email address, and phone number. This is true even when these records have incomplete or conflicting identifiers. For example, Entity Resolution can effectively match a source record from a customer relationship management (CRM) system with a source record from a marketing system containing campaign information. To learn more about Entity Resolution concepts, procedures, and best practices, see the Entity Resolution User Guide (https://docs.aws.amazon.com/entityresolution/latest/userguide/what-is-service.html) .
Package Rye is a simple library to support http services. Rye provides a middleware handler which can be used to chain http handlers together while providing simple statsd metrics for use with a monitoring solution such as DataDog or other logging aggregators. Rye also provides some additional middleware handlers that are entirely optional but easily consumed using Rye. In order to use rye, you should vendor it and the statsd client within your project. Begin by importing the required libraries: Create a statsd client (if desired) and create a rye Config in order to pass in optional dependencies: Create a middleware handler. The purpose of the Handler is to hold the Config and to provide an interface for chaining http handlers. Build your http handlers using the Handler type from the **rye** package. Here are some example (custom) handlers: Finally, to set up your handlers in your API: Rye comes with built-in configurable `statsd` statistics that you could record to your favorite monitoring system. To configure that, you'll need to set up a `Statter` based on the `github.com/cactus/go-statsd-client` and set it in your instantiation of `MWHandler` through the `rye.Config`. When a middleware is called, its timing is recorded and a counter is recorded associated directly with the http status code returned during the call. Additionally, an `errors` counter is also sent to the statter which allows you to count any errors that occur with a code of 500 or above. Example: If you have a middleware handler you've created with a method named `loginHandler`, successful calls to that will be recorded to `handlers.loginHandler.2xx`. Additionally you'll receive stats such as `handlers.loginHandler.400` or `handlers.loginHandler.500`. You also will receive an increase in the `errors` count. If you're sending your logs into a system such as DataDog, be aware that your stats from Rye can have prefixes such as `statsd.my-service.my-k8s-cluster.handlers.loginHandler.2xx` or even `statsd.my-service.my-k8s-cluster.errors`. Just keep in mind your stats could end up in the destination sink system with prefixes. With Golang 1.7, a new feature has been added that supports a request-specific context. This is a great feature that Rye supports out-of-the-box. The tricky part of this is how the context is modified on the request. In Golang, the Context is always available on a Request through `http.Request.Context()`. Great! However, if you want to add key/value pairs to the context, you will have to add the context to the request before it gets passed to the next middleware. To support this, the `rye.Response` has a property called `Context`. This property takes a properly created context (pulled from the `request.Context()` function). When you return a `rye.Response` which has `Context`, the **rye** library will craft a new Request and make sure that the next middleware receives that request. Here are the details of creating a middleware with a proper `Context`. You must first pull from the current request `Context`. In the example below, you see `ctx := r.Context()`. That pulls the current context. Then, you create a NEW context with your additional context key/value. Finally, you return `&rye.Response{Context:ctx}`. Now in a later middleware, you can easily retrieve the value you set! For another simple example, look in the JWT middleware - it adds the JWT into the context for use by other middlewares. It uses the `CONTEXT_JWT` key to push the JWT token into the `Context`.
Rye comes with various pre-built middleware handlers. The source (and docs) for the pre-built middlewares can be found in the package dir, following the pattern `middleware_*.go`. To use them, specify the constructor of the middleware as one of the middleware handlers when you define your routes. The JWT Middleware pushes the JWT token onto the Context for use by other middlewares in the chain. This is a convenience that allows any part of your middleware chain quick access to the JWT. Example usage might include a middleware that needs access to your user id or email address stored in the JWT. To access this `Context` variable, the code is very simple:
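A sketch of a custom middleware that stores a value on the request context via `rye.Response{Context: ...}`, as described above, assuming the github.com/InVisionApp/rye import path; the key name and handler logic are illustrative, not part of rye:

	package main

	import (
		"context"
		"log"
		"net/http"

		"github.com/InVisionApp/rye"
	)

	// contextUserKey and the handler logic below are illustrative only.
	const contextUserKey = "request-user"

	// setUserMiddleware stores a value in the request context for later handlers.
	func setUserMiddleware(rw http.ResponseWriter, r *http.Request) *rye.Response {
		ctx := context.WithValue(r.Context(), contextUserKey, "alice")
		// Returning a Response with Context makes rye pass an updated request on.
		return &rye.Response{Context: ctx}
	}

	// helloHandler reads the value set by the previous middleware.
	func helloHandler(rw http.ResponseWriter, r *http.Request) *rye.Response {
		user, _ := r.Context().Value(contextUserKey).(string)
		rw.Write([]byte("hello, " + user))
		return nil
	}

	func main() {
		// No statter configured; statsd reporting is optional.
		mw := rye.NewMWHandler(rye.Config{})

		http.Handle("/hello", mw.Handle([]rye.Handler{
			setUserMiddleware,
			helloHandler,
		}))

		log.Fatal(http.ListenAndServe(":8080", nil))
	}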
Package goa provides the runtime support for goa microservices. goa service development begins with writing the *design* of a service. The design is described using the goa language implemented by the github.com/shogo82148/goa-v1/design/apidsl package. The `goagen` tool consumes the metadata produced from executing the design language to generate service specific code that glues the underlying HTTP server with action specific code and data structures. The goa package contains supporting functionality for the generated code including basic request and response state management through the RequestData and ResponseData structs, error handling via error classes, middleware support via the Middleware data structure as well as decoding and encoding algorithms. The RequestData and ResponseData structs provide access to the request and response state. goa request handlers also accept a context.Context interface as their first parameter so that deadlines and cancelation signals may easily be implemented. The request state exposes the underlying http.Request object as well as the deserialized payload (request body) and parameters (both path and querystring parameters). Generated action specific contexts wrap the context.Context, ResponseData and RequestData data structures. They expose properly typed fields that correspond to the request parameters and body data structure descriptions appearing in the design. The response state exposes the response status and body length as well as the underlying ResponseWriter. Action contexts provide action specific helper methods that write the responses as described in the design, optionally taking an instance of the media type for responses that contain a body. Here is an example showing an "update" action corresponding to the following design (extract): The action signature generated by goagen is: where UpdateBottleContext is: and implements: The definitions of the Bottle and UpdateBottlePayload data structures are omitted for brevity. There is one controller interface generated per resource defined via the design language. The interface exposes the controller actions. User code must provide data structures that implement these interfaces when mounting a controller onto a service. The controller data structure should include an anonymous field of type *goa.Controller which takes care of implementing the middleware handling. A goa middleware is a function that takes and returns a Handler. A Handler is the low-level function which handles incoming HTTP requests. goagen generates the handler code so that each handler creates the action specific context and calls the controller action with it. Middleware can be added to a goa service or a specific controller using the corresponding Use methods. goa comes with a few stock middleware that handle common needs such as logging, panic recovery or using the RequestID header to trace requests across multiple services. The controller action methods generated by goagen such as the Update method of the BottleController interface shown above all return an error value. goa defines an Error struct that action implementations can use to describe the content of the corresponding HTTP response. Errors can be created using error classes which are functions created via NewErrorClass. The ErrorHandler middleware maps errors to HTTP responses. Errors that are instances of the Error struct are mapped using the struct fields while other types of errors return responses with status code 500 and the error message in the body.
The goa design language documented in the dsl package makes it possible to attach validations to data structure definitions. One specific type of validation consists of defining the format that a data structure string field must follow. Examples of formats include email, date-time, hostnames, etc. The ValidateFormat function provides the implementation for the format validation invoked from the code generated by goagen. The goa design language makes it possible to specify the encodings supported by the API both as input (Consumes) and output (Produces). goagen uses that information to register the corresponding packages with the service encoders and decoders via their Register methods. The service exposes DecodeRequest and EncodeResponse, which implement a simple content type negotiation algorithm for picking the right encoder for the "Content-Type" (decoder) or "Accept" (encoder) request header. Package goa standardizes on structured error responses: a request that fails because of an invalid input or an unexpected condition produces a response that contains a structured error. The error data structures returned to clients contain five fields: an ID, a code, a status, a detail and metadata. The ID is unique for the occurrence of the error; it helps correlate the content of the response with the content of the service logs. The code defines the class of error (e.g. "invalid_parameter_type") and the status gives the corresponding HTTP status (e.g. 400). The detail contains a message specific to the error occurrence. The metadata contains key/value pairs that provide contextual information (name of parameters, value of invalid parameter etc.). Instances of Error can be created via Error Class functions. See http://goa.design/implement/error_handling.html All instances of errors created via an error class implement the ServiceError interface. This interface is leveraged by the error handler middleware to produce the error responses. The code generated by goagen calls the helper functions exposed in this file when it encounters invalid data (wrong type, validation errors etc.) such as InvalidParamTypeError, InvalidAttributeTypeError etc. These methods return errors that get merged with any previously encountered error via the Error Merge method. The helper functions are error classes stored in global variables. This means your code can override their values to produce arbitrary error responses. goa includes an error handler middleware that takes care of mapping any error returned by previously called middleware or action handlers back into HTTP responses. If the error was created via an error class then the corresponding content including the HTTP status is used, otherwise an internal error is returned. Errors that bubble up all the way to the top (i.e. not handled by the error middleware) also generate an internal error response.
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on GitHub: https://github.com/go-gomail/gomail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or Postfix.
Package Authaus is an authentication and authorization system. Authaus brings together the following pluggable components: Any of these five components can be swapped out, and in fact the fourth and fifth ones (Role Groups and User Store) are entirely optional. A typical setup is to use LDAP as an Authenticator, and Postgres as a Session, Permit, and Role Groups database. Your session database does not need to be particularly performant, since Authaus maintains an in-process cache of session keys and their associated tokens. Authaus was NOT designed to be a "Facebook Scale" system. The target audience is a system of perhaps 100,000 users. There is nothing fundamentally limiting about the API of Authaus, but the internals certainly have not been built with millions of users in mind. The intended usage model is this: Authaus is intended to be embedded inside your security system, and run as a standalone HTTP service (aka a REST service). This HTTP service CAN be open to the wide world, but it's also completely OK to let it listen only to servers inside your DMZ. Authaus only gives you the skeleton and some examples of HTTP responders. It's up to you to flesh out the details of your authentication HTTP interface, and whether you'd like that to face the world, or whether it should only be accessible via other services that you control. At startup, your services open an HTTP connection to the Authaus service. This connection will typically live for the duration of the service. For every incoming request, you peel off whatever authentication information is associated with that request. This is either a session key, or a username/password combination. Let's call it the authorization information. You then ask Authaus to tell you WHO this authorization information belongs to, as well as WHAT this authorization information allows the requester to do (i.e. Authentication and Authorization). Authaus responds either with a 401 (Unauthorized), 403 (Forbidden), or a 200 (OK) and a JSON object that tells you the identity of the agent submitting this request, as well as the permissions that this agent possesses. It's up to your individual services to decide what to do with that information. It should be very easy to expose Authaus over a protocol other than HTTP, since Authaus is intended to be easy to embed. The HTTP API is merely an illustrative example. A `Session Key` is the long random number that is typically stored as a cookie. A `Permit` is a set of roles that has been granted to a user. Authaus knows nothing about the contents of a permit. It simply treats it as a binary blob, and when writing it to an SQL database, encodes it as base64. The interpretation of the permit is application dependent. Typically, a Permit will hold information such as "Allowed to view billing information", or "Allowed to paint your bathroom yellow". Authaus does have a built-in module called RoleGroupDB, which has its own interpretation of what a Permit is, but you do not need to use this. A `Token` is the result of a successful authentication. It stores the identity of a user, an expiry date, and a Permit. A token will usually be retrieved by a session key. However, you can also perform a once-off authentication, which also yields you a token, which you will typically throw away when you are finished with it. All public methods of the `Central` object are callable from multiple threads. Reader-Writer locks are used in all of the caching systems.
The number of concurrent connections is limited only by the limits of the Go runtime, and the performance limits that are inherent to the simple reader-writer locks used to protect shared state. Authaus must be deployed as a single process (which implies running on a single logical machine). The sole reason why it must run on only one process and not more is the state that lives inside the various Authaus caches. Were it not for these caches, there would be nothing preventing you from running Authaus on as many machines as necessary. The cached state stored inside the Authaus server is: If you wanted to make Authaus runnable across multiple processes, then you would need to implement a cache invalidation system for these caches. Authaus makes no attempt to mitigate DoS attacks. The most sane approach in this domain seems to be this (http://security.stackexchange.com/questions/12101/prevent-denial-of-service-attacks-against-slow-hashing-functions). The password database (created via NewAuthenticationDB_SQL) stores password hashes using the scrypt key derivation system (http://www.tarsnap.com/scrypt.html). Internally, we store our hash in a format that can later be extended, should we wish to double-hash the passwords, etc. The hash is 65 bytes and looks like this: The first byte of the hash is a version number. The remaining 64 bytes are the salt and the hash itself. At present, only one version is supported, which is version 1. It consists of 32 bytes of salt, and 32 bytes of scrypt'ed hash, with scrypt parameters N=256 r=8 p=1. Note that the parameter N=256 is quite low, meaning that it is possible to compute this in approximately 1 millisecond (1,000,000 nanoseconds) on a 2009-era Intel Core i7. This is a deliberate tradeoff. On the same CPU, a SHA256 hash takes about 500 nanoseconds to compute, so we are still making it 2000 times harder to brute force the passwords than an equivalent system storing only a SHA256 salted hash. This discussion is only of relevance in the event that the password table is compromised. No cookie signing mechanism is implemented. Cookies are not presently transmitted with Secure:true. This must change. The LDAP Authenticator is extremely simple, and provides only one function: authenticate a user against an LDAP system (often this means Active Directory, AKA a Windows Domain). It calls the LDAP "Bind" method, and if that succeeds for the given identity/password, then the user is considered authenticated. We take care not to allow an "anonymous bind", which many LDAP servers allow when the password is blank. The Session Database runs on Postgres. It stores a table of sessions, where each row contains the following information: When a permit is altered with Authaus, then all existing sessions have their permits altered transparently. For example, imagine User X is logged in, and his administrator grants him a new permission. User X does not need to log out and log back in again in order for his new permissions to be reflected. His new permissions will be available immediately. Similarly, if a password is changed with Authaus, then all sessions are invalidated. Do take note though, that if a password is changed through an external mechanism (such as with LDAP), then Authaus will have no way of knowing this, and will continue to serve up sessions that were authenticated with the old password. This is a problem that needs addressing.
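A sketch of the 65-byte hash layout described above (one version byte, 32 bytes of salt, 32 bytes of scrypt output with N=256, r=8, p=1), using golang.org/x/crypto/scrypt; the exact on-disk format Authaus writes may differ, this only illustrates the stated parameters:

	package main

	import (
		"crypto/rand"
		"fmt"

		"golang.org/x/crypto/scrypt"
	)

	// hashPassword illustrates the layout described above: 1 version byte,
	// 32 bytes of salt, and 32 bytes of scrypt output (N=256, r=8, p=1).
	// The exact format used by Authaus may differ; this is a sketch.
	func hashPassword(password string) ([]byte, error) {
		salt := make([]byte, 32)
		if _, err := rand.Read(salt); err != nil {
			return nil, err
		}

		key, err := scrypt.Key([]byte(password), salt, 256, 8, 1, 32)
		if err != nil {
			return nil, err
		}

		out := make([]byte, 0, 65)
		out = append(out, 1) // version 1
		out = append(out, salt...)
		out = append(out, key...)
		return out, nil
	}

	func main() {
		h, err := hashPassword("correct horse battery staple")
		if err != nil {
			panic(err)
		}
		fmt.Printf("hash is %d bytes, version %d\n", len(h), h[0])
	}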
You can limit the number of concurrent sessions per user to 1, by setting MaxActiveSessions.ConfigSessionDB to 1. This setting may only be zero or one. Zero, which is the default, means an unlimited number of concurrent sessions per user. Authaus will always place your Session Database behind its own Session Cache. This session cache is a very simple single-process in-memory cache of recent sessions. The limit on the number of entries in this cache is hard-coded, and that should probably change. The Permit database runs on Postgres. It stores a table of permits, which is simply a 1:1 mapping from Identity -> Permit. The Permit is just an array of bytes, which we store base64-encoded inside a text field. This part of the system doesn't care how you interpret that blob. The Role Group Database is an entirely optional component of Authaus. The other components of Authaus (Authenticator, PermitDB, SessionDB) do not understand your Permits. To them, a Permit is simply an arbitrary array of bytes. The Role Group Database is a component that adds a specific meaning to a permit blob. Let's see what that specific meaning looks like... The built-in Role Group Database interprets a permit blob as a string of 32-bit integer IDs: These 32-bit integer IDs refer to "role groups" inside a database table. The "role groups" table might look like this: The Role Group IDs use 32-bit indices, because we assume that you are not going to create more than 2^32 different role groups. The worst case we assume here is that of an automated system that creates 100,000 roles per day. Such a system would run for more than 100 years, given a 32-bit ID. These constraints are extraordinary, suggesting that we do not even need 32 bits, but could get away with just a 16-bit group ID. However, we expect the number of groups to be relatively small. Our aim here, arbitrary though it may be, is to fit the permit and identity into a single Ethernet packet, which one can reasonably peg at 1500 bytes. 1500 / 4 = 375. We assume that no sane human administrator will assign 375 security groups to any individual. We expect the number of groups assigned to any individual to be in the range of 1 to 20. This makes 375 a gigantic buffer. A sketch of how such a permit blob can be encoded appears after the OAuth discussion below. OAuth support in Authaus is limited to a very simple scenario: * You wish to allow your users to log in using an OAuth service - thereby outsourcing the Authentication to that external service, and using it to populate the email address of your users. OAuth support was developed to work with Microsoft Azure Active Directory; however, it should be fairly easy to extend the code to be able to handle other OAuth providers. Inside the database are two tables related to OAuth: oauthchallenge: The challenge table holds OAuth sessions which have been started, and which are expected to either succeed or fail within the next few minutes. The default timeout for a challenge is 5 minutes. A challenge record is usually created the moment the user clicks on the "Sign in with Microsoft" button on your site, and it tracks that authentication attempt. oauthsession: The session table holds OAuth sessions which have successfully authenticated, and also the token that was retrieved by a successful authorization. If a token has expired, then it is refreshed and updated in-place, inside the oauthsession table. An OAuth login follows this sequence of events: 1. User clicks on a "Sign in with X" button on your login page 2. A record is created in the oauthchallenge table, with a unique ID.
This ID is a secret known only to the authaus server and the OAuth server. It is used as the `state` parameter in the OAuth login mechanism. 3. The HTTP call which prompts #2 returns a redirect URL (e.g. via an HTTP 302 response), which redirects the user's browser to the OAuth website, so that the user can either grant or refuse access. If the user refuses, or fails to log in, then the login sequence ends here. 4. Upon successful authorization with the OAuth system, the OAuth website redirects the user back to your website, to a URL such as example.com/auth/oauth/finish, and you'll typically want Authaus to handle this request directly (via HttpHandlerOAuthFinish). Authaus will extract the secrets from the URL, perform any validations necessary, and then move the record from the oauthchallenge table into the oauthsession table. While 'moving' the record over, it will also add any additional information that was provided by the successful authentication, such as the token provided by the OAuth provider. 5. Authaus makes an API call to the OAuth system, to retrieve the email address and name of the person that just logged in, using the token just received. 6. If that email address does not exist inside authuserstore, then create a new user record for this identity. 7. Log the user into Authaus, by creating a record inside authsession, for the relevant identity. Inside the authsession table, store a link to the oauthsession record, so that there is a 1:1 link from the authsession table to the oauthsession table (i.e. Authaus Session to OAuth Token). 8. Return an Authaus session cookie to the browser, thereby completing the login. Although we only use our OAuth token a single time, during login, to retrieve the user's email address and name, we retain the OAuth token, and so we maintain the ability to make other API calls on behalf of that user. This hasn't proven necessary yet, but it seems like a reasonable bit of future-proofing. See the guidelines at the top of all_test.go for testing instructions.
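Returning to the built-in Role Group Database described above: a permit blob is just a string of 32-bit group IDs, stored base64-encoded in the permit table. The sketch below shows one plausible encoding; the byte order (big-endian) and the function names are assumptions made for illustration, not necessarily what RoleGroupDB does internally.

    package permitdemo

    import (
        "encoding/base64"
        "encoding/binary"
        "fmt"
    )

    // encodePermit packs role-group IDs into a blob of 32-bit integers and
    // base64-encodes it, mirroring the storage format described above.
    // Big-endian byte order is an assumption for this sketch.
    func encodePermit(groupIDs []uint32) string {
        blob := make([]byte, 4*len(groupIDs))
        for i, id := range groupIDs {
            binary.BigEndian.PutUint32(blob[i*4:], id)
        }
        return base64.StdEncoding.EncodeToString(blob)
    }

    // decodePermit reverses encodePermit.
    func decodePermit(encoded string) ([]uint32, error) {
        blob, err := base64.StdEncoding.DecodeString(encoded)
        if err != nil {
            return nil, err
        }
        if len(blob)%4 != 0 {
            return nil, fmt.Errorf("permit blob length %d is not a multiple of 4", len(blob))
        }
        ids := make([]uint32, len(blob)/4)
        for i := range ids {
            ids[i] = binary.BigEndian.Uint32(blob[i*4:])
        }
        return ids, nil
    }

At 4 bytes per group ID, even the 1500-byte Ethernet-packet budget mentioned above leaves room for 375 groups per permit.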
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on Github: https://github.com/go-mail/mail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or postfix.
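A minimal sketch of composing and sending a message with this package, adapted from the project's examples; the addresses, SMTP host and credentials are placeholders.

    package main

    import (
        "log"

        "github.com/go-mail/mail"
    )

    func main() {
        m := mail.NewMessage()
        m.SetHeader("From", "alex@example.com") // placeholder addresses
        m.SetHeader("To", "bob@example.com", "cora@example.com")
        m.SetHeader("Subject", "Hello!")
        m.SetBody("text/html", "Hello <b>Bob</b> and <i>Cora</i>!")

        // Placeholder SMTP server and credentials.
        d := mail.NewDialer("smtp.example.com", 587, "user", "123456")
        if err := d.DialAndSend(m); err != nil {
            log.Fatal(err)
        }
    }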
Package mailyak provides a simple interface for generating MIME compliant emails, and optionally sending them over SMTP. Both plain-text and HTML email body content is supported, and their types implement io.Writer allowing easy composition directly from templating engines, etc. Attachments are fully supported including inline attachments, with anything that implements io.Reader suitable as a source (like files on disk, in-memory buffers, etc). The raw MIME content can be retrieved using MimeBuf(), typically used with an API service such as Amazon SES that does not require using an SMTP interface. MailYak supports both plain-text SMTP (which is automatically upgraded to a secure connection with STARTTLS if supported by the SMTP server) and explicit TLS connections.
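For illustration, a minimal sketch of composing and sending a message, closely following the project's README; the import path, server address, credentials and addresses are placeholders to adapt to your environment.

    package main

    import (
        "log"
        "net/smtp"

        "github.com/domodwyer/mailyak/v3" // import path assumed; adjust for your setup
    )

    func main() {
        // Placeholder SMTP server and credentials.
        mail := mailyak.New("mail.example.com:25", smtp.PlainAuth("", "user", "pass", "mail.example.com"))

        mail.To("to@example.com")
        mail.From("from@example.com")
        mail.FromName("Bananas for Friends")
        mail.Subject("Business proposition")

        // Plain() and HTML() return io.Writers, so a template can execute
        // straight into the body; Set() is the simple string form.
        mail.Plain().Set("Get a real email client")
        mail.HTML().Set("<h1>Get a real email client</h1>")

        if err := mail.Send(); err != nil {
            log.Fatal(err)
        }
    }

When targeting an API such as Amazon SES instead of SMTP, MimeBuf() returns the raw MIME content to hand to the API client in place of calling Send().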
Package reply is essentially a source code conversion of the Ruby library https://github.com/discourse/email_reply_trimmer. The core logic is an almost line-by-line conversion. This package has a dependency on the excellent regex library github.com/dlclark/regexp2. The reason for not using the standard regex library is that the regexp package from the stdlib is not compatible with the one in the Ruby stdlib. All the tests were taken from the email_reply_trimmer library. Note: This code is not idiomatic Go, as it was mostly adapted from the Ruby code; however, the public API was kept as simple as possible and does not expose any internals.
Package goa provides the runtime support for goa microservices. goa service development begins with writing the *design* of a service. The design is described using the goa language implemented by the github.com/goadesign/goa/design/apidsl package. The `goagen` tool consumes the metadata produced from executing the design language to generate service specific code that glues the underlying HTTP server with action specific code and data structures. The goa package contains supporting functionality for the generated code including basic request and response state management through the RequestData and ResponseData structs, error handling via error classes, middleware support via the Middleware data structure as well as decoding and encoding algorithms. The RequestData and ResponseData structs provide access to the request and response state. goa request handlers also accept a context.Context interface as their first parameter so that deadlines and cancelation signals may easily be implemented. The request state exposes the underlying http.Request object as well as the deserialized payload (request body) and parameters (both path and querystring parameters). Generated action specific contexts wrap the context.Context, ResponseData and RequestData data structures. They expose properly typed fields that correspond to the request parameters and body data structure descriptions appearing in the design. The response state exposes the response status and body length as well as the underlying ResponseWriter. Action contexts provide action specific helper methods that write the responses as described in the design, optionally taking an instance of the media type for responses that contain a body. Here is an example showing an "update" action corresponding to the following design (extract): The action signature generated by goagen is: where UpdateBottleContext is: and implements: The definitions of the Bottle and UpdateBottlePayload data structures are omitted for brevity. There is one controller interface generated per resource defined via the design language. The interface exposes the controller actions. User code must provide data structures that implement these interfaces when mounting a controller onto a service. The controller data structure should include an anonymous field of type *goa.Controller which takes care of implementing the middleware handling. A goa middleware is a function that takes and returns a Handler. A Handler is the low-level function which handles incoming HTTP requests. goagen generates the handler code so each handler creates the action specific context and calls the controller action with it. Middleware can be added to a goa service or a specific controller using the corresponding Use methods. goa comes with a few stock middleware that handle common needs such as logging, panic recovery or using the RequestID header to trace requests across multiple services. The controller action methods generated by goagen such as the Update method of the BottleController interface shown above all return an error value. goa defines an Error struct that action implementations can use to describe the content of the corresponding HTTP response. Errors can be created using error classes which are functions created via NewErrorClass. The ErrorHandler middleware maps errors to HTTP responses. Errors that are instances of the Error struct are mapped using the struct fields while other types of errors return responses with status code 500 and the error message in the body.
The goa design language documented in the dsl package makes it possible to attach validations to data structure definitions. One specific type of validation consists of defining the format that a data structure string field must follow. Examples of formats include email, date-time, hostnames, etc. The ValidateFormat function provides the implementation for the format validation invoked from the code generated by goagen. The goa design language makes it possible to specify the encodings supported by the API both as input (Consumes) and output (Produces). goagen uses that information to register the corresponding packages with the service encoders and decoders via their Register methods. The service exposes the DecodeRequest and EncodeResponse functions that implement a simple content type negotiation algorithm for picking the right encoder for the "Content-Type" (decoder) or "Accept" (encoder) request header. Package goa standardizes on structured error responses: a request that fails because of an invalid input or an unexpected condition produces a response that contains a structured error. The error data structures returned to clients contain five fields: an ID, a code, a status, a detail and metadata. The ID is unique for the occurrence of the error; it helps correlate the content of the response with the content of the service logs. The code defines the class of error (e.g. "invalid_parameter_type") and the status the corresponding HTTP status (e.g. 400). The detail contains a message specific to the error occurrence. The metadata contains key/value pairs that provide contextual information (name of parameters, value of invalid parameter etc.). Instances of Error can be created via Error Class functions. See http://goa.design/implement/error_handling.html All instances of errors created via an error class implement the ServiceError interface. This interface is leveraged by the error handler middleware to produce the error responses. The code generated by goagen calls the helper functions exposed in this file when it encounters invalid data (wrong type, validation errors etc.) such as InvalidParamTypeError, InvalidAttributeTypeError etc. These methods return errors that get merged with any previously encountered error via the Error Merge method. The helper functions are error classes stored in global variables. This means your code can override their values to produce arbitrary error responses. goa includes an error handler middleware that takes care of mapping back any error returned by previously called middleware or action handler into HTTP responses. If the error was created via an error class then the corresponding content including the HTTP status is used; otherwise an internal error is returned. Errors that bubble up all the way to the top (i.e. not handled by the error middleware) also generate an internal error response.
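As an illustration of error classes, the sketch below defines one and uses it to build a structured error carrying metadata. It assumes goa v1's NewErrorClass(code string, status int) signature and an ErrorClass invoked as a function with a message plus key/value pairs; check these assumptions against the goa version you generate against.

    package example

    import "github.com/goadesign/goa"

    // ErrQuota is a custom error class; errors created with it carry the
    // "quota_exceeded" code and map to HTTP 403 when handled by the error
    // handler middleware. (Signature assumed from goa v1; verify locally.)
    var ErrQuota = goa.NewErrorClass("quota_exceeded", 403)

    // checkQuota returns a structured error whose metadata records the
    // offending values as key/value pairs.
    func checkQuota(used, limit int) error {
        if used > limit {
            return ErrQuota("bottle quota exceeded", "used", used, "limit", limit)
        }
        return nil
    }

Because the stock helper error classes live in global variables, the same pattern can be used to override them and shape the generated error responses.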
Package gomail provides a simple interface to compose emails and to mail them efficiently. More info on Github: https://github.com/go-gomail/gomail A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or postfix.
Package mta provides a simple interface to compose emails and to mail them efficiently. More info on Github: https://github.com/rovergulf/mta A daemon that listens to a channel and sends all incoming messages. Efficiently send a customized newsletter to a list of recipients. Send an email using a local SMTP server. Send an email using an API or postfix.
Package emailsupport contains some more esoteric routines useful for parsing and handling email. For instance, some baroque regular expressions. No APIs are stable. Make sure you handle your dependencies accordingly. The package creates a number of exported regular expression objects, initialised at init time, pulling up the creation cost to program start. Per the documentation for the `regexp` package, “A Regexp is safe for concurrent use by multiple goroutines.” Each regular expression comes in two forms, `Foo` and `FooUnanchored`. The short name is anchored to the start and end of the string, so that by default if you match a variable against `EmailAddress`, you are confirming that the contents of the variable are an email address, not accidentally only matching that there's something address-like somewhere within. The longer name (I need something briefer which is still clear) provides an unanchored pattern which can be used to find a match elsewhere within a larger string. Each regular expression is available as a pattern string in a form with a `Txt` prefix, which can be used to build larger regular expressions. The pattern is wrapped with `(?:...)` to be a non-capturing group which can be qualified or otherwise treated as a single unit. No regular expression has any capturing groups, letting the caller manage capturing indices. Thus for any `Foo`, this package provides: The list includes: `EmailAddress`: an RFC5321 email address, the part within angle-brackets or as is used in SMTP. This is _not_ an RFC5322 message header email address, with display forms and comments. `EmailDomain`: a domain which can be used in email addresses; this is the base specification form and does not handle internationalisation, though this regexp should be correct to apply against punycode-encoded domains. This does handle embedded IPv4 and IPv6 address literals, but not the General-address-literal which is a grammar hook for future extension (because that inherently can't be handled until defined). `EmailLHS`: the Left-Hand-Side (or "local part") of an email address; this handles unquoted and quoted forms. `EmailAddressOrUnqualified`: either an address or a LHS, this is a form often used in mail configuration files where a domain is implicit. `IPv4Address`, `IPv6Address`: an IPv4 or IPv6 address. `IPv4Netblock`, `IPv6Netblock`, `IPNetblock`: a netblock in CIDR prefix/len notation (used for source ACLs). `IPv4Octet`: a number 0 to 255. The IPv6 address regexp is taken from RFC3986 (the one which gets it right) and is a careful copy/paste and edit of a version which has been used and gradually debugged for years, including in a tool I released called `emit_ipv6_regexp`. The patterns used in `EmailLHS` (and thus also in items which include an email left-hand-side) can be in one of two forms, and selecting between them is a compile-time decision. The rules can be either those from RFC2822 or those from RFC5321. By default, those from RFC5321 are used. Build with an `rfc2822` build tag to get the older definitions. If a future RFC changes the rules again, then the default patterns in this package may change; the build tag `rfc5321` is currently unused, but is reserved for the future to force selecting the rules which are now current. It is safe (harmless) to supply that build tag now (but not together with `rfc2822`).
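A short usage sketch of the anchored and unanchored forms; the import path is an assumption for illustration and should be adjusted to wherever this package lives in your module graph.

    package main

    import (
        "fmt"

        "github.com/philpennock/emailsupport" // import path assumed for illustration
    )

    func main() {
        // Anchored form: the whole string must be an address.
        fmt.Println(emailsupport.EmailAddress.MatchString("alice@example.org"))           // true
        fmt.Println(emailsupport.EmailAddress.MatchString("say hi to alice@example.org")) // false

        // Unanchored form: find an address embedded in a larger string.
        fmt.Println(emailsupport.EmailAddressUnanchored.FindString("contact: alice@example.org, thanks"))
    }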
Package trivial_tickets is a basic implementation of a support ticket system in Go. Customers of a company can easily create tickets with the form on the home page and display the created tickets via a static permalink. The author of a ticket is informed about every action the ticket experiences by an email written to his email address. Registered users are the assignees of the system and work on the customers' tickets. They can log in and out of the system and assign a ticket to themselves or to other users. An assignee can only edit information on those tickets that he is assigned to. He can release his tickets, add comments to them or change the status of the ticket. It is also possible to merge two tickets into one, but only if the customer and the assignee of both tickets match. Additionally, an assignee may indicate that he is on holiday. In this case, tickets cannot be assigned to him. The ticket system offers an E-Mail Recipience and Dispatch API for a mailing service to interact with the system. A request with the properties `from` (the sender's e-mail address, i.e. the customer), `subject` (the ticket's subject) and `message` (the actual message) set can be delivered to the server in order to create new tickets by mail. If the subject contains the ticket id of an already existing ticket in a special markup, the e-mail creates a new answer to this ticket instead of a new ticket. Likewise, the mailing service can fetch e-mails created by the server on special events (such as creation of a new ticket or update of a ticket). The server then sends back all e-mails remaining to be sent to the customer. After the service has sent the received e-mails, it can confirm the successful sending of the concerned e-mails by sending a verification request to the server. This is done for each e-mail by attaching the id of the mail in question to the request. If the server can ensure that the respective e-mail exists, the mail can be safely deleted and the verification result is returned to the service. More information on the Mail API and its functionality can be found in the Mail API Reference on https://mortenterhart.github.io/trivial-tickets/wiki/Mail-API-Reference. The repository contains a command-line tool which serves as a simple example for an external mailing service. Although it cannot send or receive genuine e-mails, it can communicate with the mail API of the server. The user can write really simple e-mails with e-mail address, subject and message to be sent to the server. Alternatively, he may retrieve all server-side created e-mails, which are then printed to the console. All fetched e-mails are verified against the server, which can then safely delete the mails. Consult the User Manual for more information (currently only available in German) or alternatively check out our Wiki Pages on https://mortenterhart.github.io/trivial-tickets/wiki. The User Manual is available at https://mortenterhart.github.io/trivial-tickets/docs/Ticketsystem_User_Manual_DE.pdf.
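As a rough sketch of creating a ticket through the Mail API, the snippet below posts a JSON body carrying the three documented properties `from`, `subject` and `message`. The endpoint URL, port and scheme are placeholders; the actual route and server settings are specified in the Mail API Reference linked above.

    package main

    import (
        "bytes"
        "encoding/json"
        "log"
        "net/http"
    )

    func main() {
        // The field names come from the documentation above; everything else
        // here is illustrative.
        payload, err := json.Marshal(map[string]string{
            "from":    "customer@example.com",
            "subject": "Printer is on fire",
            "message": "Please advise as soon as possible.",
        })
        if err != nil {
            log.Fatal(err)
        }

        // Placeholder endpoint: consult the Mail API Reference for the real route.
        resp, err := http.Post("https://localhost:8443/api/receive_mail", "application/json", bytes.NewReader(payload))
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("server responded with", resp.Status)
    }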
Package mailyak provides a simple interface for generating MIME compliant emails, and optionally sending them over SMTP. Both plain-text and HTML email body content is supported, and their types implement io.Writer allowing easy composition directly from templating engines, etc. Attachments are fully supported (attach anything that implements io.Reader). The raw MIME content can be retrieved using MimeBuf(), typically used with an API service such as Amazon SES that does not require using an SMTP interface.
Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called an Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts are also available. Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values. enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/jhillyerd/enmime
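A minimal sketch of the Envelope layer described above: parse a raw message from disk, read decoded headers and bodies, and inspect any soft parse errors. The file name is a placeholder.

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/jhillyerd/enmime"
    )

    func main() {
        f, err := os.Open("message.eml") // placeholder: any raw MIME message
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        env, err := enmime.ReadEnvelope(f)
        if err != nil {
            log.Fatal(err) // hard error: the message could not be parsed at all
        }

        fmt.Println("Subject:", env.GetHeader("Subject")) // RFC 2047 decoded
        fmt.Println("Text body:", env.Text)               // plain text (or down-converted HTML)
        fmt.Println("Attachments:", len(env.Attachments))

        // Soft parse problems are collected rather than aborting the parse.
        for _, e := range env.Errors {
            fmt.Printf("parse issue: %v\n", e)
        }
    }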