Package pulsedav provides a WebDAV server with an S3-compatible storage backend. The server supports basic WebDAV operations with a focus on file uploads to S3: file uploads (PUT requests) are stored in the S3 bucket under the path "incoming/{userID}/{filename}"; other WebDAV operations are not supported.

Authentication: the server uses HTTP Basic Authentication and validates credentials against an external authentication API. The API must accept POST requests with a username and password and return a user ID, which is then used in the S3 path structure.
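The path scheme above is small enough to sketch. Below is a minimal, hypothetical illustration of the described behavior using only the standard library; `store`, `uploadKey`, and `handlePut` are invented names, not pulsedav's actual API:

```golang
// Hypothetical sketch: the authenticated user's ID plus the requested
// filename form the S3 object key "incoming/{userID}/{filename}".
package sketch

import (
	"fmt"
	"io"
	"net/http"
	"path"
)

// store abstracts the S3 client; any S3-compatible backend would do.
type store interface {
	Put(key string, body io.Reader) error
}

// uploadKey builds the "incoming/{userID}/{filename}" object key.
func uploadKey(userID, filename string) string {
	return path.Join("incoming", userID, path.Base(filename))
}

// handlePut stores the PUT request body under the user's incoming/ prefix.
func handlePut(s store, userID string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPut {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		if err := s.Put(uploadKey(userID, r.URL.Path), r.Body); err != nil {
			http.Error(w, fmt.Sprintf("upload failed: %v", err), http.StatusBadGateway)
			return
		}
		w.WriteHeader(http.StatusCreated)
	}
}
```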
Package couchdb is a driver for connecting with a CouchDB server over HTTP. Use the `couch` driver name when using this driver. The DSN should be a full URL, likely with login credentials:

The CouchDB driver generally interprets kivik.Options keys and values as URL query parameters. Values of the following types will be converted to their appropriate string representation when URL-encoded: Passing any other type will return an error. The only exceptions to the above rule are:

The CouchDB driver supports a number of authentication methods. For most uses, you don't need to worry about authentication at all--just include authentication credentials in your connection DSN: This will use Cookie authentication by default. To use one of the explicit authentication mechanisms, you'll need to use kivik's Authenticate method. For example:

Normally, to include an attachment in a CouchDB document, it must be base-64 encoded, which leads to increased network traffic and higher CPU load. CouchDB also supports the option to upload multiple attachments in a single request using the 'multipart/related' content type. See http://docs.couchdb.org/en/stable/api/document/common.html#creating-multiple-attachments

As an experimental feature, this is now supported by the Kivik CouchDB driver as well. To take advantage of this capability, the `doc` argument to the Put() method must be either:

With this in place, the CouchDB driver will switch to `multipart/related` mode, sending each attachment in binary format, rather than base-64 encoding it. To function properly, each attachment must have an accurate Size value. If the Size value is unset, the entire attachment may be read to determine its size before it is sent over the network, leading to delays and unnecessary I/O and CPU usage. The simplest way to ensure efficiency is to use the NewAttachment() method provided by this package. See the documentation on that method for proper usage. Example:

To disable the `multipart/related` capabilities entirely, you may pass the `NoMultipartPut` option, with any value. This will fall back to the default of inline base-64 encoding the attachments. Example:

If you find yourself wanting to disable this feature, due to bugs or performance, please consider filing a bug report against Kivik as well, so we can look for a solution that will allow using this optimization.
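A hedged sketch of the connection and `NoMultipartPut` flow described above, assuming the kivik v3 module paths and signatures; check your kivik version's documentation for the exact API:

```golang
package main

import (
	"context"
	"fmt"

	_ "github.com/go-kivik/couchdb/v3" // registers the "couch" driver
	kivik "github.com/go-kivik/kivik/v3"
)

func main() {
	ctx := context.Background()

	// Credentials in the DSN select Cookie authentication by default.
	client, err := kivik.New("couch", "http://admin:secret@localhost:5984/")
	if err != nil {
		panic(err)
	}

	db := client.DB(ctx, "mydb")
	doc := map[string]interface{}{"_id": "doc1", "title": "example"}

	// Disable the experimental multipart/related upload path, falling
	// back to inline base-64 attachments (option name per the text above).
	rev, err := db.Put(ctx, "doc1", doc, kivik.Options{"NoMultipartPut": true})
	if err != nil {
		panic(err)
	}
	fmt.Println("stored revision:", rev)
}
```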
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details.

The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented.

The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide custom implementations of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be removed in a future release. In particular we have deprecated the following fields in _samlsp.Options_:

- `Logger` - This was used to emit errors while validating, which is an anti-pattern.
- `IDPMetadataURL` - Instead use `FetchMetadata()`
- `HTTPClient` - Instead pass httpClient to FetchMetadata
- `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import (
	"fmt"
	"net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}

func main() {
	app := http.HandlerFunc(hello)
	http.Handle("/hello", app)
	http.ListenAndServe(":8000", nil)
}
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this:

We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
```golang
// Reconstructed from context: the protected app wrapped with samlsp
// middleware, assuming the samlsp v0.4 API (samlsp.New, FetchMetadata,
// RequireAccount). Certificate file names are placeholders.
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}

func main() {
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	idpMetadataURL, err := url.Parse("https://samltest.id/saml/idp")
	if err != nil {
		panic(err)
	}
	idpMetadata, err := samlsp.FetchMetadata(context.Background(),
		http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, err := url.Parse("http://localhost:8000")
	if err != nil {
		panic(err)
	}

	samlSP, err := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})
	if err != nil {
		panic(err)
	}

	app := http.HandlerFunc(hello)
	// RequireAccount redirects unauthenticated users through the IDP.
	http.Handle("/hello", samlSP.RequireAccount(app))
	// The middleware itself serves the SAML endpoints (metadata, ACS).
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: fetch your service provider's metadata (the middleware serves it), then navigate to https://samltest.id/upload.php and upload the file you fetched.

Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO`
1. samltest.id prompts you for a username and password.
1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled.
1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex, with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign-on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org).

This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests.

The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers.

Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
This app is intended to be a Go port of defunkt's gist library, which is written in Ruby. Currently, uploading single and multiple files is supported. You can also create secret gists, and both anonymous and user gists. Author: Viyat Bhalodia
Package bind converts between form encoding and Go values. It comes with binders for all values, time.Time, arbitrary structs, and slices. In particular, binding functions are provided for the following types: Callers may also hook into the process and provide a custom binding function. This example binds data from embedded URL arguments, the query string, and a posted form. Booleans are converted to Go by comparing against the following strings: The "on" / "" syntax is supported as the default behavior for HTML checkboxes. The SQL standard time formats [“2006-01-02”, “2006-01-02 15:04”] are recognized by the default datetime binder. More may be added by the application to the TimeFormats variable, like this: File uploads may be bound to any of the following types: This is a wrapper around the upload handling provided by Go’s multipart package. The bytes stay in memory unless they exceed a threshold (10MB by default), in which case they are written to a temp file. Note: Binding a file upload to os.File requires it to be written to a temp file (if it wasn’t already), making it less efficient than the other types. Both indexed and unindexed slices are supported. These two forms are bound as unordered slices: This is bound as an ordered slice: The two forms may be combined, with unindexed elements filling any gaps between indexed elements. Note that if the slice element is a struct, it must use the indexed notation. Structs are bound using a dot notation. For example: Struct fields must be exported to be bound. Additionally, all params may be bound as members of a struct, rather than extracting a single field.
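The following stdlib-only sketch illustrates the key conventions described above (dot notation for structs, indexed and unindexed slices, checkbox booleans). The `User` target and form keys are hypothetical; the binding call itself is this package's job and is omitted here:

```golang
package main

import (
	"fmt"
	"net/url"
)

// User is a hypothetical bind target; fields must be exported to be bound.
type User struct {
	Name   string
	Admin  bool
	Scores []int
}

func main() {
	form := url.Values{
		"user.Name":      {"Alice"},     // struct field via dot notation
		"user.Admin":     {"on"},        // HTML-checkbox style boolean
		"user.Scores[0]": {"10"},        // indexed slice element (ordered)
		"user.Scores[2]": {"30"},        // unindexed values may fill the gap
		"tags":           {"go", "web"}, // unindexed slice (unordered)
	}
	fmt.Println(form.Encode())

	// Registering an extra datetime layout, per the TimeFormats variable
	// named above, would look something like:
	//   bind.TimeFormats = append(bind.TimeFormats, "01/02/2006")
}
```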
Package builder exports nothing; its functionality is implemented via its tests.

Scratching my own itch: manually updating several boxes/virtual machines for different platforms/architectures and invoking the tests there is boring and time consuming, even for a single project. Builder makes it possible to achieve this for multiple projects at the cost of setting up every machine only once. Builders can then update themselves as well as their task lists and run the builds/tests automatically, if commanded to do so by cron(8) or some other mechanism. No new idea here, just a tiny implementation doing only what I need.

If the builder updates its results it will attempt to synchronize with upstream. This will fail if you don't have the appropriate access rights for the builder repository. The merged results from all builders will appear in the file 'results' in the repository root. The above can be invoked automatically by cron; an example script is in cron.sh: To initialize a builder, run:

Invoking `$ go test` attempts to ensure only a single instance is running. Builder executes the tasks sequentially. To add/remove/edit a task for a builder, adjust the tasks variable literal in builder_test.go accordingly.

Nothing special, except the builder will detect when it's checked out on a branch other than 'master'. In such a case it will not attempt to upload any results.

To create a builder for a domain other than modernc.org, fork this repository and replace all hard-coded modernc.org instances with your own domain. Or consider contributing a parameterization mechanism. It's not there as I don't need it (yet) and I'm lazy, sorry.
Package uploadhandler implements a simple handler for HTTP file uploads. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version. This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. You should have received a copy of the GNU General Public License along with this program. If not, see the [GNU General Public License](http://www.gnu.org/licenses/gpl.html) for details.
Package tusd provides ways to accept tus 1.0 calls using HTTP. tus is a protocol based on HTTP for resumable file uploads. Resumable means that an upload can be interrupted at any moment and can be resumed without re-uploading the previous data again. An interruption may happen willingly, if the user wants to pause, or by accident in case of a network issue or server outage (http://tus.io).

tusd was designed in a way that allows flexible and customizable usage. We wanted to avoid binding this package to a specific storage system – particularly a proprietary third-party software. Therefore tusd is an abstract layer whose only job is to accept incoming HTTP requests, validate them according to the specification, and finally pass them to the data store.

The data store is another important component in tusd's architecture, whose purpose is to do the actual file handling. It has to write the incoming upload to a persistent storage system and retrieve information about an upload's current state. Therefore it is the only part of the system which communicates directly with the underlying storage system, whether that be the local disk, a remote FTP server or a cloud provider such as AWS S3.

The only hard requirements for a data store can be found in the DataStore interface. It contains methods for creating uploads (NewUpload), writing to them (WriteChunk) and retrieving their status (GetInfo). However, there are many more features which are not mandatory but may still be used. These are contained in their own interfaces which all share the *DataStore suffix. For example, GetReaderDataStore, which enables downloading uploads, or TerminaterDataStore, which allows uploads to be terminated.

The store composer offers a way to combine the basic data store implementation - the core - and these additional extensions: The corresponding methods for adding an extension to the composer are prefixed with Use* followed by the name of the corresponding interface. However, most data stores provide multiple extensions, and adding all of them manually can be tedious and error-prone. Therefore, all data stores distributed with tusd provide a UseIn() method which does this job automatically. For example, this is the S3 store in action (see S3Store.UseIn):

Finally, once you are done composing your data store, you can pass it inside the Config struct in order to create a new tusd HTTP handler: This handler can then be mounted at a specific path, e.g. /files:
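Putting the pieces together, here is a minimal sketch of composing a store and mounting the handler. The filestore core and its exact construction are assumptions based on the stores distributed with tusd; consult the package documentation for your version:

```golang
package main

import (
	"net/http"

	"github.com/tus/tusd"
	"github.com/tus/tusd/filestore"
)

func main() {
	// The core data store: here, uploads are written to the local disk.
	store := filestore.FileStore{Path: "./uploads"}

	// UseIn registers the store and every extension it implements.
	composer := tusd.NewStoreComposer()
	store.UseIn(composer)

	handler, err := tusd.NewHandler(tusd.Config{
		BasePath:      "/files/",
		StoreComposer: composer,
	})
	if err != nil {
		panic(err)
	}

	// Mount the handler at a specific path, e.g. /files.
	http.Handle("/files/", http.StripPrefix("/files/", handler))
	http.ListenAndServe(":8080", nil)
}
```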
Package go_fsspec provides a unified interface for interacting with different cloud storage systems such as AWS S3, Google Cloud Storage, and Azure Blob Storage. go_fsspec offers a simple and powerful way to manage files and directories across multiple cloud providers without having to deal with provider-specific APIs or implementations. go_fsspec contains the following key functionalities: The core interface `CloudStorage` provides a set of common methods for file and directory operations, allowing users to interact with any supported cloud provider using the same interface. Supported operations include creating directories, listing files, downloading and uploading files, and deleting files and directories. Additional methods for downloading/uploading entire directories are also available for bulk operations. Each cloud provider (AWS S3, Google Cloud Storage, Azure Blob Storage) has its own implementation of the `CloudStorage` interface, allowing developers to switch providers with minimal code changes. Future versions of go_fsspec will include support for more cloud providers, advanced file versioning, and cross-cloud transfer capabilities. Contributions and feedback are welcome! Feel free to check out the repository and get involved.
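As a rough illustration, the `CloudStorage` interface described above might look like the following sketch; method names and signatures here are illustrative, not the package's exact API:

```golang
package gofsspec

import "context"

// CloudStorage is a hypothetical rendering of the common interface
// described above, shared by the S3, GCS, and Azure implementations.
type CloudStorage interface {
	CreateDirectory(ctx context.Context, path string) error
	ListFiles(ctx context.Context, dir string) ([]string, error)
	UploadFile(ctx context.Context, localPath, remotePath string) error
	DownloadFile(ctx context.Context, remotePath, localPath string) error
	DeleteFile(ctx context.Context, path string) error
	DeleteDirectory(ctx context.Context, dir string) error

	// Bulk operations over entire directories.
	UploadDirectory(ctx context.Context, localDir, remoteDir string) error
	DownloadDirectory(ctx context.Context, remoteDir, localDir string) error
}
```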
This is a small HTTP server implementing the "photobackup server" endpoint documented here: https://github.com/PhotoBackup/api/

Written because the existing servers make me a touch sad; Go means we can avoid a pile of runtime dependencies. Build-time dependencies are being kept low; bcrypt, homedir and graceful are the only luxuries. Adding gorilla mux and, perhaps, negroni would probably be overkill.

We're trying to be compatible, so the config file is INI-format: ~/.photobackup In addition to these keys, I'm also supporting:

The original server was intended to run over HTTP, I think, hence the client sending a SHA512'd password. We support this scheme, but the on-disc storage format is really better off being bcrypt(sha512(password)), so I've added that. Adding BindAddress and HTTPPrefix means that mounting this behind an HTTP reverse proxy is quite doable, and lets us offload HTTPS to that as well. That's how I'm intending to use it.

I think the original servers are designed so you can connect to them using just HTTP; hence the sha512(password) scheme. This is short-sighted; the only thing it gets you is (weak) protection against sniffing if you happen to use the same password elsewhere. Sniffers in this scenario can still upload to your server and view your photos. At some point in the future I might add direct HTTPS support as well, but I don't need it.

@author Nick Thomas <photobackup@ur.gs>
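A small sketch of the on-disc scheme described above, bcrypt(sha512(password)), using golang.org/x/crypto/bcrypt; function names are illustrative, not this server's actual code:

```golang
package main

import (
	"crypto/sha512"
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

// clientDigest mimics what the client sends: sha512(password). The raw
// 64-byte digest stays within bcrypt's 72-byte input limit.
func clientDigest(password string) []byte {
	sum := sha512.Sum512([]byte(password))
	return sum[:]
}

func main() {
	digest := clientDigest("hunter2")

	// On-disc storage format: bcrypt over the SHA512 digest.
	stored, err := bcrypt.GenerateFromPassword(digest, bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}

	// Verifying a request: compare the stored hash with the sent digest.
	if err := bcrypt.CompareHashAndPassword(stored, digest); err != nil {
		fmt.Println("rejected")
		return
	}
	fmt.Println("accepted")
}
```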
Package upload contains an HTTP handler that provides facilities for uploading files. Use build tags for HTTP server implementations other than Go's own, like this: Those tags start with the first version, followed by all major.minor releases up to the current version. Please see how Go does it: https://golang.org/pkg/go/build/#hdr-Build_Constraints Absent any meaningful build tags, the http.Handler implementation is used (see the following example).
shortinette is the core framework for managing and automating the process of grading coding bootcamps (Shorts). It provides a comprehensive set of tools for running and testing student submissions across various programming languages. The shortinette package is composed of several sub-packages, each responsible for a specific aspect of the grading pipeline: `logger`: Handles logging for the framework, including general informational messages, error reporting, and trace logging for feedback on individual submissions. This package ensures that all important events and errors are captured for debugging and auditing purposes. `requirements`: Validates the necessary environment variables and dependencies required by the framework. This includes checking for essential configuration values in a `.env` file and ensuring that all necessary tools (e.g., Docker images) are available before grading begins. `testutils`: Provides utility functions for compiling and running code submissions. This includes functions for compiling Rust code, running executables with various options (such as timeouts and real-time output), and manipulating files. The utility functions are designed to handle the intricacies of running untrusted student code safely and efficiently. `git`: Manages interactions with GitHub, including cloning repositories, managing collaborators, and uploading files. This package abstracts the GitHub API to simplify common tasks such as adding collaborators to repositories, creating branches, and pushing code or data to specific locations in a repository. `exercise`: Defines the structure and behavior of individual coding exercises. This includes specifying the files that students are allowed to submit, the expected output, and the functions to be tested. The `exercise` package provides the framework for setting up exercises, running tests, and reporting results. `module`: Organizes exercises into modules, allowing for the grouping of related exercises into a coherent curriculum. The `module` package handles the execution of all exercises within a module, aggregates results, and manages the overall grading process. `webhook`: Enables automatic grading triggered by GitHub webhooks. This allows for a fully automated workflow where student submissions are graded as soon as they are pushed to a specific branch in a GitHub repository. `short`: The central orchestrator of the grading process, integrating all sub-packages into a cohesive system. The `short` package handles the setup and teardown of grading environments, manages the execution of modules and exercises, and ensures that all results are properly recorded and reported.
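As a flavor of what `testutils` describes, here is a generic stdlib sketch of running an untrusted executable with a timeout and real-time output; it is not shortinette's actual implementation, and the binary path is a placeholder:

```golang
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// runWithTimeout executes a binary, killing it if the timeout elapses,
// while streaming its output in real time.
func runWithTimeout(bin string, timeout time.Duration, args ...string) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	cmd := exec.CommandContext(ctx, bin, args...)
	cmd.Stdout = os.Stdout // real-time output
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := runWithTimeout("./student-submission", 10*time.Second); err != nil {
		fmt.Println("run failed:", err)
	}
}
```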
Package httpfstream provides HTTP handlers for simultaneous streaming uploads and downloads of files.
Package s3pp creates POST policies for uploading files directly to Amazon S3. See: http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html
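For orientation, here is a hedged sketch of what constructing and signing such a POST policy involves under SigV4, per the AWS documentation linked above. This is not s3pp's API; the bucket, credential scope, and expiration values are placeholders:

```golang
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// hmacSHA256 is one step of the SigV4 key-derivation chain.
func hmacSHA256(key []byte, data string) []byte {
	h := hmac.New(sha256.New, key)
	h.Write([]byte(data))
	return h.Sum(nil)
}

func main() {
	secret := "AWS_SECRET_ACCESS_KEY" // placeholder
	date, region, service := "20240101", "us-east-1", "s3"

	// The policy document: conditions the browser's POST must satisfy.
	policy := `{"expiration":"2024-01-01T12:00:00Z","conditions":[` +
		`{"bucket":"mybucket"},["starts-with","$key","uploads/"]]}`
	encoded := base64.StdEncoding.EncodeToString([]byte(policy))

	// SigV4 signing key: an HMAC chain over date, region, and service.
	k := hmacSHA256([]byte("AWS4"+secret), date)
	k = hmacSHA256(k, region)
	k = hmacSHA256(k, service)
	k = hmacSHA256(k, "aws4_request")

	// The signature is the HMAC of the base64-encoded policy.
	signature := hex.EncodeToString(hmacSHA256(k, encoded))
	fmt.Println("policy:", encoded)
	fmt.Println("x-amz-signature:", signature)
}
```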