Elastic Guardian is a tiny reverse proxy that offers authentication (using HTTP Basic Auth) as well as authorization. While it was originally meant as a thin layer between Elasticsearch (which has no builtin authentication/authorization) and the world, there is nothing Elasticsearch-specific about it (other than a few defaults, which can be changed via command line flags). The generic use case for Elastic Guardian is to restrict access to an HTTP API with HTTP Basic Auth and authorization rules. It currently supports loading the authentication and authorization data from two different backends. Whether the external files are used or not can be controlled (at compile time) via the AllowAuthFromFiles constant; see that constant's definition for further details. Please see the authentication and authorization packages for further details. Commandline help can be accessed as shown below; it will also display the default values for all flags. Log output goes to the console (stdout) by default.
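A sketch of that invocation; the binary name elastic_guardian is an assumption based on the project name, not confirmed by this document:

	elastic_guardian -h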
Package goinside is an unofficial DC Inside API implemented in Go. It supports: 1. writing and deleting posts and comments, with either a guest (floating) or a registered (fixed) nickname 2. upvoting and downvoting 3. fetching the posts and comments on a given page of a given gallery 4. fetching information about all regular and minor galleries 5. a proxy mode 6. search. To write or delete a post or comment, you must first create a session. A guest session is created with the Guest function, which takes a nickname and a password as arguments and returns an error if an empty string is passed. A registered session is created with the Login function, which takes a DC Inside ID and password and returns an error if the login fails. To write a post or comment, you must first create a Draft; the NewArticleDraft and NewCommentDraft functions exist for this purpose. Pass the Draft created by these functions to the Write method to submit it. No session is needed to fetch a gallery's posts. The following code fetches the posts on page 1 of the best (featured) list of the programming gallery. Next is code that writes a comment on the first post of the fetched list, followed by code that deletes the first post of the fetched list. Deletion is attempted with the session's credentials: for a guest session, the nickname and password must match those used when the post was written, and for a registered session, the post must have been written by that session's account. A deletion request from a non-matching session returns an error. A fetched post can also be passed directly to the Write method to repost it. Note, however, that Items obtained via the FetchList and FetchBestList functions do not yet carry the post body; when such an Item is passed as an argument to Write, the post's title is used as its body. The following code reposts a post fetched with the FetchArticle function. A session can also be switched to proxy mode; the proxy variable in the code below is assumed to be of type *url.URL. A timeout can also be set on HTTP requests. A combined sketch of some of these steps follows.
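A rough sketch of the guest-session flow described above; the import path, exact signatures, and field names (e.g. Items) are assumptions, not confirmed by this document:

	package main

	import (
		"log"

		"github.com/geeksbaek/goinside" // import path is an assumption
	)

	func main() {
		// Create a guest session; exact argument order is assumed.
		s, err := goinside.Guest("nickname", "password")
		if err != nil {
			log.Fatal(err)
		}
		// Fetch page 1 of the best (featured) list of the programming gallery.
		list, err := goinside.FetchBestList("programming", 1)
		if err != nil {
			log.Fatal(err)
		}
		// Write a comment on the first fetched post.
		draft := goinside.NewCommentDraft(list.Items[0], "comment body")
		if err := s.Write(draft); err != nil {
			log.Fatal(err)
		}
	}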
Package throttleproxy provides an adaptive backpressure proxy mechanism for dynamically managing traffic and protecting backend services using Prometheus metrics. The package supports both server-side HTTP proxy and client-side RoundTripper implementations, providing flexible integration options. A usage sketch follows.
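The original usage example is not included here; the sketch below only illustrates the general shape of the client-side RoundTripper integration described above, and every throttleproxy identifier in it is hypothetical:

	// Every throttleproxy identifier below is invented for illustration.
	rt := throttleproxy.NewRoundTripper(http.DefaultTransport) // hypothetical constructor
	client := &http.Client{Transport: rt}
	resp, err := client.Get("https://backend.example.com/health")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()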
Package proxy provides an HTTP server that acts as a signing proxy for SDKs calling AWS X-Ray APIs.
Continuation of the byosh and SimpleSNIProxy projects. To ensure that sniproxy works correctly, it's important to have ports 80, 443, and 53 open. However, on Ubuntu, port 53 may be in use by systemd-resolved. You can either disable systemd-resolved entirely to free up the port, or keep systemd-resolved and just disable its built-in stub resolver; commands for both options are sketched at the end of this section. The simplest way to install the software is to use the pre-built binaries available on the releases page. Alternatively, you can install it using the "go install" command, using Docker or Podman, or using the installer script. sniproxy can be configured using a configuration file or command line flags. The configuration file is a JSON file; an example configuration file can be found under config.sample.json. In this tutorial, we will go over the steps to set up an SNI proxy using Vultr as a service provider. This will allow you to serve multiple SSL-enabled websites from a single IP address. Prerequisites: - A Vultr account. If you don't have one, you can sign up for free using my Vultr referral link. ## Step 1: Create a Vultr Server First, log in to your Vultr account and click on the "Instances" tab in the top menu. Then, click the "+" button to deploy a new server. On the "Deploy New Instance" page, select the following options: - Choose Server: choose "Cloud Compute". - CPU & Storage Technology: any of the choices should work perfectly fine. - Server Location: choose the location of the server. This will affect the latency of your website, so it's a good idea to choose a location close to your target audience. - Server Image: any OS listed there is supported. If you're not sure what to choose, Ubuntu is a good option. - Server Size: choose a server size suitable for your needs. A small or medium-sized server should be sufficient for most SNI proxy setups; pay attention to the monthly bandwidth allowance as well. - "Add Auto Backups": not strictly needed for sniproxy. - "SSH Keys": choose an SSH key to make logging in easier later on. You can always use Vultr's built-in console as well. - Server Hostname: choose a hostname for your server. This can be any name you like. After you have selected the appropriate options, click the "Deploy Now" button to create your server. ## Step 2: Install the SNI Proxy Once your server has been created, log in using SSH or the console. The root password is available under the "Overview" tab in the instance list. Ensure the firewall (firewalld, ufw, or iptables) allows connectivity to ports 80/TCP, 443/TCP, and 53/UDP. Once you have a shell in front of you, run the installer script (assuming you're on Ubuntu 22.04). The script is an interactive installer: it will ask you a few questions and then install sniproxy for you. It also installs sniproxy as a systemd service and enables it to start on boot. The wizard will set up the execution arguments for sniproxy; you can edit them later by editing the unit file's ExecStart line to your liking, for example to use a different port for HTTP. The relevant commands are sketched below.
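A sketch of the commands referenced above; the systemd unit name "sniproxy" is an assumption, and the project README remains the canonical reference:

	# Disable systemd-resolved entirely, freeing port 53:
	sudo systemctl disable --now systemd-resolved

	# Or keep systemd-resolved but turn off its built-in stub listener:
	sudo sed -i 's/#\?DNSStubListener=yes/DNSStubListener=no/' /etc/systemd/resolved.conf
	sudo systemctl restart systemd-resolved

	# Allow the required ports through ufw:
	sudo ufw allow 80/tcp
	sudo ufw allow 443/tcp
	sudo ufw allow 53/udp

	# Edit the systemd unit (assumed name "sniproxy") and its ExecStart line:
	sudo systemctl edit --full sniproxy
	sudo systemctl daemon-reload && sudo systemctl restart sniproxy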
Package micro is a pluggable framework for microservices
Package relay is a standard `httptest.Server` on steroids for end-to-end HTTP testing. It implements the test server with a delay middleware to simulate latency before the target test server's handler. Relay consists of two components, a Proxy and a Switcher. They are HTTP middlewares which wrap the target httptest.Server's handler, thus behaving like a proxy server. It is used with httptest.Server to simulate latent proxy servers or load balancers for use in end-to-end HTTP tests. Proxy can be placed before an HTTPTestServer (httptest.Server, Proxy, or Switcher) to simulate a proxy server or a connection with some latency. It takes a latency unit and a backend HTTPTestServer as arguments. Switcher behaves similarly to a proxy, but with each request it switches between several test servers in a round-robin fashion. Switcher takes a latency unit and a []HTTPTestServer to which it will circulate requests. Let's begin by setting up a basic `httptest.Server` to send requests to. Now, let's use Proxy to simulate a slow connection through which an HTTP request can be sent to the test server. Note that the latency will double because of the round trip to and from the server. Proxy can be placed in front of another proxy, and vice versa, so you can create a chain of proxies this way; each hop to and from the target server will be delayed for one second. Switcher can be used instead of Proxy to simulate a round-robin load-balancing proxy, or simply to switch between several test servers' handlers for convenience. A sketch follows.
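A sketch of the setup described above, written as it might appear inside a test function; relay's import path, the NewProxy constructor name, and the URL/Close members on the proxy are assumptions mirroring httptest.Server:

	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "hello")
	}))
	defer backend.Close()

	proxy := relay.NewProxy(time.Second, backend) // assumed constructor; one second of latency per hop
	defer proxy.Close()                           // assumed method

	// The round trip through the proxy adds roughly two seconds.
	resp, err := http.Get(proxy.URL) // assumed field, mirroring httptest.Server
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()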
Flixproxy - DNS, HTTP and TLS proxy Please see https://github.com/snabb/flixproxy for more information.
Package ngrok makes it easy to work with the ngrok API from Go. The package is fully code generated and should always be up to date with the latest ngrok API. Full documentation of the ngrok API can be found at: https://ngrok.com/docs/api This package follows the best practices outlined for Go modules. All releases are tagged and any breaking changes will be reflected as a new major version. You should only import this package for production applications by pointing at a stable tagged version. The following example code demonstrates typical initialization and usage of the package to make an API call: API client configuration and all of the datatypes exchanged by the API are defined in this base package. There are subpackages for every API service and a Client type defined in those packages with methods to interact with that API service. It's usually easiest to find the subpackage of the service you want to work with and begin consulting the documentation there. It is recommended to construct the service-specific clients once at initialization time. The ClientConfig object in the root package supports functional options for configuration. The most common option to use is `WithHTTPClient()` which allows the caller to specify a different net/http.Client object. This allows the caller full customization over the transport if needed for use with proxies, custom TLS setups, observability and tracing, etc. Some arguments to methods in the ngrok API are optional and must be meaningfully distinguished from zero values, especially in Update() methods. This allows the API to distinguish between choosing not to update a value vs. setting it to zero or the empty string. For these arguments, ngrok follows the industry standard practice of using pointers to the primitive types and providing convenience functions like ngrok.String() and ngrok.Bool() for the caller to wrap literals as pointer values. For example: All List methods in the ngrok API are paged. This package abstracts that problem away from you by returning an iterator from any List API call. As you advance the iterator it will transparently fetch new pages of values for you behind the scenes. Note that the context supplied to the initial List() call will be used for all subsequent page fetches so it must be long enough to work through the entire list. Here's an example of paging through all of the TLS certificates on your account. Note that you must check for an error after Next() returns false to determine if the iterator failed to fetch the next page of results. All errors returned by the ngrok API are returned as structured payloads for easy error handling. Most non-networking errors returned by API calls in this package will be an ngrok.Error type. The ngrok.Error type exposes important metadata that will help you handle errors. Specifically it includes the HTTP status code of any failed operation as well as an error code value that uniquely identifies the failure condition. There are two helper functions that will make error handling easy: IsNotFound and IsErrorCode. IsNotFound helps identify the common case of accessing an API resource that no longer exists: IsErrorCode helps you identify specific ngrok errors by their unique ngrok error code.
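A sketch of initialization and of paging through TLS certificates, based on the patterns described above; the import paths' version suffix and the exact iterator method names are assumptions:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/ngrok/ngrok-api-go/v5"                  // version suffix is an assumption
		"github.com/ngrok/ngrok-api-go/v5/tls_certificates" // version suffix is an assumption
	)

	func main() {
		// Construct the service-specific client once at initialization time.
		clientConfig := ngrok.NewClientConfig("<API KEY>")
		certs := tls_certificates.NewClient(clientConfig)

		// This context is reused for every subsequent page fetch.
		ctx := context.Background()
		iter := certs.List(nil)
		for iter.Next(ctx) {
			fmt.Println(iter.Item().ID)
		}
		// Check for an error after Next() returns false.
		if err := iter.Err(); err != nil {
			log.Fatal(err)
		}
	}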
All ngrok error codes are documented at https://ngrok.com/docs/errors To check for a specific error condition, you would structure your code like the following example: All ngrok datatypes in this package define String() and GoString() methods so that they can be formatted into strings in helpful representations. The GoString() method is defined to pretty-print an object for debugging purposes with the "%#v" formatting verb.
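A sketch of the error-handling helpers named above; whether IsErrorCode takes numeric codes (as below) or the full "ERR_NGROK_123" string form is an assumption:

	cert, err := certs.Get(ctx, "crt_...")
	if err != nil {
		if ngrok.IsNotFound(err) {
			// The certificate no longer exists.
		}
		if ngrok.IsErrorCode(err, 123) {
			// Handle the specific ngrok error ERR_NGROK_123.
		}
	}
	_ = cert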
Package httpproxy provides an HTTP proxy implementation
Package httpcache provides an http.RoundTripper implementation that works as a mostly RFC-compliant cache for HTTP responses. It is only suitable for use as a 'private' cache (i.e. for a web-browser or an API-client and not for a shared proxy).
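A minimal sketch, assuming this is the commonly used httpcache package with a NewMemoryCacheTransport constructor; the import path is an assumption:

	package main

	import (
		"fmt"
		"log"
		"net/http"

		"github.com/gregjones/httpcache" // import path is an assumption
	)

	func main() {
		// Responses are cached in memory according to their cache headers.
		client := &http.Client{Transport: httpcache.NewMemoryCacheTransport()}
		resp, err := client.Get("https://api.example.com/resource")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}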
Taken from $GOROOT/src/pkg/net/http/chunked; needed to write HTTPS responses to the client. Package goproxy provides a customizable HTTP proxy, supporting hijacking of HTTPS connections. The intent of the proxy is to be usable with a reasonable amount of traffic, yet remain customizable and programmable. The proxy itself is simply a `net/http` handler. Typical usage is adding a header to each request, or printing the content type of all incoming responses; a sketch of both is given below. Note that the latter uses the ProxyCtx context variable, which contains the request and the response (Req and Resp; Resp is nil if unavailable) of this specific client interaction with the proxy. To print the content type of all responses from a certain url, we'll add a ReqCondition to the OnResponse function. We can write the condition ourselves; conditions can be set on the request and on the response. Caution! If you give a RespCondition to the OnRequest function, you'll get a run time panic! It doesn't make sense to read the response if you haven't got it yet! Finally, we have a convenience function to throw a quick response: we close the body of the original response, and return a new 403 response with a short message. Example use cases: 1. https://github.com/elazarl/goproxy/tree/master/examples/goproxy-avgsize To measure the average size of the HTML served on your site. One can ask all the QA team to access the website through a proxy, and the proxy will measure the average size of all text/html responses from your host. 2. [not yet implemented] All requests to your web servers should be directed through the proxy; when the proxy detects HTML pieces sent as a response to an AJAX request, it'll send a warning email. 3. https://github.com/elazarl/goproxy/blob/master/examples/goproxy-httpdump/ Generate real traffic to your website by real users using it through the proxy. Record the traffic, and replay it for more realistic load testing. 4. https://github.com/elazarl/goproxy/tree/master/examples/goproxy-no-reddit-at-worktime Will allow browsing to reddit.com between 8:00am and 5:00pm. 5. https://github.com/elazarl/goproxy/tree/master/examples/goproxy-jquery-version Will warn if multiple versions of jquery are used in the same domain. 6. https://github.com/elazarl/goproxy/blob/master/examples/goproxy-upside-down-ternet/ Modifies image files in an HTTP response via goproxy's image extension found in ext/.
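A sketch of the typical usage described above (adding a header to each request and printing each response's content type), using the OnRequest/OnResponse hooks named in this document:

	package main

	import (
		"log"
		"net/http"

		"github.com/elazarl/goproxy"
	)

	func main() {
		proxy := goproxy.NewProxyHttpServer()

		// Add a header to each request.
		proxy.OnRequest().DoFunc(func(req *http.Request, ctx *goproxy.ProxyCtx) (*http.Request, *http.Response) {
			req.Header.Set("X-GoProxy", "1")
			return req, nil
		})

		// Print the content type of all incoming responses.
		proxy.OnResponse().DoFunc(func(resp *http.Response, ctx *goproxy.ProxyCtx) *http.Response {
			if resp != nil {
				log.Println(ctx.Req.URL.Host, resp.Header.Get("Content-Type"))
			}
			return resp
		})

		log.Fatal(http.ListenAndServe(":8080", proxy))
	}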
Package yarpc provides the YARPC service framework. With hundreds to thousands of services communicating with RPC, transport protocols (like HTTP and TChannel), encoding protocols (like JSON or Thrift), and peer choosers are the concepts that vary year over year. Separating these concerns allows services to change transports and wire protocols without changing call sites or request handlers, build proxies and wire protocol bridges, or experiment with load balancing strategies. YARPC is a toolkit for services and proxies. YARPC breaks RPC into interchangeable encodings, transports, and peer choosers. YARPC for Go provides reference implementations for HTTP/1.1, TChannel and gRPC transports, and also raw, JSON, Thrift, and Protobuf encodings. YARPC for Go provides a round robin peer chooser and experimental implementations for debug pages and rate limiting. YARPC for Go plans to provide a load balancer that uses a least-pending-requests strategy. Peer choosers can implement any strategy, including load balancing or sharding, in turn bound to any peer list updater. Regardless of transport, every RPC has some common properties: caller name, service name, procedure name, encoding name, deadline or TTL, headers, baggage (multi-hop headers), and tracing. Each RPC can also have an optional shard key, routing key, or routing delegate for advanced routing. YARPC transports use a shared API for capturing RPC metadata, so middleware can apply to requests over any transport. Each YARPC transport protocol can implement inbound handlers and outbound callers. Each of these can support different RPC types, like unary (request and response) or oneway (request and receipt) RPC. A future release of YARPC will add support for other RPC types including variations on streaming and pubsub.
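A minimal dispatcher sketch, under the assumption that the package lives at go.uber.org/yarpc with an HTTP transport subpackage; the service name and address are placeholders:

	package main

	import (
		"log"

		"go.uber.org/yarpc"
		yhttp "go.uber.org/yarpc/transport/http" // import path is an assumption
	)

	func main() {
		// One HTTP transport, serving inbound RPCs on :8080.
		transport := yhttp.NewTransport()
		dispatcher := yarpc.NewDispatcher(yarpc.Config{
			Name: "myservice",
			Inbounds: yarpc.Inbounds{
				transport.NewInbound(":8080"),
			},
		})
		if err := dispatcher.Start(); err != nil {
			log.Fatal(err)
		}
		defer dispatcher.Stop()
		select {} // serve until killed
	}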
This is a small HTTP server implementing the "photobackup server" endpoint documented here: https://github.com/PhotoBackup/api/ Written because the existing servers make me a touch sad; Go means we can avoid a pile of runtime dependencies. Build-time dependencies are being kept low; bcrypt, homedir and graceful are the only luxuries. Adding gorilla mux and, perhaps, negroni would probably be overkill. We're trying to be compatible, so the config file is INI-format: ~/.photobackup In addition to the standard keys, I'm also supporting BindAddress and HTTPPrefix. The original server was intended to run over HTTP, I think, hence the client sending a SHA512'd password. We support this scheme, but the on-disc storage format is really better off being bcrypt(sha512(password)), so I've added that; a sketch of that scheme is below. Adding BindAddress and HTTPPrefix means that mounting this behind an HTTP reverse proxy is quite doable, and lets us offload HTTPS to that as well. That's how I'm intending to use it. I think the original servers are designed so you can connect to them using just HTTP; hence the sha512(password) scheme. This is short-sighted; the only thing it gets you is (weak) protection against sniffing if you happen to use the same password elsewhere. Sniffers in this scenario can still upload to your server and view your photos. At some point in the future I might add direct HTTPS support as well, but I don't need it. @author Nick Thomas <photobackup@ur.gs>
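A sketch of the stored-credential scheme described above, i.e. bcrypt over sha512(password); whether the SHA-512 digest is hex-encoded before hashing is an assumption:

	package main

	import (
		"crypto/sha512"
		"encoding/hex"

		"golang.org/x/crypto/bcrypt"
	)

	// storedHash derives the on-disc value: the client sends sha512(password),
	// and we store bcrypt of that value (hex encoding assumed).
	func storedHash(password string) ([]byte, error) {
		sum := sha512.Sum512([]byte(password))
		return bcrypt.GenerateFromPassword([]byte(hex.EncodeToString(sum[:])), bcrypt.DefaultCost)
	}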
Package lunk provides a set of tools for structured logging in the style of Google's Dapper or Twitter's Zipkin. When we consider a complex event in a distributed system, we're actually considering a partially-ordered tree of events from various services, libraries, and modules. Consider a user-initiated web request. Their browser sends an HTTP request to an edge server, which extracts the credentials (e.g., OAuth token) and authenticates the request by communicating with an internal authentication service, which returns a signed set of internal credentials (e.g., signed user ID). The edge web server then proxies the request to a cluster of web servers, each running a PHP application. The PHP application loads some data from several databases, places the user in a number of treatment groups for running A/B experiments, writes some data to a Dynamo-style distributed database, and returns an HTML response. The edge server receives this response and proxies it to the user's browser. In this scenario we have a number of infrastructure-specific events. The scenario also involves a number of events which have little to do with the infrastructure, but are still critical information for the business the system supports. There are a number of different teams all trying to monitor and improve aspects of this system. Operational staff need to know if a particular host or service is experiencing a latency spike or drop in throughput. Development staff need to know if their application's response times have gone down as a result of a recent deploy. Customer support staff need to know if the system is operating nominally as a whole, and for customers in particular. Product designers and managers need to know the effect of an A/B test on user behavior. But the fact that these teams will be consuming the data in different ways for different purposes does not mean that they must work with entirely different systems. In order to instrument the various components of the system, we need a common data model. We adopt Dapper's notion of a tree to mean a partially-ordered tree of events from a distributed system. A tree in Lunk is identified by its root ID, which is the unique ID of its root event. All events in a common tree share a root ID. In our photo example, we would assign a unique root ID as soon as the edge server received the request. Events inside a tree are causally ordered: each event has a unique ID, and an optional parent ID. By passing the IDs across systems, we establish causal ordering between events. In our photo example, the two database queries from the app would share the same parent ID--the ID of the event corresponding to the app handling the request which caused those queries. Each event has a schema of properties, which allow us to record specific pieces of information about each event. For HTTP requests, we can record the method, the request URI, the elapsed time to handle the request, etc. Lunk is agnostic in terms of aggregation technologies, but two use cases seem clear: real-time process monitoring and offline causational analysis. For real-time process monitoring, events can be streamed to an aggregation service like Riemann (http://riemann.io) or Storm (http://storm.incubator.apache.org), which can calculate process statistics (e.g., the 95th percentile latency for the edge server responses) in real time.
This allows for adaptive monitoring of all services, with the option of including example root IDs in the alerts (e.g., 95th percentile latency is over 300ms, mostly as a result of requests like those in tree XXXXX). For offline causational analysis, events can be written in batches to batch processing systems like Hadoop or OLAP databases like Vertica. These aggregates can be queried to answer questions traditionally reserved for A/B testing systems. "Did users who were shown the new navbar view more photos?" "Did the new image optimization algorithm we enabled for 1% of views run faster? Did it produce smaller images? Did it have any effect on user engagement?" "Did any services have increased exception rates after any recent deploys?" And so on. By capturing the root ID of a particular web request, we can assemble a partially-ordered tree of events which were involved in the handling of that request. All events with a common root ID are in a common tree, which allows for O(M) retrieval for a tree of M events. To send a request with a root ID and a parent ID, use the Event-ID HTTP header, as in the example below. The header value is simply the root ID and event ID, hex-encoded and separated with a slash. If the event has a parent ID, that may be included as an optional third parameter. A server that receives a request with this header can use this to properly parent its own events. Each event has a set of named properties, the keys and values of which are strings. This allows aggregation layers to take advantage of simplifying assumptions and either store events in normalized form (with event data separate from property data) or in denormalized form (essentially pre-materializing an outer join of the normalized relations). Durations are always recorded as fractional milliseconds. Lunk currently provides two formats for log entries: text and JSON. Text-based logs encode each entry as a single line of text, using key="value" formatting for all properties. Event property keys are scoped to avoid collisions. JSON logs encode each entry as a single JSON object.
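A hypothetical header illustrating the encoding described above, with made-up IDs (root ID, event ID, and optional parent ID, hex-encoded and slash-separated):

	Event-ID: 77cb9ad2a2f58e77/af19a57276dfa2c2/6eeeb0f4d932d5b7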
Package http provides a micro-to-HTTP proxy.