Comparing version 5.23.0 to 5.24.0-test.0
@@ -20,7 +20,9 @@ # Class: Client
> ⚠️ Warning: The `H2` support is experimental.
* **bodyTimeout** `number | null` (optional) - Default: `300e3` - The timeout after which a request will time out, in milliseconds. Monitors time between receiving body data. Use `0` to disable it entirely. Defaults to 300 seconds.
* **headersTimeout** `number | null` (optional) - Default: `300e3` - The amount of time the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
* **keepAliveMaxTimeout** `number | null` (optional) - Default: `600e3` - The maximum allowed `keepAliveTimeout` when overridden by *keep-alive* hints from the server. Defaults to 10 minutes.
* **keepAliveTimeout** `number | null` (optional) - Default: `4e3` - The timeout after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. See [MDN: HTTP - Headers - Keep-Alive directives](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Keep-Alive#directives) for more details. Defaults to 4 seconds.
* **keepAliveTimeoutThreshold** `number | null` (optional) - Default: `1e3` - A number subtracted from server *keep-alive* hints when overriding `keepAliveTimeout` to account for timing inaccuracies caused by e.g. transport latency. Defaults to 1 second.
* **headersTimeout** `number | null` (optional) - Default: `300e3` - The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
* **keepAliveMaxTimeout** `number | null` (optional) - Default: `600e3` - The maximum allowed `keepAliveTimeout`, in milliseconds, when overridden by *keep-alive* hints from the server. Defaults to 10 minutes.
* **keepAliveTimeout** `number | null` (optional) - Default: `4e3` - The timeout, in milliseconds, after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. See [MDN: HTTP - Headers - Keep-Alive directives](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Keep-Alive#directives) for more details. Defaults to 4 seconds.
* **keepAliveTimeoutThreshold** `number | null` (optional) - Default: `1e3` - A number of milliseconds subtracted from server *keep-alive* hints when overriding `keepAliveTimeout` to account for timing inaccuracies caused by e.g. transport latency. Defaults to 1 second.
* **maxHeaderSize** `number | null` (optional) - Default: `16384` - The maximum length of request headers in bytes. Defaults to 16KiB.
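Taken together, the keep-alive options above interact: a server *keep-alive* hint overrides `keepAliveTimeout`, reduced by `keepAliveTimeoutThreshold` and capped at `keepAliveMaxTimeout`. The following is a minimal sketch of that arithmetic as we read the docs; the helper name is ours, not part of undici's API.

```javascript
// Hypothetical helper illustrating how the documented keep-alive options combine.
// serverHintMs is the server's `Keep-Alive: timeout=N` hint, converted to milliseconds.
function effectiveKeepAliveTimeout ({
  keepAliveTimeout = 4e3,
  keepAliveMaxTimeout = 600e3,
  keepAliveTimeoutThreshold = 1e3
} = {}, serverHintMs) {
  if (serverHintMs == null) return keepAliveTimeout
  // Subtract the threshold to absorb transport latency, then cap at the maximum.
  return Math.min(serverHintMs - keepAliveTimeoutThreshold, keepAliveMaxTimeout)
}
```

For example, a `Keep-Alive: timeout=10` hint (10 000 ms) would yield an effective timeout of 9 000 ms with the default threshold.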
@@ -34,2 +36,4 @@ * **maxResponseSize** `number | null` (optional) - Default: `-1` - The maximum length of response body in bytes. Set to `-1` to disable.
* **autoSelectFamilyAttemptTimeout**: `number` - Default: depends on local Node version, on Node 18.13.0 and above is `250`. The amount of time in milliseconds to wait for a connection attempt to finish before trying the next address when using the `autoSelectFamily` option. See [here](https://nodejs.org/api/net.html#socketconnectoptions-connectlistener) for more details.
* **allowH2**: `boolean` - Default: `false`. Enables support for H2 if the server has assigned bigger priority to it through ALPN negotiation.
* **maxConcurrentStreams**: `number` - Default: `100`. Dictates the maximum number of concurrent streams for a single H2 session. It can be overridden by a remote SETTINGS frame.
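The constraints documented for the two new options can be restated as a standalone validation sketch. The helper name is hypothetical and, to stay dependency-free, it throws `TypeError` where undici throws `InvalidArgumentError`.

```javascript
// Hypothetical, dependency-free restatement of the documented constraints.
function validateH2Options ({ allowH2, maxConcurrentStreams } = {}) {
  if (allowH2 != null && typeof allowH2 !== 'boolean') {
    throw new TypeError('allowH2 must be a valid boolean value')
  }
  if (maxConcurrentStreams != null &&
    (typeof maxConcurrentStreams !== 'number' || maxConcurrentStreams < 1)) {
    throw new TypeError('maxConcurrentStreams must be a positive integer, greater than 0')
  }
  // Defaults mirror the documented values.
  return { allowH2: allowH2 ?? false, maxConcurrentStreams: maxConcurrentStreams ?? 100 }
}
```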
@@ -43,3 +47,3 @@ #### Parameter: `ConnectOptions`
* **maxCachedSessions** `number | null` (optional) - Default: `100` - Maximum number of TLS cached sessions. Use 0 to disable TLS session caching.
* **timeout** `number | null` (optional) - Default `10e3`
* **timeout** `number | null` (optional) - In milliseconds. Default `10e3`.
* **servername** `string | null` (optional)
@@ -46,0 +50,0 @@ * **keepAlive** `boolean | null` (optional) - Default: `true` - TCP keep-alive enabled
@@ -16,4 +16,4 @@ # Connector
* **socketPath** `string | null` (optional) - Default: `null` - An IPC endpoint, either Unix domain socket or Windows named pipe.
* **maxCachedSessions** `number | null` (optional) - Default: `100` - Maximum number of TLS cached sessions. Use 0 to disable TLS session caching. Default: 100.
* **timeout** `number | null` (optional) - Default `10e3`
* **maxCachedSessions** `number | null` (optional) - Default: `100` - Maximum number of TLS cached sessions. Use 0 to disable TLS session caching.
* **timeout** `number | null` (optional) - In milliseconds. Default `10e3`.
* **servername** `string | null` (optional)
@@ -20,0 +20,0 @@
@@ -203,4 +203,5 @@ # Dispatcher
* **bodyTimeout** `number | null` (optional) - The timeout after which a request will time out, in milliseconds. Monitors time between receiving body data. Use `0` to disable it entirely. Defaults to 300 seconds.
* **headersTimeout** `number | null` (optional) - The amount of time the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
* **headersTimeout** `number | null` (optional) - The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
* **throwOnError** `boolean` (optional) - Default: `false` - Whether Undici should throw an error upon receiving a 4xx or 5xx response from the server.
* **expectContinue** `boolean` (optional) - Default: `false` - For H2, it appends the `expect: 100-continue` header and halts the request body until a `100-continue` response is received from the remote server.
@@ -207,0 +208,0 @@ #### Parameter: `DispatchHandler`
@@ -38,3 +38,4 @@ # Class: MockPool
This method defines the interception rules for matching against requests for a MockPool. We can intercept multiple times on a single instance.
This method defines the interception rules for matching against requests for a MockPool. We can intercept multiple times on a single instance, but each intercept is only used once.
For example, if you expect to make 2 requests inside a test, you need to call `intercept()` twice. Assuming you use `disableNetConnect()`, you will get a `MockNotMatchedError` on the second request if you call `intercept()` only once.
@@ -41,0 +42,0 @@ When defining interception rules, all the rules must pass for a request to be intercepted. If a request is not intercepted, a real request will be attempted.
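The one-shot behavior can be illustrated with a simplified stand-in for a mock registry. This is not undici's actual `MockPool` implementation, just a sketch of the "each intercept is consumed on use" semantics described above.

```javascript
// Simplified stand-in for one-shot interception; not undici's real MockPool.
class TinyMockPool {
  constructor () { this.intercepts = [] }

  intercept (path) { this.intercepts.push(path) }

  dispatch (path) {
    const i = this.intercepts.indexOf(path)
    if (i === -1) throw new Error('MockNotMatchedError')
    this.intercepts.splice(i, 1) // each intercept is consumed on use
    return 'mocked response'
  }
}
```

Registering the same path twice allows exactly two dispatches; a third dispatch fails, mirroring the behavior described above.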
@@ -5,5 +5,5 @@ 'use strict'
module.exports.fetch = async function fetch (resource) {
module.exports.fetch = async function fetch (resource, init = undefined) {
  try {
    return await fetchImpl(...arguments)
    return await fetchImpl(resource, init)
  } catch (err) {
@@ -18,1 +18,2 @@ Error.captureStackTrace(err, this)
module.exports.Request = require('./lib/fetch/request').Request
module.exports.WebSocket = require('./lib/websocket/websocket').WebSocket
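The change above gives the exported `fetch` an explicit `init` parameter instead of forwarding `arguments`. Reduced to a self-contained sketch (with `fetchImpl` stubbed here, so names and behavior beyond the wrapper shape are assumptions):

```javascript
// Self-contained sketch of the wrapper pattern; fetchImpl is a stub for illustration.
async function fetchImpl (resource, init) {
  if (resource === 'fail') throw new Error('boom')
  return { resource, init }
}

async function fetch (resource, init = undefined) {
  try {
    return await fetchImpl(resource, init)
  } catch (err) {
    // Re-point the stack at the public entry before rethrowing.
    if (typeof err === 'object' && err !== null) Error.captureStackTrace(err, fetch)
    throw err
  }
}
```

One side effect worth noting: because `init` has a default value, `fetch.length` reports `1`, matching the WHATWG `fetch` signature.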
@@ -109,3 +109,6 @@ 'use strict'
  } catch (err) {
    Error.captureStackTrace(err, this)
    if (typeof err === 'object') {
      Error.captureStackTrace(err, this)
    }
    throw err
@@ -112,0 +115,0 @@ }
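The new `typeof` guard matters because `Error.captureStackTrace` throws when handed a primitive, so thrown strings and numbers must be rethrown untouched. A minimal demonstration follows; note that we additionally exclude `null`, which `typeof` also reports as `'object'` (the diff above does not make that extra check — an assumption on our part about the safer form).

```javascript
// Demonstrates why the guard is needed: captureStackTrace rejects primitives.
function safeCapture (err, site) {
  if (typeof err === 'object' && err !== null) {
    Error.captureStackTrace(err, site)
    return true
  }
  return false // primitives (and null) pass through untouched
}
```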
'use strict'
const { AsyncResource } = require('async_hooks')
const { InvalidArgumentError, RequestAbortedError, SocketError } = require('../core/errors')
const { AsyncResource } = require('async_hooks')
const util = require('../core/util')
@@ -53,3 +53,9 @@ const { addSignal, removeSignal } = require('./abort-signal')
    this.callback = null
    const headers = this.responseHeaders === 'raw' ? util.parseRawHeaders(rawHeaders) : util.parseHeaders(rawHeaders)
    let headers = rawHeaders
    // A null value indicates an HTTP2 session
    if (headers != null) {
      headers = this.responseHeaders === 'raw' ? util.parseRawHeaders(rawHeaders) : util.parseHeaders(rawHeaders)
    }
    this.runInAsyncScope(callback, null, null, {
@@ -56,0 +62,0 @@ statusCode,
@@ -98,3 +98,2 @@ 'use strict'
    this.res = body
    if (callback !== null) {
@@ -101,0 +100,0 @@ if (this.throwOnError && statusCode >= 400) {
@@ -382,7 +382,3 @@ 'use strict'
    // 11.3
    readAllBytes(
      reader,
      (bytes) => bodyReadPromise.resolve(bytes),
      (error) => bodyReadPromise.reject(error)
    )
    readAllBytes(reader).then(bodyReadPromise.resolve, bodyReadPromise.reject)
  } else {
@@ -389,0 +385,0 @@ bodyReadPromise.resolve(undefined)
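The refactor above replaces the callback-style `readAllBytes` with a promise-returning one that plugs straight into `.then`. A self-contained approximation of the promise-returning shape, with a stubbed reader (the real implementation works on a `ReadableStream` reader; everything here is illustrative):

```javascript
// Promise-returning approximation of readAllBytes; the reader is stubbed below.
async function readAllBytes (reader) {
  const bytes = []
  while (true) {
    const { done, value } = await reader.read()
    if (done) return Buffer.concat(bytes)
    bytes.push(value)
  }
}

// Minimal stand-in for a ReadableStream default reader.
function readerFrom (chunks) {
  let i = 0
  return {
    read: async () =>
      i < chunks.length ? { done: false, value: chunks[i++] } : { done: true }
  }
}
```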
@@ -9,2 +9,3 @@ // @ts-check
const net = require('net')
const { pipeline } = require('stream')
const util = require('./core/util')
@@ -71,4 +72,36 @@ const timers = require('./timers')
  kLocalAddress,
  kMaxResponseSize
  kMaxResponseSize,
  kHTTPConnVersion,
  // HTTP2
  kHost,
  kHTTP2Session,
  kHTTP2SessionState,
  kHTTP2BuildRequest,
  kHTTP2CopyHeaders,
  kHTTP1BuildRequest
} = require('./core/symbols')

/** @type {import('http2')} */
let http2
try {
  http2 = require('http2')
} catch {
  // @ts-ignore
  http2 = { constants: {} }
}

const {
  constants: {
    HTTP2_HEADER_AUTHORITY,
    HTTP2_HEADER_METHOD,
    HTTP2_HEADER_PATH,
    HTTP2_HEADER_CONTENT_LENGTH,
    HTTP2_HEADER_EXPECT,
    HTTP2_HEADER_STATUS
  }
} = http2

// Experimental
let h2ExperimentalWarned = false

const FastBuffer = Buffer[Symbol.species]
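The guarded `require` keeps the client loadable on Node builds compiled without HTTP/2 support: if `require('http2')` throws, a stub with an empty `constants` object is used so the later destructuring still succeeds (the header constants simply come out `undefined`). The pattern in isolation:

```javascript
// Optional-require pattern: fall back to a stub so later destructuring never throws.
function optionalRequire (name, fallback) {
  try {
    return require(name)
  } catch {
    return fallback
  }
}

const http2 = optionalRequire('http2', { constants: {} })
// On the stub this destructures to undefined rather than crashing at load time.
const { constants: { HTTP2_HEADER_METHOD } } = http2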
@@ -127,3 +160,6 @@
    autoSelectFamily,
    autoSelectFamilyAttemptTimeout
    autoSelectFamilyAttemptTimeout,
    // h2
    allowH2,
    maxConcurrentStreams
  } = {}) {
@@ -211,2 +247,11 @@ super()
    // h2
    if (allowH2 != null && typeof allowH2 !== 'boolean') {
      throw new InvalidArgumentError('allowH2 must be a valid boolean value')
    }

    if (maxConcurrentStreams != null && (typeof maxConcurrentStreams !== 'number' || maxConcurrentStreams < 1)) {
      throw new InvalidArgumentError('maxConcurrentStreams must be a positive integer, greater than 0')
    }

    if (typeof connect !== 'function') {
@@ -216,2 +261,3 @@ connect = buildConnector({
      maxCachedSessions,
      allowH2,
      socketPath,
@@ -248,3 +294,15 @@ timeout: connectTimeout,
    this[kMaxResponseSize] = maxResponseSize > -1 ? maxResponseSize : -1
    this[kHTTPConnVersion] = 'h1'

    // HTTP/2
    this[kHTTP2Session] = null
    this[kHTTP2SessionState] = !allowH2
      ? null
      : {
        // streams: null, // Fixed queue of streams - For future support of `push`
        openStreams: 0, // Keep track of them to decide whether or not to unref the session
        maxConcurrentStreams: maxConcurrentStreams != null ? maxConcurrentStreams : 100 // Max peerConcurrentStreams for a Node h2 server
      }
    this[kHost] = `${this[kUrl].hostname}${this[kUrl].port ? `:${this[kUrl].port}` : ''}`

    // kQueue is built up of 3 sections separated by
@@ -307,3 +365,5 @@ // the kRunningIdx and kPendingIdx indices.
    const request = new Request(origin, opts, handler)
    const request = this[kHTTPConnVersion] === 'h2'
      ? Request[kHTTP2BuildRequest](origin, opts, handler)
      : Request[kHTTP1BuildRequest](origin, opts, handler)
@@ -329,2 +389,4 @@ this[kQueue].push(request)
  async [kClose] () {
    // TODO: for H2 we need to gracefully flush the remaining enqueued
    // requests and close each stream.
    return new Promise((resolve) => {
@@ -356,2 +418,8 @@ if (!this[kSize]) {
    if (this[kHTTP2Session] != null) {
      util.destroy(this[kHTTP2Session], err)
      this[kHTTP2Session] = null
      this[kHTTP2SessionState] = null
    }

    if (!this[kSocket]) {
@@ -368,2 +436,60 @@ queueMicrotask(callback)
function onHttp2SessionError (err) {
  assert(err.code !== 'ERR_TLS_CERT_ALTNAME_INVALID')

  this[kSocket][kError] = err

  onError(this[kClient], err)
}

function onHttp2FrameError (type, code, id) {
  const err = new InformationalError(`HTTP/2: "frameError" received - type ${type}, code ${code}`)

  if (id === 0) {
    this[kSocket][kError] = err
    onError(this[kClient], err)
  }
}

function onHttp2SessionEnd () {
  util.destroy(this, new SocketError('other side closed'))
  util.destroy(this[kSocket], new SocketError('other side closed'))
}

function onHTTP2GoAway (code) {
  const client = this[kClient]
  const err = new InformationalError(`HTTP/2: "GOAWAY" frame received with code ${code}`)
  client[kSocket] = null
  client[kHTTP2Session] = null

  if (client.destroyed) {
    assert(this[kPending] === 0)

    // Fail entire queue.
    const requests = client[kQueue].splice(client[kRunningIdx])
    for (let i = 0; i < requests.length; i++) {
      const request = requests[i]
      errorRequest(this, request, err)
    }
  } else if (client[kRunning] > 0) {
    // Fail head of pipeline.
    const request = client[kQueue][client[kRunningIdx]]
    client[kQueue][client[kRunningIdx]++] = null

    errorRequest(client, request, err)
  }

  client[kPendingIdx] = client[kRunningIdx]

  assert(client[kRunning] === 0)

  client.emit('disconnect',
    client[kUrl],
    [client],
    err
  )

  resume(client)
}

const constants = require('./llhttp/constants')
@@ -959,12 +1085,14 @@ const createRedirectInterceptor = require('./interceptor/redirectInterceptor')
function onSocketError (err) {
  const { [kParser]: parser } = this
  const { [kClient]: client, [kParser]: parser } = this

  assert(err.code !== 'ERR_TLS_CERT_ALTNAME_INVALID')

  // On Mac OS, we get an ECONNRESET even if there is a full body to be forwarded
  // to the user.
  if (err.code === 'ECONNRESET' && parser.statusCode && !parser.shouldKeepAlive) {
    // We treat all incoming data so far as a valid response.
    parser.onMessageComplete()
    return
  if (client[kHTTPConnVersion] !== 'h2') {
    // On Mac OS, we get an ECONNRESET even if there is a full body to be forwarded
    // to the user.
    if (err.code === 'ECONNRESET' && parser.statusCode && !parser.shouldKeepAlive) {
      // We treat all incoming data so far as a valid response.
      parser.onMessageComplete()
      return
    }
  }
@@ -998,8 +1126,10 @@
function onSocketEnd () {
  const { [kParser]: parser } = this
  const { [kParser]: parser, [kClient]: client } = this

  if (parser.statusCode && !parser.shouldKeepAlive) {
    // We treat all incoming data so far as a valid response.
    parser.onMessageComplete()
    return
  if (client[kHTTPConnVersion] !== 'h2') {
    if (parser.statusCode && !parser.shouldKeepAlive) {
      // We treat all incoming data so far as a valid response.
      parser.onMessageComplete()
      return
    }
  }
@@ -1011,12 +1141,14 @@
function onSocketClose () {
  const { [kClient]: client } = this
  const { [kClient]: client, [kParser]: parser } = this

  if (!this[kError] && this[kParser].statusCode && !this[kParser].shouldKeepAlive) {
    // We treat all incoming data so far as a valid response.
    this[kParser].onMessageComplete()
  if (client[kHTTPConnVersion] === 'h1' && parser) {
    if (!this[kError] && parser.statusCode && !parser.shouldKeepAlive) {
      // We treat all incoming data so far as a valid response.
      parser.onMessageComplete()
    }

    this[kParser].destroy()
    this[kParser] = null
  }

  this[kParser].destroy()
  this[kParser] = null

  const err = this[kError] || new SocketError('closed', util.getSocketInfo(this))
@@ -1108,7 +1240,2 @@
  if (!llhttpInstance) {
    llhttpInstance = await llhttpPromise
    llhttpPromise = null
  }

  client[kConnecting] = false
@@ -1118,11 +1245,46 @@
  socket[kNoRef] = false
  socket[kWriting] = false
  socket[kReset] = false
  socket[kBlocking] = false
  socket[kError] = null
  socket[kParser] = new Parser(client, socket, llhttpInstance)
  socket[kClient] = client
  const isH2 = socket.alpnProtocol === 'h2'
  if (isH2) {
    if (!h2ExperimentalWarned) {
      h2ExperimentalWarned = true
      process.emitWarning('H2 support is experimental, expect it to change at any time.', {
        code: 'UNDICI-H2'
      })
    }

    const session = http2.connect(client[kUrl], {
      createConnection: () => socket,
      peerMaxConcurrentStreams: client[kHTTP2SessionState].maxConcurrentStreams
    })

    client[kHTTPConnVersion] = 'h2'
    session[kClient] = client
    session[kSocket] = socket
    session.on('error', onHttp2SessionError)
    session.on('frameError', onHttp2FrameError)
    session.on('end', onHttp2SessionEnd)
    session.on('goaway', onHTTP2GoAway)
    session.on('close', onSocketClose)
    session.unref()

    client[kHTTP2Session] = session
    socket[kHTTP2Session] = session
  } else {
    if (!llhttpInstance) {
      llhttpInstance = await llhttpPromise
      llhttpPromise = null
    }

    socket[kNoRef] = false
    socket[kWriting] = false
    socket[kReset] = false
    socket[kBlocking] = false
    socket[kParser] = new Parser(client, socket, llhttpInstance)
  }

  socket[kCounter] = 0
  socket[kMaxRequests] = client[kMaxRequests]
  socket[kClient] = client
  socket[kError] = null

  socket
@@ -1226,3 +1388,3 @@ .on('error', onSocketError)
  if (socket && !socket.destroyed) {
  if (socket && !socket.destroyed && socket.alpnProtocol !== 'h2') {
    if (client[kSize] === 0) {
@@ -1292,3 +1454,3 @@ if (!socket[kNoRef] && socket.unref) {
  if (!socket) {
  if (!socket && !client[kHTTP2Session]) {
    connect(client)
@@ -1354,2 +1516,7 @@ return
function write (client, request) {
  if (client[kHTTPConnVersion] === 'h2') {
    writeH2(client, client[kHTTP2Session], request)
    return
  }

  const { body, method, path, host, upgrade, headers, blocking, reset } = request
@@ -1510,5 +1677,287 @@
function writeStream ({ body, client, request, socket, contentLength, header, expectsPayload }) {
function writeH2 (client, session, request) {
  const { body, method, path, host, upgrade, expectContinue, signal, headers: reqHeaders } = request

  let headers
  if (typeof reqHeaders === 'string') headers = Request[kHTTP2CopyHeaders](reqHeaders.trim())
  else headers = reqHeaders

  if (upgrade) {
    errorRequest(client, request, new Error('Upgrade not supported for H2'))
    return false
  }

  try {
    // TODO(HTTP/2): Should we call onConnect immediately or on stream ready event?
    request.onConnect((err) => {
      if (request.aborted || request.completed) {
        return
      }

      errorRequest(client, request, err || new RequestAbortedError())
    })
  } catch (err) {
    errorRequest(client, request, err)
  }

  if (request.aborted) {
    return false
  }

  let stream
  const h2State = client[kHTTP2SessionState]

  headers[HTTP2_HEADER_AUTHORITY] = host || client[kHost]
  headers[HTTP2_HEADER_PATH] = path

  if (method === 'CONNECT') {
    session.ref()
    // We are already connected; streams are pending, and the first request
    // will create a new stream. We trigger a request to create the stream and
    // wait until the `ready` event is triggered.
    // We disable endStream to allow the user to write to the stream.
    stream = session.request(headers, { endStream: false, signal })

    if (stream.id && !stream.pending) {
      request.onUpgrade(null, null, stream)
      ++h2State.openStreams
    } else {
      stream.once('ready', () => {
        request.onUpgrade(null, null, stream)
        ++h2State.openStreams
      })
    }

    stream.once('close', () => {
      h2State.openStreams -= 1
      // TODO(HTTP/2): unref only if current streams count is 0
      if (h2State.openStreams === 0) session.unref()
    })

    return true
  } else {
    headers[HTTP2_HEADER_METHOD] = method
  }
  // https://tools.ietf.org/html/rfc7231#section-4.3.1
  // https://tools.ietf.org/html/rfc7231#section-4.3.2
  // https://tools.ietf.org/html/rfc7231#section-4.3.5

  // Sending a payload body on a request that does not
  // expect it can cause undefined behavior on some
  // servers and corrupt connection state. Do not
  // re-use the connection for further requests.

  const expectsPayload = (
    method === 'PUT' ||
    method === 'POST' ||
    method === 'PATCH'
  )

  if (body && typeof body.read === 'function') {
    // Try to read EOF in order to get length.
    body.read(0)
  }

  let contentLength = util.bodyLength(body)

  if (contentLength == null) {
    contentLength = request.contentLength
  }

  if (contentLength === 0 || !expectsPayload) {
    // https://tools.ietf.org/html/rfc7230#section-3.3.2
    // A user agent SHOULD NOT send a Content-Length header field when
    // the request message does not contain a payload body and the method
    // semantics do not anticipate such a body.
    contentLength = null
  }

  if (request.contentLength != null && request.contentLength !== contentLength) {
    if (client[kStrictContentLength]) {
      errorRequest(client, request, new RequestContentLengthMismatchError())
      return false
    }

    process.emitWarning(new RequestContentLengthMismatchError())
  }

  if (contentLength != null) {
    assert(body, 'no body must not have content length')
    headers[HTTP2_HEADER_CONTENT_LENGTH] = `${contentLength}`
  }
  session.ref()

  const shouldEndStream = method === 'GET' || method === 'HEAD'
  if (expectContinue) {
    headers[HTTP2_HEADER_EXPECT] = '100-continue'
    /**
     * @type {import('node:http2').ClientHttp2Stream}
     */
    stream = session.request(headers, { endStream: shouldEndStream, signal })

    stream.once('continue', writeBodyH2)
  } else {
    /** @type {import('node:http2').ClientHttp2Stream} */
    stream = session.request(headers, {
      endStream: shouldEndStream,
      signal
    })

    writeBodyH2()
  }

  // Increment counter as we now have a new stream open
  ++h2State.openStreams

  stream.once('response', headers => {
    if (request.onHeaders(Number(headers[HTTP2_HEADER_STATUS]), headers, stream.resume.bind(stream), '') === false) {
      stream.pause()
    }
  })
  stream.once('end', () => {
    request.onComplete([])
  })

  stream.on('data', (chunk) => {
    if (request.onData(chunk) === false) stream.pause()
  })

  stream.once('close', () => {
    h2State.openStreams -= 1
    // TODO(HTTP/2): unref only if current streams count is 0
    if (h2State.openStreams === 0) session.unref()
  })

  stream.once('error', function (err) {
    if (client[kHTTP2Session] && !client[kHTTP2Session].destroyed && !this.closed && !this.destroyed) {
      h2State.openStreams -= 1
      util.destroy(stream, err)
    }
  })

  stream.once('frameError', (type, code) => {
    const err = new InformationalError(`HTTP/2: "frameError" received - type ${type}, code ${code}`)
    errorRequest(client, request, err)

    if (client[kHTTP2Session] && !client[kHTTP2Session].destroyed && !stream.closed && !stream.destroyed) {
      h2State.openStreams -= 1
      util.destroy(stream, err)
    }
  })

  // stream.on('aborted', () => {
  //   // TODO(HTTP/2): Support aborted
  // })

  // stream.on('timeout', () => {
  //   // TODO(HTTP/2): Support timeout
  // })

  // stream.on('push', headers => {
  //   // TODO(HTTP/2): Support push
  // })

  // stream.on('trailers', headers => {
  //   // TODO(HTTP/2): Support trailers
  // })

  return true
  function writeBodyH2 () {
    /* istanbul ignore else: assertion */
    if (!body) {
      request.onRequestSent()
    } else if (util.isBuffer(body)) {
      assert(contentLength === body.byteLength, 'buffer body must have content length')
      stream.cork()
      stream.write(body)
      stream.uncork()
      request.onBodySent(body)
      request.onRequestSent()
    } else if (util.isBlobLike(body)) {
      if (typeof body.stream === 'function') {
        writeIterable({
          client,
          request,
          contentLength,
          h2stream: stream,
          expectsPayload,
          body: body.stream(),
          socket: client[kSocket],
          header: ''
        })
      } else {
        writeBlob({
          body,
          client,
          request,
          contentLength,
          expectsPayload,
          h2stream: stream,
          header: '',
          socket: client[kSocket]
        })
      }
    } else if (util.isStream(body)) {
      writeStream({
        body,
        client,
        request,
        contentLength,
        expectsPayload,
        socket: client[kSocket],
        h2stream: stream,
        header: ''
      })
    } else if (util.isIterable(body)) {
      writeIterable({
        body,
        client,
        request,
        contentLength,
        expectsPayload,
        header: '',
        h2stream: stream,
        socket: client[kSocket]
      })
    } else {
      assert(false)
    }
  }
}
function writeStream ({ h2stream, body, client, request, socket, contentLength, header, expectsPayload }) {
  assert(contentLength !== 0 || client[kRunning] === 0, 'stream body cannot be pipelined')

  if (client[kHTTPConnVersion] === 'h2') {
    // For HTTP/2, it is enough to pipe the stream
    const pipe = pipeline(
      body,
      h2stream,
      (err) => {
        if (err) {
          util.destroy(body, err)
          util.destroy(h2stream, err)
        } else {
          request.onRequestSent()
        }
      }
    )

    pipe.on('data', onPipeData)
    pipe.once('end', () => {
      pipe.removeListener('data', onPipeData)
      util.destroy(pipe)
    })

    function onPipeData (chunk) {
      request.onBodySent(chunk)
    }

    return
  }

  let finished = false
@@ -1594,5 +2043,6 @@
async function writeBlob ({ body, client, request, socket, contentLength, header, expectsPayload }) {
async function writeBlob ({ h2stream, body, client, request, socket, contentLength, header, expectsPayload }) {
  assert(contentLength === body.size, 'blob body must have content length')

  const isH2 = client[kHTTPConnVersion] === 'h2'
  try {
@@ -1605,6 +2055,12 @@ if (contentLength != null && contentLength !== body.size) {
    socket.cork()
    socket.write(`${header}content-length: ${contentLength}\r\n\r\n`, 'latin1')
    socket.write(buffer)
    socket.uncork()
    if (isH2) {
      h2stream.cork()
      h2stream.write(buffer)
      h2stream.uncork()
    } else {
      socket.cork()
      socket.write(`${header}content-length: ${contentLength}\r\n\r\n`, 'latin1')
      socket.write(buffer)
      socket.uncork()
    }
@@ -1620,7 +2076,7 @@ request.onBodySent(buffer)
  } catch (err) {
    util.destroy(socket, err)
    util.destroy(isH2 ? h2stream : socket, err)
  }
}
async function writeIterable ({ body, client, request, socket, contentLength, header, expectsPayload }) {
async function writeIterable ({ h2stream, body, client, request, socket, contentLength, header, expectsPayload }) {
  assert(contentLength !== 0 || client[kRunning] === 0, 'iterator body cannot be pipelined')
@@ -1647,2 +2103,29 @@
  if (client[kHTTPConnVersion] === 'h2') {
    h2stream
      .on('close', onDrain)
      .on('drain', onDrain)

    try {
      // It's up to the user to somehow abort the async iterable.
      for await (const chunk of body) {
        if (socket[kError]) {
          throw socket[kError]
        }

        if (!h2stream.write(chunk)) {
          await waitForDrain()
        }
      }
    } catch (err) {
      h2stream.destroy(err)
    } finally {
      h2stream
        .off('close', onDrain)
        .off('drain', onDrain)
    }

    return
  }

  socket
@@ -1649,0 +2132,0 @@ .on('close', onDrain)
@@ -74,3 +74,3 @@ 'use strict'
function buildConnector ({ maxCachedSessions, socketPath, timeout, ...opts }) {
function buildConnector ({ allowH2, maxCachedSessions, socketPath, timeout, ...opts }) {
  if (maxCachedSessions != null && (!Number.isInteger(maxCachedSessions) || maxCachedSessions < 0)) {
@@ -83,3 +83,3 @@ throw new InvalidArgumentError('maxCachedSessions must be a positive integer or zero')
  timeout = timeout == null ? 10e3 : timeout
  allowH2 = allowH2 != null ? allowH2 : false
  return function connect ({ hostname, host, protocol, port, servername, localAddress, httpSocket }, callback) {
@@ -104,2 +104,4 @@ let socket
      localAddress,
      // TODO(HTTP/2): Add support for h2c
      ALPNProtocols: allowH2 ? ['http/1.1', 'h2'] : ['http/1.1'],
      socket: httpSocket, // upgrade socket connection
@@ -106,0 +108,0 @@ port: port || 443,
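The connector's ALPN list determines what the TLS handshake is allowed to negotiate: `http/1.1` always, and `h2` only when `allowH2` is set (cleartext HTTP/2, h2c, remains a TODO per the comment above). The selection, factored out into a standalone helper for illustration:

```javascript
// Factored-out ALPN list selection, mirroring the connector change above.
function alpnProtocols (allowH2) {
  allowH2 = allowH2 != null ? allowH2 : false
  // TODO(HTTP/2): h2c (cleartext HTTP/2) would need a separate code path.
  return allowH2 ? ['http/1.1', 'h2'] : ['http/1.1']
}
```

The array is what would be passed as the `ALPNProtocols` option to `tls.connect`; the server then picks one, and the client later inspects `socket.alpnProtocol` to decide between the h1 and h2 paths.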
@@ -8,2 +8,3 @@ 'use strict'
const assert = require('assert')
const { kHTTP2BuildRequest, kHTTP2CopyHeaders, kHTTP1BuildRequest } = require('./symbols')
const util = require('./util')
@@ -66,3 +67,4 @@
    reset,
    throwOnError
    throwOnError,
    expectContinue
  }, handler) {
@@ -103,2 +105,6 @@ if (typeof path !== 'string') {
    if (expectContinue != null && typeof expectContinue !== 'boolean') {
      throw new InvalidArgumentError('invalid expectContinue')
    }

    this.headersTimeout = headersTimeout
@@ -156,2 +162,5 @@
    // Only for H2
    this.expectContinue = expectContinue != null ? expectContinue : false

    if (Array.isArray(headers)) {
@@ -276,2 +285,3 @@ if (headers.length % 2 !== 0) {
  // TODO: adjust to support H2
  addHeader (key, value) {
@@ -281,5 +291,55 @@ processHeader(this, key, value)
  }

  static [kHTTP1BuildRequest] (origin, opts, handler) {
    // TODO: Migrate header parsing here, to make Requests
    // HTTP agnostic
    return new Request(origin, opts, handler)
  }

  static [kHTTP2BuildRequest] (origin, opts, handler) {
    const headers = opts.headers
    opts = { ...opts, headers: null }
    const request = new Request(origin, opts, handler)

    request.headers = {}

    if (Array.isArray(headers)) {
      if (headers.length % 2 !== 0) {
        throw new InvalidArgumentError('headers array must be even')
      }
      for (let i = 0; i < headers.length; i += 2) {
        processHeader(request, headers[i], headers[i + 1], true)
      }
    } else if (headers && typeof headers === 'object') {
      const keys = Object.keys(headers)
      for (let i = 0; i < keys.length; i++) {
        const key = keys[i]
        processHeader(request, key, headers[key], true)
      }
    } else if (headers != null) {
      throw new InvalidArgumentError('headers must be an object or an array')
    }

    return request
  }

  static [kHTTP2CopyHeaders] (raw) {
    const rawHeaders = raw.split('\r\n')

    const headers = {}

    for (const header of rawHeaders) {
      const [key, value] = header.split(': ')

      if (value == null || value.length === 0) continue

      if (headers[key]) headers[key] += `,${value}`
      else headers[key] = value
    }

    return headers
  }
}
-function processHeaderValue (key, val) {
+function processHeaderValue (key, val, skipAppend) {
   if (val && typeof val === 'object') {
@@ -295,6 +355,6 @@ throw new InvalidArgumentError(`invalid ${key} header`)
-  return `${key}: ${val}\r\n`
+  return skipAppend ? val : `${key}: ${val}\r\n`
 }

-function processHeader (request, key, val) {
+function processHeader (request, key, val, skipAppend = false) {
   if (val && (typeof val === 'object' && !Array.isArray(val))) {
@@ -367,6 +427,12 @@ throw new InvalidArgumentError(`invalid ${key} header`)
     for (let i = 0; i < val.length; i++) {
-      request.headers += processHeaderValue(key, val[i])
+      if (skipAppend) {
+        if (request.headers[key]) request.headers[key] += `,${processHeaderValue(key, val[i], skipAppend)}`
+        else request.headers[key] = processHeaderValue(key, val[i], skipAppend)
+      } else {
+        request.headers += processHeaderValue(key, val[i])
+      }
     }
   } else {
-    request.headers += processHeaderValue(key, val)
+    if (skipAppend) request.headers[key] = processHeaderValue(key, val, skipAppend)
+    else request.headers += processHeaderValue(key, val)
   }
@@ -373,0 +439,0 @@ }
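The `skipAppend` flag added above switches `processHeader` between two representations: the HTTP/1 path accumulates headers as one raw `key: value\r\n` string, while the H2 path stores them in an object with duplicates comma-joined. A simplified sketch (validation elided, names match the diff but the body is reduced for illustration):

```javascript
// Simplified processHeader: skipAppend=false → HTTP/1 string form,
// skipAppend=true → H2 object form with comma-joined duplicates.
function processHeader (request, key, val, skipAppend = false) {
  const values = Array.isArray(val) ? val : [val]
  for (const v of values) {
    if (skipAppend) {
      // H2: request.headers is an object.
      request.headers[key] = request.headers[key] ? `${request.headers[key]},${v}` : `${v}`
    } else {
      // HTTP/1: request.headers is a raw header-block string.
      request.headers += `${key}: ${v}\r\n`
    }
  }
}

const h1 = { headers: '' }
processHeader(h1, 'accept', 'text/html')
console.log(h1.headers) // 'accept: text/html\r\n'

const h2 = { headers: {} }
processHeader(h2, 'cookie', ['a=1', 'b=2'], true)
console.log(h2.headers) // { cookie: 'a=1,b=2' }
```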
@@ -54,3 +54,9 @@ module.exports = {
   kInterceptors: Symbol('dispatch interceptors'),
-  kMaxResponseSize: Symbol('max response size')
+  kMaxResponseSize: Symbol('max response size'),
+  kHTTP2Session: Symbol('http2Session'),
+  kHTTP2SessionState: Symbol('http2Session state'),
+  kHTTP2BuildRequest: Symbol('http2 build request'),
+  kHTTP1BuildRequest: Symbol('http1 build request'),
+  kHTTP2CopyHeaders: Symbol('http2 copy headers'),
+  kHTTPConnVersion: Symbol('http connection version')
 }
@@ -202,2 +202,3 @@ 'use strict'
   }
+
   stream.destroy(err)
@@ -222,2 +223,5 @@ } else if (err) {
 function parseHeaders (headers, obj = {}) {
+  // For H2 support
+  if (!Array.isArray(headers)) return headers
+
   for (let i = 0; i < headers.length; i += 2) {
@@ -360,2 +364,8 @@ const key = headers[i].toString().toLowerCase()
+async function * convertIterableToBuffer (iterable) {
+  for await (const chunk of iterable) {
+    yield Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk)
+  }
+}
+
 let ReadableStream
@@ -368,4 +378,3 @@ function ReadableStreamFrom (iterable) {
   if (ReadableStream.from) {
-    // https://github.com/whatwg/streams/pull/1083
-    return ReadableStream.from(iterable)
+    return ReadableStream.from(convertIterableToBuffer(iterable))
   }
@@ -372,0 +381,0 @@
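The new `convertIterableToBuffer` wrapper guarantees that `ReadableStream.from()` only ever sees `Buffer` chunks, coercing anything else (strings, plain `Uint8Array`s) on the fly. It can be exercised on its own:

```javascript
// Async generator that normalizes every chunk of an iterable to a Buffer,
// matching the helper added in the diff above.
async function * convertIterableToBuffer (iterable) {
  for await (const chunk of iterable) {
    yield Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk)
  }
}

async function main () {
  const chunks = []
  // Mixed input: a string and a Buffer.
  for await (const chunk of convertIterableToBuffer(['he', Buffer.from('llo')])) {
    chunks.push(chunk)
  }
  console.log(chunks.every(Buffer.isBuffer)) // true
  console.log(Buffer.concat(chunks).toString()) // 'hello'
}

main()
```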
@@ -390,2 +390,3 @@ 'use strict'
   headers,
+  preservePath: true,
   defParamCharset: 'utf8'
@@ -536,3 +537,3 @@ })
   // errorSteps, and object’s relevant global object.
-  fullyReadBody(object[kState].body, successSteps, errorSteps)
+  await fullyReadBody(object[kState].body, successSteps, errorSteps)
@@ -539,0 +540,0 @@ // 7. Return promise.
@@ -52,3 +52,3 @@ 'use strict'
   // https://fetch.spec.whatwg.org/#dom-response-json
-  static json (data = undefined, init = {}) {
+  static json (data, init = {}) {
     webidl.argumentLengthCheck(arguments, 1, { header: 'Response.json' })
@@ -430,3 +430,3 @@
 // https://fetch.spec.whatwg.org/#appropriate-network-error
-function makeAppropriateNetworkError (fetchParams) {
+function makeAppropriateNetworkError (fetchParams, err = null) {
   // 1. Assert: fetchParams is canceled.
@@ -438,4 +438,4 @@ assert(isCancelled(fetchParams))
   return isAborted(fetchParams)
-    ? makeNetworkError(new DOMException('The operation was aborted.', 'AbortError'))
-    : makeNetworkError('Request was cancelled.')
+    ? makeNetworkError(Object.assign(new DOMException('The operation was aborted.', 'AbortError'), { cause: err }))
+    : makeNetworkError(Object.assign(new DOMException('Request was cancelled.'), { cause: err }))
 }
@@ -442,0 +442,0 @@
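The `makeAppropriateNetworkError` change threads the low-level error through as `cause` on the resulting `DOMException`. The `DOMException` constructor takes only a message and a name (no options bag with `cause`), hence the `Object.assign` pattern. A sketch of that pattern (Node 17+, where `DOMException` is a global):

```javascript
// Attach an underlying error as `cause` on a DOMException,
// as the diff above does for aborted/cancelled fetches.
const underlying = new Error('socket hang up')

const err = Object.assign(
  new DOMException('The operation was aborted.', 'AbortError'),
  { cause: underlying }
)

console.log(err.name) // 'AbortError'
console.log(err.cause === underlying) // true
```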
@@ -559,7 +559,18 @@ 'use strict'
   // 2. Let expectedValue be the val component of item.
-  const expectedValue = item.hash
+  let expectedValue = item.hash
+
+  // See https://github.com/web-platform-tests/wpt/commit/e4c5cc7a5e48093220528dfdd1c4012dc3837a0e
+  // "be liberal with padding". This is annoying, and it's not even in the spec.
+  if (expectedValue.endsWith('==')) {
+    expectedValue = expectedValue.slice(0, -2)
+  }

   // 3. Let actualValue be the result of applying algorithm to bytes.
-  const actualValue = crypto.createHash(algorithm).update(bytes).digest('base64')
+  let actualValue = crypto.createHash(algorithm).update(bytes).digest('base64')
+
+  if (actualValue.endsWith('==')) {
+    actualValue = actualValue.slice(0, -2)
+  }

   // 4. If actualValue is a case-sensitive match for expectedValue,
@@ -570,2 +581,12 @@ // return true.
   }
+
+  let actualBase64URL = crypto.createHash(algorithm).update(bytes).digest('base64url')
+
+  if (actualBase64URL.endsWith('==')) {
+    actualBase64URL = actualBase64URL.slice(0, -2)
+  }
+
+  if (actualBase64URL === expectedValue) {
+    return true
+  }
 }
@@ -817,3 +838,3 @@
  */
-function fullyReadBody (body, processBody, processBodyError) {
+async function fullyReadBody (body, processBody, processBodyError) {
   // 1. If taskDestination is null, then set taskDestination to
@@ -824,7 +845,7 @@ // the result of starting a new parallel queue.
   // fetch task to run processBody given bytes, with taskDestination.
-  const successSteps = (bytes) => queueMicrotask(() => processBody(bytes))
+  const successSteps = processBody

   // 3. Let errorSteps be to queue a fetch task to run processBodyError,
   // with taskDestination.
-  const errorSteps = (error) => queueMicrotask(() => processBodyError(error))
+  const errorSteps = processBodyError
@@ -844,3 +865,8 @@ // 4. Let reader be the result of getting a reader for body’s stream.
   // 5. Read all bytes from reader, given successSteps and errorSteps.
-  readAllBytes(reader, successSteps, errorSteps)
+  try {
+    const result = await readAllBytes(reader)
+    successSteps(result)
+  } catch (e) {
+    errorSteps(e)
+  }
 }
@@ -914,6 +940,4 @@
  * @param {ReadableStreamDefaultReader} reader
- * @param {(bytes: Uint8Array) => void} successSteps
- * @param {(error: Error) => void} failureSteps
  */
-async function readAllBytes (reader, successSteps, failureSteps) {
+async function readAllBytes (reader) {
   const bytes = []
@@ -923,17 +947,7 @@ let byteLength = 0
   while (true) {
-    let done
-    let chunk
-
-    try {
-      ({ done, value: chunk } = await reader.read())
-    } catch (e) {
-      // 1. Call failureSteps with e.
-      failureSteps(e)
-      return
-    }
+    const { done, value: chunk } = await reader.read()

     if (done) {
-      // 1. Call successSteps with bytes.
-      successSteps(Buffer.concat(bytes, byteLength))
-      return
+      return Buffer.concat(bytes, byteLength)
     }
@@ -944,4 +958,3 @@
     if (!isUint8Array(chunk)) {
-      failureSteps(new TypeError('Received non-Uint8Array chunk'))
-      return
+      throw new TypeError('Received non-Uint8Array chunk')
     }
@@ -948,0 +961,0 @@
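The refactor above replaces callback-style success/failure steps with a plain async function that returns the concatenated bytes or throws. A self-contained sketch of the new shape, run against a web `ReadableStream` (a global in Node 18+; the accumulation loop below mirrors the diff, with the chunk-count bookkeeping simplified):

```javascript
// Read every chunk from a ReadableStreamDefaultReader and return
// one concatenated Buffer, throwing on non-Uint8Array chunks.
async function readAllBytes (reader) {
  const bytes = []
  let byteLength = 0
  while (true) {
    const { done, value: chunk } = await reader.read()
    if (done) {
      return Buffer.concat(bytes, byteLength)
    }
    if (!(chunk instanceof Uint8Array)) {
      throw new TypeError('Received non-Uint8Array chunk')
    }
    bytes.push(chunk)
    byteLength += chunk.length
  }
}

async function main () {
  const stream = new ReadableStream({
    start (controller) {
      controller.enqueue(new Uint8Array([104, 105])) // 'hi'
      controller.enqueue(new Uint8Array([33]))       // '!'
      controller.close()
    }
  })
  const body = await readAllBytes(stream.getReader())
  console.log(body.toString()) // 'hi!'
}

main()
```

Because errors now propagate as rejections, the caller (`fullyReadBody`) can wrap the single `await` in `try/catch` instead of passing two callbacks down.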
@@ -6,2 +6,3 @@ 'use strict'
 const { URLSerializer } = require('../fetch/dataURL')
+const { getGlobalOrigin } = require('../fetch/global')
 const { staticPropertyDescriptors, states, opcodes, emptyBuffer } = require('./constants')
@@ -61,14 +62,24 @@ const {
-    // 1. Let urlRecord be the result of applying the URL parser to url.
+    // 1. Let baseURL be this's relevant settings object's API base URL.
+    const baseURL = getGlobalOrigin()
+
+    // 2. Let urlRecord be the result of applying the URL parser to url with baseURL.
     let urlRecord

     try {
-      urlRecord = new URL(url)
+      urlRecord = new URL(url, baseURL)
     } catch (e) {
-      // 2. If urlRecord is failure, then throw a "SyntaxError" DOMException.
+      // 3. If urlRecord is failure, then throw a "SyntaxError" DOMException.
       throw new DOMException(e, 'SyntaxError')
     }

-    // 3. If urlRecord’s scheme is not "ws" or "wss", then throw a
-    // "SyntaxError" DOMException.
+    // 4. If urlRecord’s scheme is "http", then set urlRecord’s scheme to "ws".
+    if (urlRecord.protocol === 'http:') {
+      urlRecord.protocol = 'ws:'
+    } else if (urlRecord.protocol === 'https:') {
+      // 5. Otherwise, if urlRecord’s scheme is "https", set urlRecord’s scheme to "wss".
+      urlRecord.protocol = 'wss:'
+    }
+
+    // 6. If urlRecord’s scheme is not "ws" or "wss", then throw a "SyntaxError" DOMException.
     if (urlRecord.protocol !== 'ws:' && urlRecord.protocol !== 'wss:') {
@@ -81,9 +92,9 @@ throw new DOMException(
-    // 4. If urlRecord’s fragment is non-null, then throw a "SyntaxError"
+    // 7. If urlRecord’s fragment is non-null, then throw a "SyntaxError"
     // DOMException.
-    if (urlRecord.hash) {
+    if (urlRecord.hash || urlRecord.href.endsWith('#')) {
       throw new DOMException('Got fragment', 'SyntaxError')
     }

-    // 5. If protocols is a string, set protocols to a sequence consisting
+    // 8. If protocols is a string, set protocols to a sequence consisting
     // of just that string.
@@ -94,3 +105,3 @@ if (typeof protocols === 'string') {
-    // 6. If any of the values in protocols occur more than once or otherwise
+    // 9. If any of the values in protocols occur more than once or otherwise
     // fail to match the requirements for elements that comprise the value
@@ -107,8 +118,8 @@ // of `Sec-WebSocket-Protocol` fields as defined by The WebSocket
-    // 7. Set this's url to urlRecord.
-    this[kWebSocketURL] = urlRecord
+    // 10. Set this's url to urlRecord.
+    this[kWebSocketURL] = new URL(urlRecord.href)

-    // 8. Let client be this's relevant settings object.
+    // 11. Let client be this's relevant settings object.

-    // 9. Run this step in parallel:
+    // 12. Run this step in parallel:
@@ -115,0 +126,0 @@ // 1. Establish a WebSocket connection given urlRecord, protocols,
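The WebSocket constructor changes above can be followed with the plain WHATWG `URL` API: resolve against a base URL, map `http(s):` to `ws(s):` via the `protocol` setter (allowed, since both sides are special schemes), then reject anything else and any fragment. The sketch below mirrors steps 4–7; the helper name is ours, and it throws a plain `SyntaxError` where undici throws a `"SyntaxError"` `DOMException`:

```javascript
// Resolve and normalize a WebSocket URL the way the updated
// constructor steps above do.
function normalizeWebSocketURL (url, baseURL) {
  const urlRecord = new URL(url, baseURL)
  // http(s) → ws(s) scheme mapping (new in this version).
  if (urlRecord.protocol === 'http:') {
    urlRecord.protocol = 'ws:'
  } else if (urlRecord.protocol === 'https:') {
    urlRecord.protocol = 'wss:'
  }
  if (urlRecord.protocol !== 'ws:' && urlRecord.protocol !== 'wss:') {
    throw new SyntaxError(`invalid scheme: ${urlRecord.protocol}`)
  }
  // Fragments are forbidden, including a bare trailing '#'.
  if (urlRecord.hash || urlRecord.href.endsWith('#')) {
    throw new SyntaxError('Got fragment')
  }
  return urlRecord
}

console.log(normalizeWebSocketURL('https://example.com/chat').href)
// 'wss://example.com/chat'
console.log(normalizeWebSocketURL('/chat', 'https://example.com').href)
// 'wss://example.com/chat'
```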
 {
   "name": "undici",
-  "version": "5.23.0",
+  "version": "5.24.0-test.0",
   "description": "An HTTP/1.1 client, written from scratch for Node.js",
@@ -67,6 +67,7 @@ "homepage": "https://undici.nodejs.org",
     "prepare": "husky install",
+    "postpublish": "node scripts/publish-types.js",
     "fuzz": "jsfuzz test/fuzzing/fuzz.js corpus"
   },
   "devDependencies": {
-    "@sinonjs/fake-timers": "^10.0.2",
+    "@sinonjs/fake-timers": "^11.1.0",
     "@types/node": "^18.0.3",
@@ -102,3 +103,3 @@ "abort-controller": "^3.0.0",
     "tap": "^16.1.0",
-    "tsd": "^0.28.1",
+    "tsd": "^0.29.0",
     "typescript": "^5.0.2",
@@ -105,0 +106,0 @@ "wait-on": "^7.0.1",
@@ -21,16 +21,18 @@ # undici
 The benchmark is a simple `hello world` [example](benchmarks/benchmark.js) using a
-number of unix sockets (connections) with a pipelining depth of 10 running on Node 16.
-The benchmarks below have the [simd](https://github.com/WebAssembly/simd) feature enabled.
+number of unix sockets (connections) with a pipelining depth of 10 running on Node 20.6.0.

 ### Connections 1

 | Tests               | Samples | Result        | Tolerance | Difference with slowest |
 |---------------------|---------|---------------|-----------|-------------------------|
-| http - no keepalive | 15      | 4.63 req/sec  | ± 2.77 %  | -                       |
-| http - keepalive    | 10      | 4.81 req/sec  | ± 2.16 %  | + 3.94 %                |
-| undici - stream     | 25      | 62.22 req/sec | ± 2.67 %  | + 1244.58 %             |
-| undici - dispatch   | 15      | 64.33 req/sec | ± 2.47 %  | + 1290.24 %             |
-| undici - request    | 15      | 66.08 req/sec | ± 2.48 %  | + 1327.88 %             |
-| undici - pipeline   | 10      | 66.13 req/sec | ± 1.39 %  | + 1329.08 %             |
+| http - no keepalive | 15      | 5.32 req/sec  | ± 2.61 %  | -                       |
+| http - keepalive    | 10      | 5.35 req/sec  | ± 2.47 %  | + 0.44 %                |
+| undici - fetch      | 15      | 41.85 req/sec | ± 2.49 %  | + 686.04 %              |
+| undici - pipeline   | 40      | 50.36 req/sec | ± 2.77 %  | + 845.92 %              |
+| undici - stream     | 15      | 60.58 req/sec | ± 2.75 %  | + 1037.72 %             |
+| undici - request    | 10      | 61.19 req/sec | ± 2.60 %  | + 1049.24 %             |
+| undici - dispatch   | 20      | 64.84 req/sec | ± 2.81 %  | + 1117.81 %             |

 ### Connections 50

@@ -40,9 +42,11 @@
 | Tests               | Samples | Result           | Tolerance | Difference with slowest |
 |---------------------|---------|------------------|-----------|-------------------------|
-| http - no keepalive | 50      | 3546.49 req/sec  | ± 2.90 %  | -                       |
-| http - keepalive    | 15      | 5692.67 req/sec  | ± 2.48 %  | + 60.52 %               |
-| undici - pipeline   | 25      | 8478.71 req/sec  | ± 2.62 %  | + 139.07 %              |
-| undici - request    | 20      | 9766.66 req/sec  | ± 2.79 %  | + 175.39 %              |
-| undici - stream     | 15      | 10109.74 req/sec | ± 2.94 %  | + 185.06 %              |
-| undici - dispatch   | 25      | 10949.73 req/sec | ± 2.54 %  | + 208.75 %              |
+| undici - fetch      | 30      | 2107.19 req/sec  | ± 2.69 %  | -                       |
+| http - no keepalive | 10      | 2698.90 req/sec  | ± 2.68 %  | + 28.08 %               |
+| http - keepalive    | 10      | 4639.49 req/sec  | ± 2.55 %  | + 120.17 %              |
+| undici - pipeline   | 40      | 6123.33 req/sec  | ± 2.97 %  | + 190.59 %              |
+| undici - stream     | 50      | 9426.51 req/sec  | ± 2.92 %  | + 347.35 %              |
+| undici - request    | 10      | 10162.88 req/sec | ± 2.13 %  | + 382.29 %              |
+| undici - dispatch   | 50      | 11191.11 req/sec | ± 2.98 %  | + 431.09 %              |

 ## Quick Start
@@ -49,0 +53,0 @@
 import { URL } from 'url'
 import { TlsOptions } from 'tls'
 import Dispatcher from './dispatcher'
-import DispatchInterceptor from './dispatcher'
 import buildConnector from "./connector";
@@ -22,3 +21,3 @@
 export interface OptionsInterceptors {
-  Client: readonly DispatchInterceptor[];
+  Client: readonly Dispatcher.DispatchInterceptor[];
 }
@@ -30,3 +29,3 @@ export interface Options {
   maxHeaderSize?: number;
-  /** The amount of time the parser will wait to receive the complete HTTP headers (Node 14 and above only). Default: `300e3` milliseconds (300s). */
+  /** The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers (Node 14 and above only). Default: `300e3` milliseconds (300s). */
   headersTimeout?: number;
@@ -45,9 +44,9 @@ /** @deprecated unsupported socketTimeout, use headersTimeout & bodyTimeout instead */
   keepAlive?: never;
-  /** the timeout after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. Default: `4e3` milliseconds (4s). */
+  /** the timeout, in milliseconds, after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. Default: `4e3` milliseconds (4s). */
   keepAliveTimeout?: number;
   /** @deprecated unsupported maxKeepAliveTimeout, use keepAliveMaxTimeout instead */
   maxKeepAliveTimeout?: never;
-  /** the maximum allowed `idleTimeout` when overridden by *keep-alive* hints from the server. Default: `600e3` milliseconds (10min). */
+  /** the maximum allowed `idleTimeout`, in milliseconds, when overridden by *keep-alive* hints from the server. Default: `600e3` milliseconds (10min). */
   keepAliveMaxTimeout?: number;
-  /** A number subtracted from server *keep-alive* hints when overriding `idleTimeout` to account for timing inaccuracies caused by e.g. transport latency. Default: `1e3` milliseconds (1s). */
+  /** A number of milliseconds subtracted from server *keep-alive* hints when overriding `idleTimeout` to account for timing inaccuracies caused by e.g. transport latency. Default: `1e3` milliseconds (1s). */
   keepAliveTimeoutThreshold?: number;
@@ -77,3 +76,13 @@ /** TODO */
   /** The amount of time in milliseconds to wait for a connection attempt to finish before trying the next address when using the `autoSelectFamily` option. */
   autoSelectFamilyAttemptTimeout?: number;
+  /**
+   * @description Enables support for H2 if the server has assigned a bigger priority to it through ALPN negotiation.
+   * @default false
+   */
+  allowH2?: boolean;
+  /**
+   * @description Dictates the maximum number of concurrent streams for a single H2 session. It can be overridden by a SETTINGS remote frame.
+   * @default 100
+   */
+  maxConcurrentStreams?: number
 }
@@ -80,0 +89,0 @@ export interface SocketInfo {
@@ -112,3 +112,3 @@ import { URL } from 'url'
   upgrade?: boolean | string | null;
-  /** The amount of time the parser will wait to receive the complete HTTP headers. Defaults to 300 seconds. */
+  /** The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers. Defaults to 300 seconds. */
   headersTimeout?: number | null;
@@ -121,2 +121,4 @@ /** The timeout after which a request will time out, in milliseconds. Monitors time between receiving body data. Use 0 to disable it entirely. Defaults to 300 seconds. */
   throwOnError?: boolean;
+  /** For H2, appends the `expect: 100-continue` header and halts the request body until a 100-continue is received from the remote server. */
+  expectContinue?: boolean;
 }
@@ -123,0 +125,0 @@ export interface ConnectOptions {
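The two new `Client` options above are the user-facing surface of the H2 work in this release. A sketch of an options object using them; the concrete values here are hypothetical, chosen for illustration (`allowH2` defaults to `false`, `maxConcurrentStreams` to `100`, per the typings):

```javascript
// Hypothetical H2 options for an undici Client, matching the new
// typings above. Passed as the second argument to `new Client(origin, opts)`.
const h2Options = {
  allowH2: true,            // experimental: enable HTTP/2 when ALPN negotiates it
  maxConcurrentStreams: 50  // cap on concurrent streams per H2 session
}

console.log(h2Options.allowH2) // true
```

Note that `allowH2` only takes effect over TLS connections where ALPN actually negotiates `h2`; plain-HTTP/1.1 servers are unaffected.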
Sorry, the diff of this file is too big to display.