ssh2-sftp-client-fork
An SFTP client for Node.js, a wrapper around SSH2 which provides a high level convenience abstraction as well as a Promise based API.
Documentation on the methods and available options in the underlying modules can be found on the SSH2 and SSH2-STREAMS project pages.
Current stable release is v5.2.1.
Code has been tested against Node versions 12.18.2 and 13.14.0
Node versions < 10.x are not supported.
WARNING There is currently an issue with both the fastPut() and fastGet() methods when using Node versions greater than 14.0.0. This is a bug in the underlying ssh2-streams library and needs to be fixed upstream. The issue appears to be related to the concurrency operations of these two functions. A workaround is to set concurrency to 1 using the options object. Alternatively, use get() or put(), which do not use concurrency and which will provide the same performance as fastGet() or fastPut() when they are set to use a concurrency of 1. A bug report has been logged against the ssh2-streams library as issue 156.
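For example, a minimal sketch of the concurrency workaround, assuming an already connected client and placeholder paths:
// Workaround: force fastGet() to perform a single read at a time (concurrency 1).
// The remote and local paths here are placeholders.
sftp.fastGet('/remote/path/file.txt', '/local/path/file.txt', { concurrency: 1 })
  .then(() => console.log('download complete'))
  .catch(err => console.error(err.message));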
npm install ssh2-sftp-client
let Client = require('ssh2-sftp-client');
let sftp = new Client();
sftp.connect({
host: '127.0.0.1',
port: '8080',
username: 'username',
password: '******'
}).then(() => {
return sftp.list('/pathname');
}).then(data => {
console.log(data, 'the data info');
}).catch(err => {
console.log(err, 'catch error');
});
Recent changes include:
- The functionality of auxList() is available in list(), making auxList() unnecessary.
- A false value is now returned if the path does not exist rather than throwing an exception.
- Improved error event handling. The ssh2 and ssh2-streams libraries use events to signal errors. Providing a clean Promise based API and managing these events can be challenging, as an error event can fire at any time (including in-between the resolution of one promise and the commencement of another). As you cannot use try/catch blocks to reliably manage error events (for a similar reason - see Node's event documentation for details), a slightly more complex solution was required. See the section below on Error Event Handling for more details. In basic terms, a default handler is now used that will log the error and clear the SFTP connection if no Promise error handler has handled the error. This prevents the uncaughtException error and provides a reasonably clean way to deal with unexpected errors that fire in-between Promise execution activities.
- Improved end() processing. At least one SFTP server (Azure SFTP) seems to generate an error in response to the end() call. As end() has been called, we don't really care if an error occurs provided the connection is closed. Therefore, a new default error listener for the end() method has been added that will simply ignore any errors which occur during a call to end the connection.
Error Event Handling
Providing a clean Promise API for the SSH2 module to manage basic SFTP functionality presents a couple of challenges for managing errors. The SSH2 module uses events to communicate various state changes and error conditions. These events can fire at any time.
On the client side, we wrap basic SFTP actions in Javascript Promises, allowing clients to use either the standard Promise API or async/await to model SFTP interactions. Creating an SFTP connection returns a promise, which resolves if a connection is successfully established and is rejected otherwise. Downloading a file using get()
or fastGet()
generates a new Promise which is either resolved, indicating the file has been successfully downloaded, or rejected, indicating the download failed. All pretty straightforward.
When the Promise is created, an error event handler is added to the SFTP object to catch any errors that fire during the execution of the promise. If an error event fires, the Promise is rejected and the error returned to the client as part of the rejection. After the Promise has resolved or rejected, the error listener is removed (the error listener is specific to each promise because it needs to call the reject method associated with that promise). As a promise can only be resolved or rejected once, after the Promise has completed, the error listener is of no further use.
This all works fine when an error event fires during the execution of a Promise. However, what about outside promise execution? Consider the following flow: you connect to the server, download a file and then spend some time processing the downloaded data locally before making your next SFTP call. While your script is busy processing the data, an error event fires on the connection.
What happens at this point? There is no active promise executing, so there is no Promise specific error handler in play. Your script is off processing the data from the previously downloaded file, so there is no currently executing try/catch block around the SFTP client object. Basically, there is nothing listening for errors at this point.
What happens is that the error event will bubble up to the top level of the node process context and cause an uncaughtException error, which will display the error, dump a stack trace and cause the node process to exit. In basic terms, your process will crash. Not a great outcome.
There are a number of things we can do to improve the situation, though nearly all of them have some drawbacks. One option is the client.on() method, which allows you to add your own error handler. This provides a way to manage error events, but you want to make sure you only handle error events not already handled by the Promise error handlers. Worse, you cannot know beforehand the processing context of your script at the point the error event fires, which means your error handling is likely to be complex and difficult to manage. These types of errors are also quite rare in most situations, so you are being asked to add significant additional complexity to deal with a rare edge case. However, sometimes you just need to deal with this sort of complexity, and the client.on() method does give you that option.
What we really want is a solution which will be simple for the majority of clients, but provide additional power when needed. What we have done is add a default error handler which will only take action if no Promise error handler has fired. All the default error handler does is log the error to console.error() and set the SFTP connection to undefined so that any further attempts to use the connection will throw an error inside the Promise which attempts to use it.
The advantage of this approach is that it stops the abrupt exiting of the node script due to an uncaught exception error and provides a reasonable outcome for most use cases. For example, in the scenario outlined above, if an error event fires while your script is processing the data already downloaded, it will not impact your script immediately. An error will be logged to console.error(), but your script will continue to run. Once you have completed processing your data, if you attempt another SFTP call, it will fail with an error about no available SFTP connections. As this will occur within the context of interacting with the SFTP server, your script can take appropriate action to resolve the issue (such as re-connecting to the server). On the other hand, if after processing the file you're done and just want to end, then you can just ignore the error, perform any necessary cleanup work and exit successfully.
The event handlers added by each Promise are added using the prependListener() function. This ensures the handler is fired before any other error handlers which may be defined. As part of their processing, these error handlers set a flag property this.errorHandled to true, indicating the error has been handled.
In addition to the Promise error handlers, there is a default error handler which will fire after any Promise error handler. The default error handler checks whether the this.errorHandled flag is true. If it is, it knows the error has been handled and it just resets the flag to false, taking no other action (so that we are ready for the next error). If the flag is false, the default handler knows it must handle the error. In this case, the handler will log the error to console.error(), set the SFTP connection to undefined to prevent any further attempts to use it and, finally, ensure the this.errorHandled flag is reset to false in preparation for the next error.
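Conceptually, the mechanism looks something like the following simplified sketch. This is not the actual module source; 'client' stands for the underlying ssh2 event emitter, and doOperation for any operation returning a promise; both names are illustrative.
// Per-promise listener: added with prependListener() so it runs before the
// default listener, and removed once the promise settles.
function runWithErrorListener(client, doOperation) {
  return new Promise((resolve, reject) => {
    const promiseErrorListener = err => {
      client.errorHandled = true; // flag checked later by the default listener
      reject(err);
    };
    client.prependListener('error', promiseErrorListener);
    doOperation()
      .then(resolve, reject)
      .finally(() => client.removeListener('error', promiseErrorListener));
  });
}

// Default listener, added once when the connection is created.
function makeDefaultErrorListener(client) {
  return err => {
    if (client.errorHandled) {
      client.errorHandled = false; // already handled; reset for the next error
    } else {
      console.error(err.message); // unexpected error outside any promise
      client.sftp = undefined; // further calls will fail inside their own promise
    }
  };
}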
The connection options are the same as those offered by the underlying SSH2 module. For full details, please see SSH2 client methods
All the methods will return a Promise, except for on()
and removeListener()
, which are typically only used in special use cases.
All remote paths must either be absolute e.g. /absolute/path/to/file
or they can be relative with a prefix of either ./
(relative to current remote directory) or ../
(relative to parent of current remote directory) e.g. ./relative/path/to/file
or ../relative/to/parent/file
. It is also possible to do things like ../../../file
to specify the parent of the parent of the parent of the current remote directory. The shell tilde (~
) and common environment variables like $HOME
are NOT supported.
It is important to recognise that the current remote directory may not always be what you may expect. A lot will depend on the remote platform of the SFTP server and how the SFTP server has been configured. When things don't seem to be working as expected, it is often a good idea to verify your assumptions regarding the remote directory and remote paths. One way to do this is to login using a command line program like sftp
or lftp
.
There is a small performance hit for using ./
and ../
as the module must query the remote server to determine what the root path is and derive the absolute path. Using absolute paths is therefore more efficient and likely more robust.
When specifying file paths, be sure to include the full path i.e. include the remote filename. Don't expect the module to append the local file name to the path you provide. For example, the following will not work
client.put('/home/fred/test.txt', '/remote/dir');
will not result in the file test.txt
being copied to /remote/dir/test.txt
. You need to specify the target filename as well e.g.
client.put('/home/fred/test.txt', '/remote/dir/test.txt');
Note that the remote file name does not have to be the same as the local file name. The following works fine;
client.put('/home/fred/test.txt', '/remote/dir/test-copy.txt');
This will copy the local file test.txt
to the remote file test-copy.txt
in the directory /remote/dir
.
Constructor to create a new ssh2-sftp-client
object. An optional name
string can be provided, which will be used in error messages to help identify which client has thrown the error.
Constructor Arguments
name: (optional) a string used in error messages to help identify which client raised the error.
Example Use
'use strict';
const Client = require('ssh2-sftp-client');
const config = {
host: 'example.com',
username: 'donald',
password: 'my-secret'
};
const sftp = new Client('example-client');
sftp.connect(config)
.then(() => {
return sftp.cwd();
})
.then(p => {
console.log(`Remote working directory is ${p}`);
return sftp.end();
})
.catch(err => {
console.log(`Error: ${err.message}`); // error message will include 'example-client'
});
Connect to an sftp server. Full documentation for connection options is available in the SSH2 module documentation.
Connection Options
This module is based on the excellent SSH2 module. That module is a general SSH2 client and server library and provides much more functionality than just SFTP connectivity. Many of the connect options provided by that module are less relevant for SFTP connections. It is recommended you keep the config options to the minimum needed and stick to the options listed in the commonOpts
below.
The retries
, retry_factor
and retry_minTimeout
options are not part of the SSH2 module. They are part of the configuration for the retry package and are used to enable retrying of sftp connection attempts. See the documentation for that package for an explanation of these values.
// common options
let commonOpts = {
host: 'localhost', // string Hostname or IP of server.
port: 22, // Port number of the server.
forceIPv4: false, // boolean (optional) Only connect via IPv4 address
forceIPv6: false, // boolean (optional) Only connect via IPv6 address
username: 'donald', // string Username for authentication.
password: 'borsch', // string Password for password-based user authentication
agent: process.env.SSH_AGENT, // string - Path to ssh-agent's UNIX socket
privateKey: fs.readFileSync('/path/to/key'), // Buffer or string that contains a private key
passphrase: 'a pass phrase', // string - For an encrypted private key
readyTimeout: 20000, // integer How long (in ms) to wait for the SSH handshake
strictVendor: true, // boolean - Performs a strict server vendor check
debug: myDebug, // function - Set this to a function that receives a single
// string argument to get detailed (local) debug information.
retries: 2, // integer. Number of times to retry connecting
retry_factor: 2, // integer. Time factor used to calculate time between retries
retry_minTimeout: 2000 // integer. Minimum timeout between attempts
};
// rarely used options
let advancedOpts = {
localAddress,
localPort,
hostHash,
hostVerifier,
agentForward,
localHostname,
localUsername,
tryKeyboard,
authHandler,
keepaliveInterval,
keepaliveCountMax,
sock,
algorithms,
compress
};
Example Use
sftp.connect({
host: 'example.com',
port: 22,
username: 'donald',
password: 'youarefired'
});
Retrieves a directory listing. This method returns a Promise, which once realised, returns an array of objects representing items in the remote directory.
An optional filter pattern may be supplied; it defaults to /.*/, which matches all items (see Pattern Filter below).
Example Use
const Client = require('ssh2-sftp-client');
const config = {
host: 'example.com',
port: 22,
username: 'red-don',
password: 'my-secret'
};
let sftp = new Client;
sftp.connect(config)
.then(() => {
return sftp.list('/path/to/remote/dir');
})
.then(data => {
console.log(data);
})
.then(() => {
sftp.end();
})
.catch(err => {
console.error(err.message);
});
Return Objects
The objects in the array returned by list()
have the following properties;
{
type: // file type(-, d, l)
name: // file name
size: // file size
modifyTime: // file timestamp of modified time
accessTime: // file timestamp of access time
rights: {
user:
group:
other:
},
owner: // user ID
group: // group ID
}
Pattern Filter
The filter options can be a regular expression (most powerful option) or a simple glob-like string where * will match any number of characters, e.g.
foo* => foo, foobar, foobaz
*bar => bar, foobar, tabbar
*oo* => foo, foobar, look, book
The glob-style matching is very simple. In most cases, you are best off using a real regular expression which will allow you to do more powerful matching and anchor matches to the beginning/end of the string etc.
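For example, a short sketch, assuming the pattern is passed as an additional argument to list() (the remote path is a placeholder):
// Only return entries whose names end in .txt, using a regular expression
sftp.list('/path/to/remote/dir', /.*\.txt$/)
  .then(entries => {
    entries.forEach(e => console.log(`${e.type} ${e.name} ${e.size}`));
  })
  .catch(err => console.error(err.message));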
Tests to see if remote file or directory exists. Returns type of remote object if it exists or false if it does not.
Example Use
const Client = require('ssh2-sftp-client');
const config = {
host: 'example.com',
port: 22,
username: 'red-don',
password: 'my-secret'
};
let sftp = new Client;
sftp.connect(config)
.then(() => {
return sftp.exists('/path/to/remote/dir');
})
.then(data => {
console.log(data); // will be false or d, -, l (dir, file or link)
})
.then(() => {
sftp.end();
})
.catch(err => {
console.error(err.message);
});
Returns the attributes associated with the object pointed to by path
.
Attributes
The stat()
method returns an object with the following properties;
let stats = {
mode: 33279, // integer representing type and permissions
uid: 1000, // user ID
gid: 985, // group ID
size: 5, // file size
accessTime: 1566868566000, // Last access time. milliseconds
modifyTime: 1566868566000, // last modify time. milliseconds
isDirectory: false, // true if object is a directory
isFile: true, // true if object is a file
isBlockDevice: false, // true if object is a block device
isCharacterDevice: false, // true if object is a character device
isSymbolicLink: false, // true if object is a symbolic link
isFIFO: false, // true if object is a FIFO
isSocket: false // true if object is a socket
};
Example Use
let client = new Client();
client.connect(config)
.then(() => {
return client.stat('/path/to/remote/file');
})
.then(data => {
// do something with data
})
.then(() => {
client.end();
})
.catch(err => {
console.error(err.message);
});
Retrieve a file from a remote SFTP server. The dst
argument defines the destination and can be either a string, a stream object or undefined. If it is a string, it is interpreted as the path to a location on the local file system (path should include the file name). If it is a stream object, the remote data is passed to it via a call to pipe(). If dst
is undefined, the method will put the data into a buffer and return that buffer when the Promise is resolved. If dst
is defined, it is returned when the Promise is resolved.
In general, if you're going to pass in a string as the destination, you are better off using the fastGet() method (see below) rather than the get() command.
Options
The options object can be used to pass options to the underlying readStream used to read the data from the remote server.
{
flags: 'r',
encoding: null,
handle: null,
mode: 0o666,
autoClose: true
}
Most of the time, you won't need to use any options. Occasionally, it may be useful to set the encoding, for example to 'utf-8'. However, it is important not to do this for binary files, to avoid data corruption.
Example Use
let client = new Client();
let remotePath = '/remote/server/path/file.txt';
let dst = fs.createWriteStream('/local/file/path/copy.txt');
client.connect(config)
.then(() => {
return client.get(remotePath, dst);
})
.then(() => {
client.end();
})
.catch(err => {
console.error(err.message);
});
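As noted above, if dst is left undefined, get() resolves with a buffer holding the file contents; a minimal sketch (the remote path is a placeholder):
// Download into memory instead of onto the local file system
client.get('/remote/server/path/file.txt')
  .then(buf => {
    console.log(`received ${buf.length} bytes`);
    return client.end();
  })
  .catch(err => console.error(err.message));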
By passing a zlib.createGunzip() writeable stream as the destination, you can both download and decompress a gzip file 'on the fly'.
Downloads a file at remotePath to localPath using parallel reads for faster throughput. This is the simplest method if you just want to download a file. Note the warning above regarding fastGet() and Node versions greater than 14.0.0; the concurrency workaround can be applied through the options for fastGet() (see below).
Options
{
concurrency: 64, // integer. Number of concurrent reads to use
chunkSize: 32768, // integer. Size of each read in bytes
step: function(total_transferred, chunk, total) // callback called each time a
// chunk is transferred
}
Sample Use
let client = new Client();
let remotePath = '/server/path/file.txt';
let localPath = '/local/path/file.txt';
client.connect(config)
.then(() => {
return client.fastGet(remotePath, localPath);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
Upload data from local system to remote server. If the src
argument is a string, it is interpreted as a local file path to be used for the data to transfer. If the src
argument is a buffer, the contents of the buffer are copied to the remote file and if it is a readable stream, the contents of that stream are piped to the remotePath
on the server.
Options
The following options are supported;
{
flags: 'w', // w - write and a - append
encoding: null, // use null for binary files
mode: 0o666, // mode to use for created file (rwx)
autoClose: true // automatically close the write stream when finished
}
The most common options to use are mode and encoding. The values shown above are the defaults. You do not have to set encoding to utf-8 for text files, null is fine for all file types. However, using utf-8 encoding for binary files will often result in data corruption.
Example Use
let client = new Client();
let data = fs.createReadStream('/path/to/local/file.txt');
let remote = '/path/to/remote/file.txt';
client.connect(config)
.then(() => {
return client.put(data, remote);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
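Uploading from a buffer follows the same pattern; a short sketch, assuming an already connected client (the remote path is a placeholder):
// Upload in-memory data directly to a remote file
client.put(Buffer.from('hello from a buffer'), '/path/to/remote/hello.txt')
  .then(() => client.end())
  .catch(err => console.error(err.message));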
If you are uploading from a local file path, you are generally better off using fastPut() (see below).
Uploads the data in the file at localPath to a new file on the remote server at remotePath using concurrency. The options object allows tweaking of the fast put process.
Options
{
concurrency: 64, // integer. Number of concurrent reads
chunkSize: 32768, // integer. Size of each read in bytes
mode: 0o755, // mixed. Integer or string representing the file mode to set
step: function(total_transferred, chunk, total) // function. Called every time
// a part of a file was transferred
}
Example Use
let localFile = '/path/to/file.txt';
let remoteFile = '/path/to/remote/file.txt';
let client = new Client();
client.connect(config)
.then(() => {
return client.fastPut(localFile, remoteFile);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
Append the input
data to an existing remote file. There is no integrity checking performed apart from normal writeStream checks. This function simply opens a writeStream on the remote file in append mode and writes the data passed in to the file.
Options
The following options are supported;
{
flags: 'a', // w - write and a - append
encoding: null, // use null for binary files
mode: 0o666, // mode to use for created file (rwx)
autoClose: true // automatically close the write stream when finished
}
The most common options to use are mode and encoding. The values shown above are the defaults. You do not have to set encoding to utf-8 for text files, null is fine for all file types. Generally, I would not attempt to append binary files.
Example Use
let remotePath = '/path/to/remote/file.txt';
let client = new Client();
client.connect(config)
.then(() => {
return client.append(Buffer.from('Hello world'), remotePath);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
Create a new directory. If the recursive flag is set to true, the method will create any directories in the path which do not already exist. Recursive flag defaults to false.
Example Use
let remoteDir = '/path/to/new/dir';
let client = new Client();
client.connect(config)
.then(() => {
return client.mkdir(remoteDir, true);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
Remove a directory. If removing a directory and recursive flag is set to true
, the specified directory and all sub-directories and files will be deleted. If set to false and the directory has sub-directories or files, the action will fail.
Example Use
let remoteDir = '/path/to/remote/dir';
let client = new Client();
client.connect(config)
.then(() => {
return client.rmdir(remoteDir, true);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
Delete a file on the remote server.
Example Use
let remoteFile = '/path/to/remote/file.txt';
let client = new Client();
client.connect(config)
.then(() => {
return client.delete(remoteFile);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
Rename a file or directory from fromPath
to toPath
. You must have the necessary permissions to modify the remote file.
Example Use
let from = '/remote/path/to/old.txt';
let to = '/remote/path/to/new.txt';
let client = new Client();
client.connect(config)
.then(() => {
return client.rename(from, to);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
This method uses the OpenSSH POSIX rename extension introduced in OpenSSH 4.8. The advantage of this version of rename over standard SFTP rename is that it is an atomic operation and will allow renaming a resource where the destination name exists. The POSIX rename will also work on some filesystems which do not support standard SFTP rename because they don't support the system hardlink() call. The POSIX rename extension is available on all OpenSSH servers from 4.8 and some other implementations. This is an extension to the standard SFTP protocol and therefore is not supported on all SFTP servers.
let from = '/remote/path/to/old.txt';
let to = '/remote/path/to/new.txt';
let client = new Client();
client.connect(config)
.then(() => {
return client.posixRename(from, to);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
Change the mode (read, write or execute permissions) of a remote file or directory.
Example Use
let path = '/path/to/remote/file.txt';
let newMode = 0o644; // rw-r--r--
let client = new Client();
client.connect(config)
.then(() => {
return client.chmod(path, newMode);
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
Converts a relative path to an absolute path on the remote server. This method is mainly used internally to resolve remote path names. Returns '' if the path is not valid.
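A short sketch, assuming an already connected client (the relative path is illustrative):
// Resolve a relative remote path to an absolute one
client.realPath('./some/relative/dir')
  .then(absPath => {
    // absPath will be '' if the path is not valid
    console.log(`absolute path is ${absPath}`);
  })
  .catch(err => console.error(err.message));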
Returns what the server believes is the current remote working directory.
Upload the directory specified by srcDir
to the remote directory specified by dstDir
. The dstDir
will be created if necessary. Any sub directories within srcDir
will also be uploaded. Any existing files in the remote path will be overwritten.
The upload process also emits 'upload' events. These events are fired for each successfully uploaded file. The upload
event calls listeners with one argument, an object with source and destination properties. The source property is the path of the file uploaded and the destination property is the path the file was uploaded to. The purpose of this event is to provide some way for client code to get feedback on the upload progress. You can add your own listener using the on()
method.
Example
'use strict';
// Example of using the uploadDir() method to upload a directory
// to a remote SFTP server
const path = require('path');
const SftpClient = require('../src/index');
const dotenvPath = path.join(__dirname, '..', '.env');
require('dotenv').config({path: dotenvPath});
const config = {
host: process.env.SFTP_SERVER,
username: process.env.SFTP_USER,
password: process.env.SFTP_PASSWORD,
port: process.env.SFTP_PORT || 22
};
async function main() {
const client = new SftpClient('upload-test');
const src = path.join(__dirname, '..', 'test', 'testData', 'upload-src');
const dst = '/home/tim/upload-test';
try {
await client.connect(config);
client.on('upload', info => {
console.log(`Listener: Uploaded ${info.source}`);
});
let rslt = await client.uploadDir(src, dst);
return rslt;
} finally {
await client.end();
}
}
main()
.then(msg => {
console.log(msg);
})
.catch(err => {
console.log(`main error: ${err.message}`);
});
Download the remote directory specified by srcDir
to the local file system directory specified by dstDir
. The dstDir
directory will be created if required. All sub directories within srcDir
will also be copied. Any existing files in the local path will be overwritten. No files in the local path will be deleted.
The method also emits download
events to provide a way to monitor download progress. The download event listener is called with one argument, an object with two properties, source and destination. The source property is the path to the remote file that has been downloaded and the destination is the local path the file was downloaded to. You can add a listener for this event using the on()
method.
Example
'use strict';
// Example of using the downloadDir() method to download a directory
// from a remote SFTP server
const path = require('path');
const SftpClient = require('../src/index');
const dotenvPath = path.join(__dirname, '..', '.env');
require('dotenv').config({path: dotenvPath});
const config = {
host: process.env.SFTP_SERVER,
username: process.env.SFTP_USER,
password: process.env.SFTP_PASSWORD,
port: process.env.SFTP_PORT || 22
};
async function main() {
const client = new SftpClient('upload-test');
const dst = '/tmp';
const src = '/home/tim/upload-test';
try {
await client.connect(config);
client.on('download', info => {
console.log(`Listener: Download ${info.source}`);
});
let rslt = await client.downloadDir(src, dst);
return rslt;
} finally {
await client.end();
}
}
main()
.then(msg => {
console.log(msg);
})
.catch(err => {
console.log(`main error: ${err.message}`);
});
Ends the current client session, releasing the client socket and associated resources. This function also removes all listeners associated with the client.
Example Use
let client = new Client();
client.connect(config)
.then(() => {
// do some sftp stuff
})
.then(() => {
return client.end();
})
.catch(err => {
console.error(err.message);
});
Although normally not required, you can add and remove custom listeners on the ssh2 client object. This object supports a number of events, but only a few of them have any meaning in the context of SFTP. These are
on(eventType, listener)
Adds the specified listener to the specified event type. If the event type is error
, the listener should accept 1 argument, which will be an Error object. If the event type is close
, the listener should accept one argument of a boolean type, which will be true when the client connection was closed due to errors.
removeListener(eventType, listener)
Removes the specified listener from the event specified in eventType. Note that the end()
method automatically removes all listeners from the client object.
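For example, a sketch of adding and later removing a custom error listener on an existing client:
// Custom error listener; receives an Error object
const myErrorListener = err => {
  console.error(`custom handler: ${err.message}`);
};
client.on('error', myErrorListener);
// ... later, remove it if it is no longer wanted
// (end() will remove all listeners in any case)
client.removeListener('error', myErrorListener);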
Many SFTP servers have rate limiting protection which will drop connections once a limit has been reached. In particular, openSSH has the setting MaxStartups
, which can be a tuple of the form max:drop:full
where max
is the maximum allowed unauthenticated connections, drop
is a percentage value which specifies the percentage of connections to be dropped once max
connections have been reached and full
is the number of connections at which point all subsequent connections will be dropped. e.g. 10:30:60
means allow up to 10 unauthenticated connections after which drop 30% of connection attempts until reaching 60 unauthenticated connections, at which time, drop all attempts.
Clients first make an unauthenticated connection to the SFTP server to begin negotiation of protocol settings (cipher, authentication method etc). If you are creating multiple connections in a script, it is easy to exceed the limit, resulting in some connections being dropped. As SSH2 only raises an 'end' event for these dropped connections, no error is detected. The ssh2-sftp-client
now listens for end
events during the connection process and if one is detected, will reject the connection promise.
One way to avoid this type of issue is to add a delay between connection attempts. It does not need to be a very long delay - just sufficient to permit the previous connection to be authenticated. In fact, the default setting for openSSH is 10:30:60
, so you really just need to have enough delay to ensure that the 1st connection has completed authentication before the 11th connection is attempted.
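One simple way to space out connection attempts is a small delay helper; a sketch (the delay value is illustrative and not prescribed by the module):
// Wait briefly before connecting so previous connections can finish authenticating
function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function connectWithDelay(client, config, ms = 500) {
  await delay(ms);
  return client.connect(config);
}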
If the dst argument passed to the get method is a writeable stream, the remote file will be piped into that writeable. If the writeable you pass in is a writeable stream created with fs.createWriteStream()
, the data will be written to the file specified in the constructor call to createWriteStream()
.
The writeable stream can be any type of write stream. For example, the below code will convert all the characters in the remote file to upper case before it is saved to the local file system. This could just as easily be something like a gunzip stream from zlib
, enabling you to decompress remote zipped files as you bring them across before saving to local file system.
'use strict';
// Example of using a writeable with get to retrieve a file.
// This code will read the remote file, convert all characters to upper case
// and then save it to a local file
const Client = require('../src/index.js');
const path = require('path');
const fs = require('fs');
const through = require('through2');
const config = {
host: 'arch-vbox',
port: 22,
username: 'tim',
password: 'xxxx'
};
const sftp = new Client();
const remoteDir = '/home/tim/testServer';
function toupper() {
return through(function(buf, enc, next) {
next(null, buf.toString().toUpperCase());
});
}
sftp
.connect(config)
.then(() => {
return sftp.list(remoteDir);
})
.then(data => {
// list of files in testServer
console.dir(data);
let remoteFile = path.join(remoteDir, 'test.txt');
let upperWtr = toupper();
let fileWtr = fs.createWriteStream(path.join(__dirname, 'loud-text.txt'));
upperWtr.pipe(fileWtr);
return sftp.get(remoteFile, upperWtr);
})
.then(() => {
return sftp.end();
})
.catch(err => {
console.error(err.message);
});
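The same pattern works with zlib; a sketch assuming the remote file is gzip compressed (paths are placeholders):
const zlib = require('zlib');
const fs = require('fs');

// Decompress the remote gzip file as it is downloaded
let gunzip = zlib.createGunzip();
let fileWtr = fs.createWriteStream('/local/path/file.txt');
gunzip.pipe(fileWtr);
// sftp is an already connected client
sftp.get('/remote/path/file.txt.gz', gunzip)
  .then(() => sftp.end())
  .catch(err => console.error(err.message));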
There are a couple of ways to do this. Essentially, you want to setup SSH keys and use these for authentication to the remote server.
One solution, provided by @KalleVuorjoki, is to use the SSH agent process. Note: SSH_AUTH_SOCK is normally created by your OS when you load the ssh-agent as part of the login session.
let sftp = new Client();
sftp.connect({
host: 'YOUR-HOST',
port: 'YOUR-PORT',
username: 'YOUR-USERNAME',
agent: process.env.SSH_AUTH_SOCK
}).then(() => {
sftp.fastPut(/* ... */)
});
Another alternative is to just pass in the SSH key directly as part of the configuration.
let sftp = new Client();
sftp.connect({
host: 'YOUR-HOST',
port: 'YOUR-PORT',
username: 'YOUR-USERNAME',
privateKey: fs.readFileSync('/path/to/ssh/key')
}).then(() => {
sftp.fastPut(/* ... */)
});
This solution was provided by @jmorino.
import { SocksClient } from 'socks';
import SFTPClient from 'ssh2-sftp-client';
const host = 'my-sftp-server.net';
const port = 22; // default SSH/SFTP port on remote server
// connect to SOCKS 5 proxy
const { socket } = await SocksClient.createConnection({
proxy: {
host: 'my.proxy', // proxy hostname
port: 1080, // proxy port
type: 5, // for SOCKS v5
},
command: 'connect',
destination: { host, port } // the remote SFTP server
});
const client = new SFTPClient();
client.connect({
host,
sock: socket, // pass the socket to proxy here (see ssh2 doc)
username: '.....',
privateKey: '.....'
})
// client is connected
Some users have encountered the error 'Timeout while waiting for handshake' or 'Handshake failed, no matching client->server ciphers'. This is often due to the client not having the correct configuration for the transport layer algorithms used by ssh2. One of the connect options provided by the ssh2 module is algorithms
, which is an object that allows you to explicitly set the key exchange, ciphers, hmac and compression algorithms as well as server host key used to establish the initial secure connection. See the SSH2 documentation for details. Getting these parameters correct usually resolves the issue.
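For example, a hedged sketch of setting some algorithms explicitly in the connect config (the algorithm names shown are illustrative; check the SSH2 documentation and your server for the correct values):
const config = {
  host: 'example.com',
  username: 'donald',
  password: 'my-secret',
  algorithms: {
    kex: ['diffie-hellman-group14-sha256', 'diffie-hellman-group-exchange-sha256'],
    cipher: ['aes128-ctr', 'aes256-ctr'],
    serverHostKey: ['ssh-rsa', 'ecdsa-sha2-nistp256'],
    hmac: ['hmac-sha2-256', 'hmac-sha2-512']
  }
};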
I have started collecting example scripts in the example directory of the repository. These are mainly scripts I have put together in order to investigate issues or provide samples for users. They are not robust, lack adequate error handling and may contain errors. However, I think they are still useful for helping developers see how the module and API can be used.
Change Log Highlights
- Added uploadDir() and downloadDir() methods.
- Deprecated the auxList() method (its functionality is available in list()).
- Error handlers are added with prependListener to ensure they are called before any additional custom handlers added by client code.
- Errors raised during the end() call are now ignored.
- Errors are identified using error.code instead of matching on error.message.
- Fixed a bug in exist() where tests on the root directory returned false.
- Additions to the example directory.
- Fixed an end event bug.
- Handling of connect() being called on an already connected client.
- Further additions to the example directory.
- Changed the end() call to resolve in the close hook.
- Prevented put() and get() from creating empty files in the destination when unable to read the source.
- Changes to append().
- Added the realPath() method.
- Added the cwd() method.
- Changes to get().
- stat() returns permissions (same as the mode property) and additional properties describing the type of object.
- Added the removeListener() method to complement the existing on() method.
- Changes to the stat() method.
- Changes to the fastGet() and fastPut() methods.
- Updated the mkdir() file exists decision logic.
- sftp.get() will return a chunk, not a stream, anymore.
- chmod() method (PR #33).
- Moved this.client.sftp to the connect() function.
The ssh2-sftp-client module is essentially a wrapper around the ssh2 and ssh2-streams modules, providing a higher level promise based API. When you run into issues, it is important to try and determine where the issue lies - either in the ssh2-sftp-client module or the underlying ssh2 and ssh2-streams modules. One way to do this is to first identify a minimal reproducible example which reproduces the issue. Once you have that, try to replicate the functionality just using the ssh2 and ssh2-streams modules. If the issue still occurs, then you can be fairly confident it is something related to those latter two modules and is therefore an issue which should be referred to the maintainers of those modules.
The ssh2
and ssh2-streams
modules are very solid, high quality modules with a large user base. Most of the time, issues with those modules are due to client misconfiguration. It is therefore very important when trying to diagnose an issue to also check the documentation for both ssh2
and ssh2-streams
. While these modules have good defaults, the flexibility of the ssh2 protocol means that not all options are available by default. You may need to tweak the connection options, ssh2 algorithms and ciphers etc for some remote servers. The documentation for both the ssh2
and ssh2-streams
modules is quite comprehensive and there is lots of valuable information in the issue logs.
If you run into an issue which is not repeatable with just the ssh2
and ssh2-streams
modules, then please log an issue against the ssh2-sftp-client
module and I will investigate. Please note the next section on logging issues.
Note also that in the repository there are two useful directories. The first is the examples directory, which contains some examples of using ssh2-sftp-client
to perform common tasks. A few minutes reviewing these examples can provide that additional bit of detail to help fix any problems you are encountering.
The second directory is the tools directory. I have some very basic simple scripts in this directory which perform basic tasks using only the ssh2
and ssh2-streams
modules (no ssh2-sftp-client module). These can be useful when trying to determine if the issue is with the underlying ssh2
and ssh2-streams
modules.
There are some common errors people tend to make when using Promises or Async/Await. These are by far the most common problems found in issues logged against this module. Please check for some of these before logging your issue.
Not returning the Promise in a then() block
All methods in ssh2-sftp-client return a Promise. This means methods are executed asynchronously. When you call a method inside the then() block of a promise chain, it is critical that you return the Promise that call generates. Failing to do this will result in the then() block completing and your code starting execution of the next then(), catch() or finally() block before your promise has been fulfilled. For example, the following will not do what you expect
sftp.connect(config)
.then(() => {
sftp.fastGet('foo.txt', 'bar.txt');
}).then(rslt => {
console.log(rslt);
sftp.end();
}).catch(e => {
console.error(e.message);
});
In the above code, the sftp.end()
method will almost certainly be called before sftp.fastGet()
has been fulfilled (unless the foo.txt file is really small!). In fact, the whole promise chain will complete and exit even before the sftp.end()
call has been fulfilled. The correct code would be something like
sftp.connect(config)
.then(() => {
return sftp.fastGet('foo.txt', 'bar.txt');
}).then(rslt => {
console.log(rslt);
return sftp.end();
}).catch(e => {
console.error(e.message);
});
Note the return
statements. These ensure that the Promise returned by the client method is returned into the promise chain. It will be this promise the next block in the chain will wait on to be fulfilled before the next block is executed. Without the return statement, that block will return the default promise for that block, which essentially says this block has been fulfilled. What you really want is the promise which says your sftp client method call has been fulfilled.
A common symptom of this type of error is for file uploads or download to fail to complete or for data in those files to be truncated. What is happening is that the connection is being ended before the transfer has completed.
Another common error is to mix Promise chains and async/await calls. This is rarely a great idea. While you can do this, it tends to create complicated and difficult to maintain code. Select one approach and stick with it. Both approaches are functionally equivalent, so there is no reason to mix up the two paradigms. My personal preference would be to use async/await as I think that is more natural for most developers. For example, the following is more complex and difficult to follow than necessary (and has a bug!)
sftp.connect(config)
.then(() => {
return sftp.cwd();
}).then(async (d) => {
console.log(`Remote directory is ${d}`);
try {
await sftp.fastGet(`${d}/foo.txt`, `./bar.txt`);
} catch (e) {
console.error(e.message);
}
}).catch(e => {
console.error(e.message);
}).finally(() => {
sftp.end();
});
The main bug in the above code is the then()
block is not returning the Promise generated by the call to sftp.fastGet()
. What it is actually returning is a fulfilled promise which says the then()
block has been run (note that the await'ed promise is not being returned and is therefore outside the main Promise chain). As a result, the finally()
block will be executed before the await promise has been fulfilled.
Using async/await inside the promise chain has created unnecessary complexity and leads to incorrect assumptions regarding how the code will execute. A quick glance at the code is likely to give the impression that execution will wait for the sftp.fastGet()
call to be fulfilled before continuing. This is not the case. The code would be more clearly expressed as either
sftp.connect(config)
.then(() => {
return sftp.cwd();
}).then(d => {
console.log(`remote dir ${d}`);
return sftp.fastGet(`${d}/foo.txt`, 'bar.txt');
}).catch(e => {
console.error(e.message);
}).finally(() => {
return sftp.end();
});
or, using async/await
async function doSftp() {
try {
await sftp.connect(config);
let d = await sftp.cwd();
console.log(`remote dir is ${d}`);
await sftp.fastGet(`${d}/foo.txt`, 'bar.txt');
} catch (e) {
console.error(e.message);
} finally {
await sftp.end();
}
}
Another common error is to try and use a try/catch block to catch event signals, such as an error event. In general, you cannot use try/catch blocks for asynchronous code and expect errors to be caught by the catch
block. Handling errors in asynchronous code is one of the key reasons we now have the Promise and async/await frameworks.
The basic problem is that the try/catch block will have completed execution before the asynchronous code has completed. If the asynchronous code has not completed, then there is a potential for it to raise an error. However, as the try/catch block has already completed, there is no catch waiting to catch the error. It will bubble up and probably result in your script exiting with an uncaught exception error.
Error events are essentially asynchronous code. You don't know when such events will fire. Therefore, you cannot use a try/catch block to catch such event errors. Even creating an error handler which then throws an exception won't help as the key problem is that your try/catch block has already executed. There are a number of alternative ways to deal with this situation. However, the key symptom is that you see occasional uncaught error exceptions that cause your script to exit abnormally despite having try/catch blocks in your script. What you need to do is look at your code and find where errors are raised asynchronously and use an event handler or some other mechanism to manage any errors raised.
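For example, rather than relying on a try/catch block, an error listener will be invoked whenever the connection emits an error, regardless of what the script is doing at the time; a sketch:
client.on('error', err => {
  console.error(`connection error: ${err.message}`);
  // take recovery action here, e.g. flag the connection as unusable
});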
You can add a debug
property to the config object passed in to connect()
to turn on debugging. This will generate quite a lot of output. The value of the property should be a function which accepts a single string argument. For example;
config.debug = msg => {
console.error(msg);
};
Enabling debugging can generate a lot of output. If you use console.error() as the output (as in the example above), you can redirect the output to a file using shell redirection e.g.
node script.js 2> debug.log
Please log an issue for all bugs, questions, feature and enhancement requests. Please ensure you include the module version, node version and platform.
I am happy to try and help diagnose and fix any issues you encounter while using the ssh2-sftp-client
module. However, I will only put in effort if you are prepared to put in the effort to provide the information necessary to reproduce the issue. Things which will help
Perhaps the best assistance is a minimal reproducible example of the issue. Once the issue can be readily reproduced, it can usually be fixed very quickly.
Pull requests are always welcomed. However, please ensure your changes pass all tests and if your adding a new feature, that tests for that feature are included. Likewise, for new features or enhancements, please include any relevant documentation updates.
This module will adopt a standard semantic versioning policy. Please indicate in your pull request what level of change it represents i.e.
This module was initially written by jyu213. On August 23rd, 2019, theophilusx took over responsibility for maintaining this module. A number of other people have contributed to this module, but until now, this was not tracked. My intention is to credit anyone who contributes going forward.
Thanks to the following for their contributions -