@lighthouse_labs/workshopper
A terminal workshop runner framework
Workshopper is used by learnyounode, an introduction to Node.js workshop application, and levelmeup, an introduction to Level* / Node.js databases workshop application.
Mostly you should just follow how learnyounode and levelmeup work.
Create a new Node project and add a "bin" that looks something like this:
```javascript
#!/usr/bin/env node

const Workshopper = require('workshopper')
    , path        = require('path')

Workshopper({
    name              : 'learnyounode'
  , title             : 'LEARN YOU THE NODE.JS FOR MUCH WIN!'
  , appDir            : __dirname
  , helpFile          : path.join(__dirname, 'help.txt')
  , prerequisitesFile : path.join(__dirname, 'prerequisites.txt')
  , creditsFile       : path.join(__dirname, 'credits.txt')
}).init()
```
Additionally you can supply a 'subtitle' String option and a 'menu' object option that will be passed to terminal-menu so you can change the 'bg' and 'fg' colours.
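As a sketch, these two options might be added to the Workshopper() call above like so (the subtitle text and the colour values here are made-up placeholders, not learnyounode's actual settings):

```javascript
// Hypothetical extras on the Workshopper() call above; the 'menu'
// object is handed to terminal-menu, where 'bg' and 'fg' control the
// menu's background and foreground colours.
Workshopper({
    name     : 'learnyounode'
  , title    : 'LEARN YOU THE NODE.JS FOR MUCH WIN!'
  , subtitle : 'Select an exercise and hit Enter to begin'
  , appDir   : __dirname
  , menu     : { bg: 'black', fg: 'white' }
}).init()
```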
The 'helpFile' option is optional but if you supply one, users will get the contents of this file when they type app help (where 'app' is your workshop application name). They will also see a pointer to it whenever they select a new exercise, so make sure you fill it with details of where to receive actual help for your workshop: a repo, a Google Group, an IRC channel, whatever.
The 'prerequisitesFile' option is optional but if you supply one, users will get the contents of this file when they type app prerequisites (where 'app' is your workshop application name). This file is intended to contain any installation or setup instructions that are prerequisites for beginning the lessons. They will also see a pointer to it whenever they select a new exercise.
The 'creditsFile' option is optional but if you supply one, users will get the contents of this file when they type app credits (where 'app' is your workshop application name). This file is intended to give credit to those who have added or assisted in creating the exercises. They will also see a pointer to it whenever they select a new exercise.
Create a menu.json file in your project that looks something like this:
```javascript
[
    "HELLO WORLD"
  , "BABY STEPS"
  , "MY FIRST I/O!"
  ...
]
```
Where the menu items correspond to lower-case, punctuation-free directories in a problems/ directory.
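That mapping can be sketched as a tiny function; note the whitespace handling here (single spaces kept) is an assumption — check how an existing workshop such as learnyounode names its directories for the exact convention:

```javascript
// Sketch of the menu-entry -> problems/ directory mapping described
// above: lower-cased and punctuation-free. Space handling is an
// assumption in this sketch.
function problemDir (menuItem) {
  return menuItem
    .toLowerCase()
    .replace(/[^a-z0-9 ]/g, '') // strip punctuation
    .replace(/\s+/g, ' ')       // collapse runs of whitespace
    .trim()
}

console.log(problemDir('MY FIRST I/O!')) // 'my first io'
```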
Each subdirectory in the problems/ directory should contain the files for that exercise. In the text presented to the user you can use {appname} to substitute in the name you provided to Workshopper() and {rootdir} to substitute for the absolute path of where your application has been installed on the user's system.
Workshopper should also be largely compatible with the exercises in stream-adventure. Feel free to mix and match exercises from the projects that use Workshopper!
The most complicated part of creating an exercise is the setup.js file, which needs to set up your test environment with any fixtures, perform any additional verification required beyond what Workshopper does, and then do any cleanup.
Validation by default is performed by comparing the stdout of the official solution to the stdout of the submission. Both are executed in separate child processes and the stdout is captured and compared, line by line. Any discrepancies are counted as a failure.
Your exercises (for now) should focus on how to funnel comparable data to stdout. The easiest form is to instruct the user to print solution values with console.log().
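The line-by-line comparison can be pictured with a small sketch (a simplified illustration, not Workshopper's actual verification code):

```javascript
// Sketch of the verification described above: split both captured
// stdouts into lines and fail on any discrepancy.
function linesMatch (solutionStdout, submissionStdout) {
  var expected = solutionStdout.split('\n')
    , actual   = submissionStdout.split('\n')

  if (expected.length !== actual.length)
    return false

  return expected.every(function (line, i) {
    return line === actual[i]
  })
}

console.log(linesMatch('1\n2\n3', '1\n2\n3')) // true
console.log(linesMatch('1\n2\n3', '1\n2\nx')) // false
```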
The absolute simplest setup.js is one that simply defers to the solution to do the work.
```javascript
module.exports = function () {
  return {}
}
```
In this case we are returning an empty options object so the defaults will be used. This means that no additional command-line arguments are supplied to child processes, nothing special is done to make stdout comparable, no additional verification stage is required and no cleanup needs to be performed. The exercise is simply asking the user to print something to the console that we can compare with the official solution.
This is normally only useful in a HELLO WORLD. Normally you should be passing dynamic content of some kind to the programs. But a simple HELLO WORLD is a good way to get the user ready for the format.
```javascript
module.exports = function () {
  return { long: true }
}
```
Tell Workshopper that the stdout will consist of long lines to have it print the expected vs. actual output on separate lines. Otherwise the output of both processes will be printed side-by-side.
```javascript
module.exports = function () {
  return { args: [ Math.random(), Math.random() ] }
}
```
In this case we are supplying two command-line arguments to the child processes. They will get the same arguments and we expect their stdout to match. This may be a mathematical exercise.
If you need to supply different arguments to the submission and the solution programs then you can differentiate:
```javascript
module.exports = function () {
  return {
      submissionArgs : [ 8000 ]
    , solutionArgs   : [ 8001 ]
  }
}
```
In this case we may want the programs to listen on a TCP port, but we don't want conflicts when both processes are running simultaneously, so we instruct the user to listen on the port number supplied as the first command-line argument.
```javascript
var http = require('http')

module.exports = function () {
  var server = http.createServer(function (req, res) {
    res.end('boom!')
  }).listen(9345)

  return {
      args  : [ 'http://localhost:9345' ]
    , close : server.close.bind(server)
  }
}
```
In this case we are starting an HTTP server which must be cleaned up after the solution and submission programs have finished running, otherwise the workshop application will not exit. Supply the 'close' option as a function that will be called after the child processes have completed.
Often a problem will not lend itself to concocting console output so you need to verify by other means. The simplest route is to repurpose stdout and send it separate output for both the submission and the solution based on something done by the solution.
```javascript
var PassThrough = require('stream').PassThrough || require('readable-stream/passthrough')
  , hyperquest  = require('hyperquest')

module.exports = function () {
  var submissionOut = new PassThrough()
    , solutionOut   = new PassThrough()

  setTimeout(function () {
    hyperquest.get('http://localhost:8000').pipe(submissionOut)
    hyperquest.get('http://localhost:8001').pipe(solutionOut)
  }, 500)

  return {
      submissionArgs : [ 8000 ]
    , solutionArgs   : [ 8001 ]
    , a              : submissionOut
    , b              : solutionOut
  }
}
```
In this case we are telling our user to create an HTTP server that listens on the port supplied as the first command-line argument.
We have also instructed Workshopper to replace the stdout from the child processes with substitute streams. PassThrough streams work well for this.
We are expecting that it will take no longer than 500ms for the child processes to get ready, and then we fetch the output and pipe it to our PassThrough streams. Workshopper will then compare these streams as if they were stdout.
There is a problem with the previous example: Workshopper allows the user to use both "run" and "verify" modes. In "run" mode, only the submission is executed, and the stdout that Workshopper would normally verify (including stdout that you may have generated with a PassThrough) is sent straight to stdout. In this case the solution program is not executed, so when we use hyperquest.get() on the solution port it will fail because there is no server running.
The solution is to use the first argument to the setup.js export function: a boolean that indicates whether we are in "run" mode. If we are, we can skip anything that needs to be done against the solution executable.
```javascript
var PassThrough = require('stream').PassThrough || require('readable-stream/passthrough')
  , hyperquest  = require('hyperquest')

module.exports = function (run) {
  var submissionOut = new PassThrough()
    , solutionOut   = new PassThrough()

  setTimeout(function () {
    hyperquest.get('http://localhost:8000').pipe(submissionOut)
    if (!run)
      hyperquest.get('http://localhost:8001').pipe(solutionOut)
  }, 500)

  return {
      submissionArgs : [ 8000 ]
    , solutionArgs   : [ 8001 ]
    , a              : submissionOut
    , b              : solutionOut
  }
}
```
You must always consider whether "run" mode is going to cause problems with your setup.js.
I'm still documenting this, check back later for more content here or just read the existing exercises available and go from there! Feel free to open an issue here if you have questions.
workshopper is proudly brought to you by the following hackers:
Workshopper is Copyright (c) 2013 Rod Vagg @rvagg and licenced under the MIT licence. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE file for more details.
Workshopper builds on the excellent work by @substack and @maxogden who created stream-adventure which serves as the original foundation for Workshopper and learnyounode. Portions of Workshopper may also be Copyright (c) 2013 @substack and @maxogden given that it builds on their original code.