What is glsl-tokenizer?
The glsl-tokenizer npm package is a tool for tokenizing GLSL (OpenGL Shading Language) code. It breaks down GLSL code into a stream of tokens, which can then be used for further processing, such as parsing, analysis, or transformation.
What are glsl-tokenizer's main functionalities?
Tokenizing GLSL code
This feature allows you to tokenize a GLSL shader code. The code sample reads a GLSL file, tokenizes its content, and prints the tokens to the console.
const tokenize = require('glsl-tokenizer/string');
const fs = require('fs');

// Read the shader source as a string and tokenize it synchronously.
const glslCode = fs.readFileSync('shader.glsl', 'utf8');
const tokens = tokenize(glslCode);
console.log(tokens);
Handling different types of tokens
This feature demonstrates how to handle different types of tokens produced by the tokenizer. The code sample tokenizes a simple GLSL code snippet and prints the type and data of each token.
const tokenize = require('glsl-tokenizer/string');
const glslCode = 'void main() { gl_FragColor = vec4(1.0); }';
const tokens = tokenize(glslCode);
for (const token of tokens) {
  console.log(`Type: ${token.type}, Data: ${token.data}`);
}
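Because the tokenizer returns a plain array of token objects, ordinary array and object operations apply to the result. As a sketch, here is a hand-written token array in the same shape the tokenizer produces (the types and data values are illustrative, not real tokenizer output), tallied by type:

```javascript
// Count how many tokens of each type appear in a token array.
// This array is written by hand in the documented token shape;
// a real one would come from require('glsl-tokenizer/string').
const tokens = [
  { type: 'keyword', data: 'void', position: 0 },
  { type: 'whitespace', data: ' ', position: 4 },
  { type: 'ident', data: 'main', position: 5 },
  { type: 'operator', data: '(', position: 9 },
  { type: 'operator', data: ')', position: 10 },
  { type: 'eof', data: '(eof)', position: 11 }
];

const countsByType = {};
for (const token of tokens) {
  countsByType[token.type] = (countsByType[token.type] || 0) + 1;
}

console.log(countsByType);
// { keyword: 1, whitespace: 1, ident: 1, operator: 2, eof: 1 }
```

A tally like this is a quick way to sanity-check what a shader contains before doing heavier analysis.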
Customizing tokenization
This feature shows how to customize the tokenization process by specifying options. The code sample tokenizes GLSL code with a specified GLSL version.
const tokenize = require('glsl-tokenizer/string');
const glslCode = 'void main() { gl_FragColor = vec4(1.0); }';
const tokens = tokenize(glslCode, { version: '300 es' });
console.log(tokens);
Other packages similar to glsl-tokenizer
glsl-parser
The glsl-parser package is used for parsing GLSL code into an abstract syntax tree (AST). While glsl-tokenizer focuses on breaking down the code into tokens, glsl-parser goes a step further by providing a structured representation of the code. This can be useful for more complex analysis and transformations.
glsl-transpiler
The glsl-transpiler package is designed to transpile GLSL code to other shading languages or JavaScript. It includes tokenization and parsing as part of its process but extends the functionality to code transformation. This makes it more comprehensive than glsl-tokenizer, which is focused solely on tokenization.
glsl-optimizer
The glsl-optimizer package is used for optimizing GLSL code. It includes tokenization and parsing as part of its optimization pipeline. While glsl-tokenizer provides the basic building blocks for breaking down GLSL code, glsl-optimizer focuses on improving the performance and efficiency of the code.
glsl-tokenizer 
Maps GLSL string data into GLSL tokens, either synchronously or using a
streaming API.
var tokenString = require('glsl-tokenizer/string')
var tokenStream = require('glsl-tokenizer/stream')
var fs = require('fs')

var tokens = tokenString(fs.readFileSync('some.glsl', 'utf8'))

fs.createReadStream('some.glsl')
  .pipe(tokenStream())
  .on('data', function (token) {
    console.log(token.data, token.position, token.type)
  })
API
tokens = require('glsl-tokenizer/string')(src, [opt])
Returns an array of tokens given the GLSL source string src.

You can specify an opt.version string to use different keywords/builtins, such as '300 es' for WebGL2. Otherwise, GLSL 100 (WebGL1) is assumed.

var tokens = tokenString(src, {
  version: '300 es'
})
stream = require('glsl-tokenizer/stream')([opt])
Emits a 'data' event with a token object each time a token is parsed.
As above, you can specify opt.version.
Tokens
{ 'type': TOKEN_TYPE
, 'data': "string of constituent data"
, 'position': integer position within the GLSL source
, 'line': line number within the GLSL source
, 'column': column number within the GLSL source }
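Since whitespace and comments are emitted as ordinary tokens, concatenating each token's data (skipping the final eof token) should reconstruct the original source. A minimal sketch using a hand-written token array in the shape above (not real tokenizer output):

```javascript
// Rebuild the source string from a token array by joining the data
// of every token except the trailing eof token.
const tokens = [
  { type: 'keyword', data: 'float', position: 0 },
  { type: 'whitespace', data: ' ', position: 5 },
  { type: 'ident', data: 'x', position: 6 },
  { type: 'operator', data: ';', position: 7 },
  { type: 'eof', data: '(eof)', position: 8 }
];

const source = tokens
  .filter(function (token) { return token.type !== 'eof'; })
  .map(function (token) { return token.data; })
  .join('');

console.log(source); // 'float x;'
```

This round-trip property makes the tokenizer usable for source-to-source rewrites: edit token data in place, then rejoin.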
The available token types are:
- block-comment: /* ... */
- line-comment: // ... \n
- preprocessor: # ... \n
- operator: Any operator. If it looks like punctuation, it's an operator.
- float: Optionally suffixed with f.
- ident: User-defined identifier.
- builtin: Builtin function.
- eof: Emitted on end; data will === '(eof)'.
- integer
- whitespace
- keyword
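A common use of these type names is filtering out tokens that don't affect program meaning before analysis. A sketch over a hand-written token array (the data values are illustrative, not real tokenizer output):

```javascript
// Keep only "significant" tokens by dropping whitespace, comments,
// and the trailing eof token, using the type names listed above.
const IGNORED = new Set(['whitespace', 'block-comment', 'line-comment', 'eof']);

const tokens = [
  { type: 'line-comment', data: '// output color\n' },
  { type: 'builtin', data: 'gl_FragColor' },
  { type: 'whitespace', data: ' ' },
  { type: 'operator', data: '=' },
  { type: 'whitespace', data: ' ' },
  { type: 'builtin', data: 'vec4' },
  { type: 'eof', data: '(eof)' }
];

const significant = tokens.filter(function (token) {
  return !IGNORED.has(token.type);
});

console.log(significant.map(function (t) { return t.data; }));
// [ 'gl_FragColor', '=', 'vec4' ]
```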
License
MIT, see LICENSE.md for further information.