# REINFORCEjs (fork)
REINFORCEjs is a Reinforcement Learning library by Andrej Karpathy that implements several common RL algorithms, all with web demos. In particular, the library currently includes:
- Dynamic Programming methods
- (Tabular) Temporal Difference Learning (SARSA/Q-Learning)
- Deep Q-Learning (Q-Learning with function approximation using neural networks)
- Stochastic/Deterministic Policy Gradients and Actor-Critic architectures for continuous action spaces (very alpha; likely buggy, or at the very least finicky and inconsistent)
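To make the tabular TD methods above concrete, the one-step Q-Learning update can be sketched as follows. This is an illustration only, not the library's code; the state/action counts and the `qUpdate` helper are made up for the example.

```javascript
// Minimal tabular Q-Learning sketch (illustration only, not reinforcejs code).
// Q[s][a] is nudged toward: reward + gamma * max_a' Q(sNext, a').
const numStates = 4;
const numActions = 2;
const alpha = 0.1; // learning rate
const gamma = 0.9; // discount factor

// Q-table initialized to zeros
const Q = Array.from({ length: numStates }, () => new Array(numActions).fill(0));

// One Q-Learning update for an observed transition (s, a, r, sNext)
function qUpdate(s, a, r, sNext) {
  const maxNext = Math.max(...Q[sNext]); // greedy value of the next state
  Q[s][a] += alpha * (r + gamma * maxNext - Q[s][a]); // step along the TD error
}

qUpdate(0, 1, 1.0, 2); // reward 1.0 for taking action 1 in state 0, landing in state 2
console.log(Q[0][1]); // 0.1 — alpha * reward, since the table started at zero
```

SARSA differs only in the target: it uses the Q-value of the action actually taken next, rather than the max.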
See the main webpage for many more details, documentation and demos.
This fork adds node.js and ESM support.
## Getting Started
Install the library as a dependency:
```sh
npm install @neurosity/reinforcejs
```
The library also includes a fork of Andrej's project recurrentjs with various utilities for building expression graphs (e.g. LSTMs) and performing automatic backpropagation. Agents in reinforcejs include:

- `DPAgent` for finite state/action spaces with environment dynamics
- `TDAgent` for finite state/action spaces
- `DQNAgent` for continuous state features but discrete actions
A typical usage might look something like:
```js
import { DQNAgent } from "@neurosity/reinforcejs";

// Environment descriptor: size of the state vector and number of discrete actions
const env = {
  getNumStates: () => 8,
  getMaxNumActions: () => 4
};

const spec = { alpha: 0.01 }; // learning rate; other options use their defaults
const agent = new DQNAgent(env, spec);

setInterval(function () {
  const action = agent.act(s); // s: a state array of length env.getNumStates()
  // ... execute action in the environment and observe a reward ...
  agent.learn(reward); // the agent improves its Q-function from the reward
}, 0);
```
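The `act` step above is epsilon-greedy: with some small probability the agent explores a random action, otherwise it exploits the action with the highest estimated Q-value. A self-contained sketch of that selection rule (the `epsilonGreedy` helper is hypothetical, for illustration only, not the library's implementation):

```javascript
// Epsilon-greedy action selection sketch (illustration, not reinforcejs code).
// With probability epsilon pick a random action; otherwise pick the argmax Q-value.
function epsilonGreedy(qValues, epsilon) {
  if (Math.random() < epsilon) {
    return Math.floor(Math.random() * qValues.length); // explore
  }
  let best = 0;
  for (let a = 1; a < qValues.length; a++) {
    if (qValues[a] > qValues[best]) best = a; // exploit: greedy argmax
  }
  return best;
}

console.log(epsilonGreedy([0.1, 0.9, 0.3, 0.2], 0)); // epsilon = 0 → always greedy → 1
```

In `DQNAgent` the exploration rate is configurable through the spec (the demos anneal it toward a small value so the agent exploits more as it learns).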
The full documentation and demos are on the main webpage.
## License
MIT.