queueing-rnn

Queueing Recurrent Neural Network

1.0.4
PyPI

Queueing Recurrent Neural Network (Q-RNN)

Queueing Recurrent Neural Network (Q-RNN) is a new kind of artificial neural network designed for time-series forecasting applications. In the experiments run so far, Q-RNN shows the potential to outperform LSTM, Simple RNN and GRU when the dataset has highly non-linear characteristics.

Table of contents

- What is Q-RNN? 🤔
- Comparison 📊
- Installation 🛠
- Usage 👩‍💻
- References 📚
- License

What is Q-RNN? 🤔

(Figure: a random neuron [4])

Q-RNN is a composition of a Simple RNN and a Random Neural Network. It takes the fundamental mathematics of queueing theory and G-queues and combines it with the recurrent architecture of RNNs. For a more detailed explanation of the theoretical background of Q-RNN, check the mathematical-model folder and the references section; a brief sketch of the random-neuron equations is given below.
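As a quick orientation, here is a sketch of the standard random neural network (G-network) equations from [1] and [2], not the full Q-RNN derivation. Each neuron $i$ receives excitatory (positive) and inhibitory (negative) spikes and, in steady state, is active with probability

$$
q_i = \frac{\lambda^+_i}{r_i + \lambda^-_i},
\qquad
\lambda^+_i = \Lambda_i + \sum_j q_j\, r_j\, p^+_{ji},
\qquad
\lambda^-_i = \lambda_i + \sum_j q_j\, r_j\, p^-_{ji},
$$

where $r_i$ is the firing rate of neuron $i$, $\Lambda_i$ and $\lambda_i$ are the external positive and negative arrival rates, and $p^+_{ji}$, $p^-_{ji}$ are the probabilities that a spike fired by neuron $j$ reaches neuron $i$ as a positive or negative signal. Q-RNN combines this kind of queueing fixed point with a recurrent architecture; see the mathematical-model folder for the exact formulation used by the package.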

Comparison 📊

To evaluate the performance of Q-RNN, it has been compared with LSTM, GRU and Simple RNN implemented in Keras with the TensorFlow backend. The experiments use 4 different datasets (Google stock price, bike sharing, PM2.5 concentration, traffic volume) and 10 different optimization algorithms. The distribution of the mean squared error on the test sets is shown in the image below: Q-RNN reaches the lowest RMS error on 3 of the 4 datasets.

(Figure: overall comparison of test-set errors)

Check the test_results folder to see detailed results 🔎.
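For context, here is a minimal sketch of how one of the Keras baselines in such a comparison could be set up. The layer sizes, optimizer, epoch count and random data shapes are illustrative assumptions, not the exact experimental configuration used for the results above.

import numpy as np
import tensorflow as tf

# Illustrative shapes: 1000 windows of 10 timesteps with 1 feature each.
samples, timesteps, features = 1000, 10, 1
X = np.random.rand(samples, timesteps, features)
y = np.random.rand(samples, 1)

# LSTM baseline; the SimpleRNN and GRU baselines only swap the recurrent layer.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(timesteps, features)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # mean squared error on the same data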

Installation 🛠

Installing via pip package manager:

pip install queueing-rnn

Installing via GitHub:

git clone https://github.com/bilkosem/queueing-rnn
cd queueing-rnn
python setup.py install
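A quick way to confirm the installation succeeded, using only the import shown in the Usage section below:

python -c "from queueing_rnn import QRNN; print('queueing-rnn imported successfully')"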

Usage 👩‍💻

from queueing_rnn import QRNN

# data is a NumPy array of input sequences; reshape it to
# (samples, timesteps, features) as expected by the network.
data = data.reshape((samples, timesteps, features))

# Shape of the network: input size, hidden layer size, output layer size.
# hidden_neurons and output_neurons are integers chosen by the user.
qrnn = QRNN([features, hidden_neurons, output_neurons])

for s in range(samples):
    qrnn.feedforward()
    # Calculate loss
    qrnn.backpropagation()

Check the examples folder to see detailed use 🔎.
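The snippet above assumes data already exists. Below is a minimal, hypothetical way to build such an array from a univariate series with NumPy; the sine-wave series and window length are purely illustrative, and the examples folder shows the real workflow.

import numpy as np

# Toy univariate series sliced into sliding windows of 10 timesteps.
series = np.sin(np.linspace(0, 20 * np.pi, 500))
timesteps, features = 10, 1
windows = np.array([series[i:i + timesteps] for i in range(len(series) - timesteps)])

samples = windows.shape[0]
data = windows.reshape((samples, timesteps, features))  # shape expected by QRNN
print(data.shape)  # (490, 10, 1)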

References 📚

[1] Gelenbe, E. (1989). Random Neural Networks with Negative and Positive Signals and Product Form Solution. Neural Computation, 1(4), 502-510. doi:10.1162/neco.1989.1.4.502

[2] Gelenbe, E. (1993). Learning in the Recurrent Random Neural Network. Neural Computation, 5(1), 154-164. doi:10.1162/neco.1993.5.1.154

[3] Basterrech, S., & Rubino, G. (2015). Random Neural Network Model for Supervised Learning Problems. Neural Network World, 25, 457-499.

[4] Abdelbaki, H. (2020). rnnsimv2.zip, MATLAB Central File Exchange (https://www.mathworks.com/matlabcentral/fileexchange/91-rnnsimv2-zip). Retrieved September 22, 2020.

License

See the GitHub repository for license details.
