<input> Hi Sara, how are you?
(prompt / response) All systems operational!
<input> How is the weather in (where are we)?
(prompt / response) The current weather in Amsterdam, Netherlands is:
(weatherdetails)
<input> _
Attention:
This package is currently a work in progress. Do not install via npm install @ztik.nl/sara
Clone or download from Sara @ Github instead
The GitHub repository always holds the current/latest testing build
NPM builds are pushed occasionally, when there shouldn't be any app-breaking bugs
Many changes are to be expected; do not expect backwards compatibility
Current version: 0.3.1
When the core program is more complete, I will start semantic versioning at 1.0.0
Sara is a command prompt that listens for keyboard input or voice commands
Sara has a voice, and is able to respond to commands through text as well as audio
Sara is my (poor) attempt at making my own Jarvis/Alexa/Hey Google/Hi Bixby voice response system
It runs in Node.js on a Raspberry Pi 3B, but should also run on earlier models as well as other Linux distros
It has some internal commands, but can be extended through a self-made plugin system
Hearing works
Voice commands can be sent to the command line for editing, or be processed immediately without user intervention
This option selection is currently hidden away in hearing.js, but will move to the command-line arguments and config.json soon
Voice works
Voice output works, but further testing is required
Different voices (male and female) are now possible; soon there will be an option to select one, as well as a way to display a list of voices for each language!
Vision works
All it does is take a picture every 30 minutes using a USB webcam
Pi camera not supported yet, will be supported later
There are object/face detection functions, as well as some other functions (age/expression/gender labeling) but NONE of these functions are connected to the webcam source image yet!
There are NO object/face recognition functions at this moment, but this will be added soon
Sara ignores the following words at sentence start:
sara
can you
will you
would you
could you
tell me
let me know
please
Sara also ignores the word 'please' and the '?' character at the end of commands
After stripping these words, the command is compared to the internal commands; if it doesn't match, it is compared to the regex string contained in each plugin's .json file
Sara listens to the keyword 'Sara'
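The stripping step described above can be sketched as follows; the function and variable names here are hypothetical, not Sara's actual code:

```javascript
// Hypothetical sketch of Sara's input stripping; the phrase list is taken
// from this README, the function itself is illustrative.
const leadingPhrases = [
  'sara', 'can you', 'will you', 'would you', 'could you',
  'tell me', 'let me know', 'please'
];

function stripCommand(input) {
  let cmd = input.trim().toLowerCase();
  // Repeatedly remove ignored phrases from the start of the sentence
  let changed = true;
  while (changed) {
    changed = false;
    for (const phrase of leadingPhrases) {
      if (cmd.startsWith(phrase + ' ') || cmd === phrase) {
        cmd = cmd.slice(phrase.length).trim();
        changed = true;
      }
    }
  }
  // Also drop a trailing '?' and/or 'please'
  cmd = cmd.replace(/\?$/, '').trim();
  cmd = cmd.replace(/\splease$/, '').trim();
  return cmd;
}

console.log(stripCommand('Sara can you please tell me what 10 + -9 is?'));
// → 'what 10 + -9 is'
```

The stripped result is what gets compared against the internal commands and plugin regexes.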
Requirements:
Hardware:
A Raspberry Pi (3B tested, older models should work)
fswebcam (I installed it and didn't touch a single config file): apt-get install fswebcam
Other:
Google Cloud API key (one key to rule them all!)
This is free for a certain amount of requests; see Sonus/Google Cloud Speech/Vision API for more details
The same key is used for the translate plugin, speech recognition, generating voices and face/object detection
Face recognition will be calculated in-app, so it will not make requests to the Google Cloud Vision API
newsapi.org API key (optional)
Free for personal use, used for the news plugin
I have tried to keep everything modular, so if something doesn't work on your system, you can disable that function through command-line arguments, the config.json options file, or in the app itself
The vision command will be extended with object/face recognition, if/when I get that to work properly
start/stop verbose turns on/off verbose mode
Verbose mode turns on the display of output with a 'data' or 'warn' type
Help:
help displays the main 'help' section
list help displays a list of all help topics
help <topic> displays help on the requested topic (still needs to be populated)
help <plugin.function> displays help on the requested plugin function (currently placeholders)
add help fill in the form and a new help topic is born!
edit help <topic> found an error in a certain help topic? you can fix it
Hearing:
start/stop listening turns on/off speech recognition
start/stop hearing same as above
Voice:
start/stop voice turns on/off text-to-speech
start/stop talking same as above
start/stop speaking same as above
silence stop speaking the current sentence/item
Vision:
start/stop vision turns on/off the timer (30 min) for a webcam snapshot to ./resources/vision/frame.png
start/stop watching same as above
Nothing is done with this image at this time, but there are tests being done with detection and recognition...
Face/object detection works, but is not connected yet, it will be soon after some more testing
Face recognition does not work yet, this will need a more complex neural net to connect the dots between different images
Regular Expression matches:
Sara needs to 'understand' commands, and does this by comparing input to a regular expression found inside each plugin function's .json file
This regular expression matches the following sentences:
what is (-)10(.12) plus/and/+/& (-)10(.12)
what (-)10(.12) plus/and/+/& (-)10(.12) is
how much is (-)10(.12) plus/and/+/& (-)10(.12)
how much (-)10(.12) plus/and/+/& (-)10(.12) is
(-)10(.12) plus/and/+/& (-)10(.12) is
(-)10(.12) plus/and/+/& (-)10(.12)
Because Sara strips the start of the input, this allows it to recognize sentences such as:
Sara can you please tell me what 10 + -9 is?
In the above regex lines, most groups are non-capturing (?:xxx)
The capture groups (-?[0-9]+\.?(?:[0-9]+)?) grab these values and pass them to math.js, which contains the function for processing them
In the above example, math.js will receive an array object containing 3(!) items:
[0] the complete input string, in case the plugin still requires this string.
[1] the first captured group
[2] the second captured group
Therefore, the function math.add will receive these 3 array items, and return the result of x[1] + x[2]
x[0] is always the entire matched string
Using the input sentence above, then:
x[0] == "what 10 + -9 is"
x[1] == 10
x[2] == -9
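The example above can be reproduced directly in Node.js. The regex below is the math plugin pattern from this README, single-escaped as it would be after loading from the .json file:

```javascript
// The math plugin regex from this README, single-escaped for use in JS source
const mathRegex = /^(?:what|how\smuch)?\s?(?:is)?\s?(-?[0-9]+\.?(?:[0-9]+)?)\s?(?:\+|plus|\&|and)\s?(-?[0-9]+\.?(?:[0-9]+)?)\s?(?:is)?$/i;

// String.prototype.match returns the array described above
const x = 'what 10 + -9 is'.match(mathRegex);

console.log(x[0]); // 'what 10 + -9 is' — the complete matching string
console.log(x[1]); // '10' — first captured group
console.log(x[2]); // '-9' — second captured group

// math.add would then return Number(x[1]) + Number(x[2]), i.e. 1
```

Note that the captured groups arrive as strings, so a plugin function has to convert them to numbers before calculating.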
Layered commands:
(I am not a native English speaker, and I am not certain this is the correct term)
Sara is able to process subcommands through the use of parenthesis encapsulation
Example:
Sara can you tell me how much is 9 + (10 + 16)?
In this example, Sara will calculate 10 + 16 first, then calculate 9 + 26 afterwards
You can layer as many commands as you need; they will be processed starting with the innermost subcommand first:
((10 + ((root of 9) * (5³))) / 77) * (√9)
how is the weather in (where i am)
translate to german (what is gold)
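The subcommand resolution described above (inner parentheses are calculated first, and their results substituted back into the sentence) can be sketched like this. processCommand() is a stand-in for Sara's real dispatcher and only handles plain additions here:

```javascript
// Stand-in for Sara's command dispatcher; for this sketch it only
// understands 'a + b' additions.
function processCommand(cmd) {
  const m = cmd.match(/^(-?\d+)\s*\+\s*(-?\d+)$/);
  if (!m) throw new Error('unhandled command: ' + cmd);
  return String(Number(m[1]) + Number(m[2]));
}

function resolveSubcommands(input) {
  // Find an innermost '(...)' group (one containing no nested parentheses),
  // process it, substitute the result back, and repeat until none remain.
  let cmd = input;
  let inner;
  while ((inner = cmd.match(/\(([^()]*)\)/))) {
    cmd = cmd.replace(inner[0], processCommand(inner[1].trim()));
  }
  return processCommand(cmd.trim());
}

console.log(resolveSubcommands('9 + (10 + 16)')); // → '35'
```

With '9 + (10 + 16)', the inner group collapses to '26' first, after which '9 + 26' is processed as a normal command, matching the example above.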
Plugins:
These are created using (at least) 2 files:
pluginname_function.json
pluginname.js
The .js file contains all the JavaScript to handle a request and push back a result
The .json file contains the name of the plugin, the name of the module (the .js file name), a Regular Expression string, and a small description
One .js file can contain multiple module.exports functions, each function requires its own .json file
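A minimal sketch of such a plugin .js file, assuming a convention where each exported function receives the regex match array x described earlier; the function names and calling convention are illustrative, not Sara's exact plugin API:

```javascript
// Hypothetical math.js plugin skeleton.
// Each exported function would be paired with its own .json file
// (e.g. math_add.json, math_subtract.json) holding the matching regex.
const mathPlugin = {
  // x is the regex match array: x[0] full match, x[1]/x[2] captured groups
  add: function (x) {
    return Number(x[1]) + Number(x[2]);
  },
  subtract: function (x) {
    return Number(x[1]) - Number(x[2]);
  }
};

module.exports = mathPlugin;
```

One .js file carrying several exports like this keeps related commands together while each command still gets its own .json definition.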
Example:
Regular Expressions in these .json files need special characters to be escaped twice:
"regex": "/^(?:what|how\\smuch)?\\s?(?:is)?\\s?(-?[0-9]+\\.?(?:[0-9]+)?)\\s?(?:\\+|plus|\\&|and)\\s?(-?[0-9]+\\.?(?:[0-9]+)?)\\s?(?:is)?$/i",
Since Sara removes certain words from the start of the sentence, all the regex needs to express is the intent and, if variables need to be passed to the function, one or more working capture groups
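A sketch of why the double escaping is needed: JSON.parse collapses each "\\" in the file to "\", leaving a valid regex source string for the RegExp constructor. The README's example stores the pattern with /…/i delimiters; for simplicity this sketch keeps only the pattern body and applies the i flag separately, and the field names are assumptions:

```javascript
// Hypothetical plugin .json contents; field names are assumptions.
// On disk the regex field reads "\\s", "\\+", etc. (double-escaped).
const raw = `{
  "plugin": "math",
  "module": "math.js",
  "regex": "^(?:what|how\\\\smuch)?\\\\s?(?:is)?\\\\s?(-?[0-9]+\\\\.?(?:[0-9]+)?)\\\\s?(?:\\\\+|plus|\\\\&|and)\\\\s?(-?[0-9]+\\\\.?(?:[0-9]+)?)\\\\s?(?:is)?$",
  "description": "adds two numbers"
}`;

// JSON.parse turns each "\\" into "\", so def.regex is a single-escaped
// pattern ready for the RegExp constructor
const def = JSON.parse(raw);
const re = new RegExp(def.regex, 'i');

console.log(re.test('what 10 + -9 is')); // → true
```

Without the double escaping, JSON.parse would either reject the file or silently swallow the backslashes, leaving a pattern that no longer matches.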
Provided plugins:
All commands listed are functional, although some plugins still need more commands added (math.power, etc.)
More plugins are coming, see Todo list for what I'd like to add (if possible)...
Math:
what is 7 + 9
10 - 3.3
9 * 4
4 divided by 3
how much is 12 squared
root of 10
what 10³ is
Conversation:
hi
hello
hey
yo
good morning/afternoon/evening/night
how are you
how are you doing
how are you feeling
how are you doing today
how are you feeling at the moment
Location:
where am I
where are you
what city are we in
what time zone are we in
in which province are we
what are your actual coordinates
Which country is this
Weather:
weather
how is the weather
how is the weather in/around/near <place>
what is the weather like in/around/near <place>
weather forecast
what is the weather forecast
what is the weather forecast for <place>
XBMC remote:
Add connection details to file plugins/xbmc-remote/connection.json (see example file connection_example.json)
stop video/movie/film/playback/episode
stop the video/movie/film/playback/episode
stop this video/movie/film/playback/episode
pause/pause video/movie/film/playback/episode
resume the video/movie/film/playback/episode
continue this video/movie/film/playback/episode
media menu select
media menu back
media menu move up/down/left/right
media menu move up/down/left/right 5x
media menu move up/down/left/right 5*
media menu move up/down/left/right 5 times
media menu move up/down/left/right 5 entries
media menu move up/down/left/right 1 entry
media menu home
media menu info/information
media menu context
media menu submenu
Timedate:
what time is it
what is the date
what year is it
what month is it
what day it is
what is the week number
Wikipedia:
what is <subject>
more about <subject>
News:
Add your newsapi.org API key to the file plugins/news/newsorg.json (see example file newsorg_example.json)
news headlines
tech news headlines
news headlines from bbc-news
news headlines in US
news headlines on bitcoin
Translate:
Add your Google Cloud API key to the file resources/apikeys/googlecloud.json (see example file googlecloud_example.json)
translate to french <input>
translate to english <input>
translate to dutch <input>
translate to german <input>
Games:
rock
paper
scissors
tictactoe
Audio in/out issues:
The only advice I can give is to make sure that ALSA has the correct input/output device registered
My Raspberry Pi config:
I use the HDMI output on my raspi for audio out, so I am using card 2, device 1 here
My config file:
ztik@sara:~/ $ cat ~/.asoundrc
pcm.!default {
type asym
playback.pcm {
type plug
slave.pcm "hw:2,1"
}
capture.pcm {
type plug
slave.pcm "hw:0,0"
}
}
This solved every issue I had with aplay and arecord
Using these settings I am able to record from the proper input device with the following command:
arecord -d 10 test.wav
and play that recording using:
aplay test.wav
Any support beyond this should be requested at ALSA/Linux forums, I guess
Feel free to ask, but don't expect an answer...
Other issues:
Sonus/Google Cloud Speech API:
I understand people can have problems getting through this, so here is a small guide (thanks to smart-mirror.io)
Setting up Speech Recognition
Sara uses Sonus with Google Cloud Speech for keyword spotting and recognition.
To set that up, you'll need to create a new project in the Cloud Platform Console:
In the Cloud Platform Console, go to the Projects page and select or create a new project
Create a new JSON service account key, edit it with a text editor and copy the contents to ./resources/apikeys/googlespeech.json
When prompted to create a new service account select 'Owner' or 'Project Owner'
As I understand it, 90% of problems with Sonus are related to billing issues in Google Cloud
Haobosou USB microphone:
The microphone I use is a 'C-Media Haobosou G11 Touch Induction' and for a couple of days I have been having problems with it
When connecting the microphone, the blue power indicator would light up, and after 2 seconds it would turn off again
Pressing the touch induction area has no effect, and thus I am left with a disabled mic
It IS recognized by lsusb/hwinfo/arecord -l/dmesg but it is OFF
After three days of wrestling, I found the solution somewhere online (lost the URL, no credits, sorry)
ztik@sara:~/nodejs/sara $ amixer set Mic 80% cap
ztik@sara:~/nodejs/sara $ amixer
Simple mixer control 'Mic',0
Capabilities: cvolume cvolume-joined cswitch cswitch-joined
Capture channels: Mono
Limits: Capture 0 - 62
Mono: Capture 50 [81%] [16.59dB] [on]
There is probably a better command for turning the mic on, but this also sets the recording volume at 80%, which is my personal preference
Known:
The vision module works, but all it does is take a picture every 30 min, no further processing connected at this moment
Todo:
General
Scan for .config file, load settings from there
Overwrite settings with arguments
Rewrite console.log() to response.conlog()
Change eval() functions, find better approach for plugin loading
Correct hardcoded file locations to cleaned up path
Blacklist certain plugin names, to avoid overwriting internal functions
Prompt function
Write 'vocal_stringify()' function, to replace strings with proper written words ('ztik' becomes 'ZTiK')
Help function
Create help documentation for internal functions and plugins (currently populated with placeholders)
Add .json file import to help function, so plugins can add topics to the function
Add list help command, and/or display all topics using help
Speech recognition
Add option to select if speech commands are pushed to command line or processed immediately
Write speechparse() function, to replace strings such as 'subcommand start' with '(' and 'subcommand end' with ')'
Voice synthesis
Hook command results to voice synthesis
Add voices list display/selection
Add voice settings to config.json
Create option for voice to be heard on all output, instead of on response only (--speak-all, --speak-response-only)
Create 'speak' command, which will force the following command output to be spoken completely
(normal behaviour is to use voice only on 'response' type items, all other types (such as data, info, status) are skipped)
Write 'vocalise()' function, to replace strings with proper sounding words ('ZTiK' becomes 'Stick')
Add SSML language markup support
Add SSML markup to plugin outputs where needed (math functions)
Add .json file import to vocalise() function, so plugins or end-users can add words to the list
Vision
Support for USB Webcams
Support for the Pi camera
Image manipulation through imagemagick
Object/face detection
Object/face recognition
Plugins
Rename 'commands' folder to 'plugins'
Check for plugins in an external folder
Add weather plugin
Add more commands: sun, sunrise, sunset, wind, rain
Add news plugin
Finish news plugin
Add conversation plugin
Finish conversation plugin
Add Gmail plugin
Add Google Translate plugin
Add more languages, currently supported: french, english, dutch, german
Add CLI Games plugin
Add more games
Highscore system implementation
Add time/date plugin
Add IMDB plugin
Add Wolfram Alpha plugin
Add Dictionary plugin
Add Wikipedia plugin
Add topic overview/selection on multiple matches with request
Ask if user wants to know more after reading topic description
Add more functions to remote control (next, previous, rewind, forward, sendtext)
Add Image based Object Detection
Add Image based Face Recognition
Add events for Face Recognition
Add Cloud Storage plugin (connect with dropbox etc.)
Add Discord connectivity
... suggestions?
Long term goals:
Additional language support... eventually (this depends on my personal skills as well as Google Speech and Text-to-Speech language availability)
Remote control a Windows PC within the network (this would require a separate app for Windows to receive commands and process them)
Devise a way to incorporate a mood-function, simulate emotions
Connect a LCD/TFT screen, give Sara a face with expressions
Neural Net / Machine learning capabilities for influencing stock market
Build datacenter deep underground, preferably a remote island close to a submarine communications cable
Self awareness
Credits:
I would like to point out that I simply put this hardware and these programs and modules together; without the people who created them, I would have had nothing at all!
Thank you to those involved making:
Hope I didn't miss anyone here, if so, please let me know and I will update!
Apologies:
I am a complete moron when it comes to asynchronous programming, and I am positive that many functions could have been written better/cleaner/more efficient.
I made this project to enhance my understanding of Node.js/Javascript, so please remain calm if/when I don't understand your comment/code/bugfix/pull request/advice/issue at first glance.
Last updated on 10 Aug 2019