toon

Description
Additional tools for neuroscience experiments, including:
- A framework for polling input devices in a separate process.
- A framework for keyframe-based animation.
- High-resolution clocks.
Everything should work on Windows/Mac/Linux.
Install
Current release:
pip install toon
Development version:
pip install -i https://test.pypi.org/simple/ toon --pre
Or for the latest commit (requires compilation):
pip install git+https://github.com/aforren1/toon
See the demos/ folder for usage examples (note: some require additional packages).
Overview
Input
toon provides a framework for polling input devices, including common peripherals like mice and keyboards, with the flexibility to handle less-common devices like eyetrackers, motion trackers, and custom hardware (see toon/input/ for examples). The goal is to make it easier to use a wide variety of devices, including those with sampling rates above 1 kHz, with minimal performance impact on the main process.
We use the built-in multiprocessing module to control a separate process that hosts the device and, in concert with numpy, to move data to the main process via shared memory. Under typical conditions, single read() operations seem to take less than 500 microseconds (and more often under 100 us). See demos/bench_plot.py for an example of measuring user-side read performance.
Typical use looks like this:
```python
from toon.input import MpDevice
from mymouse import Mouse
from timeit import default_timer

device = MpDevice(Mouse())

with device:
    t1 = default_timer() + 10  # poll for 10 seconds
    while default_timer() < t1:
        res = device.read()
        if res:  # None when there is no new data
            time, data = res
            print(data)
            print(time)
```
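To get a rough sense of read() overhead on your own setup, something in the spirit of demos/bench_plot.py can time individual reads. The sketch below reuses the hypothetical mymouse module from the example above:

```python
from timeit import default_timer
from toon.input import MpDevice
from mymouse import Mouse  # hypothetical device module, as in the example above

device = MpDevice(Mouse())
durations = []

with device:
    # time each read() call for 10 seconds
    t_end = default_timer() + 10
    while default_timer() < t_end:
        t0 = default_timer()
        device.read()
        durations.append(default_timer() - t0)

durations.sort()
print('median read: %.1f us' % (durations[len(durations) // 2] * 1e6))
print('worst read:  %.1f us' % (durations[-1] * 1e6))
```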
Creating a custom device is relatively straightforward, though there are a few boxes to check.
```python
from ctypes import c_double
from toon.input import BaseDevice

class MyDevice(BaseDevice):
    sampling_frequency = 500  # nominal sampling rate (Hz)
    shape = (3, 3)            # shape of a single reading
    ctype = c_double          # data type of each element

    def __init__(self):
        pass  # configuration that can happen before the remote process starts

    def enter(self):
        pass  # e.g. open the connection to the hardware

    def exit(self):
        pass  # e.g. close the connection

    def read(self):
        time = self.clock()
        data = get_data()  # placeholder for acquiring a single reading
        return time, data
```
This device can then be passed to a toon.input.MpDevice, which preallocates the shared memory and handles other details.
A few things to be aware of for data returned by MpDevice:
- If there's no data for a given read, None is returned.
- The returned data is a copy of the local copy of the data; if you don't need copies, set use_views=True when instantiating the MpDevice.
- If the device produces batches of data per read, it can return a list of (time, data) tuples.
- You can optionally use device.start()/device.stop() instead of a context manager (see the sketch after this list).
- You can check for remote errors at any point using device.check_error(), though this happens automatically after entering the context manager and when reading.
- In addition to Python types/dtypes/ctypes, devices can return ctypes.Structures (see the input tests or the example_devices folder for examples).
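As a rough illustration of a few of these points, here is a sketch that reuses the MyDevice class defined above and uses start()/stop() in place of the context manager:

```python
from timeit import default_timer
from toon.input import MpDevice

# sketch only: MyDevice is the example device defined above
device = MpDevice(MyDevice())
device.start()  # equivalent to entering the context manager
device.check_error()  # optional; also happens automatically when entering and reading

t_end = default_timer() + 5
while default_timer() < t_end:
    res = device.read()
    if res is None:  # no new data since the last read
        continue
    time, data = res
    print(time, data)

device.stop()  # equivalent to exiting the context manager
```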
Animation
This is still a work in progress, though I think it has some utility as-is. It's a port of the animation component in the Magnum framework, though lacking some of the features (e.g. Track extrapolation, proper handling of time scaling).
Example:
```python
from timeit import default_timer

import matplotlib.pyplot as plt

from toon.anim import Track, Player
from toon.anim.easing import LINEAR, ELASTIC_IN

class Circle(object):
    x = 0
    y = 0

circle = Circle()

# keyframes are (time, value) pairs
keyframes = [(0.2, -0.5), (0.5, 0), (3, 0.5)]
x_track = Track(keyframes, easing=LINEAR)
y_track = Track(keyframes, easing=ELASTIC_IN)

player = Player(repeats=3)
# attributes can be named directly...
player.add(x_track, 'x', obj=circle)

# ...or modified via a callback
def y_cb(val, obj):
    obj.y = val

player.add(y_track, y_cb, obj=circle)

t0 = default_timer()
player.start(t0)
vals = []
times = []
while player.is_playing:
    t = default_timer()
    player.advance(t)
    times.append(t)
    vals.append([circle.x, circle.y])

plt.plot(times, vals)
plt.show()
```
Other notes:
- Non-numeric attributes, like color strings, can also be modified in this framework (easing is ignored).
- Multiple objects can be modified simultaneously by feeding a list of objects into player.add(); see the sketch after this list.
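For example, a sketch reusing the Circle class from the example above (the color attribute is hypothetical, added only to illustrate non-numeric keyframes) that drives two objects from the same tracks:

```python
from toon.anim import Track, Player
from toon.anim.easing import LINEAR

# sketch only: Circle comes from the example above; 'color' is a
# hypothetical attribute used to illustrate non-numeric keyframes
class ColoredCircle(Circle):
    color = 'white'

c1, c2 = ColoredCircle(), ColoredCircle()

x_track = Track([(0.0, -0.5), (1.0, 0.5)], easing=LINEAR)
color_track = Track([(0.0, 'white'), (0.5, 'red')])  # easing is ignored for strings

player = Player()
# passing a list of objects updates all of them from the same track
player.add(x_track, 'x', obj=[c1, c2])
player.add(color_track, 'color', obj=[c1, c2])

player.start(0.0)
player.advance(0.6)  # both circles have moved and switched to 'red'
```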
Utilities
The util module includes high-resolution clocks/timers via QueryPerformanceCounter/QueryPerformanceFrequency on Windows, mach_absolute_time on MacOS, and clock_gettime(CLOCK_MONOTONIC) on Linux. The class is called MonoClock, and an instance called mono_clock is created upon import. Usage:
```python
from toon.util import mono_clock, MonoClock

clk = mono_clock  # shared clock, created at import time
clk2 = MonoClock(relative=False)
t0 = clk.get_time()
```
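For example, a small sketch timing a section of code with the shared clock:

```python
from time import sleep
from toon.util import mono_clock

t_start = mono_clock.get_time()
sleep(0.1)  # stand-in for the code being timed
elapsed = mono_clock.get_time() - t_start
print('elapsed: %.6f s' % elapsed)
```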
Another utility currently included is a priority function, which tries to improve the determinism of the calling script. It is derived from Psychtoolbox's Priority (see the Psychtoolbox documentation). General usage is:
```python
from toon.util import priority

# request elevated priority; returns False if the request is rejected
if not priority(1):
    raise RuntimeError('Failed to raise priority.')
# ... timing-critical code ...
priority(0)  # return to normal priority
```
The input should be 0 (no priority/cancel), 1 (higher priority), or 2 (realtime). If the requested level is rejected, the function returns False. The exact implementation details depend on the host operating system; all implementations disable garbage collection.
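One pattern (a sketch, not part of the package) is to wrap the timing-critical portion in try/finally so normal priority is always restored, even if the experiment code raises:

```python
from time import sleep
from toon.util import priority

if not priority(1):
    raise RuntimeError('Failed to raise priority.')
try:
    sleep(1)  # stand-in for the timing-critical portion of the experiment
finally:
    priority(0)  # always restore normal priority
```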
Windows
- Uses SetPriorityClass and SetThreadPriority/AvSetMmMaxThreadCharacteristics.
- level = 2 only seems to work if running Python as administrator.
MacOS
- Only disables/enables garbage collection; I don't have a Mac to test on.
Linux
- Sets the scheduler policy and parameters via sched_setscheduler.
- If level == 2, locks the calling process's virtual address space into RAM via mlockall.
- Any level > 0 seems to fail unless the user is either superuser or has the right capability. I've used setcap: sudo setcap cap_sys_nice=eip <path to python> (disable by passing sudo setcap cap_sys_nice= <path>). For memory locking, I've used Psychtoolbox's 99-psychtoolboxlimits.conf and added myself to the psychtoolbox group.
Your mileage may vary on whether these actually improve latency/determinism. When in doubt, measure! Read the warnings here.
Notes about checking whether parts are working:
Windows
- In the Task Manager's Details tab, right-clicking on the python process and hovering over "Set priority" will show the current priority level. I haven't figured out how to verify the Avrt threading parts are working.
Linux
- Check mlockall with cat /proc/{python pid}/status | grep VmLck
- Check priority with top -c -p $(pgrep -d',' -f python), or from within Python (see the sketch below).
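On Linux, a rough cross-check from within Python (standard library only) might look like the following sketch, which reports the scheduling policy and locked memory for the current process:

```python
import os

# report the scheduling policy of the current process (Linux only)
policy = os.sched_getscheduler(0)
names = {os.SCHED_OTHER: 'SCHED_OTHER',
         os.SCHED_FIFO: 'SCHED_FIFO',
         os.SCHED_RR: 'SCHED_RR'}
print('policy:', names.get(policy, policy))
print('priority:', os.sched_getparam(0).sched_priority)

# VmLck > 0 kB indicates mlockall took effect
with open('/proc/self/status') as f:
    for line in f:
        if line.startswith('VmLck'):
            print(line.strip())
```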