Open Azure Kinect
Cross-platform Python playback library for Azure Kinect MKV files.
(Image: Calibration Example)
It is possible to play back Azure Kinect video files (MKV) without using the official SDK. This allows the software to be used on systems where the depth engine is not available, such as macOS. The library currently only supports the playback of MKV files and does not provide direct access to the Azure Kinect device.
The following features are currently supported: playback of Azure Kinect MKV recordings, reading the stream and capture (image) data, seeking to a specific timestamp, access to the color and depth camera calibration data, and transformations between the two cameras.
Installation
pip install open-azure-kinect
Usage
To load an MKV file, create a new instance of the OpenK4APlayback class. If the is_looping flag is set, playback does not stop at the end of the file; instead, the file is automatically closed and reopened.
from openk4a.playback import OpenK4APlayback
azure = OpenK4APlayback("my-file.mkv")
azure.is_looping = True
azure.open()
After that, it is possible to read the available stream information.
for stream in azure.streams:
    print(stream)

print(azure.duration_ms)
The actual captures (image data) can then be read frame by frame.
while capture := azure.read():
    color_image = capture.color
    print(azure.timestamp_ms)
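For a quick visual check, the color frames can be displayed with OpenCV. This is only a minimal sketch that continues the example above; it assumes capture.color is a numpy image array that OpenCV can display directly (the channel order is not verified here), and OpenCV is not a dependency of the library itself.

import cv2

while capture := azure.read():
    # capture.color is assumed to be a numpy image array; BGR vs RGB is not verified
    cv2.imshow("color", capture.color)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()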
Seek
With seek(timestamp_ms: int) it is possible to jump to a specific position in the video. The current implementation is not very efficient, as the library simply skips frames until the requested timestamp is reached. In the future this should be replaced with an ffmpeg-controlled seek.
azure.seek(azure.timestamp_ms + 1000)
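Since duration_ms is exposed (see above), a target position can also be computed relative to the total length of the recording. This is only a sketch; given the frame-skipping implementation, seeking backwards may be slow or unsupported.

# jump roughly to the middle of the recording
azure.seek(int(azure.duration_ms // 2))
capture = azure.read()
print(azure.timestamp_ms)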
Calibration Data
To access the calibration data of the two cameras (Color and Depth), use the parsed calibration properties.
color_calib = azure.color_calibration
depth_calib = azure.depth_calibration
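The exact fields of the calibration objects are not listed here; printing them is a simple way to inspect what is available. The OpenCV-style attribute names in the commented-out line below (camera_matrix, distortion_coefficients) are assumptions for illustration, not confirmed parts of the API.

# inspect which calibration fields are available
print(color_calib)
print(depth_calib)

# If the calibration exposes an OpenCV-style camera matrix and distortion
# coefficients (assumed attribute names), they could be used directly, e.g.:
# undistorted = cv2.undistort(color_image, color_calib.camera_matrix,
#                             color_calib.distortion_coefficients)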
Image and Point Transformations
The CameraTransform class handles the transformations between the different cameras.
⚠️ Be aware that this part of the framework is still very much under development. The methods are not yet as accurate as the Azure Kinect SDK because some optimisations have not been taken into account. Please open a PR if you would like to improve it.
import numpy as np
from openk4a.transform import CameraTransform

# estimated depth in millimetres used by the transformation
estimated_depth_mm = 1500
transform = CameraTransform(azure.color_calibration, azure.depth_calibration, estimated_depth_mm)

# map 2D pixel coordinates from the color camera into the depth camera
depth_points = transform.transform_2d_color_to_depth(np.array([[300, 400], [200, 200]]))

# align the image between the depth and color camera views
transformed_color = transform.align_image_depth_to_color(color_image)
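As a rough usage sketch, the transformed pixel coordinates can be used to look up depth values in the depth image. Note that capture.depth and a per-pixel depth map in millimetres are assumptions made for this example, not confirmed by the snippets above.

# capture.depth (a per-pixel depth map in millimetres) is an assumption here
depth_image = capture.depth

for x, y in depth_points.astype(int):
    # numpy images are indexed row-first (y, x)
    print(f"depth at ({x}, {y}): {depth_image[y, x]} mm")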
Development and Examples
To run the examples or develop the library, please install the dependencies from dev-requirements.txt and requirements.txt.
pip install -r dev-requirements.txt
pip install -r requirements.txt
The example script demo.py provides insights into how to use the library.
About
Thanks to tikuma-lsuhsc for creating python-ffmpegio and helping me extract the Azure Kinect data.