filterpy - pypi Package Compare versions

Comparing version 1.3.1 to 1.3.2
filterpy.egg-info/PKG-INFO
Metadata-Version: 1.1
Name: filterpy
Version: 1.3.1
Version: 1.3.2
Summary: Kalman filtering and optimal estimation library

@@ -5,0 +5,0 @@ Home-page: https://github.com/rlabbe/filterpy

@@ -17,2 +17,2 @@ # -*- coding: utf-8 -*-

__version__ = "1.3.1"
__version__ = "1.3.2"

@@ -0,1 +1,21 @@

Version 1.3.2
=============
Fixed build error in Python 2.7 due to using print function without importing
it from __future__.
Added filterpy.common.Saver class, which can save all the attributes of any
filtering class. Replaces KalmanFilter.Saver, which only worked for the
KalmanFilter class.
Added optional parameter specifying a Saver object to be passed into all of the
batch_filter() functions/methods.
Added attribute z to most of the filter classes. This is mostly so the
Saver class can record the last measurement.
Changes to documentation - mostly making it more consistent.
Version 1.3.1

@@ -13,3 +33,3 @@ =============

* Fixed bug where multivariate_gaussian accepted negative covariances
* #108 used pylint symbolic names
* Got code in compliance with pylint

@@ -24,3 +44,3 @@ * Fixed #105 - this was just test code, and it turns out the code as was was correct, so I deleted the second return statement

#102 - Bug: UKF was using slow code path when using np.subtract.

@@ -78,3 +98,3 @@

* Altered RTS_smoother algorithm to also return the
predicted covariances

@@ -95,5 +115,5 @@

* Fixed #54: Comments in multivariate_multiply incorrectly called the
covariance the mean.
* Added logpdf to stats. Computes logpdf - mostly a wrapper around
stats.multivariate_normal.logpdf as older versions of that function
do not support the allow_singular keyword. But it also flattens out the

@@ -106,3 +126,3 @@ vectors for you so you do not have to do anything special with column vectors.

* Added Cubature Kalman filter.
* Bug in Q_continuous_white_noise(). The first term in the matrix should be (dt**3)/3, not (dt**4)/3.
* Added log-likelihood computation to UKF.
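As context for that fix, here is a minimal sketch of the order-2 continuous white noise matrix (an illustrative re-derivation, not filterpy's `Q_continuous_white_noise` implementation), showing the corrected (dt**3)/3 leading term:

```python
import numpy as np

def q_continuous_white_noise_2(dt, spectral_density=1.0):
    # Order-2 continuous white noise: Q = integral over [0, dt] of
    # F(t) Qc F(t).T with F(t) = [[1, t], [0, 1]] and Qc = [[0, 0], [0, q]].
    # The leading term is (dt**3)/3 -- the value the changelog fix restores.
    return np.array([[dt**3 / 3, dt**2 / 2],
                     [dt**2 / 2, dt]]) * spectral_density

Q = q_continuous_white_noise_2(0.1)
```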

@@ -121,3 +141,3 @@ * Added simplex points for UKF

* Github issue #40. Fixed behavior of multivariate_gaussian to accept list as
the covariance matrix.

@@ -141,7 +161,7 @@

pip install numpydoc
docs\conf.py has been modified to use numpydoc.
Version 0.1.0

@@ -159,3 +179,3 @@ ==============

filter that I have found in various books - Simon, Crassidis, and Grewal.
None seem to work very well. I have code that works pretty good when R
is < 0.5 or so, but then the filter diverges when R is larger. I'm not seeing

@@ -196,3 +216,3 @@ much in the literature that explains this very well, nor any evidence of

Version 0.0.26

@@ -202,3 +222,3 @@ ==============

* Added likelihood and log-likelihood to the KalmanFilter
class.

@@ -256,4 +276,4 @@ * Added an MMAE filter bank class.

Added monte_carlo module which contains routines for MCMC - mostly
for particle filtering.

@@ -282,3 +302,3 @@

The unscented kalman filter code has been significantly altered. Your
existing code will no longer run. Sorry, but it had to be done.

@@ -309,3 +329,3 @@ As of version 0.0.18 there were separate classes for the UKF (Julier's)

* Added args parameters to Hx and HJacobian of the ExtendedKalmanFilter
class so you can pass additional data to them.

@@ -325,3 +345,3 @@ * Made an exception more human readable by including the size of the

* Added multivariate_multiply to stats module.
* IMPORTANT: bug fix in the UKF RTS smoother routine.
* various typo fixes.

@@ -338,4 +358,4 @@

The change to _dt was stupid in 0.0.13. I put it back to _dt, and
then added an optional dt parameter to the predict() function.

@@ -382,3 +402,3 @@

Version 0.0.9

@@ -385,0 +405,0 @@ =============

@@ -20,2 +20,160 @@ # -*- coding: utf-8 -*-

from __future__ import print_function
from collections import defaultdict
import copy
import numpy as np
class Saver(object):
"""
Helper class to save the states of any filter object.
Each time you call save() all of the attributes (state, covariances, etc)
are appended to lists.
Generally you would do this once per epoch - predict/update.
Then, you can access any of the states by using the [] syntax or by
using the . operator.
.. code-block:: Python
my_saver = Saver()
... do some filtering
x = my_saver['x']
x = my_saver.x
Either returns a list of all of the state `x` values for the entire
filtering process.
If you want to convert all saved lists into numpy arrays, call to_array().
Parameters
----------
kf : object
any object with a __dict__ attribute, but intended to be one of the
filtering classes
save_current : bool, default=False
save the current state of `kf` when the object is created
skip_private: bool, default=False
Control skipping any private attribute (anything starting with '_')
Turning this on saves memory, but slows down execution a bit.
skip_callable: bool, default=False
Control skipping any attribute which is a method. Turning this on
saves memory, but slows down execution a bit.
ignore: (str,) tuple of strings
list of keys to ignore.
Examples
--------
.. code-block:: Python
kf = KalmanFilter(...whatever)
# initialize kf here
saver = Saver(kf) # save data for kf filter
for z in zs:
kf.predict()
kf.update(z)
saver.save()
x = np.array(saver.x) # get the kf.x state in an np.array
plt.plot(x[:, 0], x[:, 2])
# ... or ...
saver.to_array()
plt.plot(saver.x[:, 0], saver.x[:, 2])
"""
def __init__(self, kf, save_current=False,
skip_private=False,
skip_callable=False,
ignore=()):
""" Construct the save object, optionally saving the current
state of the filter"""
#pylint: disable=too-many-arguments
self._kf = kf
self._DL = defaultdict(list)
self._skip_private = skip_private
self._skip_callable = skip_callable
self._ignore = ignore
self._len = 0
if save_current:
self.save()
def save(self):
""" save the current state of the Kalman filter"""
kf = self._kf
v = copy.deepcopy(kf.__dict__)
if self._skip_private:
for key in list(v.keys()):
if key.startswith('_'):
print('deleting', key)
del v[key]
if self._skip_callable:
for key in list(v.keys()):
if callable(v[key]):
del v[key]
for ig in self._ignore:
if ig in v:
del v[ig]
for key in list(v.keys()):
self._DL[key].append(v[key])
self.__dict__.update(self._DL)
self._len += 1
def __getitem__(self, key):
return self._DL[key]
def __len__(self):
return self._len
@property
def keys(self):
""" list of all keys"""
return list(self._DL.keys())
def to_array(self):
"""
Convert all saved attributes from a list to np.array.
This may or may not work - every saved attribute must have the
same shape for every instance. i.e., if `K` changes shape due to `z`
changing shape then the call will raise an exception.
This can also happen if the default initialization in __init__ gives
the variable a different shape than it becomes after a predict/update
cycle.
"""
for key in self.keys:
try:
self.__dict__[key] = np.array(self._DL[key])
except:
# get back to lists so we are in a valid state
self.__dict__.update(self._DL)
raise ValueError("could not convert {} into np.array".format(key))
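The save()/to_array() machinery above boils down to snapshotting an object's `__dict__` into per-attribute lists once per epoch. A self-contained sketch of that pattern (`Tiny` is a hypothetical stand-in for a filter object):

```python
import copy
from collections import defaultdict

class Tiny:
    # Hypothetical stand-in for a filter; any object with a __dict__ works.
    def __init__(self):
        self.x = 0

obj = Tiny()
history = defaultdict(list)
for i in range(3):
    obj.x = i                              # one "epoch" of filtering
    snapshot = copy.deepcopy(obj.__dict__)  # freeze current attribute values
    for key, val in snapshot.items():
        history[key].append(val)

# history['x'] now holds one entry per epoch, like Saver's lists.
```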
def runge_kutta4(y, x, dx, f):

@@ -116,1 +274,20 @@ """computes 4th order Runge-Kutta for dy/dx.

print(pretty_str(label, arr), **kwargs)
def reshape_z(z, dim_z, ndim):
""" ensure z is a (dim_z, 1) shaped vector"""
z = np.atleast_2d(z)
if z.shape[1] == dim_z:
z = z.T
if z.shape != (dim_z, 1):
raise ValueError('z must be convertible to shape ({}, 1)'.format(dim_z))
if ndim == 1:
z = z[:, 0]
if ndim == 0:
z = z[0, 0]
return z
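To make the accepted shapes concrete, here is a self-contained copy of reshape_z (duplicated from above so it runs standalone) with a few probes:

```python
import numpy as np

def reshape_z(z, dim_z, ndim):
    """ ensure z is a (dim_z, 1) shaped vector (copy of the function above)"""
    z = np.atleast_2d(z)
    if z.shape[1] == dim_z:
        z = z.T
    if z.shape != (dim_z, 1):
        raise ValueError('z must be convertible to shape ({}, 1)'.format(dim_z))
    if ndim == 1:
        z = z[:, 0]
    if ndim == 0:
        z = z[0, 0]
    return z

col = reshape_z([1., 2.], dim_z=2, ndim=2)   # (2, 1) column vector
flat = reshape_z([1., 2.], dim_z=2, ndim=1)  # (2,) flat vector
scalar = reshape_z(3., dim_z=1, ndim=0)      # plain scalar 3.0
```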

@@ -17,10 +17,10 @@ # -*- coding: utf-8 -*-

from filterpy.common import kinematic_kf, Saver
import numpy as np
from numpy.linalg import inv
from filterpy.kalman import (MerweScaledSigmaPoints, UnscentedKalmanFilter,
ExtendedKalmanFilter)
def test_kinematic_filter():

@@ -57,4 +57,98 @@ global kf

def test_saver_UKF():
def fx(x, dt):
F = np.array([[1, dt, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, dt],
[0, 0, 0, 1]], dtype=float)
return np.dot(F, x)
def hx(x):
return np.array([x[0], x[2]])
dt = 0.1
points = MerweScaledSigmaPoints(4, .1, 2., -1)
kf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
z_std = 0.1
kf.R = np.diag([z_std**2, z_std**2]) # 1 standard deviation
kf.x = np.array([-1., 1., -1., 1])
kf.P *= 1.
zs = [[i, i] for i in range(40)]
s = Saver(kf, skip_private=False, skip_callable=False, ignore=['z_mean'])
for z in zs:
kf.predict()
kf.update(z)
#print(kf.x, kf.log_likelihood, kf.P.diagonal())
s.save()
s.to_array()
def test_saver_kf():
kf = kinematic_kf(3, 2, dt=0.1, dim_z=3)
s = Saver(kf)
for i in range(10):
kf.predict()
kf.update([i, i, i])
s.save()
# this will raise an exception if the attributes were not saved consistently
s.to_array()
assert len(s.x) == 10
assert len(s.y) == 10
assert len(s.K) == 10
assert (len(s) == len(s.x))
# Force an exception to occur by malforming K
kf = kinematic_kf(3, 2, dt=0.1, dim_z=3)
kf.K = 0.
s2 = Saver(kf)
for i in range(10):
kf.predict()
kf.update([i, i, i])
s2.save()
# this should raise an exception because K is malformed
try:
s2.to_array()
assert False, "Should not have been able to convert s2.K into an array"
except:
pass
def test_saver_ekf():
def HJ(x):
return np.array([[1, 1]])
def hx(x):
return np.array([x[0]])
kf = ExtendedKalmanFilter(2, 1)
s = Saver(kf)
for i in range(3):
kf.predict()
kf.update([[i]], HJ, hx)
s.save()
# this will raise an exception if the attributes were not saved consistently
s.to_array()
assert len(s.x) == 3
assert len(s.y) == 3
assert len(s.K) == 3
if __name__ == "__main__":
test_saver_kf()
test_saver_ekf()
ITERS = 1000000

@@ -64,1 +158,3 @@ #test_mahalanobis()

test_kinematic_filter()
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division, print_function,
unicode_literals)
"""Copyright 2015 Roger R Labbe Jr.

@@ -23,5 +28,2 @@

from __future__ import (absolute_import, division, print_function,
unicode_literals)
import numpy as np

@@ -28,0 +30,0 @@ import scipy.linalg as linalg

# -*- coding: utf-8 -*-
# pylint: disable=C0103, R0913, R0902, C0326, R0903, W1401, too-many-lines
# disable snake_case warning, too many arguments, too many attributes,

@@ -38,58 +38,78 @@ # one space before assignment, too few public methods, anomalous backslash

Parameters
----------
x0 : 1D np.array or scalar
Initial value for the filter state. Each value can be a scalar
or a np.array.

You can use a scalar for x0. If order > 0, then 0.0 is assumed
for the higher order terms.

x[0] is the value being tracked
x[1] is the first derivative (for order 1 and 2 filters)
x[2] is the second derivative (for order 2 filters)

dt : scalar
timestep

order : int
order of the filter. Defines the order of the system
0 - assumes system of form x = a_0 + a_1*t
1 - assumes system of form x = a_0 + a_1*t + a_2*t^2
2 - assumes system of form x = a_0 + a_1*t + a_2*t^2 + a_3*t^3

g : float
filter g gain parameter.

h : float, optional
filter h gain parameter, order 1 and 2 only

k : float, optional
filter k gain parameter, order 2 only

Attributes
----------
x : np.array
State of the filter.
x[0] is the value being tracked
x[1] is the derivative of x[0] (order 1 and 2 only)
x[2] is the 2nd derivative of x[0] (order 2 only)

This is always an np.array, even for order 0 where you can
initialize x0 with a scalar.

y : np.array
Residual - difference between the measurement and the prediction

dt : scalar
timestep

order : int
order of the filter. Defines the order of the system
0 - assumes system of form x = a_0 + a_1*t
1 - assumes system of form x = a_0 + a_1*t + a_2*t^2
2 - assumes system of form x = a_0 + a_1*t + a_2*t^2 + a_3*t^3

g : float
filter g gain parameter.

h : float
filter h gain parameter, order 1 and 2 only

k : float
filter k gain parameter, order 2 only

z : 1D np.array or scalar
measurement passed into update()
"""

def __init__(self, x0, dt, order, g, h=None, k=None):
""" Creates a g-h filter of order 0, 1, or 2.
"""

@@ -112,7 +132,10 @@

self.k = k
self.y = np.zeros(len(self.x)) # residual
self.z = np.zeros(len(self.x)) # last measurement
def update(self, z, g=None, h=None, k=None):
"""
Update the filter with measurement z. z must be the same type
or treatable as the same type as self.x[0].

@@ -140,2 +163,4 @@ """

self.z = z
else: # order == 2

@@ -161,5 +186,19 @@ if g is None:

def __repr__(self):
return '\n'.join([
'GHFilterOrder object',
pretty_str('dt', self.dt),
pretty_str('order', self.order),
pretty_str('x', self.x),
pretty_str('g', self.g),
pretty_str('h', self.h),
pretty_str('k', self.k),
pretty_str('y', self.y),
pretty_str('z', self.z)
])
class GHFilter(object):
"""
Implements the g-h filter. The topic is too large to cover in
this comment. See my book "Kalman and Bayesian Filters in Python" [1]

@@ -171,2 +210,59 @@ or Eli Brookner's "Tracking and Kalman Filters Made Easy" [2].

Parameters
----------
x : 1D np.array or scalar
Initial value for the filter state. Each value can be a scalar
or a np.array.
You can use a scalar for x0. If order > 0, then 0.0 is assumed
for the higher order terms.
x[0] is the value being tracked
x[1] is the first derivative (for order 1 and 2 filters)
x[2] is the second derivative (for order 2 filters)
dx : 1D np.array or scalar
Initial value for the derivative of the filter state.
dt : scalar
time step
g : float
filter g gain parameter.
h : float
filter h gain parameter.
Attributes
----------
x : 1D np.array or scalar
filter state
dx : 1D np.array or scalar
derivative of the filter state.
x_prediction : 1D np.array or scalar
predicted filter state
dx_prediction : 1D np.array or scalar
predicted derivative of the filter state.
dt : scalar
time step
g : float
filter g gain parameter.
h : float
filter h gain parameter.
y : np.array, or scalar
residual (difference between measurement and prior)
z : np.array, or scalar
measurement passed into update()
Examples

@@ -211,6 +307,2 @@ --------

|
|
**Methods**
"""

@@ -220,31 +312,2 @@

def __init__(self, x, dx, dt, g, h):
""" Creates a g-h filter.
Parameters
----------
x : 1D np.array or scalar
Initial value for the filter state. Each value can be a scalar
or a np.array.
You can use a scalar for x0. If order > 0, then 0.0 is assumed
for the higher order terms.
x[0] is the value being tracked
x[1] is the first derivative (for order 1 and 2 filters)
x[2] is the second derivative (for order 2 filters)
dx : 1D np.array or scalar
Initial value for the derivative of the filter state.
dt : scalar
time step
g : float
filter g gain parameter.
h : float
filter h gain parameter.
"""
self.x = x

@@ -257,8 +320,14 @@ self.dx = dx

self.x_prediction = self.x
if np.ndim(x) == 0:
self.y = 0. # residual
self.z = 0.
else:
self.y = np.zeros(len(x))
self.z = np.zeros(len(x))
def update(self, z, g=None, h=None):
"""
performs the g-h filter predict and update step on the
measurement z. Modifies the member variables listed below,

@@ -319,3 +388,3 @@ and returns the state of x and dx as a tuple as a convienence.

def batch_filter(self, data, save_predictions=False, saver=None):
"""

@@ -342,2 +411,7 @@ Given a sequenced list of data, performs g-h filter

saver : filterpy.common.Saver, optional
filterpy.common.Saver object. If provided, saver.save() will be
called after every epoch
Returns

@@ -385,2 +459,5 @@ -------

if saver is not None:
saver.save()
if save_predictions:

@@ -393,3 +470,4 @@ return results, predictions

def VRF_prediction(self):
"""
Returns the Variance Reduction Factor of the prediction
step of the filter. The VRF is the

@@ -416,3 +494,4 @@ normalized variance for the filter, as given in the equation below.

def VRF(self):
"""
Returns the Variance Reduction Factor (VRF) of the state variable
of the filter (x) and its derivatives (dx, ddx). The VRF is the

@@ -455,56 +534,91 @@ normalized variance for the filter, as given in the equations below.

pretty_str('dx_prediction', self.dx_prediction),
pretty_str('y', self.y),
pretty_str('z', self.z)
])
class GHKFilter(object):
"""
Implements the g-h-k filter.

Parameters
----------
x : 1D np.array or scalar
Initial value for the filter state. Each value can be a scalar
or a np.array.

You can use a scalar for x0. If order > 0, then 0.0 is assumed
for the higher order terms.

x[0] is the value being tracked
x[1] is the first derivative (for order 1 and 2 filters)
x[2] is the second derivative (for order 2 filters)

dx : 1D np.array or scalar
Initial value for the derivative of the filter state.

ddx : 1D np.array or scalar
Initial value for the second derivative of the filter state.

dt : scalar
time step

g : float
filter g gain parameter.

h : float
filter h gain parameter.

k : float
filter k gain parameter.

Attributes
----------
x : 1D np.array or scalar
filter state

dx : 1D np.array or scalar
derivative of the filter state.

ddx : 1D np.array or scalar
second derivative of the filter state.

x_prediction : 1D np.array or scalar
predicted filter state

dx_prediction : 1D np.array or scalar
predicted derivative of the filter state.

ddx_prediction : 1D np.array or scalar
predicted second derivative of the filter state.

dt : scalar
time step

g : float
filter g gain parameter.

h : float
filter h gain parameter.

k : float
filter k gain parameter.

y : np.array, or scalar
residual (difference between measurement and prior)

z : np.array, or scalar
measurement passed into update()

References
----------
Brookner, "Tracking and Kalman Filters Made Easy". John Wiley and
Sons, 1998.
"""
def __init__(self, x, dx, ddx, dt, g, h, k):
self.x = x

@@ -521,7 +635,14 @@ self.dx = dx

self.k = k
if np.ndim(x) == 0:
self.y = 0. # residual
self.z = 0.
else:
self.y = np.zeros(len(x))
self.z = np.zeros(len(x))
def update(self, z, g=None, h=None, k=None):
"""
Performs the g-h filter predict and update step on the
measurement z.

@@ -646,3 +767,4 @@

def VRF_prediction(self):
"""
Returns the Variance Reduction Factor for x of the prediction
step of the filter.

@@ -672,3 +794,4 @@

def bias_error(self, dddx):
"""
Returns the bias error given the specified constant jerk (dddx)

@@ -692,3 +815,4 @@ Parameters

def VRF(self):
"""
Returns the Variance Reduction Factor (VRF) of the state variable
of the filter (x) and its derivatives (dx, ddx). The VRF is the

@@ -746,3 +870,4 @@ normalized variance for the filter, as given in the equations below.

pretty_str('ddx_prediction', self.ddx_prediction),
pretty_str('y', self.y),
pretty_str('z', self.z)
])

@@ -749,0 +874,0 @@

@@ -20,2 +20,3 @@ # -*- coding: utf-8 -*-

from filterpy.common import Saver
from filterpy.gh import (GHFilter, GHKFilter, least_squares_parameters,

@@ -47,2 +48,5 @@ optimal_noise_smoothing, GHFilterOrder)

str(f1)
str(f2)
# test both give same answers, and that we can

@@ -60,2 +64,5 @@ # use a scalar for the measurement

# test using an array for the measurement
s1 = Saver(f1)
s2 = Saver(f2)
for i in range(1,10):

@@ -65,2 +72,5 @@ f1.update(i)

s1.save()
s2.save()
assert f1.x == f2.x[0]

@@ -70,2 +80,4 @@ assert f1.dx == f2.dx[0]

assert f1.VRF() == f2.VRF()
s1.to_array()
s2.to_array()

@@ -72,0 +84,0 @@

@@ -22,2 +22,3 @@ # -*- coding: utf-8 -*-

from __future__ import absolute_import, division
import copy
import warnings

@@ -87,2 +88,3 @@ import numpy as np

self.y = zeros((dim_z, 1))
self.z = zeros((dim_z, 1))

@@ -138,3 +140,9 @@ # identity matrix. Do not alter this.

# pylint: disable=bare-except
try:
self.z = z[:]
except:
self.z = copy.deepcopy(z)
def predict(self, u=0):

@@ -155,3 +163,3 @@ """

def batch_filter(self, Zs, update_first=False, saver=None):
""" Batch processes a sequence of measurements.

@@ -169,2 +177,6 @@

saver : filterpy.common.Saver, optional
filterpy.common.Saver object. If provided, saver.save() will be
called after every epoch
Returns

@@ -183,3 +195,2 @@ -------

# mean estimates from H-Infinity Filter

@@ -197,2 +208,5 @@ means = zeros((n, self.dim_x, 1))

self.predict()
if saver is not None:
saver.save()
else:

@@ -206,2 +220,5 @@ for i, z in enumerate(Zs):

if saver is not None:
saver.save()
return (means, covariances)

@@ -208,0 +225,0 @@

@@ -112,2 +112,74 @@ # -*- coding: utf-8 -*-

Parameters
----------
dim_x : int
Number of state variables for the filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
dt : float
Time between steps in seconds.
hx : function(x)
Measurement function. Converts state vector x into a measurement
vector of shape (dim_z).
fx : function(x, dt)
function that returns the state x transformed by the
state transition function. dt is the time step in seconds.
x_mean_fn : callable (sigma_points, weights), optional
Function that computes the mean of the provided sigma points
and weights. Use this if your state variable contains nonlinear
values such as angles which cannot be summed.
.. code-block:: Python
def state_mean(sigmas, Wm):
x = np.zeros(3)
sum_sin, sum_cos = 0., 0.
for i in range(len(sigmas)):
s = sigmas[i]
x[0] += s[0] * Wm[i]
x[1] += s[1] * Wm[i]
sum_sin += sin(s[2])*Wm[i]
sum_cos += cos(s[2])*Wm[i]
x[2] = atan2(sum_sin, sum_cos)
return x
z_mean_fn : callable (sigma_points, weights), optional
Same as x_mean_fn, except it is called for sigma points which
form the measurements after being passed through hx().
residual_x : callable (x, y), optional
residual_z : callable (x, y), optional
Function that computes the residual (difference) between x and y.
You will have to supply this if your state variable cannot support
subtraction, such as angles (359-1 degrees is 2, not 358). x and y
are state vectors, not scalars. One is for the state variable,
the other is for the measurement state.
.. code-block:: Python
def residual(a, b):
y = a[0] - b[0]
if y > np.pi:
y -= 2*np.pi
if y < -np.pi:
y += 2*np.pi
return y
compute_log_likelihood : bool (default = True)
Computes log likelihood by default, but this can be a slow
computation, so if you never use it you can turn this computation
off.
Attributes

@@ -137,3 +209,11 @@ ----------

likelihood : float
likelihood of last measurement. Read only.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
References

@@ -154,78 +234,2 @@ ----------

r""" Create a Cubature Kalman filter. You are responsible for setting
the various state variables to reasonable values; the defaults will
not give you a functional filter.
Parameters
----------
dim_x : int
Number of state variables for the filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
dt : float
Time between steps in seconds.
hx : function(x)
Measurement function. Converts state vector x into a measurement
vector of shape (dim_z).
fx : function(x, dt)
function that returns the state x transformed by the
state transition function. dt is the time step in seconds.
x_mean_fn : callable (sigma_points, weights), optional
Function that computes the mean of the provided sigma points
and weights. Use this if your state variable contains nonlinear
values such as angles which cannot be summed.
.. code-block:: Python
def state_mean(sigmas, Wm):
x = np.zeros(3)
sum_sin, sum_cos = 0., 0.
for i in range(len(sigmas)):
s = sigmas[i]
x[0] += s[0] * Wm[i]
x[1] += s[1] * Wm[i]
sum_sin += sin(s[2])*Wm[i]
sum_cos += cos(s[2])*Wm[i]
x[2] = atan2(sum_sin, sum_cos)
return x
z_mean_fn : callable (sigma_points, weights), optional
Same as x_mean_fn, except it is called for sigma points which
form the measurements after being passed through hx().
residual_x : callable (x, y), optional
residual_z : callable (x, y), optional
Function that computes the residual (difference) between x and y.
You will have to supply this if your state variable cannot support
subtraction, such as angles (359-1 degrees is 2, not 358). x and y
are state vectors, not scalars. One is for the state variable,
the other is for the measurement state.
.. code-block:: Python
def residual(a, b):
y = a[0] - b[0]
if y > np.pi:
y -= 2*np.pi
if y < -np.pi:
y += 2*np.pi
return y
compute_log_likelihood : bool (default = True)
Computes log likelihood by default, but this can be a slow
computation, so if you never use it you can turn this computation
off.
"""
self.Q = eye(dim_x)

@@ -264,2 +268,3 @@ self.R = eye(dim_z)

self.log_likelihood = math.log(sys.float_info.min)
self.likelihood = sys.float_info.min

@@ -298,3 +303,2 @@

self.x, self.P = ckf_transform(self.sigmas_f, self.Q)

@@ -360,2 +364,5 @@

self.log_likelihood = logpdf(x=self.y, cov=Pz)
self.likelihood = math.exp(self.log_likelihood)
if self.likelihood == 0:
self.likelihood = sys.float_info.min

@@ -375,22 +382,4 @@

pretty_str('y', self.y),
pretty_str('log-likelihood', self.log_likelihood)])
@property
def likelihood(self):
"""
likelihood of last measurement.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
But really, this is a bad measure because of the scaling that is
involved - try to use log-likelihood in your equations!"""
lh = math.exp(self.log_likelihood)
if lh == 0:
lh = sys.float_info.min
return lh
pretty_str('log-likelihood', self.log_likelihood),
pretty_str('likelihood', self.likelihood)
])
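The clamping described above (formerly in the likelihood property, now done inline in update()) is easy to see in isolation: exp() of a large negative log-likelihood underflows to 0.0, so it is floored at sys.float_info.min:

```python
import math
import sys

log_likelihood = -28000.0              # a typical very small log-likelihood
likelihood = math.exp(log_likelihood)  # underflows to 0.0
if likelihood == 0:
    likelihood = sys.float_info.min    # floor, so multiplications stay nonzero
```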

@@ -46,2 +46,21 @@ # -*- coding: utf-8 -*-

Parameters
----------
dim_x : int
Number of state variables for the Kalman filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
This is used to set the default size of P, Q, and u
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
compute_log_likelihood : bool (default = True)
Computes log likelihood by default, but this can be a slow
computation, so if you never use it you can turn this computation
off.
Attributes

@@ -55,2 +74,8 @@ ----------

x_prior : numpy.array(dim_x, 1)
Prior (predicted) state estimate
P_prior : numpy.array(dim_x, dim_x)
Prior (predicted) state covariance matrix
R : numpy.array(dim_z, dim_z)

@@ -80,2 +105,11 @@ Measurement noise matrix

likelihood : float
likelihood of last measurement. Read only.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
Examples

@@ -89,32 +123,3 @@ --------

def __init__(self, dim_x, dim_z, dim_u=0, compute_log_likelihood=True):
""" Extended Kalman filter. You are responsible for setting the
various state variables to reasonable values; the defaults below will
not give you a functional filter.
Examples
--------
See the EKF chapter in my book Kalman and Bayesian Filters in Python
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/11-Extended-Kalman-Filters.ipynb
Parameters
----------
dim_x : int
Number of state variables for the Kalman filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
This is used to set the default size of P, Q, and u
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
compute_log_likelihood : bool (default = True)
Computes log likelihood by default, but this can be a slow
computation, so if you never use it you can turn this computation
off.
"""
self.dim_x = dim_x

@@ -145,4 +150,9 @@ self.dim_z = dim_z

self.log_likelihood = math.log(sys.float_info.min)
self.likelihood = sys.float_info.min
# Priors. Will always be a copy of x, P after predict() is called.
self.x_prior = self.x[:]
self.P_prior = self.P[:]
def predict_update(self, z, HJacobian, Hx, args=(), hx_args=(), u=0):

@@ -204,2 +214,6 @@ """ Performs the predict/update innovation of the extended Kalman

# save prior
self.x_prior = self.x[:]
self.P_prior = self.P[:]
# update step

@@ -218,2 +232,5 @@ PHT = dot(P, H.T)

self.log_likelihood = logpdf(x=self.y, cov=self.S)
self.likelihood = math.exp(self.log_likelihood)
if self.likelihood == 0:
self.likelihood = sys.float_info.min

@@ -288,2 +305,5 @@

# P = (I-KH)P(I-KH)' + KRK' is more numerically stable
# and works for non-optimal K vs the equation
# P = (I-KH)P usually seen in the literature.
I_KH = self._I - dot(self.K, H)

@@ -294,10 +314,14 @@ self.P = dot(I_KH, self.P).dot(I_KH.T) + dot(self.K, R).dot(self.K.T)

self.log_likelihood = logpdf(x=self.y, cov=self.S)
self.likelihood = math.exp(self.log_likelihood)
if self.likelihood == 0:
self.likelihood = sys.float_info.min
def predict_x(self, u=0):
""" predicts the next state of X. If you need to
"""
Predicts the next state of X. If you need to
compute the next state yourself, override this function. You would
need to do this, for example, if the usual Taylor expansion to
generate F is not providing accurate results for you. """
generate F is not providing accurate results for you.
"""
self.x = dot(self.F, self.x) + dot(self.B, u)
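The linear propagation x = Fx + Bu above can be sketched in pure Python. This is a minimal illustration with made-up values for a constant-velocity model (dt = 1), not filterpy code, which operates on numpy arrays:

```python
# Pure-Python sketch of the propagation x = Fx + Bu performed by
# predict_x(); matrices are lists of rows, vectors are flat lists.
def matvec(M, v):
    # multiply a matrix (list of rows) by a vector (flat list)
    return [sum(m * x for m, x in zip(row, v)) for row in M]

F = [[1.0, 1.0],
     [0.0, 1.0]]   # state transition: position += velocity * dt
B = [[0.0],
     [0.0]]        # no control input in this sketch
x = [0.0, 1.0]     # [position, velocity]
u = [0.0]

x = [fx + bu for fx, bu in zip(matvec(F, x), matvec(B, u))]
print(x)  # [1.0, 1.0] -- position advanced by one velocity step
```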

@@ -307,3 +331,5 @@

def predict(self, u=0):
""" Predict next position.
"""
Predict next state (prior) using the Kalman filter state propagation
equations.

@@ -321,23 +347,7 @@ Parameters

# save prior
self.x_prior = self.x[:]
self.P_prior = self.P[:]
@property
def likelihood(self):
"""
likelihood of last measurement.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
But really, this is a bad measure because of the scaling that is
involved - try to use log-likelihood in your equations!"""
lh = math.exp(self.log_likelihood)
if lh == 0:
lh = sys.float_info.min
return lh
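The clamping this property describes can be demonstrated in isolation. The helper name below is illustrative, not part of filterpy; it mirrors the behavior described above, where exp() of a very negative log-likelihood underflows to 0.0:

```python
import math
import sys

# exp() of a large negative log-likelihood underflows to 0.0, so we
# substitute the smallest positive float to keep multiplicative
# algorithms from collapsing to zero.
def likelihood_from_log(log_likelihood):
    lh = math.exp(log_likelihood)
    if lh == 0:
        lh = sys.float_info.min
    return lh

print(math.exp(-28000.0))                 # 0.0 -- underflow
print(likelihood_from_log(-28000.0) > 0)  # True -- safe to multiply by
```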
def __repr__(self):

@@ -348,2 +358,4 @@ return '\n'.join([

pretty_str('P', self.P),
pretty_str('x_prior', self.x_prior),
pretty_str('P_prior', self.P_prior),
pretty_str('F', self.F),

@@ -355,2 +367,4 @@ pretty_str('Q', self.Q),

pretty_str('S', self.S),
pretty_str('log-likelihood', self.log_likelihood)])
pretty_str('likelihood', self.likelihood),
pretty_str('log-likelihood', self.log_likelihood)
])
# -*- coding: utf-8 -*-
# pylint: disable=invalid-name, too-many-arguments, too-many-instance-attributes
# pylint: disable=attribute-defined-outside-init

@@ -25,3 +26,2 @@ """Copyright 2015 Roger R Labbe Jr.

from numpy.random import multivariate_normal
from scipy.linalg import inv
from filterpy.common import pretty_str

@@ -43,81 +43,109 @@

Create an ensemble Kalman filter. You are responsible for setting the
various state variables to reasonable values; the defaults below will
not give you a functional filter.

Parameters
----------

x : np.array(dim_x)
    state mean

P : np.array((dim_x, dim_x))
    covariance of the state

dim_z : int
    Number of measurement inputs. For example, if the sensor
    provides you with position in (x, y), dim_z would be 2.

dt : float
    time step in seconds

N : int
    number of sigma points (ensembles). Must be greater than 1.

hx : function hx(x)
    Measurement function. May be linear or nonlinear - converts state
    x into a measurement. Return must be an np.array of the same
    dimensionality as the measurement vector.

fx : function fx(x, dt)
    State transition function. May be linear or nonlinear. Projects
    state x into the next time period. Returns the projected state x.

Attributes
----------

x : numpy.array(dim_x, 1)
    State estimate

P : numpy.array(dim_x, dim_x)
    State covariance matrix

x_prior : numpy.array(dim_x, 1)
    Prior (predicted) state estimate

P_prior : numpy.array(dim_x, dim_x)
    Prior (predicted) state covariance matrix

R : numpy.array(dim_z, dim_z)
    Measurement noise matrix

Q : numpy.array(dim_x, dim_x)
    Process noise matrix

fx : callable (x, dt)
    State transition function

hx : callable (x)
    Measurement function. Converts state `x` into a measurement

K : numpy.array(dim_x, dim_z)
    Kalman gain of the update step. Read only.

log_likelihood : float
    log-likelihood of the last measurement. Read only.

inv : function, default numpy.linalg.inv
    If you prefer another inverse function, such as the Moore-Penrose
    pseudo inverse, set it to that instead: kf.inv = np.linalg.pinv

Examples
--------

.. code-block:: Python

    def hx(x):
        return np.array([x[0]])

    F = np.array([[1., 1.],
                  [0., 1.]])
    def fx(x, dt):
        return np.dot(F, x)

    x = np.array([0., 1.])
    P = np.eye(2) * 100.
    dt = 0.1
    f = EnKF(x=x, P=P, dim_z=1, dt=dt, N=8,
             hx=hx, fx=fx)

    std_noise = 3.
    f.R *= std_noise**2
    f.Q = Q_discrete_white_noise(2, dt, .01)

    while True:
        z = read_sensor()
        f.predict()
        f.update(np.asarray([z]))

See my book Kalman and Bayesian Filters in Python
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python

References
----------

- [1] John L Crassidis and John L. Junkins. "Optimal Estimation of
  Dynamic Systems." CRC Press, second edition. 2012. pp. 257-9.
"""

def __init__(self, x, P, dim_z, dt, N, hx, fx):
if dim_z <= 0:

@@ -129,3 +157,4 @@ raise ValueError('dim_z must be greater than zero')

self.dim_x = len(x)
dim_x = len(x)
self.dim_x = dim_x
self.dim_z = dim_z

@@ -136,12 +165,15 @@ self.dt = dt

self.fx = fx
self.K = 0
self.x = x
self.P = P
self.K = np.zeros((dim_x, dim_z))
self.Q = eye(self.dim_x) # process uncertainty
self.R = eye(self.dim_z) # measurement uncertainty
self.mean = [0]*self.dim_x
self.initialize(x, P)
self.Q = eye(dim_x) # process uncertainty
self.R = eye(dim_z) # measurement uncertainty
self.inv = np.linalg.inv
# used to create error terms centered at 0 mean for state and measurement
self._mean = np.zeros(dim_x)
self._mean_z = np.zeros(dim_z)
def initialize(self, x, P):

@@ -169,4 +201,7 @@ """

self.P = P
self.x_prior = x[:]
self.P_prior = P[:]
def update(self, z, R=None):

@@ -217,5 +252,5 @@ """

self.K = dot(P_xz, inv(P_zz))
self.K = dot(P_xz, self.inv(P_zz))
e_r = multivariate_normal([0]*dim_z, R, N)
e_r = multivariate_normal(self._mean_z, R, N)
for i in range(N):

@@ -235,5 +270,4 @@ self.sigmas[i] += dot(self.K, z + e_r[i] - sigmas_h[i])

e = multivariate_normal(self.mean, self.Q, N)
e = multivariate_normal(self._mean, self.Q, N)
self.sigmas += e
#self.x = np.mean(self.sigmas , axis=0)

@@ -247,3 +281,7 @@ P = 0

# save prior
self.x_prior = self.x[:]
self.P_prior = self.P[:]
def __repr__(self):

@@ -257,6 +295,7 @@ return '\n'.join([

pretty_str('P', self.P),
pretty_str('x_prior', self.x_prior),
pretty_str('P_prior', self.P_prior),
pretty_str('Q', self.Q),
pretty_str('R', self.R),
pretty_str('K', self.K),
pretty_str('mean', self.mean),
pretty_str('sigmas', self.sigmas),

@@ -263,0 +302,0 @@ pretty_str('hx', self.hx),
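The ensemble update performed by EnsembleKalmanFilter.update() can be illustrated with a scalar, pure-Python sketch. All values here are illustrative, the measurement function is the identity, and this is not filterpy code; it mirrors the `sigmas[i] += K (z + e_r[i] - hx(sigmas[i]))` step shown above:

```python
import random
random.seed(0)

# Scalar ensemble Kalman update: each ensemble member is nudged toward
# a noise-perturbed copy of the measurement z.
N = 1000
sigmas = [random.gauss(0.0, 10.0) for _ in range(N)]  # prior ensemble
z, R = 5.0, 1.0                                       # measurement and its variance

mean = sum(sigmas) / N
P = sum((s - mean) ** 2 for s in sigmas) / (N - 1)    # ensemble variance
K = P / (P + R)                                       # scalar Kalman gain
sigmas = [s + K * (z + random.gauss(0.0, R ** 0.5) - s) for s in sigmas]

x = sum(sigmas) / N
print(abs(x - z) < 0.5)  # True -- posterior mean pulled close to z
```

Because the prior ensemble variance (about 100) dwarfs R, the gain K is close to 1 and the posterior mean lands near the measurement.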

@@ -26,3 +26,3 @@ # -*- coding: utf-8 -*-

from filterpy.stats import logpdf
from filterpy.common import pretty_str
from filterpy.common import pretty_str, reshape_z

@@ -32,3 +32,4 @@

"""
Create a linear Information filter. Information filters compute the
Create a linear Information filter. Information filters
compute the
inverse of the Kalman filter, allowing you to easily denote having

@@ -70,2 +71,8 @@ no information at initialization.

x_prior : numpy.array(dim_x, 1)
Prior (predicted) state estimate vector
P_inv_prior : numpy.array(dim_x, dim_x)
prior (predicted) inverse state covariance matrix
R_inv : numpy.array(dim_z, dim_z)

@@ -92,2 +99,11 @@ inverse of measurement noise matrix

likelihood : float
likelihood of last measurement. Read only.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
inv : function, default numpy.linalg.inv

@@ -133,2 +149,3 @@ If you prefer another inverse function, such as the Moore-Penrose

self.y = zeros((dim_z, 1))
self.z = zeros((dim_z, 1))
self.S = 0. # system uncertainty in measurement space

@@ -142,6 +159,11 @@

self.log_likelihood = math.log(sys.float_info.min)
self.likelihood = sys.float_info.min
self.inv = np.linalg.inv
# save priors
self.x_prior = self.x[:]
self.P_inv_prior = self.P_inv[:]
def update(self, z, R_inv=None):

@@ -181,2 +203,3 @@ """

self.log_likelihood = math.log(sys.float_info.min)
self.likelihood = sys.float_info.min

@@ -198,4 +221,9 @@ else:

self.z = reshape_z(z, self.dim_z, np.ndim(self.x))[:]
if self.compute_log_likelihood:
self.log_likelihood = logpdf(x=self.y, cov=self.S)
self.likelihood = math.exp(self.log_likelihood)
if self.likelihood == 0:
self.likelihood = sys.float_info.min

@@ -234,2 +262,6 @@

self.P_inv = self.inv(AI + self.Q)
self.P_inv_prior = self.P_inv[:]
# save priors
self.x_prior = self.x[:]
else:

@@ -242,4 +274,8 @@ I_PF = self._I - dot(self.P_inv, self._F_inv)

# save priors
self.x_prior = self.x[:]
self.P_inv_prior = AQI[:]
def batch_filter(self, zs, Rs=None, update_first=False):
def batch_filter(self, zs, Rs=None, update_first=False, saver=None):
""" Batch processes a sequences of measurements.

@@ -263,2 +299,6 @@

saver : filterpy.common.Saver, optional
filterpy.common.Saver object. If provided, saver.save() will be
called after every epoch
Returns

@@ -299,2 +339,5 @@ -------

self.predict()
if saver is not None:
saver.save()
else:

@@ -308,2 +351,5 @@ for i, (z, r) in enumerate(zip(zs, Rs)):

if saver is not None:
saver.save()
return (means, covariances)
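The saver parameter added above calls saver.save() once per epoch to snapshot the filter's attributes into lists. The class below is a simplified stand-in for that pattern, not the actual filterpy.common.Saver API:

```python
# Simplified stand-in for the Saver pattern: save() appends the current
# value of each watched attribute to a per-attribute history list.
class MiniSaver:
    def __init__(self, obj, attrs):
        self.obj = obj
        self.history = {name: [] for name in attrs}

    def save(self):
        for name, values in self.history.items():
            values.append(getattr(self.obj, name))

class DummyFilter:  # stands in for a KalmanFilter-like object
    def __init__(self):
        self.x = 0

f = DummyFilter()
saver = MiniSaver(f, ['x'])
for step in (1, 2, 3):  # pretend predict/update epochs
    f.x = step
    saver.save()

print(saver.history['x'])  # [1, 2, 3]
```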

@@ -313,21 +359,2 @@

@property
def likelihood(self):
"""
likelihood of last measurement.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
But really, this is a bad measure because of the scaling that is
involved - try to use log-likelihood in your equations!"""
lh = math.exp(self.log_likelihood)
if lh == 0:
lh = sys.float_info.min
@property
def F(self):

@@ -352,2 +379,4 @@ """State Transition matrix"""

pretty_str('P_inv', self.P_inv),
pretty_str('x_prior', self.x_prior),
pretty_str('P_inv_prior', self.P_inv_prior),
pretty_str('F', self.F),

@@ -360,6 +389,8 @@ pretty_str('_F_inv', self._F_inv),

pretty_str('y', self.y),
pretty_str('z', self.z),
pretty_str('S', self.S),
pretty_str('B', self.B),
pretty_str('log-likelihood', self.log_likelihood),
pretty_str('inv', self.inv),
pretty_str('likelihood', self.likelihood),
pretty_str('inv', self.inv)
])

@@ -116,2 +116,3 @@ # -*- coding: utf-8 -*-

import sys
import warnings
import math

@@ -122,3 +123,3 @@ import numpy as np

from filterpy.stats import logpdf
from filterpy.common import pretty_str
from filterpy.common import pretty_str, reshape_z

@@ -141,2 +142,23 @@

Parameters
----------
dim_x : int
Number of state variables for the Kalman filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
This is used to set the default size of P, Q, and u
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
dim_u : int (optional)
size of the control input, if it is being used.
Default value of 0 indicates it is not used.
compute_log_likelihood : bool (default = True)
Computes log likelihood by default, but this can be a slow
computation, so if you never use it you can turn this computation
off.
Attributes

@@ -150,2 +172,8 @@ ----------

x_prior : numpy.array(dim_x, 1)
Prior (predicted) state estimate
P_prior : numpy.array(dim_x, dim_x)
Prior (predicted) state covariance matrix
R : numpy.array(dim_z, dim_z)

@@ -172,5 +200,17 @@ Measurement noise matrix

z : ndarray
Last measurement used in update(). Read only.
log_likelihood : float
log-likelihood of the last measurement. Read only.
likelihood : float
likelihood of last measurement. Read only.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
inv : function, default numpy.linalg.inv

@@ -189,28 +229,2 @@ If you prefer another inverse function, such as the Moore-Penrose

def __init__(self, dim_x, dim_z, dim_u=0, compute_log_likelihood=True):
""" Create a Kalman filter. You are responsible for setting the
various state variables to reasonable values; the defaults below will
not give you a functional filter.
Parameters
----------
dim_x : int
Number of state variables for the Kalman filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
This is used to set the default size of P, Q, and u
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
dim_u : int (optional)
size of the control input, if it is being used.
Default value of 0 indicates it is not used.
compute_log_likelihood : bool (default = True)
Computes log likelihood by default, but this can be a slow
computation, so if you never use it you can turn this computation
off.
"""
if dim_z < 1:

@@ -237,6 +251,8 @@ raise ValueError('dim_x must be 1 or greater')

self.z = reshape_z(zeros((dim_z)), self.dim_z, self.x.ndim)
# gain and residual are computed during the innovation step. We
# save them so that in case you want to inspect them for various
# purposes
self.K = np.zeros(self.x.shape) # kalman gain
self.K = np.zeros((dim_x, dim_z)) # kalman gain
self.y = zeros((dim_z, 1))

@@ -249,7 +265,8 @@ self.S = np.zeros((dim_z, dim_z)) # system uncertainty

# these will always be a copy of x,P after predict() is called
self.x_pred = zeros((dim_x, 1))
self.P_pred = eye(dim_x)
self.x_prior = self.x[:]
self.P_prior = self.P[:]
self.compute_log_likelihood = compute_log_likelihood
self.log_likelihood = math.log(sys.float_info.min)
self.likelihood = sys.float_info.min

@@ -259,2 +276,50 @@ self.inv = np.linalg.inv

def predict(self, u=0, B=None, F=None, Q=None):
"""
Predict next state (prior) using the Kalman filter state propagation
equations.
Parameters
----------
u : np.array
Optional control vector. If non-zero, it is multiplied by B
to create the control input into the system.
B : np.array(dim_x, dim_z), or None
Optional control transition matrix; a value of None in
any position will cause the filter to use `self.B`.
F : np.array(dim_x, dim_x), or None
Optional state transition matrix; a value of None in
any position will cause the filter to use `self.F`.
Q : np.array(dim_x, dim_x), scalar, or None
Optional process noise matrix; a value of None in
any position will cause the filter to use `self.Q`.
"""
if B is None:
B = self.B
if F is None:
F = self.F
if Q is None:
Q = self.Q
elif isscalar(Q):
Q = eye(self.dim_x) * Q
# x = Fx + Bu
if B is not None:
self.x = dot(F, self.x) + dot(B, u)
else:
self.x = dot(F, self.x)
# P = FPF' + Q
self.P = self._alpha_sq * dot(dot(F, self.P), F.T) + Q
# save prior
self.x_prior = self.x[:]
self.P_prior = self.P[:]
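The covariance propagation P = FPF' + Q in predict() can be written out by hand for a 2x2 constant-velocity model. The values below are illustrative and the fading-memory alpha is omitted; the real implementation uses numpy:

```python
# Pure-Python sketch of the covariance propagation P = F P F' + Q.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

F = [[1.0, 1.0],
     [0.0, 1.0]]   # constant-velocity transition, dt = 1
P = [[1.0, 0.0],
     [0.0, 1.0]]   # current state covariance
Q = [[0.1, 0.0],
     [0.0, 0.1]]   # process noise

F_T = [list(col) for col in zip(*F)]
FPFt = matmul(matmul(F, P), F_T)
P = [[v + q for v, q in zip(r1, r2)] for r1, r2 in zip(FPFt, Q)]
print(P)  # [[2.1, 1.0], [1.0, 1.1]] -- position variance absorbed velocity uncertainty
```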
def update(self, z, R=None, H=None):

@@ -283,3 +348,3 @@ """

z = _reshape_z(z, self.dim_z, self.x.ndim)
z = reshape_z(z, self.dim_z, self.x.ndim)

@@ -314,9 +379,51 @@ if R is None:

# P = (I-KH)P(I-KH)' + KRK'
# This is more numerically stable
# and works for non-optimal K vs the equation
# P = (I-KH)P usually seen in the literature.
I_KH = self._I - dot(self.K, H)
self.P = dot(dot(I_KH, self.P), I_KH.T) + dot(dot(self.K, R), self.K.T)
self.z = z[:] # save the measurement
if self.compute_log_likelihood:
self.log_likelihood = logpdf(x=self.y, cov=self.S)
self.likelihood = math.exp(self.log_likelihood)
if self.likelihood == 0:
self.likelihood = sys.float_info.min
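The numerical-stability comment above can be checked with a scalar example. The values are illustrative: with a deliberately non-optimal gain K, the short form P = (I-KH)P can produce a negative "variance", while the Joseph form P = (I-KH)P(I-KH)' + KRK' stays positive:

```python
# Scalar comparison of the two covariance update forms with a
# deliberately oversized (non-optimal) Kalman gain.
P, R, H = 1.0, 0.5, 1.0
K = 1.2  # too large on purpose

short_form = (1 - K * H) * P                    # about -0.2: invalid variance
joseph_form = (1 - K * H) ** 2 * P + K * R * K  # about 0.76: still valid

print(short_form < 0, joseph_form > 0)  # True True
```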
def predict_steadystate(self, u=0, B=None):
"""
Predict state (prior) using the Kalman filter state propagation
equations. Only x is updated, P is left unchanged. See
update_steadystate() for a longer explanation of when to use this
method.
Parameters
----------
u : np.array
Optional control vector. If non-zero, it is multiplied by B
to create the control input into the system.
B : np.array(dim_x, dim_z), or None
Optional control transition matrix; a value of None in
any position will cause the filter to use `self.B`.
"""
if B is None:
B = self.B
# x = Fx + Bu
if B is not None:
self.x = dot(self.F, self.x) + dot(B, u)
else:
self.x = dot(self.F, self.x)
# save prior
self.x_prior = self.x[:]
self.P_prior = self.P[:]
def update_steadystate(self, z):

@@ -369,3 +476,3 @@ """

z = _reshape_z(z, self.dim_z, self.x.ndim)
z = reshape_z(z, self.dim_z, self.x.ndim)

@@ -383,4 +490,9 @@ # y = z - Hx

self.z = z[:] # save the measurement
if self.compute_log_likelihood:
self.log_likelihood = logpdf(x=self.y, cov=S)
self.likelihood = math.exp(self.log_likelihood)
if self.likelihood == 0:
self.likelihood = sys.float_info.min

@@ -413,3 +525,3 @@

z = _reshape_z(z, self.dim_z, self.x.ndim)
z = reshape_z(z, self.dim_z, self.x.ndim)

@@ -452,189 +564,15 @@ if R is None:

self.z = z[:] # save the measurement
if self.compute_log_likelihood:
self.log_likelihood = logpdf(x=self.y, cov=self.S)
self.likelihood = math.exp(self.log_likelihood)
if self.likelihood == 0:
self.likelihood = sys.float_info.min
def test_matrix_dimensions(self, z=None, H=None, R=None, F=None, Q=None):
""" Performs a series of asserts to check that the size of everything
is what it should be. This can help you debug problems in your design.
If you pass in H, R, F, Q those will be used instead of this object's
value for those matrices.
Testing `z` (the measurement) is problematic. x is a vector, and can be
implemented as either a 1D array or as a nx1 column vector. Thus Hx
can be of different shapes. Then, if Hx is a single value, it can
be either a 1D array or 2D vector. If either is true, z can reasonably
be a scalar (either '3' or np.array('3') are scalars under this
definition), a 1D, 1 element array, or a 2D, 1 element array. You are
allowed to pass in any combination that works.
"""
if H is None:
H = self.H
if R is None:
R = self.R
if F is None:
F = self.F
if Q is None:
Q = self.Q
x = self.x
P = self.P
assert x.ndim == 1 or x.ndim == 2, \
"x must have one or two dimensions, but has {}".format(x.ndim)
if x.ndim == 1:
assert x.shape[0] == self.dim_x, \
"Shape of x must be ({},), but is {}".format(
self.dim_x, x.shape)
else:
assert x.shape == (self.dim_x, 1), \
"Shape of x must be ({},{}), but is {}".format(
self.dim_x, 1, x.shape)
assert P.shape == (self.dim_x, self.dim_x), \
"Shape of P must be ({},{}), but is {}".format(
self.dim_x, self.dim_x, P.shape)
assert Q.shape == (self.dim_x, self.dim_x), \
"Shape of Q must be ({},{}), but is {}".format(
self.dim_x, self.dim_x, Q.shape)
assert F.shape == (self.dim_x, self.dim_x), \
"Shape of F must be ({},{}), but is {}".format(
self.dim_x, self.dim_x, F.shape)
assert np.ndim(H) == 2, \
"Shape of H must be (dim_z, {}), but is {}".format(
P.shape[0], shape(H))
assert H.shape[1] == P.shape[0], \
"Shape of H must be (dim_z, {}), but is {}".format(
P.shape[0], H.shape)
# shape of R must be the same as HPH'
hph_shape = (H.shape[0], H.shape[0])
r_shape = shape(R)
if H.shape[0] == 1:
# r can be scalar, 1D, or 2D in this case
assert r_shape == () or r_shape == (1,) or r_shape == (1, 1), \
"R must be scalar or one element array, but is shaped {}".format(
r_shape)
else:
assert r_shape == hph_shape, \
"shape of R should be {} but it is {}".format(hph_shape, r_shape)
if z is not None:
z_shape = shape(z)
else:
z_shape = (self.dim_z, 1)
# H@x must have shape of z
Hx = dot(H, x)
if z_shape == (): # scalar or np.array(scalar)
assert Hx.ndim == 1 or shape(Hx) == (1, 1), \
"shape of z should be {}, not {} for the given H".format(
shape(Hx), z_shape)
elif shape(Hx) == (1,):
assert z_shape[0] == 1, 'Shape of z must be {} for the given H'.format(shape(Hx))
else:
assert (z_shape == shape(Hx) or
(len(z_shape) == 1 and shape(Hx) == (z_shape[0], 1))), \
"shape of z should be {}, not {} for the given H".format(
shape(Hx), z_shape)
if np.ndim(Hx) > 1 and shape(Hx) != (1, 1):
assert shape(Hx) == z_shape, \
'shape of z should be {} for the given H, but it is {}'.format(
shape(Hx), z_shape)
def predict(self, u=0, B=None, F=None, Q=None):
"""
Predict next state (prior) using the Kalman filter state propagation
equations.
Parameters
----------
u : np.array
Optional control vector. If non-zero, it is multiplied by B
to create the control input into the system.
B : np.array(dim_x, dim_z), or None
Optional control transition matrix; a value of None in
any position will cause the filter to use `self.B`.
F : np.array(dim_x, dim_x), or None
Optional state transition matrix; a value of None in
any position will cause the filter to use `self.F`.
Q : np.array(dim_x, dim_x), scalar, or None
Optional process noise matrix; a value of None in
any position will cause the filter to use `self.Q`.
"""
if B is None:
B = self.B
if F is None:
F = self.F
if Q is None:
Q = self.Q
elif isscalar(Q):
Q = eye(self.dim_x) * Q
# x = Fx + Bu
if B is not None:
self.x = dot(F, self.x) + dot(B, u)
else:
self.x = dot(F, self.x)
# P = FPF' + Q
self.P = self._alpha_sq * dot(dot(F, self.P), F.T) + Q
self.x_pred = self.x[:]
self.P_pred = self.P[:]
def predict_steadystate(self, u=0, B=None):
"""
Predict state (prior) using the Kalman filter state propagation
equations. Only x is updated, P is left unchanged. See
update_steadystate() for a longer explanation of when to use this
method.
Parameters
----------
u : np.array
Optional control vector. If non-zero, it is multiplied by B
to create the control input into the system.
B : np.array(dim_x, dim_z), or None
Optional control transition matrix; a value of None in
any position will cause the filter to use `self.B`.
"""
if B is None:
B = self.B
# x = Fx + Bu
if B is not None:
self.x = dot(self.F, self.x) + dot(B, u)
else:
self.x = dot(self.F, self.x)
self.x_pred = self.x[:]
# strictly speaking not necessary, but if the user initialized k and
# P manually, ensure it is properly set
self.P_pred = self.P[:]
def batch_filter(self, zs, Fs=None, Qs=None, Hs=None,
Rs=None, Bs=None, us=None, update_first=False):
Rs=None, Bs=None, us=None, update_first=False,
saver=None):
""" Batch processes a sequences of measurements.

@@ -688,2 +626,6 @@

saver : filterpy.common.Saver, optional
filterpy.common.Saver object. If provided, saver.save() will be
called after every epoch
Returns

@@ -722,5 +664,5 @@ -------

(xs, Ps, Ks) = kf.rts_smoother(mu, cov, Fs=Fs, Qs=None)
"""
#pylint: disable=too-many-statements
n = np.size(zs, 0)

@@ -771,2 +713,5 @@ if Fs is None:

covariances_p[i, :, :] = self.P
if saver is not None:
saver.save()
else:

@@ -783,2 +728,5 @@ for i, (z, F, Q, H, R, B, u) in enumerate(zip(zs, Fs, Qs, Hs, Rs, Bs, us)):

if saver is not None:
saver.save()
return (means, covariances, means_p, covariances_p)

@@ -788,3 +736,4 @@

def rts_smoother(self, Xs, Ps, Fs=None, Qs=None):
""" Runs the Rauch-Tung-Striebal Kalman smoother on a set of
"""
Runs the Rauch-Tung-Striebal Kalman smoother on a set of
means and covariances computed by a Kalman filter. The usual input

@@ -865,4 +814,5 @@ would come from the output of `KalmanFilter.batch_filter()`.

def get_prediction(self, u=0):
""" Predicts the next state of the filter and returns it. Does not
alter the state of the filter.
"""
Predicts the next state of the filter and returns it without
altering the state of the filter.

@@ -888,4 +838,5 @@ Parameters

def get_update(self, z=None):
""" Computes the new estimate based on measurement `z`. Does not
alter the state of the filter.
"""
Computes the new estimate based on measurement `z` and returns it
without altering the state of the filter.

@@ -904,3 +855,2 @@ Parameters

State vector and covariance array of the update.
"""

@@ -910,3 +860,3 @@

return self.x, self.P
z = _reshape_z(z, self.dim_z, self.x.ndim)
z = reshape_z(z, self.dim_z, self.x.ndim)

@@ -941,10 +891,12 @@ R = self.R

def residual_of(self, z):
""" returns the residual for the given measurement (z). Does not alter
"""
Returns the residual for the given measurement (z). Does not alter
the state of the filter.
"""
return z - dot(self.H, self.x_pred)
return z - dot(self.H, self.x_prior)
def measurement_of_state(self, x):
""" Helper function that converts a state into a measurement.
"""
Helper function that converts a state into a measurement.

@@ -988,22 +940,2 @@ Parameters

@property
def likelihood(self):
"""
likelihood of last measurement.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
But really, this is a bad measure because of the scaling that is
involved - try to use log-likelihood in your equations!"""
lh = math.exp(self.log_likelihood)
if lh == 0:
lh = sys.float_info.min
return lh
def log_likelihood_of(self, z):

@@ -1022,6 +954,5 @@ """

def alpha(self, value):
if not np.isscalar(value) or value < 1:
raise ValueError('alpha must be a float greater than 1')
if not np.isscalar(value) or value <= 0:
raise ValueError('alpha must be a float greater than 0')
self._alpha_sq = value**2

@@ -1038,2 +969,4 @@

pretty_str('P', self.P),
pretty_str('x_prior', self.x_prior),
pretty_str('P_prior', self.P_prior),
pretty_str('F', self.F),

@@ -1048,2 +981,3 @@ pretty_str('Q', self.Q),

pretty_str('B', self.B),
pretty_str('z', self.z),
pretty_str('log-likelihood', self.log_likelihood),

@@ -1054,3 +988,104 @@ pretty_str('alpha', self.alpha),

def test_matrix_dimensions(self, z=None, H=None, R=None, F=None, Q=None):
"""
Performs a series of asserts to check that the size of everything
is what it should be. This can help you debug problems in your design.
If you pass in H, R, F, Q those will be used instead of this object's
value for those matrices.
Testing `z` (the measurement) is problematic. x is a vector, and can be
implemented as either a 1D array or as a nx1 column vector. Thus Hx
can be of different shapes. Then, if Hx is a single value, it can
be either a 1D array or 2D vector. If either is true, z can reasonably
be a scalar (either '3' or np.array('3') are scalars under this
definition), a 1D, 1 element array, or a 2D, 1 element array. You are
allowed to pass in any combination that works.
"""
if H is None:
H = self.H
if R is None:
R = self.R
if F is None:
F = self.F
if Q is None:
Q = self.Q
x = self.x
P = self.P
assert x.ndim == 1 or x.ndim == 2, \
"x must have one or two dimensions, but has {}".format(x.ndim)
if x.ndim == 1:
assert x.shape[0] == self.dim_x, \
"Shape of x must be ({},{}), but is {}".format(
self.dim_x, 1, x.shape)
else:
assert x.shape == (self.dim_x, 1), \
"Shape of x must be ({},{}), but is {}".format(
self.dim_x, 1, x.shape)
assert P.shape == (self.dim_x, self.dim_x), \
"Shape of P must be ({},{}), but is {}".format(
self.dim_x, self.dim_x, P.shape)
assert Q.shape == (self.dim_x, self.dim_x), \
"Shape of Q must be ({},{}), but is {}".format(
self.dim_x, self.dim_x, Q.shape)
assert F.shape == (self.dim_x, self.dim_x), \
"Shape of F must be ({},{}), but is {}".format(
self.dim_x, self.dim_x, F.shape)
assert np.ndim(H) == 2, \
"Shape of H must be (dim_z, {}), but is {}".format(
P.shape[0], shape(H))
assert H.shape[1] == P.shape[0], \
"Shape of H must be (dim_z, {}), but is {}".format(
P.shape[0], H.shape)
# shape of R must be the same as HPH'
hph_shape = (H.shape[0], H.shape[0])
r_shape = shape(R)
if H.shape[0] == 1:
# r can be scalar, 1D, or 2D in this case
assert r_shape == () or r_shape == (1,) or r_shape == (1, 1), \
"R must be scalar or one element array, but is shaped {}".format(
r_shape)
else:
assert r_shape == hph_shape, \
"shape of R should be {} but it is {}".format(hph_shape, r_shape)
if z is not None:
z_shape = shape(z)
else:
z_shape = (self.dim_z, 1)
# H@x must have shape of z
Hx = dot(H, x)
if z_shape == (): # scalar or np.array(scalar)
assert Hx.ndim == 1 or shape(Hx) == (1, 1), \
"shape of z should be {}, not {} for the given H".format(
shape(Hx), z_shape)
elif shape(Hx) == (1,):
assert z_shape[0] == 1, 'Shape of z must be {} for the given H'.format(shape(Hx))
else:
assert (z_shape == shape(Hx) or
(len(z_shape) == 1 and shape(Hx) == (z_shape[0], 1))), \
"shape of z should be {}, not {} for the given H".format(
shape(Hx), z_shape)
if np.ndim(Hx) > 1 and shape(Hx) != (1, 1):
assert shape(Hx) == z_shape, \
'shape of z should be {} for the given H, but it is {}'.format(
shape(Hx), z_shape)
def update(x, P, z, R, H=None, return_all=False):

@@ -1129,3 +1164,3 @@ """

Hx = np.atleast_1d(dot(H, x))
z = _reshape_z(z, Hx.shape[0], x.ndim)
z = reshape_z(z, Hx.shape[0], x.ndim)

@@ -1218,3 +1253,3 @@ # error (residual) between measurement and prediction

Hx = np.atleast_1d(dot(H, x))
z = _reshape_z(z, Hx.shape[0], x.ndim)
z = reshape_z(z, Hx.shape[0], x.ndim)

@@ -1229,3 +1264,4 @@ # error (residual) between measurement and prediction

def predict(x, P, F=1, Q=0, u=0, B=1, alpha=1.):
""" Predict next state (prior) using the Kalman filter state propagation
"""
Predict next state (prior) using the Kalman filter state propagation
equations.

@@ -1282,3 +1318,4 @@

def predict_steadystate(x, F=1, u=0, B=1):
""" Predict next state (prior) using the Kalman filter state propagation
"""
Predict next state (prior) using the Kalman filter state propagation
equations. This steady state form only computes x, assuming that the

@@ -1321,4 +1358,6 @@ covariance is constant.

def batch_filter(x, P, zs, Fs, Qs, Hs, Rs, Bs=None, us=None, update_first=False):
""" Batch processes a sequences of measurements.
def batch_filter(x, P, zs, Fs, Qs, Hs, Rs, Bs=None, us=None,
update_first=False, saver=None):
"""
Batch processes a sequence of measurements.

@@ -1360,2 +1399,5 @@ Parameters

saver : filterpy.common.Saver, optional
filterpy.common.Saver object. If provided, saver.save() will be
called after every epoch

@@ -1436,2 +1478,4 @@ Returns

covariances_p[i, :, :] = P
if saver is not None:
saver.save()
else:

@@ -1447,2 +1491,4 @@ for i, (z, F, Q, H, R, B, u) in enumerate(zip(zs, Fs, Qs, Hs, Rs, Bs, us)):

covariances[i, :, :] = P
if saver is not None:
saver.save()

@@ -1454,3 +1500,4 @@ return (means, covariances, means_p, covariances_p)

def rts_smoother(Xs, Ps, Fs, Qs):
    """ Runs the Rauch-Tung-Striebel Kalman smoother on a set of
"""
Runs the Rauch-Tung-Striebel Kalman smoother on a set of
means and covariances computed by a Kalman filter. The usual input

@@ -1523,3 +1570,6 @@ would come from the output of `KalmanFilter.batch_filter()`.

class Saver(object):
""" Helper class to save the states of the KalmanFilter class.
"""
Deprecated. Use filterpy.common.Saver instead.
Helper class to save the states of the KalmanFilter class.
Each time you call save() the current states are appended to lists.

@@ -1556,2 +1606,6 @@ Generally you would do this once per epoch - predict/update.

warnings.warn(
'Use filterpy.common.Saver instead of this, as it works for any filter class',
DeprecationWarning)
self.xs = []

@@ -1561,4 +1615,4 @@ self.Ps = []

self.ys = []
self.xs_pred = []
self.Ps_pred = []
self.xs_prior = []
self.Ps_prior = []
self.kf = kf

@@ -1577,4 +1631,4 @@ if save_current:

self.ys.append(kf.y[:])
self.xs_pred.append(kf.x_pred[:])
self.Ps_pred.append(kf.P_pred[:])
self.xs_prior.append(kf.x_prior[:])
self.Ps_prior.append(kf.P_prior[:])

@@ -1589,22 +1643,3 @@

self.ys = np.array(self.ys)
self.xs_pred = np.array(self.xs_pred)
self.Ps_pred = np.array(self.Ps_pred)
def _reshape_z(z, dim_z, ndim):
""" ensure z is a (dim_z, 1) shaped vector"""
z = np.atleast_2d(z)
if z.shape[1] == dim_z:
z = z.T
if z.shape != (dim_z, 1):
raise ValueError('z must be convertible to shape ({}, 1)'.format(dim_z))
if ndim == 1:
z = z[:, 0]
if ndim == 0:
z = z[0, 0]
return z
self.xs_prior = np.array(self.xs_prior)
self.Ps_prior = np.array(self.Ps_prior)
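For reference, the reshape_z helper shown in the hunk above (moved out of this module) normalizes a measurement into the shape the filter expects; a self-contained copy for illustration:

```python
import numpy as np

def reshape_z(z, dim_z, ndim):
    """Ensure z is a (dim_z, 1) column vector, then reduce it to match
    the dimensionality of the filter's state (copied from the diff)."""
    z = np.atleast_2d(z)
    if z.shape[1] == dim_z:
        z = z.T
    if z.shape != (dim_z, 1):
        raise ValueError('z must be convertible to shape ({}, 1)'.format(dim_z))
    if ndim == 1:
        z = z[:, 0]
    if ndim == 0:
        z = z[0, 0]
    return z

print(reshape_z(3.0, dim_z=1, ndim=0))       # 3.0  (scalar filter interface)
print(reshape_z([1., 2.], dim_z=2, ndim=1))  # [1. 2.]
```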

@@ -24,3 +24,4 @@ # -*- coding: utf-8 -*-

class MMAEFilterBank(object):
""" Implements the fixed Multiple Model Adaptive Estimator (MMAE). This
"""
Implements the fixed Multiple Model Adaptive Estimator (MMAE). This
is a bank of independent Kalman filters. This estimator computes the

@@ -30,2 +31,18 @@ likelihood that each filter is the correct one, and blends their state

Parameters
----------
filters : list of Kalman filters
List of Kalman filters.
p : list-like of floats
Initial probability that each filter is the correct one. In general
you'd probably set each element to 1./len(p).
dim_x : float
number of random variables in the state X
H : Measurement matrix
Examples

@@ -60,20 +77,2 @@ --------

def __init__(self, filters, p, dim_x, H=None):
    """ Creates a fixed MMAE Estimator.
Parameters
----------
filters : list of Kalman filters
List of Kalman filters.
p : list-like of floats
Initial probability that each filter is the correct one. In general
you'd probably set each element to 1./len(p).
dim_x : float
number of random variables in the state X
H : Measurement matrix
"""
if len(filters) != len(p):

@@ -88,9 +87,19 @@ raise ValueError('length of filters and p must be the same')

self.dim_x = dim_x
self.x = None
self.P = None
self.H = H
self.H = H[:]
# try to form reasonable initial values, but good luck!
try:
self.z = filters[0].z[:]
self.x = filters[0].x[:]
self.P = filters[0].P[:]
except AttributeError:
self.z = 0
self.x = None
self.P = None
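The probability blending the class docstring describes can be sketched in a few lines (an illustrative stand-in for the bank's update math, not the library code): each filter's prior probability is scaled by its measurement likelihood, renormalized, and the bank's state is the probability-weighted mix of the filters' estimates.

```python
import numpy as np

p = np.array([0.5, 0.5])                # prior prob. each model is correct
likelihoods = np.array([0.9, 0.1])      # per-filter measurement likelihoods
p = p * likelihoods
p = p / p.sum()                         # posterior model probabilities

states = np.array([[1., 0.],            # each filter's state estimate x
                   [3., 0.]])
x_blended = p @ states                  # probability-weighted state
print(p, x_blended)                     # [0.9 0.1] [1.2 0. ]
```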
def predict(self, u=0):
""" Predict next position using the Kalman filter state propagation
"""
Predict next position using the Kalman filter state propagation
equations for each filter in the bank.

@@ -159,2 +168,8 @@

try:
self.z = z[:]
except IndexError:
self.z = z
def __repr__(self):

@@ -161,0 +176,0 @@ return '\n'.join([

@@ -29,70 +29,92 @@ # -*- coding: utf-8 -*-

"""
Attributes
Create a Kalman filter which uses a square root implementation.
This uses the square root of the state covariance matrix, which doubles
the numerical precision of the filter, thereby reducing the effect
of round-off errors.
It is likely that you do not need to use this algorithm; we understand
divergence issues very well now. However, if you expect the covariance
matrix P to vary by 20 or more orders of magnitude then perhaps this
will be useful to you, as the square root will vary by 10 orders
of magnitude. From my point of view this is merely a 'reference'
algorithm; I have not used this code in real world software. Brown[1]
has a useful discussion of when you might need to use the square
root form of this algorithm.
You are responsible for setting the various state variables to
reasonable values; the defaults below will not give you a functional
filter.
Parameters
----------
x : ndarray (dim_x, 1), default = [0,0,0...0]
state of the filter
dim_x : int
Number of state variables for the Kalman filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
H : ndarray (dim_z, dim_x)
measurement function
This is used to set the default size of P, Q, and u
F : ndarray (dim_x, dim_x)
state transition matrix
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
B : ndarray (dim_x, dim_u), default 0
control transition matrix
dim_u : int (optional)
size of the control input, if it is being used.
Default value of 0 indicates it is not used.
Examples
--------
See my book Kalman and Bayesian Filters in Python
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
"""
def __init__(self, dim_x, dim_z, dim_u=0):
""" Create a Kalman filter which uses a square root implementation.
This uses the square root of the state covariance matrix, which doubles
the numerical precision of the filter, thereby reducing the effect
of round-off errors.
Attributes
----------
It is likely that you do not need to use this algorithm; we understand
divergence issues very well now. However, if you expect the covariance
matrix P to vary by 20 or more orders of magnitude then perhaps this
will be useful to you, as the square root will vary by 10 orders
of magnitude. From my point of view this is merely a 'reference'
algorithm; I have not used this code in real world software. Brown[1]
has a useful discussion of when you might need to use the square
root form of this algorithm.
x : numpy.array(dim_x, 1)
State estimate
You are responsible for setting the various state variables to
reasonable values; the defaults below will not give you a functional
filter.
P : numpy.array(dim_x, dim_x)
State covariance matrix
Parameters
----------
x_prior : numpy.array(dim_x, 1)
Prior (predicted) state estimate
dim_x : int
Number of state variables for the Kalman filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
P_prior : numpy.array(dim_x, dim_x)
Prior (predicted) state covariance matrix
This is used to set the default size of P, Q, and u
R : numpy.array(dim_z, dim_z)
Measurement noise matrix
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
Q : numpy.array(dim_x, dim_x)
Process noise matrix
dim_u : int (optional)
size of the control input, if it is being used.
Default value of 0 indicates it is not used.
F : numpy.array()
State Transition matrix
H : numpy.array(dim_z, dim_x)
Measurement function
y : numpy.array
Residual of the update step. Read only.
References
----------
K : numpy.array(dim_x, dim_z)
Kalman gain of the update step. Read only.
[1] Robert Grover Brown. Introduction to Random Signals and Applied
Kalman Filtering. Wiley and sons, 2012.
"""
S : numpy.array
System uncertainty projected to measurement space. Read only.
Examples
--------
See my book Kalman and Bayesian Filters in Python
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
References
----------
[1] Robert Grover Brown. Introduction to Random Signals and Applied
Kalman Filtering. Wiley and sons, 2012.
"""
def __init__(self, dim_x, dim_z, dim_u=0):
if dim_z < 1:

@@ -133,3 +155,7 @@ raise ValueError('dim_x must be 1 or greater')

# copy prior
self.x_prior = self.x[:]
self._P1_2_prior = self._P1_2[:]
def update(self, z, R2=None):

@@ -183,3 +209,5 @@ """

def predict(self, u=0):
""" Predict next position.
"""
Predict next state (prior) using the Kalman filter state propagation
equations.

@@ -189,3 +217,3 @@ Parameters

u : np.array
u : np.array, optional
Optional control vector. If non-zero, it is multiplied by B

@@ -202,3 +230,7 @@ to create the control input into the system.

# copy prior
self.x_prior = self.x[:]
self._P1_2_prior = self._P1_2[:]
def residual_of(self, z):

@@ -253,3 +285,8 @@ """ returns the residual for the given measurement (z). Does not alter

@property
def P_prior(self):
""" covariance matrix"""
return dot(self._P1_2_prior.T, self._P1_2_prior)
@property

@@ -256,0 +293,0 @@ def P1_2(self):

@@ -8,3 +8,3 @@ # -*- coding: utf-8 -*-

from filterpy.common import Saver
from filterpy.kalman import CubatureKalmanFilter as CKF

@@ -95,3 +95,2 @@ from filterpy.kalman import UnscentedKalmanFilter as UKF

def test_1d():
def fx(x, dt):

@@ -107,3 +106,2 @@ F = np.array([[1., dt],

ckf = CKF(dim_x=2, dim_z=1, dt=0.1, hx=hx, fx=fx)

@@ -129,3 +127,3 @@

s = Saver(kf)
for i in range(50):

@@ -141,2 +139,4 @@ z = np.array([[i+randn()*0.1]])

assert abs(ckf.x[1] -kf.x[1]) < 1e-10
s.save()
s.to_array()

@@ -143,0 +143,0 @@

@@ -23,4 +23,2 @@ # -*- coding: utf-8 -*-

import matplotlib.pyplot as plt
import numpy.random as random
from numpy.random import randn
from math import sqrt

@@ -31,38 +29,41 @@ import numpy as np

from filterpy.common import Saver
from filterpy.examples import RadarSim
DO_PLOT = False
def H_of(x):
""" compute Jacobian of H matrix for state x """
def test_ekf():
def H_of(x):
""" compute Jacobian of H matrix for state x """
horiz_dist = x[0]
altitude = x[2]
horiz_dist = x[0]
altitude = x[2]
denom = sqrt(horiz_dist**2 + altitude**2)
denom = sqrt(horiz_dist**2 + altitude**2)
# dh_ddist = horiz_dist/denom
# dh_dvel = 0
# dh_dalt = altitude/denom
return array ([[horiz_dist/denom, 0., altitude/denom]])
# dh_ddist = horiz_dist/denom
# dh_dvel = 0
# dh_dalt = altitude/denom
return array ([[horiz_dist/denom, 0., altitude/denom]])
def hx(x):
""" takes a state variable and returns the measurement that would
correspond to that state.
"""
def hx(x):
""" takes a state variable and returns the measurement that would
correspond to that state.
"""
return sqrt(x[0]**2 + x[2]**2)
return sqrt(x[0]**2 + x[2]**2)
dt = 0.05
proccess_error = 0.05
dt = 0.05
proccess_error = 0.05
rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
rk.F = eye(3) + array ([[0, 1, 0],
[0, 0, 0],
[0, 0, 0]])*dt
rk.F = eye(3) + array ([[0, 1, 0],
[0, 0, 0],
[0, 0, 0]])*dt

@@ -72,60 +73,60 @@

def fx(x, dt):
return np.dot(rk.F, x)
def fx(x, dt):
return np.dot(rk.F, x)
rk.x = array([-10., 90., 1100.])
rk.R *= 10
rk.Q = array([[0, 0, 0],
[0, 1, 0],
[0, 0, 1]]) * 0.001
rk.x = array([-10., 90., 1100.])
rk.R *= 10
rk.Q = array([[0, 0, 0],
[0, 1, 0],
[0, 0, 1]]) * 0.001
rk.P *= 50
rk.P *= 50
DO_PLOT = True
rs = []
xs = []
radar = RadarSim(dt)
ps = []
pos = []
rs = []
xs = []
radar = RadarSim(dt)
ps = []
s = Saver(rk)
for i in range(int(20/dt)):
z = radar.get_range(proccess_error)
pos.append(radar.pos)
pos = []
rk.update(asarray([z]), H_of, hx, R=hx(rk.x)*proccess_error)
ps.append(rk.P)
rk.predict()
for i in range(int(20/dt)):
z = radar.get_range(proccess_error)
pos.append(radar.pos)
xs.append(rk.x)
rs.append(z)
s.save()
s.to_array()
rk.update(asarray([z]), H_of, hx, R=hx(rk.x)*proccess_error)
ps.append(rk.P)
rk.predict()
xs = asarray(xs)
ps = asarray(ps)
rs = asarray(rs)
xs.append(rk.x)
rs.append(z)
p_pos = ps[:,0,0]
p_vel = ps[:,1,1]
p_alt = ps[:,2,2]
pos = asarray(pos)
xs = asarray(xs)
ps = asarray(ps)
rs = asarray(rs)
if DO_PLOT:
p_pos = ps[:,0,0]
p_vel = ps[:,1,1]
p_alt = ps[:,2,2]
pos = asarray(pos)
plt.subplot(311)
plt.plot(xs[:,0])
plt.ylabel('position')
if DO_PLOT:
plt.subplot(312)
plt.plot(xs[:,1])
plt.ylabel('velocity')
plt.subplot(311)
plt.plot(xs[:,0])
plt.ylabel('position')
plt.subplot(313)
#plt.plot(xs[:,2])
#plt.ylabel('altitude')
plt.subplot(312)
plt.plot(xs[:,1])
plt.ylabel('velocity')
plt.subplot(313)
#plt.plot(xs[:,2])
#plt.ylabel('altitude')
plt.plot(p_pos)
plt.plot(-p_pos)
plt.plot(xs[:,0]-pos)
plt.plot(p_pos)
plt.plot(-p_pos)
plt.plot(xs[:,0]-pos)

@@ -21,3 +21,3 @@ # -*- coding: utf-8 -*-

from filterpy.kalman import EnsembleKalmanFilter as EnKF
from filterpy.common import Q_discrete_white_noise
from filterpy.common import Q_discrete_white_noise, Saver
from math import cos, sin

@@ -48,4 +48,4 @@

ps = []
zs = []
s = Saver(f)
for t in range (0,100):

@@ -63,2 +63,4 @@ # create measurement = t plus white noise

ps.append(3*(f.P[0,0]**.5))
s.save()
s.to_array()

@@ -122,2 +124,5 @@ results = np.asarray(results)

#test that __repr__ doesn't assert
str(f)
results = np.asarray(results)

@@ -142,3 +147,3 @@ measurements = np.asarray(measurements)

DO_PLOT = True
#test_circle ()
test_circle ()
test_1d_const_vel()

@@ -145,0 +150,0 @@

@@ -114,4 +114,4 @@ # -*- coding: utf-8 -*-

M, P, *_ = kf.batch_filter(zs)
rts_x, *_ = kf.rts_smoother(M, P)
M, P, _, _ = kf.batch_filter(zs)
rts_x, _, _, _ = kf.rts_smoother(M, P)

@@ -118,0 +118,0 @@ xfl = xs[:,0].T[0]

@@ -27,3 +27,3 @@ # -*- coding: utf-8 -*-

def test_noisy_1d():
f = FadingKalmanFilter (5., dim_x=2, dim_z=1)
f = FadingKalmanFilter(5., dim_x=2, dim_z=1)

@@ -30,0 +30,0 @@ f.X = np.array([[2.],

@@ -26,5 +26,5 @@ # -*- coding: utf-8 -*-

import matplotlib.pyplot as plt
from filterpy.kalman import IMMEstimator, KalmanFilter, MMAEFilterBank
from numpy import array, asarray
from filterpy.common import Q_discrete_white_noise
from filterpy.kalman import IMMEstimator, KalmanFilter
from numpy import array
from filterpy.common import Q_discrete_white_noise, Saver
import matplotlib.pyplot as plt

@@ -233,4 +233,8 @@ from numpy.random import randn

# ensure __repr__ doesn't have problems
str(bank)
xs, probs = [], []
cvxs, caxs = [], []
s = Saver(bank)
for i, z in enumerate(zs[0:10]):

@@ -248,2 +252,4 @@ z = np.array([z]).T

probs.append(bank.mu.copy())
s.save()
s.to_array()

@@ -250,0 +256,0 @@ if DO_PLOT:

@@ -23,3 +23,4 @@ # -*- coding: utf-8 -*-

import matplotlib.pyplot as plt
from scipy.linalg import inv
from filterpy.common import Saver
from filterpy.kalman import KalmanFilter, InformationFilter

@@ -85,5 +86,9 @@

def test_1d():
f = KalmanFilter (dim_x=2, dim_z=1)
inf = InformationFilter (dim_x=2, dim_z=1)
f = KalmanFilter(dim_x=2, dim_z=1)
inf = InformationFilter(dim_x=2, dim_z=1)
# ensure __repr__ doesn't assert
str(inf)
f.x = np.array([[2.],

@@ -108,5 +113,4 @@ [0.]]) # initial state (location and velocity)

r2 = []
zs = []
s = Saver(inf)
for t in range (100):

@@ -128,5 +132,7 @@ # create measurement = t plus white noise

m.append(z)
s.save()
assert abs(f.x[0,0] - inf.x[0,0]) < 1.e-12
if DO_PLOT:

@@ -133,0 +139,0 @@ plt.plot(m)

@@ -25,3 +25,3 @@ # -*- coding: utf-8 -*-

from filterpy.kalman import KalmanFilter, update, predict, batch_filter
from filterpy.common import Q_discrete_white_noise, kinematic_kf
from filterpy.common import Q_discrete_white_noise, kinematic_kf, Saver
from scipy.linalg import block_diag, norm

@@ -126,3 +126,7 @@

f.P = np.eye(2)*100.
m,c,_,_ = f.batch_filter(zs,update_first=False)
s = Saver(f)
m,c,_,_ = f.batch_filter(zs,update_first=False, saver=s)
s.to_array()
assert len(s.x) == len(zs)
assert len(s.x) == len(s)

@@ -137,4 +141,2 @@ # plot data

["noisy measurement", "KF output", "ideal", "batch"], loc=4)
plt.show()

@@ -145,2 +147,3 @@

from scipy.linalg import inv
from numpy import dot
global ks

@@ -161,3 +164,2 @@ dt = 1.

measurements = []
results = []

@@ -172,12 +174,12 @@ xest = []

# perform kalman filtering
x = F @ x
P = F @ P @ F.T + Q
x = dot(F, x)
P = dot(dot(F, P), F.T) + Q
P2 = P.copy()
P2[0,1] = 0 # force there to be no correlation
P2[1,0] = 0
S = H @ P2 @ H.T + R
K = P2 @ H.T @inv(S)
y = z - H@x
x = x + K@y
P2[0, 1] = 0 # force there to be no correlation
P2[1, 0] = 0
S = dot(dot(H, P2), H.T) + R
K = dot(dot(P2, H.T), inv(S))
y = z - dot(H, x)
x = x + dot(K, y)

@@ -184,0 +186,0 @@ # save data

@@ -28,3 +28,3 @@ # -*- coding: utf-8 -*-

from numpy import array
from filterpy.common import Q_discrete_white_noise
from filterpy.common import Q_discrete_white_noise, Saver
import matplotlib.pyplot as plt

@@ -172,2 +172,3 @@ from numpy.random import randn

cvxs, caxs = [], []
s = Saver(bank)
for i, z in enumerate(z_xs):

@@ -180,4 +181,5 @@ bank.predict()

print(i, cv.likelihood, ca.likelihood, bank.p)
s.save()
probs.append(bank.p[0] / bank.p[1])
s.to_array()

@@ -184,0 +186,0 @@ if DO_PLOT:

@@ -25,4 +25,5 @@ # -*- coding: utf-8 -*-

if __name__ == '__main__':
DO_PLOT = False
def test_rts():
fk = KalmanFilter(dim_x=2, dim_z=1)

@@ -41,3 +42,3 @@

zs = [t + random.randn()*4 for t in range (40)]
zs = [t + random.randn()*4 for t in range(40)]

@@ -47,16 +48,19 @@ mu, cov, _, _ = fk.batch_filter (zs)

M,P,C = fk.rts_smoother(mu, cov)
M, P, _, _ = fk.rts_smoother(mu, cov)
if DO_PLOT:
p1, = plt.plot(zs,'cyan', alpha=0.5)
p2, = plt.plot(M[:,0],c='b')
p3, = plt.plot(mus,c='r')
p4, = plt.plot([0, len(zs)], [0, len(zs)], 'g') # perfect result
plt.legend([p1, p2, p3, p4],
["measurement", "RKS", "KF output", "ideal"], loc=4)
plt.show()
# plot data
p1, = plt.plot(zs,'cyan', alpha=0.5)
p2, = plt.plot (M[:,0],c='b')
p3, = plt.plot (mus,c='r')
p4, = plt.plot ([0,len(zs)],[0,len(zs)], 'g') # perfect result
plt.legend([p1,p2, p3, p4],
["measurement", "RKS", "KF output", "ideal"], loc=4)
plt.show()
if __name__ == '__main__':
DO_PLOT = True
test_rts()

@@ -153,1 +153,3 @@ # -*- coding: utf-8 -*-

single_measurement_test()
test_fusion()

@@ -23,2 +23,3 @@ # -*- coding: utf-8 -*-

import matplotlib.pyplot as plt
from filterpy.common import Saver
from filterpy.kalman import SquareRootKalmanFilter, KalmanFilter

@@ -54,2 +55,6 @@

# does __repr__ work?
str(fsq)
measurements = []

@@ -59,2 +64,3 @@ results = []

zs = []
s = Saver(fsq)
for t in range (100):

@@ -78,8 +84,5 @@ # create measurement = t plus white noise

measurements.append(z)
s.save()
s.to_array()
p = f.P - fsq.P
print(f.P)
print(fsq.P)
for i in range(f.P.shape[0]):

@@ -86,0 +89,0 @@ assert abs(f.P[i,i] - fsq.P[i,i]) < 0.01

@@ -24,3 +24,2 @@ # -*- coding: utf-8 -*-

from __future__ import (absolute_import, division, print_function,

@@ -40,3 +39,3 @@ unicode_literals)

KalmanFilter)
from filterpy.common import Q_discrete_white_noise
from filterpy.common import Q_discrete_white_noise, Saver
import filterpy.stats as stats

@@ -204,2 +203,4 @@

kf = UKF(3, 1, dt, fx=fx, hx=hx, points=sp)
assert np.allclose(kf.x, kf.x_prior)
assert np.allclose(kf.P, kf.P_prior)

@@ -268,3 +269,3 @@ # test __repr__ doesn't crash

kf.x = np.array([-1., 1., -1., 1])
kf.P *= 0.0001
kf.P *= 1.1

@@ -275,6 +276,3 @@

zs = []
for i in range(20):
z = np.array([i+randn()*0.1, i+randn()*0.1])
zs.append(z)
zs = [[i+randn()*0.1, i+randn()*0.1] for i in range(20)]

@@ -909,2 +907,5 @@ Ms, Ps = kf.batch_filter(zs)

ukf.Q = tc[:]
s = Saver(ukf)
s.save()
s.to_array()

@@ -955,4 +956,46 @@

def _test_log_likelihood():
from filterpy.common import Saver
def fx(x, dt):
F = np.array([[1, dt, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, dt],
[0, 0, 0, 1]], dtype=float)
return np.dot(F, x)
def hx(x):
return np.array([x[0], x[2]])
dt = 0.1
points = MerweScaledSigmaPoints(4, .1, 2., -1)
kf = UKF(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
z_std = 0.1
kf.R = np.diag([z_std**2, z_std**2]) # 1 standard
kf.Q = Q_discrete_white_noise(dim=2, dt=dt, var=1.1**2, block_size=2)
kf.x = np.array([-1., 1., -1., 1])
kf.P *= 1.
zs = [[i+randn()*z_std, i+randn()*z_std] for i in range(40)]
s = Saver(kf)
for z in zs:
kf.predict()
kf.update(z)
print(kf.x, kf.log_likelihood, kf.P.diagonal())
s.save()
s.to_array()
plt.plot(s.x[:, 0], s.x[:, 2])
if __name__ == "__main__":
_test_log_likelihood()
test_linear_rts()

@@ -1000,1 +1043,2 @@

#plt.show()

@@ -33,3 +33,3 @@ # -*- coding: utf-8 -*-

# pylint: disable=too-many-instance-attributes
# pylint: disable=C0103
# pylint: disable=invalid-name
r"""

@@ -41,5 +41,99 @@ Implements the Scaled Unscented Kalman filter (UKF) as defined by

You will have to set the following attributes after constructing this
object for the filter to perform properly.
Parameters
----------
dim_x : int
Number of state variables for the filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
dt : float
Time between steps in seconds.
hx : function(x)
Measurement function. Converts state vector x into a measurement
vector of shape (dim_z).
fx : function(x,dt)
function that returns the state x transformed by the
state transition function. dt is the time step in seconds.
points : class
Class which computes the sigma points and weights for a UKF
algorithm. You can vary the UKF implementation by changing this
class. For example, MerweScaledSigmaPoints implements the alpha,
beta, kappa parameterization of Van der Merwe, and
JulierSigmaPoints implements Julier's original kappa
parameterization. See either of those for the required
signature of this class if you want to implement your own.
sqrt_fn : callable(ndarray), default=None (implies scipy.linalg.cholesky)
Defines how we compute the square root of a matrix, which has
no unique answer. Cholesky is the default choice due to its
speed. Typically your alternative choice will be
scipy.linalg.sqrtm. Different choices affect how the sigma points
are arranged relative to the eigenvectors of the covariance matrix.
Usually this will not matter to you; if so the default cholesky()
yields maximal performance. As of van der Merwe's dissertation of
2004 [6] this was not a well researched area so I have no advice
to give you.
If your method returns a triangular matrix it must be upper
triangular. Do not use numpy.linalg.cholesky - for historical
reasons it returns a lower triangular matrix. The SciPy version
does the right thing as far as this class is concerned.
x_mean_fn : callable (sigma_points, weights), optional
Function that computes the mean of the provided sigma points
and weights. Use this if your state variable contains nonlinear
values such as angles which cannot be summed.
.. code-block:: Python
def state_mean(sigmas, Wm):
x = np.zeros(3)
sum_sin, sum_cos = 0., 0.
for i in range(len(sigmas)):
s = sigmas[i]
x[0] += s[0] * Wm[i]
x[1] += s[1] * Wm[i]
sum_sin += sin(s[2])*Wm[i]
sum_cos += cos(s[2])*Wm[i]
x[2] = atan2(sum_sin, sum_cos)
return x
z_mean_fn : callable (sigma_points, weights), optional
Same as x_mean_fn, except it is called for sigma points which
form the measurements after being passed through hx().
residual_x : callable (x, y), optional
residual_z : callable (x, y), optional
Function that computes the residual (difference) between x and y.
You will have to supply this if your state variable cannot support
subtraction, such as angles (359-1 degrees is -2, not 358). x and y
are state vectors, not scalars. One is for the state variable,
the other is for the measurement state.
.. code-block:: Python
def residual(a, b):
y = a[0] - b[0]
if y > np.pi:
y -= 2*np.pi
if y < -np.pi:
y += 2*np.pi
return y
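A runnable version of the angle-residual idea, with the wrap into (-pi, pi] written out (a sketch for illustration, not the library code):

```python
import numpy as np

def residual(a, b):
    # wrap the angular difference into (-pi, pi]
    y = a[0] - b[0]
    if y > np.pi:
        y -= 2 * np.pi
    if y < -np.pi:
        y += 2 * np.pi
    return y

a = np.array([np.radians(359.)])
b = np.array([np.radians(1.)])
print(np.degrees(residual(a, b)))   # approximately -2, not 358
```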
compute_log_likelihood : bool (default = True)
Computes log likelihood by default, but this can be a slow
computation, so if you never use it you can turn this computation
off.
Attributes

@@ -54,2 +148,8 @@ ----------

x_prior : numpy.array(dim_x)
prior (predicted) state estimate vector
P_prior : numpy.array(dim_x, dim_x)
prior (predicted) state estimate covariance
R : numpy.array(dim_z, dim_z)

@@ -67,23 +167,65 @@ measurement noise matrix

x : numpy.array(dim_x)
predicted/updated state (result of predict()/update())
P : numpy.array(dim_x, dim_x)
predicted/updated covariance matrix (result of predict()/update())
log_likelihood : scalar
Log likelihood of last measurement update.
likelihood : float
likelihood of last measurement. Read only.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
inv : function, default numpy.linalg.inv
If you prefer another inverse function, such as the Moore-Penrose
pseudo inverse, set it to that instead: kf.inv = np.linalg.pinv
pseudo inverse, set it to that instead:
.. code-block:: Python
kf.inv = np.linalg.pinv
Examples
--------
See my book Kalman and Bayesian Filters in Python
Simple example of a linear order 1 kinematic filter in 2D. There is no
need to use a UKF for this example, but it is easy to read.
>>> def fx(x, dt):
>>> # state transition function - predict next state based
>>> # on constant velocity model x = vt + x_0
>>> F = np.array([[1, dt, 0, 0],
>>> [0, 1, 0, 0],
>>> [0, 0, 1, dt],
>>> [0, 0, 0, 1]], dtype=float)
>>> return np.dot(F, x)
>>>
>>> def hx(x):
>>> # measurement function - convert state into a measurement
>>> # where measurements are [x_pos, y_pos]
>>> return np.array([x[0], x[2]])
>>>
>>> dt = 0.1
>>> # create sigma points to use in the filter. This is standard for Gaussian processes
>>> points = MerweScaledSigmaPoints(4, alpha=.1, beta=2., kappa=-1)
>>>
>>> kf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
>>> kf.x = np.array([-1., 1., -1., 1]) # initial state
>>> kf.P *= 0.2 # initial uncertainty
>>> z_std = 0.1
>>> kf.R = np.diag([z_std**2, z_std**2]) # 1 standard
>>> kf.Q = Q_discrete_white_noise(dim=2, dt=dt, var=0.01**2, block_size=2)
>>>
>>> zs = [[i+randn()*z_std, i+randn()*z_std] for i in range(50)] # measurements
>>> for z in zs:
>>> kf.predict()
>>> kf.update(z)
>>> print(kf.x, 'log-likelihood', kf.log_likelihood)
For in-depth explanations, see my book Kalman and Bayesian Filters in Python
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
Also see the filterpy/kalman/tests subdirectory for test code that
may be illuminating.

@@ -99,3 +241,2 @@ References

.. [2] E. A. Wan and R. Van der Merwe, “The unscented Kalman filter for

@@ -107,2 +248,19 @@ nonlinear estimation,” in Proc. Symp. Adaptive Syst. Signal

https://www.seas.harvard.edu/courses/cs281/papers/unscented.pdf
.. [3] S. Julier, J. Uhlmann, and H. Durrant-Whyte. "A new method for
the nonlinear transformation of means and covariances in filters
and estimators," IEEE Transactions on Automatic Control, 45(3),
pp. 477-482 (March 2000).
.. [4] E. A. Wan and R. Van der Merwe, “The Unscented Kalman filter for
Nonlinear Estimation,” in Proc. Symp. Adaptive Syst. Signal
Process., Commun. Contr., Lake Louise, AB, Canada, Oct. 2000.
https://www.seas.harvard.edu/courses/cs281/papers/unscented.pdf
.. [5] Wan, Merle "The Unscented Kalman Filter," chapter in *Kalman
Filtering and Neural Networks*, John Wiley & Sons, Inc., 2001.
.. [6] R. Van der Merwe "Sigma-Point Kalman Filters for Probabilistic
Inference in Dynamic State-Space Models" (Doctoral dissertation)
"""

@@ -120,117 +278,2 @@

Parameters
----------
dim_x : int
Number of state variables for the filter. For example, if
you are tracking the position and velocity of an object in two
dimensions, dim_x would be 4.
dim_z : int
Number of measurement inputs. For example, if the sensor
provides you with position in (x,y), dim_z would be 2.
dt : float
Time between steps in seconds.
hx : function(x)
Measurement function. Converts state vector x into a measurement
vector of shape (dim_z).
fx : function(x,dt)
function that returns the state x transformed by the
state transition function. dt is the time step in seconds.
points : class
Class which computes the sigma points and weights for a UKF
algorithm. You can vary the UKF implementation by changing this
class. For example, MerweScaledSigmaPoints implements the alpha,
beta, kappa parameterization of Van der Merwe, and
JulierSigmaPoints implements Julier's original kappa
parameterization. See either of those for the required
signature of this class if you want to implement your own.
sqrt_fn : callable(ndarray), default = scipy.linalg.cholesky
Defines how we compute the square root of a matrix, which has
no unique answer. Cholesky is the default choice due to its
speed. Typically your alternative choice will be
scipy.linalg.sqrtm. Different choices affect how the sigma points
are arranged relative to the eigenvectors of the covariance matrix.
Usually this will not matter to you; if so the default cholesky()
yields maximal performance. As of van der Merwe's dissertation of
2004 [6] this was not a well researched area so I have no advice
to give you.
If your method returns a triangular matrix it must be upper
triangular. Do not use numpy.linalg.cholesky - for historical
reasons it returns a lower triangular matrix. The SciPy version
does the right thing.
x_mean_fn : callable (sigma_points, weights), optional
Function that computes the mean of the provided sigma points
and weights. Use this if your state variable contains nonlinear
values such as angles which cannot be summed.
.. code-block:: Python
def state_mean(sigmas, Wm):
x = np.zeros(3)
sum_sin, sum_cos = 0., 0.
for i in range(len(sigmas)):
s = sigmas[i]
x[0] += s[0] * Wm[i]
x[1] += s[1] * Wm[i]
sum_sin += sin(s[2])*Wm[i]
sum_cos += cos(s[2])*Wm[i]
x[2] = atan2(sum_sin, sum_cos)
return x
z_mean_fn : callable (sigma_points, weights), optional
Same as x_mean_fn, except it is called for sigma points which
form the measurements after being passed through hx().
residual_x : callable (x, y), optional
residual_z : callable (x, y), optional
Function that computes the residual (difference) between x and y.
You will have to supply this if your state variable cannot support
subtraction, such as angles (359-1 degrees is -2, not 358). x and y
are state vectors, not scalars. One is for the state variable,
the other is for the measurement state.
.. code-block:: Python
def residual(a, b):
y = a[0] - b[0]
if y > np.pi:
y -= 2*np.pi
if y < -np.pi:
y += 2*np.pi
return y
compute_log_likelihood : bool (default = True)
Computes log likelihood by default, but this can be a slow
computation, so if you never use it you can turn this computation
off.
References
----------
.. [3] S. Julier, J. Uhlmann, and H. Durrant-Whyte. "A new method for
the nonlinear transformation of means and covariances in filters
and estimators," IEEE Transactions on Automatic Control, 45(3),
pp. 477-482 (March 2000).
.. [4] E. A. Wan and R. Van der Merwe, “The Unscented Kalman filter for
Nonlinear Estimation,” in Proc. Symp. Adaptive Syst. Signal
Process., Commun. Contr., Lake Louise, AB, Canada, Oct. 2000.
https://www.seas.harvard.edu/courses/cs281/papers/unscented.pdf
.. [5] Wan, Merle "The Unscented Kalman Filter," chapter in *Kalman
Filtering and Neural Networks*, John Wiley & Sons, Inc., 2001.
.. [6] R. Van der Merwe "Sigma-Point Kalman Filters for Probabilistic
Inference in Dynamic State-Space Models" (Doctoral dissertation)
"""

@@ -240,6 +283,8 @@

self.x = zeros(dim_x)
self.P = eye(dim_x)
self.x_prior = self.x[:]
self.P_prior = self.P[:]
self.Q = eye(dim_x)
self.R = eye(dim_z)
self.x = zeros(dim_x)
self.P = eye(dim_x)
self._dim_x = dim_x

@@ -257,2 +302,3 @@ self._dim_z = dim_z

self.log_likelihood = math.log(sys.float_info.min)
self.likelihood = sys.float_info.min

@@ -283,8 +329,12 @@ if sqrt_fn is None:

self.K = 0. # Kalman gain
self.y = 0. # residual
self.K = np.zeros((dim_x, dim_z)) # Kalman gain
self.y = np.zeros((dim_z)) # residual
self.inv = np.linalg.inv
# save prior
self.x_prior = self.x[:]
self.P_prior = self.P[:]
def predict(self, dt=None, UT=None, fx_args=()):

@@ -328,6 +378,9 @@ r"""

#and pass sigmas through the unscented transform
#and pass sigmas through the unscented transform to compute prior
self.x, self.P = UT(self.sigmas_f, self.Wm, self.Wc, self.Q,
self.x_mean, self.residual_x)
# save prior
self.x_prior = self.x[:]
self.P_prior = self.P[:]
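An aside on the `self.x[:]` snapshots above: for NumPy arrays, basic slicing returns a *view* rather than a copy, so if `x` is later modified in place, a snapshot taken with `[:]` tracks that change; a true frozen prior requires `.copy()`. A minimal illustration (generic NumPy behavior, not filterpy code):

```python
import numpy as np

x = np.zeros(3)
x_prior_view = x[:]      # basic slicing of an ndarray returns a VIEW
x_prior_copy = x.copy()  # an independent snapshot

x[0] = 1.0               # in-place update of the state
print(x_prior_view[0])   # 1.0, the view reflects the update
print(x_prior_copy[0])   # 0.0, the copy preserved the prior
```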

@@ -394,2 +447,5 @@

self.log_likelihood = logpdf(x=self.y, cov=Pz)
self.likelihood = math.exp(self.log_likelihood)
if self.likelihood == 0:
self.likelihood = sys.float_info.min
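The clamp above matters because `exp()` of a strongly negative log-likelihood underflows to exactly 0.0, which silently zeros out any downstream multiplication. A minimal illustration of the same pattern:

```python
import math
import sys

log_likelihood = -28000.0               # plausible value for a very poor fit
likelihood = math.exp(log_likelihood)   # underflows to 0.0 in double precision
if likelihood == 0:
    likelihood = sys.float_info.min     # smallest positive normal float

print(likelihood)                       # tiny but nonzero
```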

@@ -426,24 +482,4 @@

@property
def likelihood(self):
def batch_filter(self, zs, Rs=None, UT=None, saver=None):
"""
likelihood of last measurement.
Computed from the log-likelihood. The log-likelihood can be very
small, meaning a large negative value such as -28000. Taking the
exp() of that results in 0.0, which can break typical algorithms
which multiply by this value, so by default we always return a
number >= sys.float_info.min.
But really, this is a bad measure because of the scaling that is
involved - try to use log-likelihood in your equations!"""
lh = math.exp(self.log_likelihood)
if lh == 0:
lh = sys.float_info.min
return lh
def batch_filter(self, zs, Rs=None, UT=None):
"""
Performs the UKF filter over the list of measurements in `zs`.

@@ -469,2 +505,6 @@

saver : filterpy.common.Saver, optional
filterpy.common.Saver object. If provided, saver.save() will be
called after every epoch.
Returns

@@ -515,2 +555,5 @@ -------

if saver is not None:
saver.save()
return (means, covariances)
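The new `saver` hook is easiest to see in isolation. The sketch below mimics the documented contract (an object whose `save()` method is called once per processed measurement) with toy stand-in classes; the real `filterpy.common.Saver` snapshots all attributes of the filter it wraps:

```python
class MiniSaver:
    """Toy stand-in for filterpy.common.Saver: records self.obj.x on save()."""
    def __init__(self, obj):
        self.obj = obj
        self.x_history = []

    def save(self):
        # snapshot the attribute(s) of interest after each epoch
        self.x_history.append(self.obj.x)

class MiniFilter:
    """Toy filter with a batch_filter() accepting an optional saver."""
    def __init__(self):
        self.x = 0.0

    def batch_filter(self, zs, saver=None):
        means = []
        for z in zs:
            self.x = 0.5 * self.x + 0.5 * z   # toy exponential-smoothing update
            means.append(self.x)
            if saver is not None:
                saver.save()                  # one snapshot per measurement
        return means

f = MiniFilter()
s = MiniSaver(f)
means = f.batch_filter([1.0, 2.0, 3.0], saver=s)
print(s.x_history)   # state after each of the three epochs
```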

@@ -623,2 +666,4 @@

pretty_str('P', self.P),
pretty_str('x_prior', self.x_prior),
pretty_str('P_prior', self.P_prior),
pretty_str('Q', self.Q),

@@ -629,2 +674,3 @@ pretty_str('R', self.R),

pretty_str('log-likelihood', self.log_likelihood),
pretty_str('likelihood', self.likelihood),
pretty_str('sigmas_f', self.sigmas_f),

@@ -631,0 +677,0 @@ pretty_str('h', self.sigmas_h),

@@ -35,3 +35,2 @@ # -*- coding: utf-8 -*-

It is implemented to be directly callable like a function. See examples.

@@ -78,3 +77,3 @@ Parameters

z = sensor_reading() # get a measurement
x = lsq(z) # get the filtered estimate.
x = lsq.update(z) # get the filtered estimate.
print('error: {}, velocity error: {}'.format(lsq.error, lsq.derror))

@@ -87,7 +86,2 @@

Approach." Third Edition. AIAA, 2009.
|
|
**Methods**
"""

@@ -94,0 +88,0 @@

@@ -788,2 +788,3 @@ # -*- coding: utf-8 -*-

def _std_tuple_of(var=None, std=None, interval=None):

@@ -790,0 +791,0 @@ """
