filterpy
+269
-267
| Metadata-Version: 1.1 | ||
| Name: filterpy | ||
| Version: 1.4.4 | ||
| Version: 1.4.5 | ||
| Summary: Kalman filtering and optimal estimation library | ||
@@ -9,269 +9,271 @@ Home-page: https://github.com/rlabbe/filterpy | ||
| License: MIT | ||
| Description: FilterPy - Kalman filters and other optimal and non-optimal estimation filters in Python. | ||
| ----------------------------------------------------------------------------------------- | ||
| .. image:: https://img.shields.io/pypi/v/filterpy.svg | ||
| :target: https://pypi.python.org/pypi/filterpy | ||
| This library provides Kalman filtering and various related optimal and | ||
| non-optimal filtering software written in Python. It contains Kalman | ||
| filters, Extended Kalman filters, Unscented Kalman filters, Kalman | ||
| smoothers, Least Squares filters, fading memory filters, g-h filters, | ||
| discrete Bayes, and more. | ||
| This is code I am developing in conjunction with my book Kalman and | ||
| Bayesian Filters in Python, which you can read/download at | ||
| https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/ | ||
| My aim is largely pedagogical - I opt for clear code that matches the | ||
| equations in the relevant texts on a 1-to-1 basis, even when that has a | ||
| performance cost. There are places where this tradeoff is unclear - for | ||
| example, I find it somewhat clearer to write a small set of equations | ||
| using linear algebra, but numpy's overhead on small matrices makes it | ||
| run slower than writing each equation out by hand. Furthermore, books | ||
| such as Zarchan present the written-out form, not the linear algebra form. | ||
| It is hard for me to choose which presentation is 'clearer' - it depends | ||
| on the audience. In that case I usually opt for the faster implementation. | ||
| I use NumPy and SciPy for all of the computations. I have experimented | ||
| with Numba and it yields impressive speed ups with minimal costs, but I | ||
| am not convinced that I want to add that requirement to my project. It | ||
| is still on my list of things to figure out, however. | ||
| Sphinx generated documentation lives at http://filterpy.readthedocs.org/. | ||
| Generation is triggered by git when I do a check in, so this will always | ||
| be bleeding edge development version - it will often be ahead of the | ||
| released version. | ||
| Plan for dropping Python 2.7 support | ||
| ------------------------------------ | ||
| I haven't finalized my decision on this, but NumPy is dropping | ||
| Python 2.7 support in December 2018. I will certainly drop Python | ||
| 2.7 support by then; I will probably do it much sooner. | ||
| At the moment FilterPy is on version 1.x. I plan to fork the project | ||
| to version 2.0, and support only Python 3.5+. The 1.x version | ||
| will still be available, but I will not support it. If I add something | ||
| amazing to 2.0 and someone really begs, I might backport it; more | ||
| likely I would accept a pull request with the feature backported | ||
| to 1.x. But to be honest I don't foresee this happening. | ||
| Why 3.5+, and not 3.4+? 3.5 introduced the matrix multiply symbol, | ||
| and I want my code to take advantage of it. Plus, to be honest, | ||
| I'm being selfish. I don't want to spend my life supporting this | ||
| package, and moving as far into the present as possible means | ||
| a few extra years before the Python version I choose becomes | ||
| hopelessly dated and a liability. I recognize this makes life | ||
| more painful for people running the default Python in their | ||
| linux distribution. All I can say is I did not decide to do the Python | ||
| 3 fork, and I don't have the time to support the bifurcation | ||
| any longer. | ||
| I am making edits to the package now in support of my book; | ||
| once those are done I'll probably create the 2.0 branch. | ||
| I'm contemplating a SLAM addition to the book, and am not | ||
| sure if I will do this in 3.5+ only or not. | ||
| Installation | ||
| ------------ | ||
| The most general installation is just to use pip, which should come with | ||
| any modern Python distribution. | ||
| .. image:: https://img.shields.io/pypi/v/filterpy.svg | ||
| :target: https://pypi.python.org/pypi/filterpy | ||
| :: | ||
| pip install filterpy | ||
| If you prefer to download the source yourself | ||
| :: | ||
| cd <directory you want to install to> | ||
| git clone http://github.com/rlabbe/filterpy | ||
| python setup.py install | ||
| If you use Anaconda, you can install from the conda-forge channel. You | ||
| will need to add the conda-forge channel if you haven't already done so: | ||
| :: | ||
| conda config --add channels conda-forge | ||
| and then install with: | ||
| :: | ||
| conda install filterpy | ||
| And, if you want to install from the bleeding edge git version | ||
| :: | ||
| pip install git+https://github.com/rlabbe/filterpy.git | ||
| Note: I make no guarantees that everything works if you install from here. | ||
| I'm the only developer, and so I don't worry about dev/release branches and | ||
| the like. Unless I fix a bug for you and tell you to get this version because | ||
| I haven't made a new release yet, I strongly advise not installing from git. | ||
| Basic use | ||
| --------- | ||
| Full documentation is at | ||
| https://filterpy.readthedocs.io/en/latest/ | ||
| First, import the filters and helper functions. | ||
| .. code-block:: python | ||
| import numpy as np | ||
| from filterpy.kalman import KalmanFilter | ||
| from filterpy.common import Q_discrete_white_noise | ||
| Now, create the filter | ||
| .. code-block:: python | ||
| my_filter = KalmanFilter(dim_x=2, dim_z=1) | ||
| Initialize the filter's matrices. | ||
| .. code-block:: python | ||
| my_filter.x = np.array([[2.], | ||
| [0.]]) # initial state (location and velocity) | ||
| my_filter.F = np.array([[1.,1.], | ||
| [0.,1.]]) # state transition matrix | ||
| my_filter.H = np.array([[1.,0.]]) # Measurement function | ||
| my_filter.P *= 1000. # covariance matrix | ||
| my_filter.R = 5 # state uncertainty | ||
| my_filter.Q = Q_discrete_white_noise(dim=2, dt=0.1, var=.1) # process uncertainty | ||
| Finally, run the filter. | ||
| .. code-block:: python | ||
| while True: | ||
| my_filter.predict() | ||
| my_filter.update(get_some_measurement()) | ||
| # do something with the output | ||
| x = my_filter.x | ||
| do_something_amazing(x) | ||
| Sorry, that is the extent of the documentation here. However, the library | ||
| is broken up into subdirectories: gh, kalman, memory, leastsq, and so on. | ||
| Each subdirectory contains python files relating to that form of filter. | ||
| The functions and methods contain pretty good docstrings on use. | ||
| My book https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/ | ||
| uses this library, and is the place to go if you are trying to learn | ||
| about Kalman filtering and/or this library. These two are not exactly in | ||
| sync - my normal development cycle is to add files here, test them, figure | ||
| out how to present them pedagogically, then write the appropriate section | ||
| or chapter in the book. So there is code here that is not discussed | ||
| yet in the book. | ||
| Requirements | ||
| ------------ | ||
| This library uses NumPy, SciPy, Matplotlib, and Python. | ||
| I haven't extensively tested backwards compatibility - I use the | ||
| Anaconda distribution, and so I am on Python 3.6 and 2.7.14, along with | ||
| whatever version of NumPy, SciPy, and matplotlib they provide. But I am | ||
| using pretty basic Python - numpy.array, maybe a list comprehension in | ||
| my tests. | ||
| I import from **__future__** to ensure the code works in Python 2 and 3. | ||
| Testing | ||
| ------- | ||
| All tests are written to work with py.test. Just type ``py.test`` at the | ||
| command line. | ||
| As explained above, the tests are not robust. I'm still at the stage | ||
| where visual plots are the best way to see how things are working. | ||
| Apologies, but I think it is a sound choice for development. It is easy | ||
| for a filter to perform within theoretical limits (which we can write a | ||
| non-visual test for) yet be 'off' in some way. The code itself contains | ||
| tests in the form of asserts and properties that ensure that arrays are | ||
| of the proper dimension, etc. | ||
| References | ||
| ---------- | ||
| I use three main texts as my references, though I do own the majority | ||
| of the Kalman filtering literature. First is Paul Zarchan's | ||
| 'Fundamentals of Kalman Filtering: A Practical Approach'. I think it is by | ||
| far the best Kalman filtering book out there if you are interested in | ||
| practical applications more than writing a thesis. The second book I use | ||
| is Eli Brookner's 'Tracking and Kalman Filtering Made Easy'. This is an | ||
| astonishingly good book; its first chapter is actually readable by the | ||
| layperson! Brookner starts from the g-h filter, and shows how all other | ||
| filters - the Kalman filter, least squares, fading memory, etc., all | ||
| derive from the g-h filter. It greatly simplifies many aspects of | ||
| analysis and/or intuitive understanding of your problem. In contrast, | ||
| Zarchan starts from least squares, and then moves on to Kalman | ||
| filtering. I find that he downplays the predict-update aspect of the | ||
| algorithms, but he has a wealth of worked examples and comparisons | ||
| between different methods. I think both viewpoints are needed, and so I | ||
| can't imagine discarding one book. Brookner also focuses on issues that | ||
| are ignored in other books - track initialization, detecting and | ||
| discarding noise, tracking multiple objects, and so on. | ||
| I said three books. I also like and use Bar-Shalom's Estimation with | ||
| Applications to Tracking and Navigation. Much more mathematical than the | ||
| previous two books, I would not recommend it as a first text unless you | ||
| already have a background in control theory or optimal estimation. Once | ||
| you have that experience, this book is a gem. Every sentence is crystal | ||
| clear, his language is precise, but each abstract mathematical statement | ||
| is followed with something like "and this means...". | ||
| License | ||
| ------- | ||
| .. image:: https://anaconda.org/rlabbe/filterpy/badges/license.svg | ||
| :target: https://anaconda.org/rlabbe/filterpy | ||
| The MIT License (MIT) | ||
| Copyright (c) 2015 Roger R. Labbe Jr | ||
| Permission is hereby granted, free of charge, to any person obtaining a copy | ||
| of this software and associated documentation files (the "Software"), to deal | ||
| in the Software without restriction, including without limitation the rights | ||
| to use, copy, modify, merge, publish, distribute, sublicense, and/or sell | ||
| copies of the Software, and to permit persons to whom the Software is | ||
| furnished to do so, subject to the following conditions: | ||
| The above copyright notice and this permission notice shall be included in | ||
| all copies or substantial portions of the Software. | ||
| THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR | ||
| IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, | ||
| FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE | ||
| AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER | ||
| LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, | ||
| OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN | ||
| THE SOFTWARE. | ||
| Description-Content-Type: UNKNOWN | ||
| Description: FilterPy - Kalman filters and other optimal and non-optimal estimation filters in Python. | ||
| ----------------------------------------------------------------------------------------- | ||
| .. image:: https://img.shields.io/pypi/v/filterpy.svg | ||
| :target: https://pypi.python.org/pypi/filterpy | ||
| **NOTE**: Imminent drop of support of Python 2.7, 3.4. See section below for details. | ||
| This library provides Kalman filtering and various related optimal and | ||
| non-optimal filtering software written in Python. It contains Kalman | ||
| filters, Extended Kalman filters, Unscented Kalman filters, Kalman | ||
| smoothers, Least Squares filters, fading memory filters, g-h filters, | ||
| discrete Bayes, and more. | ||
| This is code I am developing in conjunction with my book Kalman and | ||
| Bayesian Filters in Python, which you can read/download at | ||
| https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/ | ||
| My aim is largely pedagogical - I opt for clear code that matches the | ||
| equations in the relevant texts on a 1-to-1 basis, even when that has a | ||
| performance cost. There are places where this tradeoff is unclear - for | ||
| example, I find it somewhat clearer to write a small set of equations | ||
| using linear algebra, but numpy's overhead on small matrices makes it | ||
| run slower than writing each equation out by hand. Furthermore, books | ||
| such as Zarchan present the written-out form, not the linear algebra form. | ||
| It is hard for me to choose which presentation is 'clearer' - it depends | ||
| on the audience. In that case I usually opt for the faster implementation. | ||
| I use NumPy and SciPy for all of the computations. I have experimented | ||
| with Numba and it yields impressive speed ups with minimal costs, but I | ||
| am not convinced that I want to add that requirement to my project. It | ||
| is still on my list of things to figure out, however. | ||
| Sphinx generated documentation lives at http://filterpy.readthedocs.org/. | ||
| Generation is triggered by git when I do a check in, so this will always | ||
| be bleeding edge development version - it will often be ahead of the | ||
| released version. | ||
| Plan for dropping Python 2.7 support | ||
| ------------------------------------ | ||
| I haven't finalized my decision on this, but NumPy is dropping | ||
| Python 2.7 support in December 2018. I will certainly drop Python | ||
| 2.7 support by then; I will probably do it much sooner. | ||
| At the moment FilterPy is on version 1.x. I plan to fork the project | ||
| to version 2.0, and support only Python 3.5+. The 1.x version | ||
| will still be available, but I will not support it. If I add something | ||
| amazing to 2.0 and someone really begs, I might backport it; more | ||
| likely I would accept a pull request with the feature backported | ||
| to 1.x. But to be honest I don't foresee this happening. | ||
| Why 3.5+, and not 3.4+? 3.5 introduced the matrix multiply symbol, | ||
| and I want my code to take advantage of it. Plus, to be honest, | ||
| I'm being selfish. I don't want to spend my life supporting this | ||
| package, and moving as far into the present as possible means | ||
| a few extra years before the Python version I choose becomes | ||
| hopelessly dated and a liability. I recognize this makes life | ||
| more painful for people running the default Python in their | ||
| linux distribution. All I can say is I did not decide to do the Python | ||
| 3 fork, and I don't have the time to support the bifurcation | ||
| any longer. | ||
| I am making edits to the package now in support of my book; | ||
| once those are done I'll probably create the 2.0 branch. | ||
| I'm contemplating a SLAM addition to the book, and am not | ||
| sure if I will do this in 3.5+ only or not. | ||
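For readers unfamiliar with it, the matrix-multiply operator added in Python 3.5 (PEP 465) lets NumPy code like ``np.dot(F, x)`` be written as ``F @ x``. A trivial illustration, with made-up example matrices:

```python
import numpy as np

F = np.array([[1., 0.1],
              [0., 1.]])       # example state transition matrix
x = np.array([[2.],
              [0.]])           # example state vector

x_dot = np.dot(F, x)           # pre-3.5 style
x_at = F @ x                   # 3.5+ matrix-multiply operator

print(np.array_equal(x_dot, x_at))   # True
```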
| Installation | ||
| ------------ | ||
| The most general installation is just to use pip, which should come with | ||
| any modern Python distribution. | ||
| .. image:: https://img.shields.io/pypi/v/filterpy.svg | ||
| :target: https://pypi.python.org/pypi/filterpy | ||
| :: | ||
| pip install filterpy | ||
| If you prefer to download the source yourself | ||
| :: | ||
| cd <directory you want to install to> | ||
| git clone http://github.com/rlabbe/filterpy | ||
| python setup.py install | ||
| If you use Anaconda, you can install from the conda-forge channel. You | ||
| will need to add the conda-forge channel if you haven't already done so: | ||
| :: | ||
| conda config --add channels conda-forge | ||
| and then install with: | ||
| :: | ||
| conda install filterpy | ||
| And, if you want to install from the bleeding edge git version | ||
| :: | ||
| pip install git+https://github.com/rlabbe/filterpy.git | ||
| Note: I make no guarantees that everything works if you install from here. | ||
| I'm the only developer, and so I don't worry about dev/release branches and | ||
| the like. Unless I fix a bug for you and tell you to get this version because | ||
| I haven't made a new release yet, I strongly advise not installing from git. | ||
| Basic use | ||
| --------- | ||
| Full documentation is at | ||
| https://filterpy.readthedocs.io/en/latest/ | ||
| First, import the filters and helper functions. | ||
| .. code-block:: python | ||
| import numpy as np | ||
| from filterpy.kalman import KalmanFilter | ||
| from filterpy.common import Q_discrete_white_noise | ||
| Now, create the filter | ||
| .. code-block:: python | ||
| my_filter = KalmanFilter(dim_x=2, dim_z=1) | ||
| Initialize the filter's matrices. | ||
| .. code-block:: python | ||
| my_filter.x = np.array([[2.], | ||
| [0.]]) # initial state (location and velocity) | ||
| my_filter.F = np.array([[1.,1.], | ||
| [0.,1.]]) # state transition matrix | ||
| my_filter.H = np.array([[1.,0.]]) # Measurement function | ||
| my_filter.P *= 1000. # covariance matrix | ||
| my_filter.R = 5 # state uncertainty | ||
| my_filter.Q = Q_discrete_white_noise(dim=2, dt=0.1, var=.1) # process uncertainty | ||
| Finally, run the filter. | ||
| .. code-block:: python | ||
| while True: | ||
| my_filter.predict() | ||
| my_filter.update(get_some_measurement()) | ||
| # do something with the output | ||
| x = my_filter.x | ||
| do_something_amazing(x) | ||
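Putting the pieces above together, here is a self-contained sketch of the same predict/update cycle written in plain NumPy, so it runs even without FilterPy installed. This is the textbook Kalman filter equations, not FilterPy's actual implementation, and the simulated measurement stream (a constant-velocity track plus Gaussian noise) is an assumption made purely for the example:

```python
import numpy as np

dt = 0.1
F = np.array([[1., dt], [0., 1.]])   # state transition (position, velocity)
H = np.array([[1., 0.]])             # we measure position only
Q = np.eye(2) * 0.001                # process noise (illustrative value)
R = np.array([[5.]])                 # measurement noise

x = np.array([[2.], [0.]])           # initial state
P = np.eye(2) * 1000.                # large initial covariance

rng = np.random.default_rng(1)
for i in range(100):
    z = i * dt + rng.normal(scale=1.0)   # simulated noisy position
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(float(x[0, 0]))   # estimated position, near the true final position of 9.9
```

With FilterPy installed, the body of the loop collapses to the ``my_filter.predict()`` / ``my_filter.update(z)`` calls shown above.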
| Sorry, that is the extent of the documentation here. However, the library | ||
| is broken up into subdirectories: gh, kalman, memory, leastsq, and so on. | ||
| Each subdirectory contains python files relating to that form of filter. | ||
| The functions and methods contain pretty good docstrings on use. | ||
| My book https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/ | ||
| uses this library, and is the place to go if you are trying to learn | ||
| about Kalman filtering and/or this library. These two are not exactly in | ||
| sync - my normal development cycle is to add files here, test them, figure | ||
| out how to present them pedagogically, then write the appropriate section | ||
| or chapter in the book. So there is code here that is not discussed | ||
| yet in the book. | ||
| Requirements | ||
| ------------ | ||
| This library uses NumPy, SciPy, Matplotlib, and Python. | ||
| I haven't extensively tested backwards compatibility - I use the | ||
| Anaconda distribution, and so I am on Python 3.6 and 2.7.14, along with | ||
| whatever version of NumPy, SciPy, and matplotlib they provide. But I am | ||
| using pretty basic Python - numpy.array, maybe a list comprehension in | ||
| my tests. | ||
| I import from **__future__** to ensure the code works in Python 2 and 3. | ||
| Testing | ||
| ------- | ||
| All tests are written to work with py.test. Just type ``py.test`` at the | ||
| command line. | ||
| As explained above, the tests are not robust. I'm still at the stage | ||
| where visual plots are the best way to see how things are working. | ||
| Apologies, but I think it is a sound choice for development. It is easy | ||
| for a filter to perform within theoretical limits (which we can write a | ||
| non-visual test for) yet be 'off' in some way. The code itself contains | ||
| tests in the form of asserts and properties that ensure that arrays are | ||
| of the proper dimension, etc. | ||
| References | ||
| ---------- | ||
| I use three main texts as my references, though I do own the majority | ||
| of the Kalman filtering literature. First is Paul Zarchan's | ||
| 'Fundamentals of Kalman Filtering: A Practical Approach'. I think it is by | ||
| far the best Kalman filtering book out there if you are interested in | ||
| practical applications more than writing a thesis. The second book I use | ||
| is Eli Brookner's 'Tracking and Kalman Filtering Made Easy'. This is an | ||
| astonishingly good book; its first chapter is actually readable by the | ||
| layperson! Brookner starts from the g-h filter, and shows how all other | ||
| filters - the Kalman filter, least squares, fading memory, etc., all | ||
| derive from the g-h filter. It greatly simplifies many aspects of | ||
| analysis and/or intuitive understanding of your problem. In contrast, | ||
| Zarchan starts from least squares, and then moves on to Kalman | ||
| filtering. I find that he downplays the predict-update aspect of the | ||
| algorithms, but he has a wealth of worked examples and comparisons | ||
| between different methods. I think both viewpoints are needed, and so I | ||
| can't imagine discarding one book. Brookner also focuses on issues that | ||
| are ignored in other books - track initialization, detecting and | ||
| discarding noise, tracking multiple objects, and so on. | ||
| I said three books. I also like and use Bar-Shalom's Estimation with | ||
| Applications to Tracking and Navigation. Much more mathematical than the | ||
| previous two books, I would not recommend it as a first text unless you | ||
| already have a background in control theory or optimal estimation. Once | ||
| you have that experience, this book is a gem. Every sentence is crystal | ||
| clear, his language is precise, but each abstract mathematical statement | ||
| is followed with something like "and this means...". | ||
| License | ||
| ------- | ||
| .. image:: https://anaconda.org/rlabbe/filterpy/badges/license.svg | ||
| :target: https://anaconda.org/rlabbe/filterpy | ||
| The MIT License (MIT) | ||
| Copyright (c) 2015 Roger R. Labbe Jr | ||
| Permission is hereby granted, free of charge, to any person obtaining a copy | ||
| of this software and associated documentation files (the "Software"), to deal | ||
| in the Software without restriction, including without limitation the rights | ||
| to use, copy, modify, merge, publish, distribute, sublicense, and/or sell | ||
| copies of the Software, and to permit persons to whom the Software is | ||
| furnished to do so, subject to the following conditions: | ||
| The above copyright notice and this permission notice shall be included in | ||
| all copies or substantial portions of the Software. | ||
| THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR | ||
| IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, | ||
| FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE | ||
| AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER | ||
| LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, | ||
| OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN | ||
| THE SOFTWARE. | ||
| Keywords: Kalman filters filtering optimal estimation tracking | ||
@@ -278,0 +280,0 @@ Platform: UNKNOWN |
@@ -41,3 +41,2 @@ LICENSE | ||
| filterpy/kalman/fixed_lag_smoother.py | ||
| filterpy/kalman/i.py | ||
| filterpy/kalman/information_filter.py | ||
@@ -44,0 +43,0 @@ filterpy/kalman/kalman_filter.py |
@@ -17,2 +17,2 @@ # -*- coding: utf-8 -*- | ||
| __version__ = "1.4.4" | ||
| __version__ = "1.4.5" |
@@ -0,1 +1,13 @@ | ||
| Version 1.4.5 | ||
| ============= | ||
| * Removed deprecated filterpy.kalman.Saver class (use | ||
| filterpy.common.Saver instead) | ||
| * GitHub #165 Bug in computation of prior state x. | ||
| * Sped up computation in Cubature and Ensemble filters by using | ||
| einsum instead of a for loop. | ||
| Version 1.4.4 | ||
@@ -2,0 +14,0 @@ ============= |
@@ -361,1 +361,55 @@ # -*- coding: utf-8 -*- | ||
| return si | ||
| def outer_product_sum(A, B=None): | ||
| """ | ||
| Computes the sum of the outer products of the rows in A and B | ||
| P = \sum A[i] B[i].T for i in 0..N | ||
| Notionally: | ||
| P = 0 | ||
| for y in A: | ||
| P += np.outer(y, y) | ||
| This is a standard computation for sigma points used in the UKF, ensemble | ||
| Kalman filter, etc., where A would be the residual of the sigma points | ||
| and the filter's state or measurement. | ||
| The computation is vectorized, so it is much faster than the for loop | ||
| for large A. | ||
| Parameters | ||
| ---------- | ||
| A : np.array, shape (M, N) | ||
| rows of N-vectors to have the outer product summed | ||
| B : np.array, shape (M, N) | ||
| rows of N-vectors to have the outer product summed | ||
| If it is `None`, it is set to A. | ||
| Returns | ||
| ------- | ||
| P : np.array, shape(N, N) | ||
| sum of the outer product of the rows of A and B | ||
| Examples | ||
| -------- | ||
| Here sigmas is of shape (M, N), and x is of shape (N). The two sets of | ||
| code compute the same thing. | ||
| >>> P = outer_product_sum(sigmas - x) | ||
| >>> | ||
| >>> P = 0 | ||
| >>> for s in sigmas: | ||
| >>> y = s - x | ||
| >>> P += np.outer(y, y) | ||
| """ | ||
| if B is None: | ||
| B = A | ||
| outer = np.einsum('ij,ik->ijk', A, B) | ||
| return np.sum(outer, axis=0) |
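The einsum formulation above is easy to sanity-check. The snippet below re-implements the function standalone (the logic is copied, so FilterPy itself is not required) and compares it against the notional for loop from the docstring; note that for the common ``B = A`` case the result also equals ``Y.T @ Y``:

```python
import numpy as np

def outer_product_sum(A, B=None):
    """Vectorized sum of the outer products of the rows of A and B."""
    if B is None:
        B = A
    outer = np.einsum('ij,ik->ijk', A, B)
    return np.sum(outer, axis=0)

rng = np.random.default_rng(0)
sigmas = rng.standard_normal((1000, 3))
x = rng.standard_normal(3)

Y = sigmas - x                    # residuals, shape (1000, 3)
P1 = outer_product_sum(Y)

P2 = np.zeros((3, 3))             # the notional for loop
for y in Y:
    P2 += np.outer(y, y)

P3 = Y.T @ Y                      # equivalent matrix product when B is A

print(np.allclose(P1, P2) and np.allclose(P1, P3))   # True
```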
@@ -17,3 +17,3 @@ # -*- coding: utf-8 -*- | ||
| from filterpy.common import kinematic_kf, Saver, inv_diagonal | ||
| from filterpy.common import kinematic_kf, Saver, inv_diagonal, outer_product_sum | ||
@@ -186,2 +186,18 @@ import numpy as np | ||
| def test_outer_product(): | ||
| sigmas = np.random.randn(1000000, 2) | ||
| x = np.random.randn(2) | ||
| P1 = outer_product_sum(sigmas-x) | ||
| P2 = 0 | ||
| for s in sigmas: | ||
| y = s - x | ||
| P2 += np.outer(y, y) | ||
| assert np.allclose(P1, P2) | ||
| if __name__ == "__main__": | ||
@@ -188,0 +204,0 @@ #test_repeaters() |
@@ -29,3 +29,3 @@ # -*- coding: utf-8 -*- | ||
| from filterpy.stats import logpdf | ||
| from filterpy.common import pretty_str | ||
| from filterpy.common import pretty_str, outer_product_sum | ||
@@ -367,3 +367,2 @@ | ||
| # mean and covariance of prediction passed through unscented transform | ||
| #zp, Pz = UT(self.sigmas_h, self.Wm, self.Wc, R, self.z_mean, self.residual_z) | ||
| zp, self.S = ckf_transform(self.sigmas_h, R) | ||
@@ -373,13 +372,7 @@ self.SI = inv(self.S) | ||
| # compute cross variance of the state and the measurements | ||
| Pxz = zeros((self.dim_x, self.dim_z)) | ||
| m = self._num_sigmas # literature uses m for the scaling factor | ||
| xf = self.x.flatten() | ||
| zpf = zp.flatten() | ||
| for k in range(m): | ||
| dx = self.sigmas_f[k] - xf | ||
| dz = self.sigmas_h[k] - zpf | ||
| Pxz += outer(dx, dz) | ||
| Pxz = outer_product_sum(self.sigmas_f - xf, self.sigmas_h - zpf) / m | ||
| self.K = dot(Pxz, self.SI) # Kalman gain | ||
@@ -386,0 +379,0 @@ self.y = self.residual_z(z, zp) # residual |
@@ -26,5 +26,5 @@ # -*- coding: utf-8 -*- | ||
| import numpy as np | ||
| from numpy import dot, zeros, eye, outer | ||
| from numpy import array, zeros, eye, dot | ||
| from numpy.random import multivariate_normal | ||
| from filterpy.common import pretty_str | ||
| from filterpy.common import pretty_str, outer_product_sum | ||
@@ -137,4 +137,4 @@ | ||
| dt = 0.1 | ||
| f = EnKF(x=x, P=P, dim_z=1, dt=dt, N=8, | ||
| hx=hx, fx=fx) | ||
| f = EnsembleKalmanFilter(x=x, P=P, dim_z=1, dt=dt, | ||
| N=8, hx=hx, fx=fx) | ||
@@ -174,6 +174,6 @@ std_noise = 3. | ||
| self.fx = fx | ||
| self.K = np.zeros((dim_x, dim_z)) | ||
| self.z = np.array([[None]*self.dim_z]).T | ||
| self.S = np.zeros((dim_z, dim_z)) # system uncertainty | ||
| self.SI = np.zeros((dim_z, dim_z)) # inverse system uncertainty | ||
| self.K = zeros((dim_x, dim_z)) | ||
| self.z = array([[None] * self.dim_z]).T | ||
| self.S = zeros((dim_z, dim_z)) # system uncertainty | ||
| self.SI = zeros((dim_z, dim_z)) # inverse system uncertainty | ||
@@ -185,5 +185,6 @@ self.initialize(x, P) | ||
| # used to create error terms centered at 0 mean for state and measurement | ||
| self._mean = np.zeros(dim_x) | ||
| self._mean_z = np.zeros(dim_z) | ||
| # used to create error terms centered at 0 mean for | ||
| # state and measurement | ||
| self._mean = zeros(dim_x) | ||
| self._mean_z = zeros(dim_z) | ||
@@ -234,7 +235,7 @@ def initialize(self, x, P): | ||
| Optionally provide R to override the measurement noise for this | ||
| one call, otherwise self.R will be used. | ||
| one call, otherwise self.R will be used. | ||
| """ | ||
| if z is None: | ||
| self.z = np.array([[None]*self.dim_z]).T | ||
| self.z = array([[None]*self.dim_z]).T | ||
| self.x_post = self.x.copy() | ||
@@ -259,18 +260,10 @@ self.P_post = self.P.copy() | ||
| P_zz = 0 | ||
| for sigma in sigmas_h: | ||
| s = sigma - z_mean | ||
| P_zz += outer(s, s) | ||
| P_zz = P_zz / (N-1) + R | ||
| P_zz = (outer_product_sum(sigmas_h - z_mean) / (N-1)) + R | ||
| P_xz = outer_product_sum( | ||
| self.sigmas - self.x, sigmas_h - z_mean) / (N - 1) | ||
| self.S = P_zz | ||
| self.SI = self.inv(self.S) | ||
| self.K = dot(P_xz, self.SI) | ||
| P_xz = 0 | ||
| for i in range(N): | ||
| P_xz += outer(self.sigmas[i] - self.x, sigmas_h[i] - z_mean) | ||
| P_xz /= N-1 | ||
| self.K = dot(P_xz, self.inv(P_zz)) | ||
| e_r = multivariate_normal(self._mean_z, R, N) | ||
@@ -281,3 +274,3 @@ for i in range(N): | ||
| self.x = np.mean(self.sigmas, axis=0) | ||
| self.P = self.P - dot(dot(self.K, P_zz), self.K.T) | ||
| self.P = self.P - dot(dot(self.K, self.S), self.K.T) | ||
@@ -299,9 +292,5 @@ # save measurement and posterior state | ||
| P = 0 | ||
| for s in self.sigmas: | ||
| sx = s - self.x | ||
| P += outer(sx, sx) | ||
| self.x = np.mean(self.sigmas, axis=0) | ||
| self.P = outer_product_sum(self.sigmas - self.x) / (N - 1) | ||
| self.P = P / (N-1) | ||
| # save prior | ||
@@ -308,0 +297,0 @@ self.x_prior = np.copy(self.x) |
@@ -1755,75 +1755,1 @@ # -*- coding: utf-8 -*- | ||
| return (x, P, K, pP) | ||
| class Saver(object): | ||
| """ | ||
| Deprecated. Use filterpy.common.Saver instead. | ||
| Helper class to save the states of the KalmanFilter class. | ||
| Each time you call save() the current states are appended to lists. | ||
| Generally you would do this once per epoch - predict/update. | ||
| Once you are done filtering you can optionally call to_array() | ||
| to convert all of the lists to numpy arrays. You cannot safely call | ||
| save() after calling to_array(). | ||
| Examples | ||
| -------- | ||
| .. code-block:: Python | ||
| kf = KalmanFilter(...whatever) | ||
| # initialize kf here | ||
| saver = Saver(kf) # save data for kf filter | ||
| for z in zs: | ||
| kf.predict() | ||
| kf.update(z) | ||
| saver.save() | ||
| saver.to_array() | ||
| # plot the 0th element of the state | ||
| plt.plot(saver.xs[:, 0, 0]) | ||
| """ | ||
| def __init__(self, kf, save_current=True): | ||
| """ Construct the save object, optionally saving the current | ||
| state of the filter""" | ||
| warnings.warn( | ||
| 'Use filterpy.common.Saver instead of this, as it works for any filter class', | ||
| DeprecationWarning) | ||
| self.xs = [] | ||
| self.Ps = [] | ||
| self.Ks = [] | ||
| self.ys = [] | ||
| self.xs_prior = [] | ||
| self.Ps_prior = [] | ||
| self.kf = kf | ||
| if save_current: | ||
| self.save() | ||
| def save(self): | ||
| """ save the current state of the Kalman filter""" | ||
| kf = self.kf | ||
| self.xs.append(np.copy(kf.x)) | ||
| self.Ps.append(np.copy(kf.P)) | ||
| self.Ks.append(np.copy(kf.K)) | ||
| self.ys.append(np.copy(kf.y)) | ||
| self.xs_prior.append(np.copy(kf.x_prior)) | ||
| self.Ps_prior.append(np.copy(kf.P_prior)) | ||
| def to_array(self): | ||
| """ convert all of the lists into np.array""" | ||
| self.xs = np.array(self.xs) | ||
| self.Ps = np.array(self.Ps) | ||
| self.Ks = np.array(self.Ks) | ||
| self.ys = np.array(self.ys) | ||
| self.xs_prior = np.array(self.xs_prior) | ||
| self.Ps_prior = np.array(self.Ps_prior) |
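The deprecated class above simply snapshots filter attributes into lists and later stacks them into arrays. The pattern is generic enough to sketch without filterpy (a toy stand-in, not the `filterpy.common.Saver` API):

```python
import numpy as np

class MiniSaver:
    """Snapshot named attributes of an object into lists; stack with to_array()."""
    def __init__(self, obj, attrs):
        self.obj, self.attrs = obj, attrs
        self.history = {a: [] for a in attrs}

    def save(self):
        # np.copy guards against the filter mutating arrays in place later
        for a in self.attrs:
            self.history[a].append(np.copy(getattr(self.obj, a)))

    def to_array(self):
        for a in self.attrs:
            self.history[a] = np.array(self.history[a])

class DummyFilter:      # stand-in for a filter exposing x and P
    def __init__(self):
        self.x = np.zeros(2)
        self.P = np.eye(2)

f = DummyFilter()
saver = MiniSaver(f, ['x', 'P'])
for _ in range(3):
    f.x = f.x + 1.0     # fake one predict/update epoch
    saver.save()
saver.to_array()
print(saver.history['x'].shape)   # one row per saved epoch
```

As with the original class, calling `save()` after `to_array()` would fail, since the lists have been replaced by arrays.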
@@ -0,0 +0,0 @@ # -*- coding: utf-8 -*- |
+269
-267
| Metadata-Version: 1.1 | ||
| Name: filterpy | ||
| Version: 1.4.4 | ||
| Version: 1.4.5 | ||
| Summary: Kalman filtering and optimal estimation library | ||
@@ -9,269 +9,271 @@ Home-page: https://github.com/rlabbe/filterpy | ||
| License: MIT | ||
| Description: FilterPy - Kalman filters and other optimal and non-optimal estimation filters in Python. | ||
| ----------------------------------------------------------------------------------------- | ||
| .. image:: https://img.shields.io/pypi/v/filterpy.svg | ||
| :target: https://pypi.python.org/pypi/filterpy | ||
| This library provides Kalman filtering and various related optimal and | ||
| non-optimal filtering software written in Python. It contains Kalman | ||
| filters, Extended Kalman filters, Unscented Kalman filters, Kalman | ||
| smoothers, Least Squares filters, fading memory filters, g-h filters, | ||
| discrete Bayes, and more. | ||
| This is code I am developing in conjunction with my book Kalman and | ||
| Bayesian Filters in Python, which you can read/download at | ||
| https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/ | ||
| My aim is largely pedagogical - I opt for clear code that matches the | ||
| equations in the relevant texts on a 1-to-1 basis, even when that has a | ||
| performance cost. There are places where this tradeoff is unclear - for | ||
| example, I find it somewhat clearer to write a small set of equations | ||
| using linear algebra, but numpy's overhead on small matrices makes it | ||
| run slower than writing each equation out by hand. Furthermore, books | ||
| such as Zarchan present the written out form, not the linear algebra form. | ||
| It is hard for me to choose which presentation is 'clearer' - it depends | ||
| on the audience. In that case I usually opt for the faster implementation. | ||
| I use NumPy and SciPy for all of the computations. I have experimented | ||
| with Numba and it yields impressive speed ups with minimal costs, but I | ||
| am not convinced that I want to add that requirement to my project. It | ||
| is still on my list of things to figure out, however. | ||
| Sphinx generated documentation lives at http://filterpy.readthedocs.org/. | ||
| Generation is triggered by git when I do a check in, so this will always | ||
| be the bleeding-edge development version - it will often be ahead of the | ||
| released version. | ||
| Plan for dropping Python 2.7 support | ||
| ------------------------------------ | ||
| I haven't finalized my decision on this, but NumPy is dropping | ||
| Python 2.7 support in December 2018. I will certainly drop Python | ||
| 2.7 support by then; I will probably do it much sooner. | ||
| At the moment FilterPy is on version 1.x. I plan to fork the project | ||
| to version 2.0, and support only Python 3.5+. The 1.x version | ||
| will still be available, but I will not support it. If I add something | ||
| amazing to 2.0 and someone really begs, I might backport it; more | ||
| likely I would accept a pull request with the feature backported | ||
| to 1.x. But to be honest I don't foresee this happening. | ||
| Why 3.5+, and not 3.4+? 3.5 introduced the matrix multiply symbol, | ||
| and I want my code to take advantage of it. Plus, to be honest, | ||
| I'm being selfish. I don't want to spend my life supporting this | ||
| package, and moving as far into the present as possible means | ||
| a few extra years before the Python version I choose becomes | ||
| hopelessly dated and a liability. I recognize this makes life | ||
| more painful for people running the default Python in their Linux | ||
| distribution. All I can say is I did not decide to do the Python | ||
| 3 fork, and I don't have the time to support the bifurcation | ||
| any longer. | ||
| I am making edits to the package now in support of my book; | ||
| once those are done I'll probably create the 2.0 branch. | ||
| I'm contemplating a SLAM addition to the book, and am not | ||
| sure if I will do this in 3.5+ only or not. | ||
| Installation | ||
| ------------ | ||
| The most general installation is just to use pip, which should come with | ||
| any modern Python distribution. | ||
| .. image:: https://img.shields.io/pypi/v/filterpy.svg | ||
| :target: https://pypi.python.org/pypi/filterpy | ||
| :: | ||
| pip install filterpy | ||
| If you prefer to download the source yourself | ||
| :: | ||
| cd <directory you want to install to> | ||
| git clone http://github.com/rlabbe/filterpy | ||
| python setup.py install | ||
| If you use Anaconda, you can install from the conda-forge channel. You | ||
| will need to add the conda-forge channel if you haven't already done so: | ||
| :: | ||
| conda config --add channels conda-forge | ||
| and then install with: | ||
| :: | ||
| conda install filterpy | ||
| And, if you want to install from the bleeding edge git version | ||
| :: | ||
| pip install git+https://github.com/rlabbe/filterpy.git | ||
| Note: I make no guarantees that everything works if you install from here. | ||
| I'm the only developer, and so I don't worry about dev/release branches and | ||
| the like. Unless I fix a bug for you and tell you to get this version because | ||
| I haven't made a new release yet, I strongly advise not installing from git. | ||
| Basic use | ||
| --------- | ||
| Full documentation is at | ||
| https://filterpy.readthedocs.io/en/latest/ | ||
| First, import the filters and helper functions. | ||
| .. code-block:: python | ||
| import numpy as np | ||
| from filterpy.kalman import KalmanFilter | ||
| from filterpy.common import Q_discrete_white_noise | ||
| Now, create the filter | ||
| .. code-block:: python | ||
| my_filter = KalmanFilter(dim_x=2, dim_z=1) | ||
| Initialize the filter's matrices. | ||
| .. code-block:: python | ||
| my_filter.x = np.array([[2.], | ||
| [0.]]) # initial state (location and velocity) | ||
| my_filter.F = np.array([[1.,1.], | ||
| [0.,1.]]) # state transition matrix | ||
| my_filter.H = np.array([[1.,0.]]) # Measurement function | ||
| my_filter.P *= 1000. # covariance matrix | ||
| my_filter.R = 5 # state uncertainty | ||
| my_filter.Q = Q_discrete_white_noise(dim=2, dt=1., var=0.1) # process uncertainty | ||
| Finally, run the filter. | ||
| .. code-block:: python | ||
| while True: | ||
| my_filter.predict() | ||
| my_filter.update(get_some_measurement()) | ||
| # do something with the output | ||
| x = my_filter.x | ||
| do_something_amazing(x) | ||
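For readers without the library installed, the same predict/update cycle can be sketched in plain NumPy using the matrices from the example above. This is a dependency-free illustration, not the filterpy API: `dt` is taken as 1 to match F, `get_some_measurement` is replaced by a simulated track, and a small diagonal Q stands in for `Q_discrete_white_noise`.

```python
import numpy as np

# model from the example: constant-velocity state [position, velocity]
F = np.array([[1., 1.],
              [0., 1.]])      # state transition (dt = 1)
H = np.array([[1., 0.]])      # measure position only
x = np.array([[2.],
              [0.]])          # initial state
P = np.eye(2) * 1000.         # large initial uncertainty
R = np.array([[5.]])          # measurement noise
Q = np.eye(2) * 0.01          # stand-in for Q_discrete_white_noise

for k in range(50):
    z = np.array([[float(k)]])           # simulated target at position k
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                        # residual
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())   # position should end up near 49, velocity near 1
```

The filter starts knowing nothing about velocity, yet after a few dozen steps it has inferred the target's speed from the position measurements alone.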
| Sorry, that is the extent of the documentation here. However, the library | ||
| is broken up into subdirectories: gh, kalman, memory, leastsq, and so on. | ||
| Each subdirectory contains python files relating to that form of filter. | ||
| The functions and methods contain pretty good docstrings on use. | ||
| My book https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/ | ||
| uses this library, and is the place to go if you are trying to learn | ||
| about Kalman filtering and/or this library. These two are not exactly in | ||
| sync - my normal development cycle is to add files here, test them, figure | ||
| out how to present them pedalogically, then write the appropriate section | ||
| or chapter in the book. So there is code here that is not discussed | ||
| yet in the book. | ||
| Requirements | ||
| ------------ | ||
| This library uses NumPy, SciPy, Matplotlib, and Python. | ||
| I haven't extensively tested backwards compatibility - I use the | ||
| Anaconda distribution, and so I am on Python 3.6 and 2.7.14, along with | ||
| whatever version of NumPy, SciPy, and matplotlib they provide. But I am | ||
| using pretty basic Python - numpy.array, maybe a list comprehension in | ||
| my tests. | ||
| I import from **__future__** to ensure the code works in Python 2 and 3. | ||
| Testing | ||
| ------- | ||
| All tests are written to work with py.test. Just type ``py.test`` at the | ||
| command line. | ||
| As explained above, the tests are not robust. I'm still at the stage | ||
| where visual plots are the best way to see how things are working. | ||
| Apologies, but I think it is a sound choice for development. It is easy | ||
| for a filter to perform within theoretical limits (which we can write a | ||
| non-visual test for) yet be 'off' in some way. The code itself contains | ||
| tests in the form of asserts and properties that ensure that arrays are | ||
| of the proper dimension, etc. | ||
| References | ||
| ---------- | ||
| I use three main texts as my references, though I do own the majority | ||
| of the Kalman filtering literature. First is Paul Zarchan's | ||
| 'Fundamentals of Kalman Filtering: A Practical Approach'. I think it is by | ||
| far the best Kalman filtering book out there if you are interested in | ||
| practical applications more than writing a thesis. The second book I use | ||
| is Eli Brookner's 'Tracking and Kalman Filtering Made Easy'. This is an | ||
| astonishingly good book; its first chapter is actually readable by the | ||
| layperson! Brookner starts from the g-h filter, and shows how all other | ||
| filters - the Kalman filter, least squares, fading memory, etc., all | ||
| derive from the g-h filter. It greatly simplifies many aspects of | ||
| analysis and/or intuitive understanding of your problem. In contrast, | ||
| Zarchan starts from least squares, and then moves on to Kalman | ||
| filtering. I find that he downplays the predict-update aspect of the | ||
| algorithms, but he has a wealth of worked examples and comparisons | ||
| between different methods. I think both viewpoints are needed, and so I | ||
| can't imagine discarding one book. Brookner also focuses on issues that | ||
| are ignored in other books - track initialization, detecting and | ||
| discarding noise, tracking multiple objects, and so on. | ||
| I said three books. I also like and use Bar-Shalom's Estimation with | ||
| Applications to Tracking and Navigation. Much more mathematical than the | ||
| previous two books, I would not recommend it as a first text unless you | ||
| already have a background in control theory or optimal estimation. Once | ||
| you have that experience, this book is a gem. Every sentence is crystal | ||
| clear, his language is precise, but each abstract mathematical statement | ||
| is followed with something like "and this means...". | ||
| License | ||
| ------- | ||
| .. image:: https://anaconda.org/rlabbe/filterpy/badges/license.svg :target: https://anaconda.org/rlabbe/filterpy | ||
| The MIT License (MIT) | ||
| Copyright (c) 2015 Roger R. Labbe Jr | ||
| Permission is hereby granted, free of charge, to any person obtaining a copy | ||
| of this software and associated documentation files (the "Software"), to deal | ||
| in the Software without restriction, including without limitation the rights | ||
| to use, copy, modify, merge, publish, distribute, sublicense, and/or sell | ||
| copies of the Software, and to permit persons to whom the Software is | ||
| furnished to do so, subject to the following conditions: | ||
| The above copyright notice and this permission notice shall be included in | ||
| all copies or substantial portions of the Software. | ||
| THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR | ||
| IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, | ||
| FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE | ||
| AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER | ||
| LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, | ||
| OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN | ||
| THE SOFTWARE. | ||
| Description-Content-Type: UNKNOWN | ||
| Description: FilterPy - Kalman filters and other optimal and non-optimal estimation filters in Python. | ||
| ----------------------------------------------------------------------------------------- | ||
| .. image:: https://img.shields.io/pypi/v/filterpy.svg | ||
| :target: https://pypi.python.org/pypi/filterpy | ||
| **NOTE**: Imminent drop of support of Python 2.7, 3.4. See section below for details. | ||
| Keywords: Kalman filters filtering optimal estimation tracking | ||
@@ -278,0 +280,0 @@ Platform: UNKNOWN |
+2
-1
@@ -7,2 +7,3 @@ FilterPy - Kalman filters and other optimal and non-optimal estimation filters in Python. | ||
| **NOTE**: Imminent drop of support of Python 2.7, 3.4. See section below for details. | ||
@@ -178,3 +179,3 @@ This library provides Kalman filtering and various related optimal and | ||
| out how to present them pedalogically, then write the appropriate section | ||
| or chapterin the book. So there is code here that is not discussed | ||
| or chapter in the book. So there is code here that is not discussed | ||
| yet in the book. | ||
@@ -181,0 +182,0 @@ |
| # -*- coding: utf-8 -*- | ||
| # pylint: disable=invalid-name, too-many-instance-attributes | ||
| """Copyright 2015 Roger R Labbe Jr. | ||
| FilterPy library. | ||
| http://github.com/rlabbe/filterpy | ||
| Documentation at: | ||
| https://filterpy.readthedocs.org | ||
| Supporting book at: | ||
| https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python | ||
| This is licensed under an MIT license. See the readme.MD file | ||
| for more information. | ||
| """ | ||
| from __future__ import (absolute_import, division) | ||
| from copy import deepcopy | ||
| import math | ||
| import sys | ||
| import numpy as np | ||
| from numpy import dot, zeros, eye | ||
| from filterpy.stats import logpdf | ||
| from filterpy.common import pretty_str, reshape_z | ||
| class InformationFilter(object): | ||
| """ | ||
| Create a linear Information filter. Information filters propagate | ||
| the inverse of the Kalman filter's covariance matrix, which lets you | ||
| easily represent having no information at initialization. | ||
| You are responsible for setting the various state variables to reasonable | ||
| values; the defaults below will not give you a functional filter. | ||
| Parameters | ||
| ---------- | ||
| dim_x : int | ||
| Number of state variables for the filter. For example, if you | ||
| are tracking the position and velocity of an object in two | ||
| dimensions, dim_x would be 4. | ||
| This is used to set the default size of P, Q, and u | ||
| dim_z : int | ||
| Number of measurement inputs. For example, if the sensor | ||
| provides you with position in (x,y), dim_z would be 2. | ||
| dim_u : int (optional) | ||
| size of the control input, if it is being used. | ||
| Default value of 0 indicates it is not used. | ||
| Attributes | ||
| ---------- | ||
| x : numpy.array(dim_x, 1) | ||
| State estimate vector | ||
| P_inv : numpy.array(dim_x, dim_x) | ||
| inverse state covariance matrix | ||
| x_prior : numpy.array(dim_x, 1) | ||
| Prior (predicted) state estimate. The *_prior and *_post attributes | ||
| are for convenience; they store the prior and posterior of the | ||
| current epoch. Read Only. | ||
| P_inv_prior : numpy.array(dim_x, dim_x) | ||
| Inverse prior (predicted) state covariance matrix. Read Only. | ||
| x_post : numpy.array(dim_x, 1) | ||
| Posterior (updated) state estimate. Read Only. | ||
| P_inv_post : numpy.array(dim_x, dim_x) | ||
| Inverse posterior (updated) state covariance matrix. Read Only. | ||
| z : ndarray | ||
| Last measurement used in update(). Read only. | ||
| R_inv : numpy.array(dim_z, dim_z) | ||
| inverse of measurement noise matrix | ||
| Q : numpy.array(dim_x, dim_x) | ||
| Process noise matrix | ||
| H : numpy.array(dim_z, dim_x) | ||
| Measurement function | ||
| y : numpy.array | ||
| Residual of the update step. Read only. | ||
| K : numpy.array(dim_x, dim_z) | ||
| Kalman gain of the update step. Read only. | ||
| S : numpy.array | ||
| System uncertainty projected to measurement space. Read only. | ||
| log_likelihood : float | ||
| log-likelihood of the last measurement. Read only. | ||
| likelihood : float | ||
| likelihood of last measurement. Read only. | ||
| Computed from the log-likelihood. The log-likelihood can be very | ||
| small, meaning a large negative value such as -28000. Taking the | ||
| exp() of that results in 0.0, which can break typical algorithms | ||
| which multiply by this value, so by default we always return a | ||
| number >= sys.float_info.min. | ||
| inv : function, default numpy.linalg.inv | ||
| If you prefer another inverse function, such as the Moore-Penrose | ||
| pseudo inverse, set it to that instead: kf.inv = np.linalg.pinv | ||
| Examples | ||
| -------- | ||
| See my book Kalman and Bayesian Filters in Python | ||
| https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python | ||
| """ | ||
| def __init__(self, dim_x, dim_z, dim_u=0): | ||
| if dim_x < 1: | ||
| raise ValueError('dim_x must be 1 or greater') | ||
| if dim_z < 1: | ||
| raise ValueError('dim_z must be 1 or greater') | ||
| if dim_u < 0: | ||
| raise ValueError('dim_u must be 0 or greater') | ||
| self.dim_x = dim_x | ||
| self.dim_z = dim_z | ||
| self.dim_u = dim_u | ||
| self.x = zeros((dim_x, 1)) # state | ||
| self.P_inv = eye(dim_x) # uncertainty covariance | ||
| self.Q = eye(dim_x) # process uncertainty | ||
| self.B = 0. # control transition matrix | ||
| self._F = 0. # state transition matrix | ||
| self._F_inv = 0. # inverse state transition matrix | ||
| self.H = np.zeros((dim_z, dim_x)) # Measurement function | ||
| self.R_inv = eye(dim_z) # inverse of measurement noise | ||
| self.z = np.array([[None]*self.dim_z]).T | ||
| # gain and residual are computed during the innovation step. We | ||
| # save them so that in case you want to inspect them for various | ||
| # purposes | ||
| self.K = 0. # kalman gain | ||
| self.y = zeros((dim_z, 1)) | ||
| self.z = zeros((dim_z, 1)) | ||
| self.SI = np.zeros((dim_z, dim_z)) # inverse system uncertainty | ||
| self._I = np.eye(dim_x) # identity matrix. | ||
| self._no_information = False | ||
| self.inv = np.linalg.inv | ||
| # save priors and posteriors | ||
| self.x_prior = np.copy(self.x) | ||
| self.P_inv_prior = np.copy(self.P_inv) | ||
| self.x_post = np.copy(self.x) | ||
| self.P_inv_post = np.copy(self.P_inv) | ||
| # only computed if requested via property | ||
| self._log_likelihood = math.log(sys.float_info.min) | ||
| self._likelihood = sys.float_info.min | ||
| # self._mahalanobis = None | ||
| def update(self, z, R_inv=None): | ||
| """ | ||
| Add a new measurement (z) to the kalman filter. If z is None, nothing | ||
| is changed. | ||
| Parameters | ||
| ---------- | ||
| z : np.array | ||
| measurement for this update. | ||
| R_inv : np.array, scalar, or None | ||
| Optionally provide R_inv to override the measurement noise for this | ||
| one call, otherwise self.R_inv will be used. | ||
| """ | ||
| if z is None: | ||
| self.z = None | ||
| self.x_post = self.x.copy() | ||
| self.P_inv_post = self.P_inv.copy() | ||
| return | ||
| if R_inv is None: | ||
| R_inv = self.R_inv | ||
| elif np.isscalar(R_inv): | ||
| R_inv = eye(self.dim_z) * R_inv | ||
| # rename for readability and a tiny extra bit of speed | ||
| H = self.H | ||
| H_T = H.T | ||
| P_inv = self.P_inv | ||
| x = self.x | ||
| if self._no_information: | ||
| self.x = dot(P_inv, x) + dot(H_T, R_inv).dot(z) | ||
| self.P_inv = P_inv + dot(H_T, R_inv).dot(H) | ||
| self._log_likelihood = math.log(sys.float_info.min) | ||
| self._likelihood = sys.float_info.min | ||
| self._mahalanobis = sys.float_info.max | ||
| else: | ||
| # y = z - Hx | ||
| # error (residual) between measurement and prediction | ||
| self.y = z - dot(H, x) | ||
| # S = P_inv + H' R_inv H is the posterior information matrix; | ||
| # inverting it and multiplying by H' R_inv below yields the gain | ||
| # K = PH'(HPH' + R)^-1 via the matrix inversion lemma | ||
| self.S = P_inv + dot(H_T, R_inv).dot(H) | ||
| self.SI = self.inv(self.S) | ||
| self.K = dot(self.SI, H_T).dot(R_inv) | ||
| # x = x + Ky | ||
| # predict new x with residual scaled by the kalman gain | ||
| self.x = x + dot(self.K, self.y) | ||
| self.P_inv = P_inv + dot(H_T, R_inv).dot(H) | ||
| self.z = np.copy(reshape_z(z, self.dim_z, np.ndim(self.x))) | ||
| # save measurement and posterior state | ||
| self.z = deepcopy(z) | ||
| self.x_post = self.x.copy() | ||
| self.P_inv_post = self.P_inv.copy() | ||
| # set to None to force recompute | ||
| self._log_likelihood = None | ||
| self._likelihood = None | ||
| #self._mahalanobis = None | ||
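The information-form update above can be sanity-checked against the standard covariance-form Kalman update; the following is a minimal standalone NumPy sketch (hypothetical 2-state, 1-measurement values, not part of the library) showing that both forms yield the same posterior state and covariance:

```python
import numpy as np

# Hypothetical prior and measurement model (illustrative values only)
P_inv = np.linalg.inv(np.diag([4.0, 9.0]))   # prior information matrix
x = np.array([1.0, 2.0])                     # prior state
H = np.array([[1.0, 0.0]])                   # measurement function
R_inv = np.array([[1.0 / 0.25]])             # inverse measurement noise
z = np.array([1.3])                          # measurement

# Information-form update, mirroring the structure of update() above:
# S = P_inv + H' R_inv H is the posterior information matrix,
# and K = inv(S) H' R_inv is the gain expressed in information space.
P_inv_post = P_inv + H.T @ R_inv @ H
y = z - H @ x                                # residual
K = np.linalg.inv(P_inv_post) @ H.T @ R_inv
x_post = x + K @ y

# Covariance-form Kalman update for comparison
P = np.linalg.inv(P_inv)
R = np.linalg.inv(R_inv)
S_cov = H @ P @ H.T + R                      # innovation covariance
K_cov = P @ H.T @ np.linalg.inv(S_cov)
x_cov = x + K_cov @ y
P_cov = P - K_cov @ H @ P

assert np.allclose(x_post, x_cov)
assert np.allclose(np.linalg.inv(P_inv_post), P_cov)
```

The equivalence of the two gains is the matrix inversion lemma: (P⁻¹ + H'R⁻¹H)⁻¹H'R⁻¹ = PH'(HPH' + R)⁻¹.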
| def predict(self, u=0): | ||
| """ Predict the next state (prior) using the state transition matrix. | ||
| Parameters | ||
| ---------- | ||
| u : ndarray | ||
| Optional control vector. If non-zero, it is multiplied by B | ||
| to create the control input into the system. | ||
| """ | ||
| # A = inv(F P F'), computed in information space | ||
| A = dot(self._F_inv.T, self.P_inv).dot(self._F_inv) | ||
| #pylint: disable=bare-except | ||
| try: | ||
| AI = self.inv(A) | ||
| invertible = True | ||
| if self._no_information: | ||
| try: | ||
| self.x = dot(self.inv(self.P_inv), self.x) | ||
| except: | ||
| # P_inv is singular; zero the state as a fallback | ||
| self.x = dot(0, self.x) | ||
| self._no_information = False | ||
| except: | ||
| invertible = False | ||
| self._no_information = True | ||
| if invertible: | ||
| self.x = dot(self._F, self.x) + dot(self.B, u) | ||
| self.P_inv = self.inv(AI + self.Q) | ||
| # save priors | ||
| self.P_inv_prior = np.copy(self.P_inv) | ||
| self.x_prior = np.copy(self.x) | ||
| else: | ||
| I_PF = self._I - dot(self.P_inv, self._F_inv) | ||
| FTI = self.inv(self._F.T) | ||
| FTIX = dot(FTI, self.x) | ||
| AQI = self.inv(A + self.Q) | ||
| self.x = dot(FTI, dot(I_PF, AQI).dot(FTIX)) | ||
| # save priors | ||
| self.x_prior = np.copy(self.x) | ||
| self.P_inv_prior = np.copy(AQI) | ||
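The invertible branch of predict() relies on the identity that A = F⁻ᵀ P_inv F⁻¹ equals inv(F P Fᵀ), so that inv(inv(A) + Q) is the information form of the usual covariance prediction P ← F P Fᵀ + Q. A hedged NumPy sketch (hypothetical constant-velocity model, values not from the library) checking both identities:

```python
import numpy as np

# Hypothetical constant-velocity model (illustrative values only)
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])                   # state transition
Q = np.eye(2) * 0.01                         # process noise
P = np.diag([4.0, 9.0])                      # prior covariance
P_inv = np.linalg.inv(P)

# A = F^-T P_inv F^-1 is inv(F P F^T), as used in predict() above
F_inv = np.linalg.inv(F)
A = F_inv.T @ P_inv @ F_inv
assert np.allclose(np.linalg.inv(A), F @ P @ F.T)

# Information-form prediction vs. covariance-form prediction
P_inv_prior = np.linalg.inv(np.linalg.inv(A) + Q)
P_prior = F @ P @ F.T + Q
assert np.allclose(P_inv_prior, np.linalg.inv(P_prior))
```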
| def batch_filter(self, zs, Rs=None, update_first=False, saver=None): | ||
| """ Batch processes a sequence of measurements. | ||
| Parameters | ||
| ---------- | ||
| zs : list-like | ||
| list of measurements at each time step `self.dt`. Missing | ||
| measurements must be represented by None. | ||
| Rs : list-like, optional | ||
| optional list of values to use for the measurement error | ||
| covariance; a value of None in any position will cause the filter | ||
| to use `self.R` for that time step. | ||
| update_first : bool, optional, | ||
| controls whether the order of operations is update followed by | ||
| predict, or predict followed by update. Default is predict->update. | ||
| saver : filterpy.common.Saver, optional | ||
| filterpy.common.Saver object. If provided, saver.save() will be | ||
| called after every epoch. | ||
| Returns | ||
| ------- | ||
| means: np.array((n,dim_x,1)) | ||
| array of the state for each time step. Each entry is an np.array. | ||
| In other words `means[k,:]` is the state at step `k`. | ||
| covariance: np.array((n,dim_x,dim_x)) | ||
| array of the covariances for each time step. In other words | ||
| `covariance[k,:,:]` is the covariance at step `k`. | ||
| """ | ||
| raise NotImplementedError("this is not implemented yet") | ||
| #pylint: disable=unreachable, no-member | ||
| # this is a copy of the code from kalman_filter, it has not been | ||
| # turned into the information filter yet. DO NOT USE. | ||
| n = np.size(zs, 0) | ||
| if Rs is None: | ||
| Rs = [None] * n | ||
| # mean estimates from Kalman Filter | ||
| means = zeros((n, self.dim_x, 1)) | ||
| # state covariances from Kalman Filter | ||
| covariances = zeros((n, self.dim_x, self.dim_x)) | ||
| if update_first: | ||
| for i, (z, r) in enumerate(zip(zs, Rs)): | ||
| self.update(z, r) | ||
| means[i, :] = self.x | ||
| covariances[i, :, :] = self._P | ||
| self.predict() | ||
| if saver is not None: | ||
| saver.save() | ||
| else: | ||
| for i, (z, r) in enumerate(zip(zs, Rs)): | ||
| self.predict() | ||
| self.update(z, r) | ||
| means[i, :] = self.x | ||
| covariances[i, :, :] = self._P | ||
| if saver is not None: | ||
| saver.save() | ||
| return (means, covariances) | ||
| @property | ||
| def log_likelihood(self): | ||
| """ | ||
| log-likelihood of the last measurement. | ||
| """ | ||
| if self._log_likelihood is None: | ||
| self._log_likelihood = logpdf(x=self.y, cov=self.S) | ||
| return self._log_likelihood | ||
| @property | ||
| def likelihood(self): | ||
| """ | ||
| Computed from the log-likelihood. The log-likelihood can be very | ||
| small, meaning a large negative value such as -28000. Taking the | ||
| exp() of that results in 0.0, which can break typical algorithms | ||
| which multiply by this value, so by default we always return a | ||
| number >= sys.float_info.min. | ||
| """ | ||
| if self._likelihood is None: | ||
| self._likelihood = math.exp(self.log_likelihood) | ||
| if self._likelihood == 0: | ||
| self._likelihood = sys.float_info.min | ||
| return self._likelihood | ||
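The floor described in the likelihood docstring matters because exp() of a large negative log-likelihood underflows to exactly 0.0, which would zero out any algorithm that multiplies by the likelihood. A small standalone sketch of that behavior (plain stdlib, not the property itself):

```python
import math
import sys

# A very negative log-likelihood underflows exp() to exactly 0.0
ll = -28000.0
raw = math.exp(ll)
assert raw == 0.0

# Flooring at the smallest positive float keeps multiplicative
# algorithms (e.g. mixing mode probabilities) from collapsing to zero
likelihood = raw if raw != 0 else sys.float_info.min
assert likelihood == sys.float_info.min
assert likelihood > 0.0
```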
| '''@property | ||
| def mahalanobis(self): | ||
| """ | ||
| Mahalanobis distance of measurement. E.g. 3 means measurement | ||
| was 3 standard deviations away from the predicted value. | ||
| Returns | ||
| ------- | ||
| mahalanobis : float | ||
| """ | ||
| if self._mahalanobis is None: | ||
| self._mahalanobis = sqrt(float(dot(dot(self.y.T, self.SI), self.y))) | ||
| return self._mahalanobis''' | ||
| @property | ||
| def F(self): | ||
| """State Transition matrix""" | ||
| return self._F | ||
| @F.setter | ||
| def F(self, value): | ||
| self._F = value | ||
| self._F_inv = self.inv(self._F) | ||
| def __repr__(self): | ||
| return '\n'.join([ | ||
| 'InformationFilter object', | ||
| pretty_str('dim_x', self.dim_x), | ||
| pretty_str('dim_z', self.dim_z), | ||
| pretty_str('dim_u', self.dim_u), | ||
| pretty_str('x', self.x), | ||
| pretty_str('P_inv', self.P_inv), | ||
| pretty_str('x_prior', self.x_prior), | ||
| pretty_str('P_inv_prior', self.P_inv_prior), | ||
| pretty_str('F', self.F), | ||
| pretty_str('_F_inv', self._F_inv), | ||
| pretty_str('Q', self.Q), | ||
| pretty_str('R_inv', self.R_inv), | ||
| pretty_str('H', self.H), | ||
| pretty_str('K', self.K), | ||
| pretty_str('y', self.y), | ||
| pretty_str('z', self.z), | ||
| pretty_str('SI', self.SI), | ||
| pretty_str('B', self.B), | ||
| pretty_str('log-likelihood', self.log_likelihood), | ||
| pretty_str('likelihood', self.likelihood), | ||
| #pretty_str('mahalanobis', self.mahalanobis), | ||
| pretty_str('inv', self.inv) | ||
| ]) |