impdar - npm Package Compare versions

Comparing version 1.1.4 to 1.1.4.post3 (+1, -1)
impdar.egg-info/PKG-INFO
Metadata-Version: 1.0
Name: impdar
-Version: 1.1.4
+Version: 1.1.4.post3
Summary: Scripts for impulse radar

@@ -5,0 +5,0 @@ Home-page: http://github.com/dlilien/impdar

@@ -1,69 +0,3 @@

.gitmodules
.readthedocs.yaml
Dockerfile
LICENSE
README.md
action.yml
entrypoint.sh
install_qt.sh
install_su.sh
requirements.txt
setup.py
.github/workflows/python-package.yml
doc/Makefile
doc/conf.py
doc/index.rst
doc/installation.rst
doc/requirements.txt
doc/bin/first_pick.png
doc/bin/impdar.rst
doc/bin/imppick.rst
doc/bin/impplot.rst
doc/bin/impproc.rst
doc/bin/index.rst
doc/bin/load_cross_profile.png
doc/bin/nan_pick.png
doc/bin/picking_overview.png
doc/bin/picking_overview_edit.png
doc/bin/right_click_pick.png
doc/bin/zoom_selected.png
doc/examples/api_plot.png
doc/examples/crossprofile.png
doc/examples/crossprofile_bandpassed.png
doc/examples/crossprofile_bandpassed_ahfilt.png
doc/examples/crossprofile_bandpassed_ahfilt_restacked.png
doc/examples/crossprofile_bandpassed_ahfilt_restacked_nmo.png
doc/examples/crossprofile_proc.png
doc/examples/density_permittivity_velocity.png
doc/examples/dialog.png
doc/examples/index.rst
doc/examples/loading.rst
doc/examples/migration.rst
doc/examples/picking.rst
doc/examples/plotting.rst
doc/examples/power_plot.png
doc/examples/proc.png
doc/examples/processing.rst
doc/examples/rg_plot.png
doc/examples/small_data.png
doc/examples/time_depth.png
doc/examples/trace_plot.png
doc/examples/migration_figures/NEGIS_phsh.png
doc/examples/migration_figures/NEGIS_stolt.png
doc/examples/migration_figures/NEGIS_sumigtk.png
doc/examples/migration_figures/NEGIS_unmigrated.png
doc/examples/migration_figures/migration_cartoon.png
doc/examples/migration_figures/permittivity_box.png
doc/examples/migration_figures/synthetic.png
doc/examples/migration_figures/synthetic_migrated_kirch.png
doc/examples/migration_figures/synthetic_migrated_phsh.png
doc/examples/migration_figures/synthetic_migrated_stolt.png
doc/examples/migration_figures/synthetic_migrated_sumigtk.png
doc/lib/ImpdarError.rst
doc/lib/Picking.rst
doc/lib/Plotting.rst
doc/lib/RadarData.rst
doc/lib/index.rst
doc/lib/load.rst
doc/lib/process.rst
impdar/__init__.py

@@ -83,5 +17,3 @@ impdar.egg-info/PKG-INFO

impdar/gui/pickgui.py
impdar/gui/ui/Makefile
impdar/gui/ui/RawPickGUI.py
impdar/gui/ui/RawPickGUI.ui
impdar/gui/ui/__init__.py

@@ -104,8 +36,2 @@ impdar/gui/ui/mplfigcanvaswidget.py

impdar/lib/process.py
impdar/lib/ApresData/ApresFlags.py
impdar/lib/ApresData/ApresHeader.py
impdar/lib/ApresData/_ApresDataProcessing.py
impdar/lib/ApresData/_ApresDataSaving.py
impdar/lib/ApresData/__init__.py
impdar/lib/ApresData/load_apres.py
impdar/lib/RadarData/_RadarDataFiltering.py

@@ -115,7 +41,2 @@ impdar/lib/RadarData/_RadarDataProcessing.py

impdar/lib/RadarData/__init__.py
impdar/lib/analysis/Roughness.py
impdar/lib/analysis/__init__.py
impdar/lib/analysis/attenuation.py
impdar/lib/analysis/continuity_index.py
impdar/lib/analysis/geometric_power_corrections.py
impdar/lib/load/__init__.py

@@ -138,73 +59,4 @@ impdar/lib/load/load_UoA_mat.py

impdar/lib/migrationlib/_mig_cython.c
impdar/lib/migrationlib/_mig_cython.pyx
impdar/lib/migrationlib/mig_cython.c
impdar/lib/migrationlib/mig_cython.h
impdar/lib/migrationlib/mig_python.py
impdar/lib/migrationlib/mig_su.py
impdar/tests/test_BSI.py
impdar/tests/test_GSSI.py
impdar/tests/test_GUI.py
impdar/tests/test_LastTrace.py
impdar/tests/test_MCoRDS.py
impdar/tests/test_PE.py
impdar/tests/test_PickParameters.py
impdar/tests/test_Picks.py
impdar/tests/test_RAMAC.py
impdar/tests/test_RadarData.py
impdar/tests/test_RadarDataFiltering.py
impdar/tests/test_RadarDataSaving.py
impdar/tests/test_RadarFlags.py
impdar/tests/test_SEGY.py
impdar/tests/test_convert.py
impdar/tests/test_gecko.py
impdar/tests/test_gprmax.py
impdar/tests/test_gpslib.py
impdar/tests/test_impdarexec.py
impdar/tests/test_imppick.py
impdar/tests/test_impplot.py
impdar/tests/test_impproc.py
impdar/tests/test_load.py
impdar/tests/test_loading_utils.py
impdar/tests/test_migrationlib.py
impdar/tests/test_nosegyio.py
impdar/tests/test_picklib.py
impdar/tests/test_plot.py
impdar/tests/test_process.py
impdar/tests/input_data/GSSI_3000.DZT
impdar/tests/input_data/README
impdar/tests/input_data/along_picked.mat
impdar/tests/input_data/cross_picked.mat
impdar/tests/input_data/data_raw.mat
impdar/tests/input_data/gps_control.csv
impdar/tests/input_data/gps_control.mat
impdar/tests/input_data/gps_control_badfields.mat
impdar/tests/input_data/nonimpdar_justmissingdat.mat
impdar/tests/input_data/nonimpdar_matlab.mat
impdar/tests/input_data/rectangle.vti
impdar/tests/input_data/rectangle_gprMax_Bscan.h5
impdar/tests/input_data/rho_profile.txt
impdar/tests/input_data/shots0001_0200.segy
impdar/tests/input_data/small_data.mat
impdar/tests/input_data/small_data_otherstodeepattrs.mat
impdar/tests/input_data/small_data_picks.mat
impdar/tests/input_data/small_just_otherstodeepattrs.mat
impdar/tests/input_data/ten_col.cor
impdar/tests/input_data/ten_col.rad
impdar/tests/input_data/ten_col.rd3
impdar/tests/input_data/ten_col_nogps.rad
impdar/tests/input_data/ten_col_nogps.rd3
impdar/tests/input_data/test_bsi.h5
impdar/tests/input_data/test_gecko.gtd
impdar/tests/input_data/test_gssi.DZG
impdar/tests/input_data/test_gssi.DZT
impdar/tests/input_data/test_gssi.DZX
impdar/tests/input_data/test_gssi_justdzt.DZT
impdar/tests/input_data/test_gssi_partialgps.DZG
impdar/tests/input_data/test_gssi_partialgps.DZT
impdar/tests/input_data/test_pe.DT1
impdar/tests/input_data/test_pe.GPS
impdar/tests/input_data/test_pe.HD
impdar/tests/input_data/velocity_lateral.txt
impdar/tests/input_data/velocity_layers.txt
impdar/tests/input_data/zeros_mcords.nc
impdar/tests/input_data/zeros_mcords_mat.mat
impdar/lib/migrationlib/mig_su.py

@@ -23,3 +23,7 @@ #! /usr/bin/env python

 except ImportError:
     conversions_enabled = False
+try:
+    from osgeo import osr
+    conversions_enabled = True
+except ImportError:
+    conversions_enabled = False
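The hunk above wraps the optional `osgeo` import in a try/except that records availability in a flag. A minimal standalone sketch of that guarded-import pattern (the `project_points` helper is illustrative, not ImpDAR's API):

```python
# Guarded optional dependency: record availability in a flag instead of
# letting a missing package break import of the whole module.
try:
    from osgeo import osr  # noqa: F401 -- optional, used for coordinate conversion
    conversions_enabled = True
except ImportError:
    conversions_enabled = False


def project_points(pts):
    """Illustrative helper: fail with a clear message if osgeo is absent."""
    if not conversions_enabled:
        raise ImportError('osgeo (GDAL) is needed for coordinate conversions')
    return pts  # real conversion code would use osr here
```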

@@ -259,16 +263,23 @@ from scipy.interpolate import interp1d

     if len(all) > 5:
-        numbers = list(map(lambda x: float(x) if x != '' else 0, all[1:3] + [1] + [all[4]] + [1] + all[6:10] + [all[11]]))
-        if all[3] == 'S':
-            numbers[2] = -1
-        if all[5] == 'W':
-            numbers[4] = -1
+        # We can have corrupted lines--just ignore these and continue
+        try:
+            numbers = list(map(lambda x: float(x) if x != '' else np.nan, all[1:3] + [1] + [all[4]] + [1] + all[6:10] + [all[11]]))
+            if all[3] == 'S':
+                numbers[2] = -1
+            if all[5] == 'W':
+                numbers[4] = -1
+        except (ValueError, IndexError):
+            numbers = [np.nan] * 10
     elif len(all) > 2:
-        numbers = list(map(lambda x: float(x) if x != '' else 0, all[1:3] + [1]))
-        if all[3] == 'S':
-            numbers[2] = -1
+        try:
+            numbers = list(map(lambda x: float(x) if x != '' else np.nan, all[1:3] + [1]))
+            if all[3] == 'S':
+                numbers[2] = -1
+        except (ValueError, IndexError):
+            numbers = [np.nan] * 10
     else:
-        numbers = np.nan
+        numbers = [np.nan] * 10
     return numbers
-    if list_of_sentences[0].split(',')[0] == '$GPGGA':
+    if np.all([sentence.split(',')[0] == '$GPGGA' for sentence in list_of_sentences]):
         data = nmea_info()
data = nmea_info()

@@ -279,3 +290,2 @@ data.all_data = np.array([_gga_sentence_split(sentence)

     else:
-        print(list_of_sentences[0].split(',')[0])
         raise ValueError('I can only do gga sentences right now')
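The new parsing logic above trades silent zeros for explicit NaNs and survives corrupted lines. A simplified, self-contained sketch of that idea (hypothetical helper, returning only lat/lon rather than the full ten-field row):

```python
import numpy as np


def parse_gga_latlon(sentence):
    """Parse lat/lon from a $GPGGA sentence; NaNs for corrupted lines.

    Sketch of the diff's approach: hemisphere letters flip the sign, and any
    ValueError/IndexError yields NaNs instead of crashing the whole load.
    Values stay in the raw NMEA ddmm.mmmm form; conversion is omitted.
    """
    fields = sentence.split(',')
    if fields[0] != '$GPGGA':
        raise ValueError('only GGA sentences are handled')
    try:
        lat = float(fields[2]) * (-1.0 if fields[3] == 'S' else 1.0)
        lon = float(fields[4]) * (-1.0 if fields[5] == 'W' else 1.0)
        return [lat, lon]
    except (ValueError, IndexError):
        return [np.nan, np.nan]
```

For example, a well-formed sentence yields the raw values (`parse_gga_latlon('$GPGGA,123519,4807.038,N,01131.000,E,1,08')` gives `[4807.038, 1131.0]`), while a truncated or mangled one yields `[nan, nan]`.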

@@ -282,0 +292,0 @@

@@ -92,4 +92,23 @@ #! /usr/bin/env python

 # and to be sure that the indices line up
-gssis_inds = [i for i, line in enumerate(lines) if 'GSSIS' in line]
-gga_inds = [i for i, line in enumerate(lines) if 'GGA' in line]
+all_gga_inds = [i for i, line in enumerate(lines) if '$GPGGA' == line.split(',')[0]]
+# Get the corresponding GSSI trace numbers
+all_gssis_inds = np.array([i for i, line in enumerate(lines) if line.split(',')[0] == '$GSSIS'])
+gssis_inds = []
+gga_inds = []
+for i, lineind in enumerate(all_gga_inds):
+    if i == 0:
+        prevind = 0
+    else:
+        prevind = all_gga_inds[i - 1]
+    rel_inds = all_gssis_inds[np.logical_and(all_gssis_inds < lineind, all_gssis_inds > prevind)]
+    if len(rel_inds) > 0:
+        try:
+            # we can still have bad GSSI strings
+            if float(lines[np.max(rel_inds)].split(',')[1]).is_integer():
+                gssis_inds.append(np.max(rel_inds))
+                gga_inds.append(lineind)
+        except ValueError:
+            continue
 # we may have some records without GGA, so check if this is the case;

@@ -96,0 +115,0 @@ # we keep track of the offset if so
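The rewritten matching above pairs each GGA fix with the closest preceding valid GSSI marker. A condensed standalone sketch of that pairing logic (hypothetical `pair_markers` helper, not ImpDAR's function):

```python
import numpy as np


def pair_markers(lines):
    """Pair each $GPGGA fix with the last valid $GSSIS marker since the previous fix.

    Sketch of the diff's logic: a marker counts only if its trace-number field
    parses as an integral float; bad markers are skipped. The strict `>` on
    `prev` mirrors the original's prevind handling.
    """
    all_gga = [i for i, l in enumerate(lines) if l.split(',')[0] == '$GPGGA']
    all_gssis = np.array([i for i, l in enumerate(lines) if l.split(',')[0] == '$GSSIS'])
    gssis_inds, gga_inds = [], []
    prev = 0
    for ind in all_gga:
        rel = all_gssis[np.logical_and(all_gssis > prev, all_gssis < ind)]
        if len(rel) > 0:
            try:
                if float(lines[int(rel.max())].split(',')[1]).is_integer():
                    gssis_inds.append(int(rel.max()))
                    gga_inds.append(ind)
            except (ValueError, IndexError):
                pass
        prev = ind
    return gssis_inds, gga_inds
```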

@@ -307,9 +307,9 @@ #! /usr/bin/env python

 # The extra shift compared to the smallest
-mintrig = np.min(ind)
+mintrig = np.nanmin(ind)
 lims = [mintrig, self.data.shape[0]]
-self.trig = self.trig-ind
-trig_ends = self.data.shape[0] - (ind - mintrig) - 1
+self.trig = self.trig - ind
+data_old = self.data.copy()
+self.data = np.zeros((data_old.shape[0] - mintrig, data_old.shape[1]))
+self.data[:, :] = np.nan
+trig_ends = self.data.shape[0] - (ind - mintrig)
 for i in range(self.data.shape[1]):
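The replacement block above realigns traces on their trigger sample, padding the bottom of shorter columns with NaN. A self-contained sketch of the same shift (hypothetical helper operating on a bare array):

```python
import numpy as np


def align_triggers(data, ind):
    """Shift each trace up so all triggers align at the top, padding with NaN.

    Sketch of the diff's realignment: `data` is (samples, traces) and `ind`
    gives the trigger sample of each trace. This is an illustrative standalone
    helper, not ImpDAR's method.
    """
    mintrig = int(np.nanmin(ind))
    out = np.full((data.shape[0] - mintrig, data.shape[1]), np.nan)
    # each column keeps data.shape[0] - ind[i] samples; the rest stays NaN
    trig_ends = out.shape[0] - (ind - mintrig)
    for i in range(data.shape[1]):
        out[:trig_ends[i], i] = data[ind[i]:, i]
    return out
```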

@@ -544,6 +544,11 @@ self.data[:trig_ends[i], i] = data_old[ind[i]:, i]

-for attr in ['lat', 'long', 'elev', 'x_coord', 'y_coord', 'decday', 'pressure', 'trig']:
+for attr in ['lat', 'long', 'x_coord', 'y_coord', 'decday', 'pressure', 'trig']:
     setattr(self,
             attr,
             interp1d(temp_dist, getattr(self, attr)[good_vals])(new_dists))
+for attr in ['elev']:
+    if getattr(self, attr) is not None:
+        setattr(self,
+                attr,
+                interp1d(temp_dist, getattr(self, attr)[good_vals])(new_dists))
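The change above stops assuming `elev` exists: required attributes are always resampled, optional ones only when present. A sketch of that guard, with `np.interp` standing in for `scipy.interpolate.interp1d` to stay dependency-free (hypothetical helper):

```python
import numpy as np


def resample_attrs(obj, required, optional, old_dist, new_dist):
    """Resample attribute arrays onto new_dist, skipping optional ones set to None.

    Mirrors the added `if getattr(self, attr) is not None` guard in the diff;
    this is an illustrative helper, not ImpDAR's API.
    """
    for attr in list(required) + list(optional):
        vals = getattr(obj, attr)
        if vals is None and attr in optional:
            continue  # e.g. no elevation recorded for this profile
        setattr(obj, attr, np.interp(new_dist, old_dist, vals))
```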

@@ -550,0 +555,0 @@ if self.picks is not None:

@@ -134,3 +134,3 @@ #! /usr/bin/env python

 pts = np.vstack((self.long, self.lat)).transpose()
-t_srs = 'EPSG:3426'
+t_srs = 'EPSG:4326'

@@ -137,0 +137,0 @@ driver = ogr.GetDriverByName('ESRI Shapefile')

Metadata-Version: 1.0
Name: impdar
-Version: 1.1.4
+Version: 1.1.4.post3
Summary: Scripts for impulse radar

@@ -5,0 +5,0 @@ Home-page: http://github.com/dlilien/impdar

@@ -42,3 +42,3 @@ #! /usr/bin/env python

-version = '1.1.4'
+version = '1.1.4.post3'
packages = ['impdar',

@@ -45,0 +45,0 @@ 'impdar.lib',

# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Python package
on: [push, pull_request]
jobs:
Test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: ["3.6", "3.7", "3.8", "3.9"]
qt5: [true, false]
gdal: [true, false]
seisunix: [true, false]
exclude:
- gdal: true
python-version: "3.9" # This is broken on pip at gdal@3.0.4
- os: macos-latest
qt5: true
- os: macos-latest
seisunix: true
- os: macos-latest
gdal: true
- os: windows-latest
qt5: true
- os: windows-latest
seisunix: true
- os: windows-latest
gdal: true
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
# testing
python -m pip install flake8 pytest
python -m pip install coverage
python -m pip install mock
# production
pip install -r requirements.txt
- name: Install optional dependency qt5
if: ${{ matrix.qt5 }}
run: |
sudo apt install -y xvfb x11-utils libxkbcommon-x11-0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0 libxcb-render-util0 libxcb-xinerama0 pyqt5-dev-tools
python -m pip install pyqt5
python -c 'import PyQt5'
- name: Install optional dependency SeisUnix
if: ${{ matrix.seisunix }}
run: bash install_su.sh
- name: Install optional dependency GDAL
if: ${{ matrix.gdal }}
run: |
sudo apt-get install libgdal-dev=3.0.4+dfsg-1build3
export CPLUS_INCLUDE_PATH=/usr/include/gdal
export C_INCLUDE_PATH=/usr/include/gdal
python -m pip install gdal==3.0.4
- name: Install ImpDAR
run: python -m pip install .
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Run tests GUI
if: ${{ matrix.qt5 }}
run: |
xvfb-run `which coverage` run --source impdar --omit=impdar/tests/*,impdar/lib/ApresData/*,impdar/lib/analysis/* -m pytest
- name: Run tests no GUI
if: ${{ !matrix.qt5 }}
run: |
coverage run --source impdar --omit=impdar/tests/*,impdar/lib/ApresData/*,impdar/lib/analysis/* -m pytest
- name: Produce xml coverage
run: coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
with:
fail_ci_if_error: false # I see no reason to fail over this.
Windows-Build:
needs: Test
runs-on: windows-latest
strategy:
matrix:
python-version: [3.6, 3.7, 3.8, 3.9]
steps:
- uses: actions/checkout@master
- name: Download Build Tools for Visual Studio 2019
run: Invoke-WebRequest -Uri https://aka.ms/vs/16/release/vs_buildtools.exe -OutFile vs_buildtools.exe
- name: Run vs_buildtools.exe install
run: ./vs_buildtools.exe --quiet --wait --norestart --nocache --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 --add Microsoft.VisualStudio.Component.VC.v141.x86.x64 --add Microsoft.VisualStudio.Component.VC.140 --includeRecommended
- name: Set up Python ${{ matrix.python-version }} x64
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
architecture: x64
- name: Install Python package dependencies
run: pip install cython wheel
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
- name: Build binary wheel
run: python setup.py bdist_wheel
- uses: actions/upload-artifact@v2
with:
name: build-windows-${{ matrix.python-version }}
path: dist
Mac-Build:
runs-on: macos-latest
needs: Test
strategy:
fail-fast: false
matrix:
python-version: ["3.7", "3.8", "3.9"]
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
steps:
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Symlink gcc
run: |
ln -s /usr/local/bin/gfortran-9 /usr/local/bin/gfortran
ln -s /usr/local/bin/gcc-9 /usr/local/bin/gcc
continue-on-error: true
- name: Checkout
uses: actions/checkout@v2
- name: Install dependencies
run: |
python -m pip install --upgrade pip wheel cython
python -m pip install -r requirements.txt
- name: Build binary wheel
run: python setup.py bdist_wheel
- uses: actions/upload-artifact@v2
with:
name: build-macos-${{ matrix.python-version }}
path: dist
# This one is only run when actually needed
Linux-Build:
runs-on: ubuntu-latest
needs: Test
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Build and deploy manylinux wheels
uses: ./
- uses: actions/upload-artifact@v2
with:
name: build-linux
path: dist
# deploy source distribution
Source-Build:
runs-on: ubuntu-latest
needs: Test
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.7
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
- name: create source distribution
run: python setup.py sdist
- uses: actions/upload-artifact@v2
with:
name: build-source
path: dist
Upload-PyPi:
runs-on: ubuntu-latest
needs: [Source-Build, Mac-Build, Windows-Build, Linux-Build]
steps:
- uses: actions/checkout@v2
- uses: actions/download-artifact@v2
- name: Collect packages
run: |
mkdir dist
mv build*/* dist/
- name: Publish distribution 📦 to Test PyPI
uses: pypa/gh-action-pypi-publish@master
with:
password: ${{ secrets.TEST_PYPI_API_TOKEN }}
repository_url: https://test.pypi.org/legacy/
skip_existing: true
- name: Publish distribution 📦 to PyPI
if: startsWith(github.ref, 'refs/tags')
uses: pypa/gh-action-pypi-publish@master
with:
password: ${{ secrets.PYPI_API_TOKEN }}
[submodule "doc/ImpDAR_tutorials"]
path = doc/ImpDAR_tutorials
url = https://github.com/Jakidxav/ImpDAR_tutorials.git
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-20.04
tools:
python: "3.9"
# You can also specify other tool versions:
# nodejs: "16"
# rust: "1.55"
# golang: "1.17"
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: doc/conf.py
fail_on_warning: false
# If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
- pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: requirements.txt
- requirements: doc/requirements.txt
submodules:
include: all
recursive: true
# action.yml
name: 'Build Manylinux'
description: 'Build many python'
runs:
using: 'docker'
image: 'Dockerfile'

Sorry, the diff of this file is too big to display

impdar
======
The main executable for the ImpDAR package.
.. argparse::
   :module: impdar.bin.impdarexec
   :func: _get_args
   :prog: impdar
=======
imppick
=======
Command-line call
=================
.. code-block:: bash

   imppick [-h] [-xd] [-yd] fn
Positional Arguments
--------------------
+----+---------------------+
| fn | The file to process |
+----+---------------------+
Named Arguments
---------------
+-----+--------------------------------------------------------------------------+
| -xd | Use kilometers for the x-axis |
+-----+--------------------------------------------------------------------------+
| -yd | Use depth in meters (or elevation if elevation-corrected) for the y-axis |
+-----+--------------------------------------------------------------------------+
After calling imppick to bring up the GUI, things should be pretty intuitive, but navigation may seem a bit odd at first. Here is the basic view of the picker on a Mac:
.. image:: picking_overview.png
In this profile, I've picked some layers already. The active pick is highlighted in magenta (or rather the top and bottom of the packet are in magenta, and the middle of the packet is green, but the middle is not visible at this zoom). Other picks are in yellow, surrounding a blue central peak. On the left side are most of the controls used while picking. We'll go through the buttons on the left, from top to bottom, to get an idea of how picking proceeds.
Menus
=====
Modes
-----
The mode is displayed in a button in the upper left. We have two modes: select mode, for deciding what to modify, and edit mode, for changing the picks. **Neither mode works when the matplotlib pan/zoom toolbar is active (shown below). Click the zoom or pan button again to deselect it if you want to use the ImpDAR functions.**
.. image:: ./zoom_selected.png
Select
______
Select mode allows us to choose which of the picks to add to. This is used to go back to old picks that already exist and modify them. If there are no picks yet, or if we want a new pick, we can go straight to edit mode.
Edit
____
Edit mode is where you will spend most of your time. In edit mode, you can modify the existing picks, either deleting from them, renumbering them, or adding to them.
Pick Options
------------
This is where we control things about how the picking algorithm operates.
Pick Number
___________
Changing this integer will change the number associated with this pick. This changes nothing about how the data are stored (i.e. you can choose pick 999 without making a big empty matrix waiting for picks 1-998), and only affects what we call it (and it will be exported with the number of your choice). By convention, StODeep used 99 for the bed pick. Trying to set the number to something that is already used is not allowed--ImpDAR will increment to an unused pick. If you want to switch the numbering of two picks, you should set one to something unused, move the second to the first, then move the first into the second's old place.
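The three-step renumbering described above (park one pick on an unused number, then do the two moves) can be sketched as plain list manipulation; `swap_pick_numbers` is a hypothetical illustration, not an ImpDAR call:

```python
def swap_pick_numbers(numbers, a, b):
    """Swap two pick labels via a temporary unused number.

    Illustrative sketch of the workflow described above: since no two picks
    may share a number, route the swap through a label nothing else uses.
    Assumes integer labels.
    """
    temp = max(numbers) + 1           # guaranteed unused
    numbers[numbers.index(a)] = temp  # first pick -> temp
    numbers[numbers.index(b)] = a     # second pick -> first's old number
    numbers[numbers.index(temp)] = b  # temp -> second's old number
    return numbers
```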
Autopick
________
Right now, this checkbox is inactive. If we can successfully implement a decent autopicking algorithm, this will get turned on. For now, if you want to try to make ImpDAR do the work for you, try picking the leftmost and the rightmost trace.
Frequency
_________
This should, in general, be the frequency of the radar system. It sets the wavelet size that we try to correlate with each radar trace when picking. You probably want to update this once at the start of picking, then leave it alone.
Polarity
________
Choose whether you are picking layers that go +-+ or -+-. In a grayscale colorbar, these are BWB and WBW, respectively.
New Pick
--------
After we have selected our picking options, we probably want to do some picking. Clicking the "New pick" button adds a pick with an unused pick number (you can modify it at any time though).
View Options
------------
These options control aspects of coloring the radargram; zooming and panning are handled directly by matplotlib in the bottom toolbar.
Color limits
____________
The color limits are fairly self explanatory. Change this to increase or decrease contrast.
Color map
_________
This is again self explanatory. Change the colormap as desired to improve the visualization. CEGSIC is a custom map intended for radar data and developed at St. Olaf. All other maps just link to matplotlib.
Workflow
========
Load intersections
------------------
Once the profile is loaded, before doing any picking or numbering, you likely want the context of any other profiles you have already picked. This is done through `pick > load crossprofile`. Loading the intersections should give you a string of pearls with pick numbers in each, with the dots located where the other profile hits this one. The loading is fairly simple-minded, so if you have multiple intersections, only the one where the traces in the two profiles are closest will load. Eventually this might become more clever, but the current implementation covers most use cases. You can load multiple profiles, so if you really need multiple intersections, just split the other file.
.. image:: load_cross_profile.png
Picking
-------
To begin picking, make sure you are in "edit" mode and that neither pan nor zoom is selected. If there are already some picks on the profile, you first will want to create a new pick. Picking a section must be done from left to right. You can skip portions by "NaN picking", then continue to the right and go back later to fill in the gaps. To pick, start with a left click on the layer at the left edge of the profile. After you click a second time, you should start to see the layer plotted. You should not try to pick too far away--ImpDAR will search for a reflection with the desired polarity within a certain distance, determined by the frequency, of the line connecting your clicks. If you try to make it pick through too much variability, it can miss peaks and troughs.
.. image:: first_pick.png
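The search behavior described above (a reflection sought within a frequency-determined distance of the line between clicks) can be sketched per trace; `pick_near_line` is a hypothetical simplification that just takes the amplitude maximum in the window, ignoring polarity:

```python
import numpy as np


def pick_near_line(trace, guess_idx, halfwidth):
    """Find the strongest sample within `halfwidth` of the guessed depth.

    Illustrative sketch, not ImpDAR's picker: `guess_idx` plays the role of
    the line connecting the user's clicks, and `halfwidth` the search
    distance set by the radar frequency.
    """
    lo = max(0, guess_idx - halfwidth)
    hi = min(len(trace), guess_idx + halfwidth + 1)
    return lo + int(np.argmax(trace[lo:hi]))
```

Because the search is windowed, a layer that dips faster than the window allows (too much variability between clicks) falls outside `halfwidth` and the pick lands on the wrong sample, which is exactly the failure mode warned about above.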
Now, let's say you come to a portion of the profile that you feel is ambiguous and you want to skip it. Pick up to the left side of it, then click on the right side while holding down "n". Continue clicking to the right as normal, and you will see that the portion left of where you clicked with "n", i.e. where you NaN picked, is blank.
.. image:: nan_pick.png
Now suppose you screwed up, like in the image above where it looks like you stepped down to a deeper layer by mistake, so now you want to backtrack. Right clicking will delete all picks left of the last click (generally the right end of the profile) and right of the right click.
.. image:: right_click_pick.png
We can also go back and edit a previous pick, moving it up, say. We can also delete picks in the middle of a profile by left clicking at the right edge of the span we are deleting, then right click at the left edge.
Saving
------
After picking, you need to save your picks. When you close the window, you will be prompted to save. You can also save at any time through the file menu in the upper left. If you just want to save an image of the figure, you can use the disk icon in the matplotlib toolbar or you can use the `file > save figure` from the menus. You can also export the picks as a csv file (no gdal required) or as a shapefile (needs gdal) from the `pick > export` menu.
impplot
=======
The executable syntax is described below, but see the :doc:`../examples/plotting` examples for a more useful overview of what you will get out.

.. argparse::
   :module: impdar.bin.impplot
   :func: _get_args
   :prog: impplot
impproc
=======
An executable to perform single processing steps.
This offers a lot of convenience at the command line, since you get more help with commands, more control over arguments, control over the order in which things are done, etc., but it has the disadvantage of requiring a call/load/write for every step.
You can get a list of commands with ``impproc -h``
For any individual command, you can get more help by running ``impproc [command] -h``.
Examples
--------
A sample workflow might be something like
.. code-block:: bash

   # make directories for the output
   mkdir bandpass hfilt nmo
   # Vertical bandpass from 150-450MHz (loading in the raw data with the -gssi flag)
   impproc vbp 150 450 -gssi *.DZT -o bandpass/
   # do some horizontal filtering on that output
   impproc hfilt 1000 2000 bandpass/*.mat -o hfilt
   # finally do a conversion to the time domain
   impproc nmo 10 hfilt/*.mat -o nmo
The same processing steps can be done without separating the output into different folders. At risk of file clutter, the workflow could be
.. code-block:: bash

   # Vertical bandpass from 150-450MHz (loading in the raw data with the -gssi flag)
   impproc vbp 150 450 -gssi *.DZT
   # do some horizontal filtering on that output
   impproc hfilt 1000 2000 *_vbp.mat
   # finally do a conversion to the time domain
   impproc nmo 10 *_hfilt.mat
   # Outputs are now sitting around with _vbp_hfilt_nmo before the extension
A similar example, with visualization of the outputs, is :doc:`here </../examples/processing>`.
Usage
-----
.. argparse::
   :module: impdar.bin.impproc
   :func: _get_args
   :prog: impproc
Executables
===========
ImpDAR has four executables:
:doc:`impdar <impdar>` is a generic call that can process data, load data, or plot. Using this call, you can perform a number of processing steps in one go, saving time on loading and saving, and saving disk space by not writing intermediate outputs.
:doc:`impproc <impproc>` is designed to give greater flexibility and cleaner syntax for processing. It only performs one processing step at a time, but will thus give you intermediate outputs, by default saved with names indicating the processing performed.
:doc:`impplot <impplot>` plots data, either as a radargram, as a line plot of power versus depth, or as the return power from a pick. It can either save the plot or bring it up for interactive panning and zooming.
:doc:`imppick <imppick>` calls up the interpretation GUI. Some processing can also be done through this GUI.
Contents:
.. toctree::
:maxdepth: 2
impdar
impproc
impplot
imppick

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# ImpDAR documentation build configuration file, created by
# sphinx-quickstart on Sun Jun 3 13:07:25 2018.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
needs_sphinx = '3.2.1'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
import os
import sys
import sphinx_rtd_theme
sys.path.insert(0, os.path.abspath('../impdar'))
sys.path.insert(0, os.path.abspath('../impdar/lib'))
sys.path.insert(0, os.path.abspath('../impdar/bin'))
sys.path.insert(0, os.path.abspath('..'))
extensions = ['sphinx.ext.autodoc',
'sphinxarg.ext',
'sphinx.ext.napoleon',
'sphinx.ext.mathjax',
'sphinx.ext.intersphinx',
'sphinx.ext.githubpages',
'nbsphinx']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
autodoc_mock_imports = ['sip', 'PyQt5', 'PyQt5.QtGui', 'PyQt5.QtCore', 'PyQt5.QtWidgets']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'ImpDAR'
copyright = '2019--2021, David Lilien'
author = 'David Lilien'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.1'
# The full version, including alpha/beta/rc tags.
release = '1.1.4'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'ImpDARdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'ImpDAR.tex', 'ImpDAR Documentation',
'David Lilien', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'impdar', 'ImpDAR Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'ImpDAR', 'ImpDAR Documentation',
author, 'ImpDAR', 'One line description of project.',
'Miscellaneous'),
]
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'python': ('https://docs.python.org/', None),
                       'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
                       'numpy': ('https://docs.scipy.org/doc/numpy/', None),
                       'np': ('https://docs.scipy.org/doc/numpy/', None)}


Examples
========
Jupyter notebooks
-----------------
We have set up several Jupyter notebooks so that you can interactively run through code.
The code and output can be seen through this website. To truly run and modify the code, you can get the source repository `here <https://github.com/Jakidxav/ImpDAR_tutorials>`_. Some of the data source files are fairly large (~100 MB) in order to give realistic examples, so this repository is kept separate from the main code.
.. toctree::
    :maxdepth: 2

    Getting started (notebook) <../ImpDAR_tutorials/getting_started/ImpDAR_GettingStarted.ipynb>
    Variable permittivity (notebook) <../ImpDAR_tutorials/nmo/ImpDAR_NMO_Tutorial.ipynb>
    Migration (notebook) <../ImpDAR_tutorials/migration/ImpDAR_Migration_Tutorial.ipynb>
    ApRES (notebook) <../ImpDAR_tutorials/apres/ImpDAR_ApRES_Tutorial.ipynb>
    Plotting power (notebook) <../ImpDAR_tutorials/plot_power/ImpDAR_plot_power_Tutorial.ipynb>
Additional examples
-------------------
The primary examples that might be useful are those showing the different :doc:`processing <processing>` steps and the basics of the :doc:`picking GUI <picking>`.
Additional examples show :doc:`plotting <plotting>`, both via the command line and via the API, and :doc:`loading <loading>`, though loading in ImpDAR is a single line, so those examples are trivial.
.. toctree::
    :maxdepth: 2

    loading
    processing
    migration
    plotting
    picking
Loading Examples
================
There is not a lot documented here because loading supported files is extremely straightforward in ImpDAR. Loading (i.e. converting raw radar output into the ImpDAR/StoDeep matlab format) is accomplished with the :doc:`impdar load <../bin/impdar>` command.
The only real variation amongst filetypes is that you need to tell impdar what type of input file you are using. For example, for GSSI files, ``impdar load gssi fn [fn ...]`` will produce, for each input fn, a file with an identical name but the extension '.mat'. Often, you might want to put all these outputs in a separate folder; the ``-o folder_name`` option allows specification of an output folder. If you wanted to load PulseEkko data, it would be as simple as swapping ``pe`` for ``gssi`` in the command above.


Migration
=========
What is Migration?
------------------
The goal of migration is to transform a geophysical dataset (typically seismic data, but in this case radar) into an image that accurately represents the subsurface stratigraphy. Migration is a mathematical transformation in which geophysical events (timings of wave returns) are re-located in space to where the event (the reflection) occurred in the subsurface, rather than being plotted at the time at which they were recorded by the receiver at the surface. Because off-nadir information intrudes into each trace, the image must be migrated as a whole to describe the true reflector geometry. Migration adjusts the angle of dipping reflectors, shortens and moves reflectors updip, unravels bowties, and most generally collapses diffractions.
.. image:: ./migration_figures/migration_cartoon.png
The migration problem is illustrated in the image above. A dipping reflector is imaged by an off-nadir ('apparent') reflection, so the travel time recorded for the apparent reflection does not correspond to the depth of the reflector directly below the source. The migrator's equation is a simple analytic way to adjust the angle of a dipping reflector,
.. math::

    \tan(\xi_a) = \sin(\xi)
where :math:`\xi` is the true reflector dip and :math:`\xi_a` is the apparent dip shown in the unmigrated image. While this equation is useful, it does not provide the full capability of migrating the entire image. To do that, we explore a few different methods below.
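As a quick numerical check of the migrator's equation, the snippet below recovers the true dip from an apparent dip (``true_dip`` is an illustrative helper, not part of ImpDAR):

```python
import math

def true_dip(apparent_dip_deg):
    """Invert the migrator's equation tan(xi_a) = sin(xi) to recover the
    true reflector dip xi from the apparent dip xi_a seen in an
    unmigrated image. Only valid for apparent dips below 45 degrees."""
    return math.degrees(math.asin(math.tan(math.radians(apparent_dip_deg))))

# Migration steepens dips: a 30-degree apparent dip is really ~35.26 degrees
print(round(true_dip(30.0), 2))  # -> 35.26
```

Note that the true dip is always steeper than the apparent dip, which is why migration steepens dipping reflectors.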
*Note: migration typically assumes coincident source and receiver, meaning that this processing step should be carried out after any stacking or move-out (nmo) corrections.*
Synthetic Example
-----------------
Here, we create a synthetic domain to use as an example for the ImpDAR migration routines. For this case, the permittivity is elevated within the dark blue box in the image below (:math:`\epsilon_r=12` inside and :math:`3.2` for ice outside).
.. image:: ./migration_figures/permittivity_box.png
Loading this domain into gprmax (a finite-difference time-domain modeling software), we simulate a common-offset radar survey over the box with the output as a synthetic radargram. The source is a 3-MHz wave from a Hertzian Dipole antenna. Source-receiver antenna separation is 40 m, and the step size between traces is 4 m.
.. image:: ./migration_figures/synthetic.png
This synthetic image illustrates why we need to migrate. Large hyperbolae extend away from the actual location of the box in both horizontal directions. These hyperbolae, or diffraction curves, do not accurately represent the subsurface stratigraphy; they are only a result of imaging the box from the side as an off-nadir reflector.
Kirchhoff Migration
___________________
The first migration method that we use here is the most direct to explain conceptually. Originally (~1920s), geophysical datasets were migrated by hand, and this method follows the logic used then. The energy is integrated along each diffraction curve and placed at the apex of the curve (Hagedoorn, 1954). The diffraction curves are expected to be hyperbolic (in a constant-velocity medium they will be), so here we iterate through each point of the image, looking for a hyperbolic diffraction curve around that point and integrating the power along it.
``impdar migrate --mtype kirch synthetic.mat``
.. image:: ./migration_figures/synthetic_migrated_kirch.png
Now we can see the box in its original location (i.e. ~30-55 km lateral distance and ~30 m depth). This method seems to work, but it is slow (even for this small synthetic dataset), and it 'over-migrates' through much of the domain, as can be seen in the upward-facing hyperbolae ('smileys') around the edges and below the box.
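The hyperbolae that Kirchhoff migration integrates along come directly from the two-way travel-time geometry. A minimal sketch of that geometry (the function and the ~169 m/µs radar wave speed in ice are illustrative, not ImpDAR internals):

```python
import math

def diffraction_time(x, x0, d, v):
    """Two-way travel time recorded at surface position x for a point
    scatterer at horizontal position x0 and depth d, assuming a
    coincident source/receiver and a constant velocity v."""
    return 2.0 * math.sqrt(d ** 2 + (x - x0) ** 2) / v

V_ICE = 169.0  # approximate radar wave speed in ice, m per microsecond

# The apex of the hyperbola sits directly above the scatterer; arrivals
# recorded off to the side are later, tracing out the diffraction curve.
apex = diffraction_time(0.0, 0.0, 30.0, V_ICE)
flank = diffraction_time(40.0, 0.0, 30.0, V_ICE)
```

Kirchhoff migration sums the recorded energy along each such curve and places it back at the apex.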
Summary of Kirchhoff Migration:
• Strengths - Conceptually simple, Migrates steeply dipping reflectors.
• Weaknesses - Slow, Over migrates, No lateral velocity variation.
Stolt Migration
_______________
Migration is most commonly done in the frequency domain. In this case, the transformation is one from vertical frequency (:math:`\omega_z`) to vertical wavenumber (:math:`k_z`); thus, these migration routines are grouped as 'frequency-wavenumber' routines. The transformation is done in the frequency domain, so a 2-D Fourier transform is used before the migration and an inverse Fourier transform after. There are many such migration routines; here I highlight a couple of popular ones that have been implemented in ImpDAR.
The first, and probably the simplest, of the frequency-wavenumber migration routines is 'Stolt Migration'. Stolt Migration is done over the entire domain simultaneously, so it requires the assumption of a constant velocity throughout. The transformation is
.. math::

    P(x, z, t=0) = \int \int \left[ \frac{v}{2} \frac{k_z}{\sqrt{k_x^2+k_z^2}} \right] P\left(k_x, 0, \frac{v}{2} \sqrt{k_x^2 + k_z^2}\right) e^{-ik_x x - ik_z z} \, dk_x \, dk_z
where an interpolation is done from :math:`\omega_z` to :math:`k_z` in frequency-space. The routine is implemented in ImpDAR as,
``impdar migrate --mtype stolt synthetic.mat``
.. image:: ./migration_figures/synthetic_migrated_stolt.png
Stolt migration is great in places where the velocity is known to be constant. It is quite a bit faster than the other routines. Here though, we need to be careful about migrating power in from the edges of the domain, as can be seen in the lower corners above. For this reason, we apply a linear taper to the data so that the Fast Fourier Transform has a smooth transition from data to the zeros that it fills in around the edges.
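The edge taper mentioned above can be sketched as follows (a minimal stand-in; ImpDAR's internal implementation may differ):

```python
def linear_taper(trace, n_taper):
    """Ramp the first and last n_taper samples of a trace linearly down to
    zero so that the FFT sees a smooth transition from data to the zero
    padding at the edges, rather than a sharp jump."""
    out = list(trace)
    for i in range(min(n_taper, len(out))):
        weight = i / float(n_taper)
        out[i] *= weight
        out[-1 - i] *= weight
    return out

tapered = linear_taper([1.0] * 10, 4)  # ends go to zero, middle untouched
```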
Summary of Stolt Migration:
• Strengths – Fast, Resolves steeply dipping layers.
• Weaknesses – Constant velocity.
Phase-Shift Migration
_____________________
The second family of frequency-wavenumber migration routines is a set called phase-shift migration (sometimes Gazdag migration). A phase-shifting operator :math:`e^{-ik_z z}` is applied at each z-step in downward continuation. These methods are advantageous in that they allow the velocity to vary as one steps down. Generally, only vertical velocity variation is allowed, but there is also a variant that accommodates lateral velocity variation (phase-shift plus interpolation).
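The phase-shift operator itself is a pure rotation in the complex plane, so it moves energy in depth without changing its amplitude. A minimal illustration on a single Fourier component (not ImpDAR's implementation):

```python
import cmath

def downward_continue(component, k_z, dz):
    """Apply the phase-shift operator e^{-i k_z dz} to one Fourier
    component of the wavefield, continuing it down by one depth step dz."""
    return component * cmath.exp(-1j * k_z * dz)

# The magnitude is preserved; only the phase changes.
shifted = downward_continue(1.0 + 0.0j, 2.0, 0.5)
```

In practice this is applied to every :math:`(k_x, \omega)` component at every depth step, with :math:`k_z` recomputed from the local velocity.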
``impdar migrate --mtype phsh synthetic.mat``
Constant velocity phase-shift migration is the default in ImpDAR, so it can also be called as,
``impdar migrate synthetic.mat``
.. image:: ./migration_figures/synthetic_migrated_phsh.png
Much like the result from Kirchhoff migration, we see upward dipping ‘smileys’ in this migrated image.
Summary of Phase-Shift Migration:
• Strengths – Accommodates velocity variation (particularly appropriate for vertical variations, i.e. in snow/firn or similar).
• Weaknesses – Maximum dip angle.
SeisUnix Migration Routines
___________________________
There are many migration routines implemented in the seismic processing package SeisUnix. With ImpDAR, we have no intent to replicate the work that they have done; instead, we allow the user to easily convert radar data to .segy, migrate with SeisUnix, then convert back, all in a kind of black-box fashion with only one command. If SeisUnix is not installed, this command will raise an error.
``impdar migrate --mtype sumigtk synthetic.mat``
.. image:: ./migration_figures/synthetic_migrated_sumigtk.png
Data Example
------------
Below is a real example of migration in ImpDAR for 3-MHz ground-based data from the Northeast Greenland Ice Stream (Christianson et al., 2014).
Unmigrated Data:
.. image:: ./migration_figures/NEGIS_unmigrated.png
Stolt:
.. image:: ./migration_figures/NEGIS_stolt.png
Phase-Shift:
.. image:: ./migration_figures/NEGIS_phsh.png
SeisUnix T-K:
.. image:: ./migration_figures/NEGIS_sumigtk.png
References:

Yilmaz (2001). Seismic Data Processing.

Sheriff and Geldart (1995). Exploration Seismology.

Hagedoorn (1954). A Process of Seismic Reflection Interpretation. *Geophysical Prospecting*.

Stolt (1978). Migration by Fourier Transform. *Geophysics*.

Gazdag (1978). Wave Equation Migration with the Phase-Shift Method. *Geophysics*.

Christianson et al. (2014). Dilatant till facilitates ice-stream flow in northeast Greenland. *Earth and Planetary Science Letters*.
Picking examples
================
It is hard to give examples of GUI use, but the different functionalities are fairly well documented along with :doc:`../bin/imppick`. In particular, look through the workflow on that page for a normal procedure for interpreting a profile.
=================
Plotting examples
=================
Visualizing output is an essential part of processing and disseminating radar data. You will likely need to look at the output many times to discern the effect of different processing steps, and then you will likely want to make a figure at the end. With these two use cases in mind, ImpDAR provides both command-line plotting, for quick and easy visualization, and API calls for more customized plots.
impplot
=======
ImpDAR permits you to make plots by calling :code:`impdar plot [fns]` with a few options, but I recommend using :code:`impplot` instead: the syntax is cleaner, and it is clearer what you are doing. There are different types of plots you can make with :code:`impplot`, described below, but it is worth noting first that you can always add the :code:`-s` flag to save the output to a file rather than pulling it up in a matplotlib figure window.
radargram
---------
The most common thing to plot is probably the full radargram. The basic syntax is :code:`impplot rg [fns]`, with additional options described with :doc:`../bin/impplot`. When you run :code:`impplot rg` you will get something like this popping up in an interactive window.
.. image:: rg_plot.png
You can pan and zoom around the plot to determine what other processing steps you might want to take. You can do this with multiple filenames and get a group of plots. If there are picks in the file, these will be displayed as well, though you can deactivate this feature with :code:`-nopicks`.
traces
------
Sometimes you may want to look at how the samples in an individual trace, or group of traces, vary with depth. A range of traces can be plotted with :code:`impplot [fns] trace_start trace_end`. The output is something like this.
.. image:: trace_plot.png
power
-----
This command is used to look at the spatial variability in reflected power. You will get a single plot with the return power of a given layer in all the profiles called. The syntax is :code:`impplot [fns] layer`. If there are projected coordinates, those will be used; otherwise you are stuck with lat/lon. The result for two crossing profiles might look something like this.
.. image:: power_plot.png
API
===
There are several reasons you might want to use an API call rather than :code:`impplot`: perhaps you want to modify the output, by annotating it or plotting on top of it; you may want to put a panel made by ImpDAR in a figure with other subplots; or maybe you just need multiple panels plotted by ImpDAR. In these cases, I recommend loading the data and then using the explicit plotting functions in :doc:`the plotting library <../lib/Plotting>`. I'll just give an example of a several-panel plot with all panels produced through ImpDAR. Say you want to make a 3-panel plot showing two profiles and the power returned from both. You could use
.. code-block:: python

    import matplotlib.pyplot as plt
    from impdar.lib import RadarData, plot

    # Load the data we are using; in this case they are already processed
    profile_1 = RadarData.RadarData('along_picked.mat')
    profile_2 = RadarData.RadarData('cross_picked.mat')

    # Make the figure we will plot upon--need some space between axes
    fig, (ax1, ax2, ax3) = plt.subplots(1, 3, gridspec_kw={'wspace': 1.0}, figsize=(12, 8))

    # Plot the two radargrams
    plot.plot_radargram(profile_1, fig=fig, ax=ax1)
    plot.plot_radargram(profile_2, fig=fig, ax=ax2)

    # Now look at their return power in space on layer 5
    plot.plot_power([profile_1, profile_2], 5, fig=fig, ax=ax3)

    # Document what we are looking at
    ax1.set_title('Along flow')
    ax2.set_title('Across flow')
    ax3.set_title('Layer 5\nreturn power')

    # See how it turned out
    plt.show()
And this will produce a nice 3-panel figure (though we would certainly want to do a better job with reasonable aspect ratios for most applications).
.. image:: api_plot.png


Processing examples
===================
There are three main options for processing. The first is ``impdar proc``, which can apply multiple processing steps in a single command. ``impproc`` allows simpler syntax and greater flexibility but can only apply one processing step at a time. Finally, there are processing options within the picking GUI that let you see the effects of each step immediately, though replotting can be expensive for large datasets and batch processing with the GUI is not possible.
impdar proc
-----------
With ``impdar proc``, you can perform a number of processing steps in a single line. We are starting with data in crossprofile.mat.
.. image:: crossprofile.png
This profile does not have anything above the first return; often we would have started recording earlier and would have some samples at the top that we want to delete. There is a lot of variability in the overall return power between traces (resulting from the data collection, not from sub-surface variability), and there is also a lot of noise. Vertically bandpassing the data between 200 and 600 MHz, adaptively horizontally filtering, stacking 3 traces, and doing a normal move-out correction with no transmit-receive separation only requires running
``impdar proc -vbp 200 600 -ahfilt -restack 3 -nmo 0 1.69 crossprofile.mat``
and then the output is saved in crossprofile_proc.mat.
.. image:: crossprofile_proc.png
impproc
-------
``impproc`` provides a bit cleaner syntax than ``impdar proc`` but accomplishes the same tasks. It is often useful to see the effect of each processing step individually, and ``impproc`` gives named outputs for each step that allow easy identification and organization. We will use the same example as above, starting with this raw data in crossprofile.mat
.. image:: crossprofile.png
First, let's do some vertical filtering. As before, we will vertically bandpass with a 5th-order forward-backward Butterworth filter from 200 to 600 MHz.
``impproc vbp 200 600 crossprofile.mat``
This gives an output in 'crossprofile_bandpassed.mat'. We can see that this has removed most of the noise.
.. image:: crossprofile_bandpassed.png
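The vertical bandpass can be reproduced outside ImpDAR with scipy (one of ImpDAR's own dependencies). This is a sketch of the same idea, a 5th-order forward-backward Butterworth filter, not ImpDAR's exact code; the array shape and sampling interval here are made up for the example:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def vertical_bandpass(data, low_mhz, high_mhz, dt):
    """Zero-phase (forward-backward) 5th-order Butterworth bandpass along
    the vertical (sample) axis of a (snum, tnum) radargram.
    dt is the sampling interval in seconds."""
    nyquist_hz = 0.5 / dt
    b, a = butter(5, [low_mhz * 1e6 / nyquist_hz, high_mhz * 1e6 / nyquist_hz],
                  btype='band')
    return filtfilt(b, a, data, axis=0)

# 2048 samples per trace at 0.5 ns sampling -> 1 GHz Nyquist frequency
rng = np.random.default_rng(0)
noisy = rng.standard_normal((2048, 10))
filtered = vertical_bandpass(noisy, 200.0, 600.0, 0.5e-9)
```

Because the filter is run forward and then backward (``filtfilt``), it introduces no phase shift, so reflector timing is preserved.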
We still probably have some noise coming in horizontally (e.g. long-wavelength changes in return power due to our batteries draining in the radar controller). To remove this, we can remove something akin to the average trace.
``impproc ahfilt crossprofile_bandpassed.mat``
Which gives us 'crossprofile_bandpassed_ahfilt.mat'. This looks about the same, though layers have become slightly more clear.
.. image:: crossprofile_bandpassed_ahfilt.png
Since layer slopes are small, we have lots of extraneous data. We can restack to reduce noise a bit more and reduce filesize.
``impproc restack 3 crossprofile_bandpassed_ahfilt.mat``
The output is in 'crossprofile_bandpassed_ahfilt_restacked.mat'. Again, this looks about the same, but we have reduced the filesize by about a factor of 3.
.. image:: crossprofile_bandpassed_ahfilt_restacked.png
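Restacking is just averaging over groups of adjacent traces. A minimal sketch of the idea (ImpDAR's restack may treat leftover traces differently):

```python
def restack(traces, n):
    """Average every n adjacent traces. Incoherent noise is reduced by
    roughly sqrt(n) and the number of traces (and so the file size)
    drops by a factor of n; leftover traces at the end are dropped here."""
    stacked = []
    for start in range(0, len(traces) - n + 1, n):
        group = traces[start:start + n]
        stacked.append([sum(samples) / float(n) for samples in zip(*group)])
    return stacked

# Four 2-sample traces stacked 3 at a time leave one averaged trace
out = restack([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]], 3)
```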
Now we want to look at this in terms of depth. We are going to do this with a constant vertical velocity. This particular dataset was collected with a GSSI radar with a single transmit/receive antenna, so there is no need for the geometric correction for the triangular transmit/receive pattern that we would need with spatially separated antennas (as many HF systems have).
``impproc nmo 0 crossprofile_bandpassed_ahfilt_restacked.mat``
The output is in 'crossprofile_bandpassed_ahfilt_restacked_nmo.mat'. The plot looks identical to before, but we see that the y-axis is now in depth.
.. image:: crossprofile_bandpassed_ahfilt_restacked_nmo.png
If the permittivity is not constant (for example, with variable snow/firn density), we want to make that correction here as well. Optionally, pass a .csv filename as a string to the nmo filter (e.g. rho_profile='__filename__.csv'). The file should have two columns: depth and density. ImpDAR has a couple of options for permittivity models, the default being the DECOMP mixing model for firn permittivity (Wilhelms, 2005). As an example, here is a measured density profile with modeled permittivity and velocity profiles,
.. image:: density_permittivity_velocity.png
ImpDAR then takes the modeled velocities and updates the depth profile,
.. image:: time_depth.png
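The chain from density to permittivity to velocity to depth can be sketched in plain Python. For illustration this uses the Looyenga mixing relation rather than the DECOMP model that ImpDAR actually defaults to, and the constants (relative permittivity of ice 3.15, ice density 917 kg/m³) are typical textbook values:

```python
import math

C = 300.0  # speed of light in vacuum, m per microsecond

def firn_velocity(density_kg_m3, eps_ice=3.15, rho_ice=917.0):
    """Radar wave speed in firn of a given density, via the Looyenga
    mixing relation eps**(1/3) = 1 + nu * (eps_ice**(1/3) - 1), where
    nu is the ice volume fraction (a stand-in for ImpDAR's DECOMP model)."""
    nu = density_kg_m3 / rho_ice
    eps = (1.0 + nu * (eps_ice ** (1.0 / 3.0) - 1.0)) ** 3
    return C / math.sqrt(eps)

def twt_to_depth(travel_times_us, velocities):
    """Convert increasing two-way travel times (microseconds) to depths by
    accumulating v * dt / 2 over each interval."""
    depths, depth, t_prev = [], 0.0, 0.0
    for t, v in zip(travel_times_us, velocities):
        depth += v * (t - t_prev) / 2.0
        depths.append(depth)
        t_prev = t
    return depths

# Low-density firn is faster than solid ice (~169 m/us)
v_snow, v_ice = firn_velocity(400.0), firn_velocity(917.0)
```

With a density profile in hand, one would evaluate ``firn_velocity`` layer by layer and feed the result to ``twt_to_depth`` to build the updated depth axis.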
For some datasets, diffraction hyperbolae distort the image, moving much energy away from the true location of the reflecting surface. In these cases, migration is an optional processing step which moves the energy back to its appropriate position in the image. For a more thorough review of the migration routines implemented in ImpDAR, see the next example page on migration.
GUI
---
After running ``imppick``, the GUI has a 'processing' menu.
.. image:: proc.png
These options should be self explanatory. If additional arguments are needed by the processing step, a dialog box will be raised. For example, cropping requires information about where you want to crop.
.. image:: dialog.png
There is no automatic saving when processing with the GUI; save your results with File > Save (or Ctrl/Cmd+S).


.. ImpDAR documentation master file, created by
   sphinx-quickstart on Sun Jun 3 13:07:25 2018.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.
Welcome to ImpDAR's documentation!
==================================
ImpDAR is a flexible, open-source impulse radar processor that provides most of the benefits of expensive commercial software (and some additional features). The starting point was the old St. Olaf deep radar matlab code. That code has a long history of contributors--I've tried to preserve acknowledgment of many of them in the file headers.
Support is gradually being added for a variety of file formats. Currently, `GSSI <http://www.geophysical.com>`_, `PulseEKKO <http://www.sensoft.ca>`_, `Ramac <http://www.malagpr.com>`_, `Blue Systems <http://www.bluesystem.ca/ice-penetrating-radar.html>`_, DELORES, SEGY, `gprMAX <http://gprmax.com>`_, Gecko, and legacy StoDeep files are supported. Additionally, there is support for the BAS ApRES systems (though the processing chain is separate and documentation is not yet complete). Available processing steps include various filtering operations, trivial modifications such as restacking, cropping, or reversing data, and a few different geolocation-related operations like interpolating to constant trace spacing. The primary interface is through the command line, which allows efficient processing of large volumes of data. An API, centered around the RadarData class, is also available to allow the user to use ImpDAR in other programs.
In addition to processing, ImpDAR can also be used for picking reflectors. Picking is generally an interactive process, and there is a light GUI for doing the picking. The GUI also provides support for basic processing operations, so you can see the effect of steps as you go along.
Requirements
------------
Python 3.6+ (other versions may work but are untested),
`numpy <http://www.numpy.org>`_,
`scipy <http://www.scipy.org>`_,
`matplotlib <http://matplotlib.org>`_,
`SegYIO <https://github.com/equinor/segyio/>`_,
`h5py <https://h5py.org>`_.
To do anything involving geolocation, you will also need `GDAL <http://gdal.org>`_. The GUI, which is needed to be able to pick reflectors, requires `PyQt5 <https://pypi.org/project/PyQt5/>`_.
.. include:: installation.rst
Examples
--------
Check out the :doc:`examples <examples/index>`, particularly the Jupyter notebook examples beginning with :doc:`getting started <ImpDAR_tutorials/getting_started/ImpDAR_GettingStarted>`, for an idea of how to run ImpDAR. These should be a good starting point that can be modified for a particular use case. While all of the input and output are on this website, if you actually want to run the code you can download all the notebooks and run them yourself. You can get them `here <https://github.com/Jakidxav/ImpDAR_tutorials>`_.
Contributing
------------
I would be thrilled to get pull requests for any additional functionality. In particular, it is difficult for me to add support for input formats for which I do not have example data--any development of readers for additional data types would be greatly appreciated.
.. toctree::
    :maxdepth: 2
    :caption: Contents:

    installation.rst
    lib/index.rst
    bin/index.rst
    examples/index.rst
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
Installation
------------
Beginner
________
If you do not have a current (3.6+) python installation, you will need one to begin.
I recommend getting python 3 from `anaconda <https://anaconda.org/>`_.
The Anaconda installer is straightforward to use, and you can let it set up your path, which makes the subsequent commands "just work."
However, Anaconda on Windows suggests not putting it on your path and instead using the Anaconda prompt.
The procedure is the same--just open an anaconda prompt window after installation then continue.
If you are on MacOS or Linux, you will want to restart your terminal after installing Anaconda so you get updated path specs.
Next, we need to install dependencies. GDAL is needed for accurate measurement of distance and for converting coordinate systems.
I recommend getting it, along with segyio (used for interacting with the SEGY data format), using

.. code-block:: bash

    conda install -c conda-forge gdal segyio

This step can be really slow, so do not worry if it is a bit painful.
At this point, I also recommend installing h5py--if you do not, pip will do it in the next step. This can be done with

.. code-block:: bash

    conda install h5py
Now you are ready to install ImpDAR. You can get a released version with

.. code-block:: bash

    pip install impdar

If you are not a superuser, you may get an error related to permissions. This is fine; you just need to install for yourself only. Use

.. code-block:: bash

    pip install --user impdar
You should now be all set to start using ImpDAR. Scroll down for documentation and links for examples.
Advanced
________
If you are not using Anaconda, you are on your own for installing dependencies. The challenges are generally GDAL and PyQt5, since these rely on libraries in other languages. For the most basic use cases, you can skip these and go straight to installing ImpDAR with pip or through GitHub.
Since a lot of development is happening, you will want to use the development branch from GitHub to be sure that you have the newest version of ImpDAR; the PyPI (pip) version is not updated as often, to ensure a stable release. To get the devel version off git,

.. code-block:: bash

    git clone -b devel https://github.com/dlilien/ImpDAR.git
    cd ImpDAR
    python setup.py install

This requires `git <https://git-scm.com/downloads>`_.
If you want to have the full suite of migration options, you will need to install `seisunix <https://github.com/JohnWStockwellJr/SeisUnix/wiki>`_.
The SeisUnix install is a bit complicated, but there are instructions with it.
It should be possible to use SeisUnix on Windows with CygWin then interface with ImpDAR, but this is untested.
ImpdarError
===========
.. autoexception:: impdar.lib.ImpdarError.ImpdarError
API
===
This section documents the classes and functions of the libraries underlying ImpDAR. These really are the workhorses behind the executables that you would use for command-line processing. On the other hand, if you want to integrate the processing steps implemented by ImpDAR into another program, you will be interacting with these libraries.
The central component of ImpDAR processing is the :class:`~impdar.lib.RadarData.RadarData` class. Not only does this object store all the radar returns and auxiliary information, it also has a number of methods for processing.
Some processing steps may be implemented separately from the :class:`~impdar.lib.RadarData.RadarData` class. At present, just :func:`concatenation <impdar.lib.process.concat>` is separate, because it acts on multiple :class:`~impdar.lib.RadarData.RadarData` objects.
Contents:

.. toctree::
    :maxdepth: 2

    RadarData
    Plotting
    Picking
    load
    process
    ImpdarError
Loading data
============
These are functions for loading radar data, generally from raw formats, to be used in a program or saved in ImpDAR's .mat format and used later.
For every filetype that ImpDAR can handle (e.g. GSSI .DZT files, gprMax .h5 files), there is a dedicated file for loading that filetype in `impdar/lib/load`. These files generally define a single method, which returns an `impdar.lib.RadarData.RadarData` object with information specific to the filetype loaded in. The user does not need to interact with these files (unless they need to add functionality). For some of the systems documentation is sparse, which is a challenge, while for others documentation is readily available (e.g. `Blue Systems <https://iprdoc.readthedocs.io/en/latest/>`_).
Instead, to load data for interactive use, a generic `load` command, which takes a filetype as an argument, is defined in `impdar.lib.load.__init__`. This wrapper provides some conveniences for handling multiple files as well. There is also a `load_and_exit` command in that file, which can be used if the user does not want to interact with the data at load time, but wants the filetype converted to ImpDAR's .mat for convenience.
.. automethod:: impdar.lib.load.load
.. automethod:: impdar.lib.load.load_and_exit
Interpretation
==============
Interpretation in this context primarily means picking layers (either isochrones or the bed). In the future, this functionality may be expanded to make picking other things, e.g. some discontinuity, easier.
Functions used for picking
--------------------------
.. automodule:: impdar.lib.picklib
    :members:
Classes used by interpreter
---------------------------
These classes are broken down to match the structure of StODeep, so we store information about how the picks get made, and the picks themselves, using different objects.
If you have done some interpretation, you will likely want to subsequently interact with the `Picks` object. Often, this can be done without accessing the API by converting the picks/geospatial data to another form, e.g. with `impdar convert shp fn_picked.mat`. You can also make plots with the picks on top of the radar data, or with the return power in geospatial coordinates, using `impplot rg fn_picked.mat` or `impplot power fn_picked.mat layer_num`. For further operations, you will probably want to access the `Picks` object described next. For example, using the picks object you could do something like
.. code-block:: python
import numy as np
import matplotlib.pyplot as plt
from impdar.lib import RadarData
rd = RadarData('[PICKED_DATA_FN.mat]')
# make a basic plot of the radargram
fig, ax = plt.subplots()
im, _, _, _, _ = plot.plot_radargram(rd, fig=fig, ax=ax, xdat='dist', ydat='depth', return_plotinfo=True)
# calculate the return power
c = 10. * np.log10(rd.picks.power[0, :])
c -= np.nanmax(c)
# plot the return power on the layer, being careful of NaNs
mask = ~np.isnan(rd.picks.samp1[0, :])
cm = ax.scatter(rd.dist[mask.flatten()],
rd.nmo_depth[rd.picks.samp1[0, :].astype(int)[mask]],
c=c.flatten()[mask.flatten()],
s=1)
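The dB conversion used for ``c`` above can be checked in isolation. This is a self-contained sketch with made-up power values standing in for ``rd.picks.power[0, :]``:

```python
import numpy as np

# Synthetic return power for one layer; in ImpDAR this would come from
# rd.picks.power[0, :] (these values are made up for illustration)
power = np.array([1e-3, 5e-4, np.nan, 2e-3])

# Convert to dB and reference to the layer maximum, ignoring NaNs,
# so the brightest trace sits at 0 dB
c = 10. * np.log10(power)
c -= np.nanmax(c)
```

NaNs propagate harmlessly through the conversion, which is why the plotting example masks them before calling ``scatter``.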
.. automodule:: impdar.lib.Picks
:members:
.. automodule:: impdar.lib.PickParameters
:members:
Plotting
========
.. automodule:: impdar.lib.plot
:members:
Processing
==========
.. automodule:: impdar.lib.process
:members:
RadarData
=========
This page contains the documentation for the RadarData class, which is the basic object in ImpDAR.
If you are interacting with the API in a significant way, this is where you will find documentation for most of the things you care about, particularly how the data are stored and how to do basic processing steps on them.
All of the files defining the class are in impdar/lib/RadarData, with the basic initialization and class properties found in __init__.py and additional functionality spread across _RadarDataSaving.py, _RadarDataFiltering.py, and _RadarDataProcessing.py.
RadarData Base
--------------
.. autoclass:: impdar.lib.RadarData.RadarData
:members: attrs_guaranteed, attrs_optional, chan, data, decday, dist, dt, lat, long, pressure, snum, tnum, trace_int, trace_num, travel_time, trig, trig_level, nmo_depth, elev, x_coord, y_coord, fn, check_attrs
Saving RadarData
----------------
These are all instance methods for saving information from a RadarData object.
They are defined in impdar/lib/RadarData/_RadarDataSaving.py.
.. automethod:: impdar.lib.RadarData.__init__.RadarData.save
.. automethod:: impdar.lib.RadarData.__init__.RadarData.save_as_segy
.. automethod:: impdar.lib.RadarData.__init__.RadarData.output_shp
.. automethod:: impdar.lib.RadarData.__init__.RadarData.output_csv
Processing RadarData
--------------------
These are all instance methods for processing data on a RadarData object.
They are defined in impdar/lib/RadarData/_RadarDataProcessing.py.
.. automethod:: impdar.lib.RadarData.__init__.RadarData.reverse
.. automethod:: impdar.lib.RadarData.__init__.RadarData.nmo
.. automethod:: impdar.lib.RadarData.__init__.RadarData.crop
.. automethod:: impdar.lib.RadarData.__init__.RadarData.restack
.. automethod:: impdar.lib.RadarData.__init__.RadarData.rangegain
.. automethod:: impdar.lib.RadarData.__init__.RadarData.agc
.. automethod:: impdar.lib.RadarData.__init__.RadarData.constant_space
.. automethod:: impdar.lib.RadarData.__init__.RadarData.elev_correct
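The idea behind a method like ``restack`` can be sketched independently of ImpDAR. This is not ImpDAR's implementation, just the underlying trace-averaging concept; the array shape and stacking factor are illustrative:

```python
import numpy as np

# Fake radargram: snum x tnum array of pure noise (illustrative only)
rng = np.random.default_rng(0)
data = rng.standard_normal((256, 100))

# Average every `stack` adjacent traces; incoherent noise drops roughly
# as 1/sqrt(stack) while coherent reflectors are preserved
stack = 5
tnum_trimmed = data.shape[1] - data.shape[1] % stack
restacked = data[:, :tnum_trimmed].reshape(data.shape[0], -1, stack).mean(axis=2)
```

Restacking trades horizontal resolution for signal-to-noise ratio, which is why it is usually applied before picking faint internal layers.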
Filtering Radar Data
--------------------
These are all instance methods for filtering data to remove noise.
They are defined in impdar/lib/RadarData/_RadarDataFiltering.py.
.. automethod:: impdar.lib.RadarData.__init__.RadarData.migrate
.. automethod:: impdar.lib.RadarData.__init__.RadarData.vertical_band_pass
.. automethod:: impdar.lib.RadarData.__init__.RadarData.adaptivehfilt
.. automethod:: impdar.lib.RadarData.__init__.RadarData.horizontalfilt
.. automethod:: impdar.lib.RadarData.__init__.RadarData.highpass
.. automethod:: impdar.lib.RadarData.__init__.RadarData.winavg_hfilt
.. automethod:: impdar.lib.RadarData.__init__.RadarData.hfilt
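The core operation behind a vertical band-pass can be sketched with a zero-phase Butterworth filter on a single synthetic trace. The filter design, order, and frequencies here are illustrative assumptions, not necessarily what ``vertical_band_pass`` uses internally:

```python
import numpy as np
from scipy import signal

# Synthetic trace: a 2 MHz reflection plus 20 MHz noise at 100 MHz sampling
fs = 100e6
t = np.arange(2048) / fs
trace = np.sin(2 * np.pi * 2e6 * t) + 0.5 * np.sin(2 * np.pi * 20e6 * t)

# Second-order Butterworth band-pass between 1 and 5 MHz, applied forward
# and backward (filtfilt) so reflector arrival times are not phase-shifted
b, a = signal.butter(2, [1e6 / (fs / 2), 5e6 / (fs / 2)], btype='band')
filtered = signal.filtfilt(b, a, trace)
```

The zero-phase application matters for radar data: a causal filter would delay every reflector and bias subsequent depth picks.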
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = ../impdar
SOURCEDIR = .
BUILDDIR = ../impdar-sphinx/
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
sphinx>=3.2.1
nose
h5py
matplotlib>=2.0.0
numpy>=1.14.0
scipy>=1.0.0
sphinx-argparse
nbsphinx
FROM quay.io/pypa/manylinux2014_x86_64
COPY entrypoint.sh /entrypoint.sh
RUN /opt/python/cp37-cp37m/bin/pip install twine
ENTRYPOINT ["/entrypoint.sh"]
#! /bin/sh
#
# entrypoint.sh
# Copyright (C) 2021 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the MIT license.
#
export PLAT=manylinux2014_x86_64
# Do a normal build
for PYBIN in /opt/python/cp3[7-9]-cp*/bin; do
"${PYBIN}/pip" install numpy==1.19.0 cython;
"${PYBIN}/pip" wheel --no-deps -w /github/workspace/wheelhouse/ .;
done
# Make the wheels into manylinux
ls wheelhouse/*.whl
for whl in wheelhouse/*.whl; do
auditwheel repair "$whl" --plat $PLAT -w /github/workspace/dist/;
done
#
# Makefile
# dlilien, 2018-12-06 13:15
#
#
PYQT=pyuic5
UI_SOURCES=RawPickGUI.ui
UI_PY=$(UI_SOURCES:.ui=.py)
all: ui
ui: $(UI_PY)
$(UI_PY): %.py: %.ui
$(PYQT) -x $< -o $@
sed 's/mplfigcanvaswidget/.mplfigcanvaswidget/' $@ | grep -v setShortcutVisibleInContextMenu > dum
mv dum $@
# vim:ft=make
#


#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
"""
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3 license.
"""
Empirical attenuation calculations from Hills et al. (2020)
Author:
Benjamin Hills
bhills@uw.edu
University of Washington
Earth and Space Sciences
Sept 26 2019
"""
import numpy as np
from scipy import stats
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
### Single-Reflector Methods
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
def attenuation_method2(dat,picknum,sigPc=0.,sigZ=0.,Cint=.95,u=1.69e8,*args, **kwargs):
"""
### Method 2 from the attenuation framework (Hills et al., 2020) ###
Method 1 is the same but only for the bed reflector, as is most common.
Based on Jacobel et al. (2009)
This method fits a line to the measured power for an individual reflector (in log space)
The resulting attenuation rate represents a depth-averaged value
over the depth range which the layer spans.
Parameters
----------
picknum: int
pick number to do the attenuation calculation on
sigPc: float; optional
standard deviation in measured power (error) used to constrain the regression
defaults to 0 (i.e. simple regression)
sigZ: float; optional
standard deviation in measured depth (error) used to constrain the regression
defaults to 0 (i.e. simple regression)
Cint: float; optional
confidence interval with which to describe the resulting attenuation error
default 95%
u: float; optional
light velocity in ice
Output
----------
N: float
One-way attenuation rate (dB/km)
Nerr: float
Error in one-way attenuation rate (dB/km)
"""
# get a pick depth
if 'z' in vars(dat.picks):
Z = dat.picks.z
else:
print('Warning: setting pick depth for constant velocity in ice.')
Z = dat.picks.time*u/2/1e6
# Get pick from index and remove nan values
Pc = 10.*np.log10(dat.picks.corrected_power[picknum])
Z = Z[picknum]
idx = ~np.isnan(Pc) & ~np.isnan(Z)
Pc = Pc[idx]
Z = Z[idx]
# Convert to km
if np.any(Z > 10.):
Z/=1000.
if sigZ > .1:
sigZ/=1000.
Szz = np.sum((Z-np.mean(Z))**2.)
Spp = np.sum((Pc-np.mean(Pc))**2.)
Szp = np.sum((Z-np.mean(Z))*(Pc-np.mean(Pc)))
if sigZ == 0 and sigPc == 0:
# Simple regression
N = -(Szp)/Szz
alpha = np.mean(Pc) + N*np.mean(Z)
# Error based on vertical distance from line only
Pc_err = np.sum((Pc - ((-N)*Z + alpha))**2.)
sigN = np.sqrt(Pc_err/Szz/(len(Z)-2))
tscore = stats.t.ppf(1.-(1.-Cint)/2., len(Z)-2)
Nerr = tscore*sigN
else:
# Deming regression after Casella and Berger (2002) section 12.2
lam = (sigZ**2.)/(sigPc**2.)
# Regression slope, eq. 12.2.16
N = -(-Szz+lam*Spp+np.sqrt((Szz-lam*Spp)**2.+4.*lam*Szp**2.))/(2.*lam*Szp)
alpha = np.mean(Pc) + N*np.mean(Z)
# Standard deviation in slope 12.2.22
sigN = np.sqrt(((1.+lam*N**2.)**2.*(Szz*Spp-Szp**2.))/((Szz-lam*Spp)**2.+4.*lam*Szp**2.))
tscore = stats.t.ppf(1.-(1.-Cint)/2., len(Z)-2)
# Error using Gleser's Modification with 95% confidence interval
Nerr = tscore*sigN/(np.sqrt(len(Z)-2))
# Final Output as a one-way rate in dB/km
N *= 1/2.
Nerr *= 1/2.
return N,Nerr
# -----------------------------------------------------------------------------------------------------
def attenuation_method3(dat,picknum,Ns=np.arange(30.),Nh_target=1.,Cw=0.1,win_init=100,win_step=100,u=1.69e8):
"""
### Method 3 from the attenuation framework (Hills et al., 2020) ###
Based on Schroeder et al. (2016a, 2016b)
This method decorrelates the attenuation-corrected power from the ice thickness
Assumes constant reflectivity
Parameters
----------
picknum: int
pick number to do the attenuation calculation on
Ns: array; optional
Attenuation rates to test (one-way in dB/km)
Nh_target: float; optional
Radiometric resolution target
Cw: float; optional
Minimum correlation coefficient threshold
win_init: int; optional
Initial number of traces for window size
win_step: int; optional
Number of traces to increase the window size at each step
u: float; optional
light velocity in ice
Output
----------
N_result: array
One-way attenuation rate (dB/km)
win_result: array
resulting window size (number of traces)
"""
# get a pick depth
if 'z' in vars(dat.picks):
Z = dat.picks.z
else:
print('Warning: setting pick depth for constant velocity in ice.')
Z = dat.picks.time*u/2/1e6
# Get pick from index and remove nan values
Pc = 10*np.log10(dat.picks.corrected_power[picknum])
Z = Z[picknum]
idx = ~np.isnan(Pc) & ~np.isnan(Z)
Pc = Pc[idx]
Z = Z[idx]
# Convert to km
if np.any(Z > 10.):
Z/=1000.
# Create empty arrays to fill for the resulting attenuation rate and window size
N_result = np.zeros((dat.tnum,))
win_result = np.zeros((dat.tnum,))
C = np.zeros_like(Ns)
# Loop through all the traces
for tr in range(win_init//2,dat.tnum-win_init//2):
# zero out the correlation coefficient array
C[:] = 0.
# Initial window size
win = win_init
# Radiometric Resolution (needs to converge onto Nh_target before the attenuation rate is accepted)
Nh = Nh_target + 1.
# while radiometric resolution is outside target range and window is fully within the profile
while Nh > Nh_target and win//2<=tr and win//2<=(len(Z)-tr):
# thickness and power in the window
z = Z[tr-win//2:tr+win//2]
pc = Pc[tr-win//2:tr+win//2]
# loop through all the possible attenuation rates
sum2 = np.sqrt(sum((z-np.mean(z))**2.))
# TODO: I think I could substantially speed things up in here
for j,Nj in enumerate(Ns):
# attenuation-corrected power, Schroeder et al. (2016) eq. 4
pa = pc + 2.*z*Nj
# calculate the correlation coefficient, Schroeder et al. (2016) eq. 5
sum1 = sum((z-np.mean(z))*(pa-np.mean(pa)))
sum3 = np.sqrt(sum((pa-np.mean(pa))**2.))
C[j] = abs(sum1/(sum2*sum3))
# Whichever value has the lowest correlation coefficient is chosen
Cm = np.min(C)
Nm = Ns[C==Cm]
C0 = C[Ns==0]
# If the minimum correlation coefficient is below threshold, Cw,
# and the zero correlation coefficient is above
# then update the radiometric resolution
if Cm < Cw and C0 > Cw:
Nh = np.max(Ns[C<Cw])-np.min(Ns[C<Cw])
win += win_step
N_result[tr] = Nm
win_result[tr] = win
return N_result,win_result
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
### Multiple-Reflector Methods
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
def attenuation_method5(dat,picknums,win=1,sigPc=0,sigZ=0,Cint=.95,u=1.69e8,*args,**kwargs):
"""
### Method 5 from the attenuation framework (Hills et al., 2020) ###
Based on MacGregor et al. (2014) and Matsuoka et al. (2010)
This method fits a line to the measured power for internal reflectors (in log space)
Parameters
----------
picknums: array
pick numbers to do the attenuation calculation on
win: int; optional
number of traces to use in the regression (default: 1)
sigPc: float; optional
standard deviation in measured power (error) used to constrain the regression
defaults to 0 (i.e. simple regression)
sigZ: float; optional
standard deviation in measured depth (error) used to constrain the regression
defaults to 0 (i.e. simple regression)
Cint: float; optional
confidence interval with which to describe the resulting attenuation error
default 95%
u: float; optional
light velocity in ice
Output
----------
N_result: array
horizontal attenuation profile (as a one-way rate in dB/km)
Nerr_result: array
Error in one-way attenuation rate (dB/km)
"""
# get a pick depth
if 'z' in vars(dat.picks):
Z = dat.picks.z
else:
print('Warning: setting pick depth for constant velocity in ice.')
Z = dat.picks.time*u/2/1e6
# Convert to km
if np.any(Z > 10.):
Z/=1000.
if sigZ > .1:
sigZ/=1000.
# create empty arrays for output
N_result = np.nan*np.empty((dat.tnum,))
Nerr_result = np.nan*np.empty((dat.tnum,))
a_result = np.nan*np.empty((dat.tnum,))
# calculate the attenuation rate for each desired trace (or window)
for tr in np.arange(win//2,dat.tnum-win//2):
# grab the data within the window
# Get pick from index and remove nan values
pc = np.squeeze(10.*np.log10(dat.picks.corrected_power[picknums,tr-win//2:tr+win//2+1]))
z = np.squeeze(Z[picknums,tr-win//2:tr+win//2+1])
idx = ~np.isnan(pc) & ~np.isnan(z)
pc = pc[idx]
z = z[idx]
# If there are not enough picked layers for this window, output nan
if len(pc)<5:
N_result[tr] = np.nan
Nerr_result[tr] = np.nan
a_result[tr] = np.nan
else:
Szz = np.sum((z-np.mean(z))**2.)
Spp = np.sum((pc-np.mean(pc))**2.)
Szp = np.sum((z-np.mean(z))*(pc-np.mean(pc)))
if sigZ == 0 and sigPc == 0:
# Simple regression
N = -(Szp)/Szz
alpha = np.mean(pc) + N*np.mean(z)
# Error based on vertical distance from line only
pc_err = np.sum((pc - ((-N)*z + alpha))**2.)
sigN = np.sqrt(pc_err/Szz/(len(z)-2))
tscore = stats.t.ppf(1.-(1.-Cint)/2., len(z)-2)
Nerr = tscore*sigN
else:
# Deming regression after Casella and Berger (2002) section 12.2
lam = (sigZ**2.)/(sigPc**2.)
# Regression slope, eq. 12.2.16
N = -(-Szz+lam*Spp+np.sqrt((Szz-lam*Spp)**2.+4.*lam*Szp**2.))/(2.*lam*Szp)
alpha = np.mean(pc) + N*np.mean(z)
# Standard deviation in slope 12.2.22
sigN = np.sqrt(((1.+lam*N**2.)**2.*(Szz*Spp-Szp**2.))/((Szz-lam*Spp)**2.+4.*lam*Szp**2.))
tscore = stats.t.ppf(1.-(1.-Cint)/2., len(z)-2)
# Error using Gleser's Modification with 95% confidence interval
Nerr = tscore*sigN/(np.sqrt(len(z)-2))
# Final Output as a one-way rate in dB/km
N_result[tr] = N*.5 #one-way attenuation rate
Nerr_result[tr] = Nerr*.5
return N_result,Nerr_result
# -----------------------------------------------------------------------------------------------------
def attenuation_method6a(dat,picknums,att_ds,win=500.,sigPc=0,sigZ=0,Cint=.95,u=1.69e8,*args,**kwargs):
"""
### Method 6 from the attenuation framework (Hills et al., 2020) ###
This method groups all the picks from all traces together.
A window of fixed size then moves from top to bottom of the profile,
fitting the regression to all data within the window for each step.
Parameters
----------
picknums: array
picks to include in the calculation
att_ds: array
depths at which to center attenuation calculations
win: float
window over which to calculate the attenuation rate
sigPc: float; optional
standard deviation in measured power (error) used to constrain the regression
defaults to 0 (i.e. simple regression)
sigZ: float; optional
standard deviation in measured depth (error) used to constrain the regression
defaults to 0 (i.e. simple regression)
Cint: float; optional
confidence interval with which to describe the resulting attenuation error
default 95%
u: float; optional
light velocity in ice
Output
---------
N_result: 1-d array for attenuation rate in dB/km
Nerr_result: 1-d array for attenuation rate error in dB/km
"""
# get a pick depth
if 'z' in vars(dat.picks):
Z = dat.picks.z
else:
print('Warning: setting pick depth for constant velocity in ice.')
Z = dat.picks.time*u/2/1e6
# Get pick from index and remove nan values
Pc = 10.*np.log10(dat.picks.corrected_power[picknums].flatten())
Z = Z[picknums].flatten()
idx = ~np.isnan(Pc) & ~np.isnan(Z)
Pc = Pc[idx]
Z = Z[idx]
# Convert to km
if np.any(Z > 10.):
Z/=1000.
if np.any(att_ds>10.):
att_ds/=1000.
if win>10.:
win/=1000.
# Create empty arrays to fill for the output attenuation rate and window size
N_result = np.zeros_like(att_ds).astype(float)
Nerr_result = np.zeros_like(att_ds).astype(float)
# loop through all the depths
for i,att_d in enumerate(att_ds):
z = Z[np.logical_and(Z>(att_d-win/2),Z<(att_d+win/2))]
pc = Pc[np.logical_and(Z>(att_d-win/2),Z<(att_d+win/2))]
if len(z)<5:
N_result[i] = np.nan
Nerr_result[i] = np.nan
continue
# Sum of squares
Szz = np.sum((z-np.mean(z))**2.)
Spp = np.sum((pc-np.mean(pc))**2.)
Szp = np.sum((z-np.mean(z))*(pc-np.mean(pc)))
if sigZ == 0 and sigPc == 0:
# Simple regression
N = -(Szp)/Szz
alpha = np.mean(pc) + N*np.mean(z)
# Error based on vertical distance from line only
pc_err = np.sum((pc - ((-N)*z + alpha))**2.)
sigN = np.sqrt(pc_err/Szz/(len(z)-2))
tscore = stats.t.ppf(1.-(1.-Cint)/2., len(z)-2)
Nerr = tscore*sigN
else:
# Deming regression after Casella and Berger (2002) section 12.2
lam = (sigZ**2.)/(sigPc**2.)
# Regression slope, eq. 12.2.16
N = -(-Szz+lam*Spp+np.sqrt((Szz-lam*Spp)**2.+4.*lam*Szp**2.))/(2.*lam*Szp)
alpha = np.mean(pc) + N*np.mean(z)
# Standard deviation in slope 12.2.22
sigN = np.sqrt(((1.+lam*N**2.)**2.*(Szz*Spp-Szp**2.))/((Szz-lam*Spp)**2.+4.*lam*Szp**2.))
tscore = stats.t.ppf(1.-(1.-Cint)/2., len(z)-2)
# Error using Gleser's Modification with 95% confidence interval
Nerr = tscore*sigN/(np.sqrt(len(z)-2))
# Fill in the result array
N_result[i] = .5*N
Nerr_result[i] = .5*Nerr
return N_result, Nerr_result
# -----------------------------------------------------------------------------------------------------
def attenuation_method6b(dat,picknums,att_ds,Ns=np.arange(30.),Nh_target=1.,Cw=0.1,win_init=100.,win_step=100.,u=1.69e8,*args,**kwargs):
"""
### Method 6b from the attenuation framework (Hills et al., 2020) ###
Based on Schroeder et al. (2016a, 2016b)
This method minimizes the correlation coefficient between the
attenuation-corrected power and the ice thickness
Here, we are using the Schroeder method (i.e. from method 3) but in the vertical.
All picks from all traces are grouped together and the window moves from
top to bottom of the ice column, adjusting size to optimize the fit.
Parameters
----------
picknums: array
picks to include in the calculation
att_ds: array
depths at which to center attenuation calculations
Ns: array; optional
Attenuation rates to test (one-way in dB/km)
Nh_target: float; optional
Radiometric resolution target
Cw: float; optional
Minimum correlation coefficient threshold
win_init: float; optional
Initial number of traces for window size
win_step: float; optional
Number of traces to increase the window size at each step
u: float; optional
light velocity in ice
Output
----------
N_result: array
One-way attenuation rate (dB/km)
win_result: array
resulting window size (m)
"""
# get a pick depth
if 'z' in vars(dat.picks):
Z = dat.picks.z
else:
print('Warning: setting pick depth for constant velocity in ice.')
Z = dat.picks.time*u/2/1e6
# Get pick from index and remove nan values
Pc = 10.*np.log10(dat.picks.corrected_power[picknums].flatten())
Z = Z[picknums].flatten()
idx = ~np.isnan(Pc) & ~np.isnan(Z)
Pc = Pc[idx]
Z = Z[idx]
# Convert to km
if np.any(Z > 10.):
Z/=1000.
if np.any(att_ds>10.):
att_ds/=1000.
if win_init>10.:
win_init/=1000.
win_step/=1000.
# Create empty arrays to fill for the output attenuation rate and window size
N_result = np.zeros_like(att_ds)
win_result = np.zeros_like(att_ds)
C = np.zeros_like(Ns)
# loop through all the depths
for i,att_d in enumerate(att_ds):
# current depth for attenuation calc
att_d = att_ds[i]
# Correlation Coefficient (starts empty)
C[:] = np.nan
# Initial window size
win = win_init
# Radiometric Resolution (needs to converge to Nh_target)
Nh = Nh_target + 1.
while Nh > Nh_target and att_d-win/2>=np.nanmin(Z) and att_d+win/2<=np.nanmax(Z):
# thickness and power in the window
z = Z[np.argwhere(abs(Z-att_d)<win/2)]
pc = Pc[np.argwhere(abs(Z-att_d)<win/2)]
# loop through all the possible attenuation rates
sum2 = np.sqrt(np.nansum((z-np.nanmean(z))**2.))
# TODO: I think I could substantially speed things up in here
for n in range(len(Ns)):
# attenuation-corrected power, Schroeder et al. (2016) eq. 4
pa = pc + 2.*z*Ns[n]
# calculate the correlation coefficient, Schroeder et al. (2016) eq. 5
sum1 = np.nansum((z-np.nanmean(z))*(pa-np.nanmean(pa)))
sum3 = np.sqrt(np.nansum((pa-np.nanmean(pa))**2.))
if np.any(np.isnan([sum1,sum2,sum3])):
C[n] = np.nan
else:
C[n] = abs(sum1/(sum2*sum3))
# Whichever value has the lowest correlation coefficient is chosen
Cm = np.nanmin(C)
Nm = Ns[C==Cm]
C0 = C[Ns==0]
if Cm < Cw and C0 > Cw:
Nh = (np.max(Ns[C<Cw])-np.min(Ns[C<Cw]))/2.
# get ready for the next iteration
win += win_step
# output
N_result[i] = Nm
win_result[i] = win*1000.
return N_result, win_result
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
### Secondary Reflection Methods
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------------------
def attenuation_method7(dat,primary_picknum,secondary_picknum,Rib=-.22,Rfa=-17,u=1.69e8,*args,**kwargs):
"""
### Method 7 from the attenuation framework (Hills et al., 2020) ###
Based on MacGregor (2011) and Christianson et al. (2016)
This method uses the ratio of the primary bed reflection and its secondary
(multiple) reflection to solve for the attenuation length scale
Parameters
----------
primary_picknum: int
index for primary reflection
secondary_picknum: int
index for secondary reflection
Rib: float; optional
reflectivity of the ice-bed interface (in dB);
default -0.22 for ice-seawater interface from Christianson et al. (2016; Appendix A)
Rfa: float; optional
reflectivity of the firn-air interface (in dB);
default -17 from Christianson et al. (2016; Appendix A)
u: float; optional
light velocity in ice
Output
---------
mean(N): float
mean of the calculated attenuation rates
std(N): float
standard deviation of the calculated attenuation rates
"""
# get a pick depth
if 'z' in vars(dat.picks):
Z = dat.picks.z
else:
print('Warning: setting pick depth for constant velocity in ice.')
Z = dat.picks.time*u/2/1e6
# Convert to km
if np.any(Z > 10.):
Z/=1000.
# Get pick from index and remove nan values
P1 = dat.picks.corrected_power[primary_picknum]
P2 = dat.picks.corrected_power[secondary_picknum]
Z1 = Z[primary_picknum]
Z2 = Z[secondary_picknum]
idx = ~np.isnan(P1) & ~np.isnan(P2) & ~np.isnan(Z1) & ~np.isnan(Z2)
P1 = P1[idx]
P2 = P2[idx]
Z1 = Z1[idx]
Z2 = Z2[idx]
# Check that the secondary depth is double the primary
if not abs(np.nanmean(Z1)*2. - np.nanmean(Z2)) < .1*np.nanmean(Z1):
raise ValueError('The secondary reflection is not twice as deep as the primary.')
# convert all terms out of log space in order to use eq. A4
Rfa = 10**(Rfa/10.)
Rib = 10**(Rib/10.)
# Calculate the attenuation length scale with Christianson et al. (2016) eq. A4
La = -2.*Z1/np.log((4./(Rib*Rfa))*(P2/P1))
# Then attenuation rate is (following Jacobel et al. (2009))
N = 10.*np.log10(np.e)/La
return np.nanmean(N),np.nanstd(N)
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3 license.
"""
Continuity index for layers in radar data.
Karlsson et al. (2012)
Author:
Benjamin Hills
bhills@uw.edu
University of Washington
Earth and Space Sciences
Sept 26 2019
"""
import numpy as np
# ----------------------------------------------------------------------------
def continuity_index(dat,b_ind,s_ind=None,cutoff_ratio=None):
"""
Karlsson Continuity Method
Based on Karlsson et al. (2012)
This method gives a value for the continuity of radar layers
Parameters
----------
b_ind: int
bed pick index
s_ind: int; optional
surface pick index
cutoff_ratio: float; optional
fraction of samples removed from the top and bottom of the trace
Output
---------
continuity_index: array
"""
P = 10*np.log10(dat.data**2.)
bpick = dat.picks.samp1[b_ind]
if s_ind is None:
spick = np.zeros_like(bpick)
else:
spick = dat.picks.samp1[s_ind]
# empty continuity index array
cont = np.empty((dat.tnum,)).astype(float)
# calculate the continuity index for each trace
for tr in range(dat.tnum):
# Nan if the picks are nan
if np.isnan(bpick[tr]) or np.isnan(spick[tr]):
cont[tr] = np.nan
else:
# get data from between the surface and bed
b = int(bpick[tr])
s = int(spick[tr])
p_ext=P[s:b,tr]
# cutoff based on the assigned ratio
if cutoff_ratio is not None:
cut=int(len(p_ext)*cutoff_ratio)
p_ext=p_ext[cut:-cut]
# Nan if sampling criteria are not met
if len(p_ext) < 10 or len(p_ext) > dat.snum or np.any(~np.isfinite(p_ext)):
cont[tr] = np.nan
# calculate the continuity index based on Karlsson et al. (2012) eq. 1
else:
cont[tr]=np.mean(abs(np.gradient(p_ext)))
dat.continuity_index = cont
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3 license.
"""
Power Corrections from the glaciological literature.
Author:
Benjamin Hills
bhills@uw.edu
University of Washington
Earth and Space Sciences
Sept 26 2019
"""
import numpy as np
# ----------------------------------------------------------------------------
def power_correction(dat,eps=[],d_eps=[],u=1.69e8,h_aircraft=0.):
"""
Geometric spreading correction for radar power.
Optionally includes refractive focusing
Parameters
---------
eps: array; optional
permittivity (relative)
d_eps: array; optional
depths for permittivity boundaries
u: float; optional
speed of light in ice
h_aircraft: float; optional
height of aircraft, airborne surveys need a correction for refractive focusing from air to ice
Output
---------
corrected_power
"""
# get a pick depth
if 'z' in vars(dat.picks):
Z = dat.picks.z
else:
print('Warning: setting pick depth for constant velocity in ice.')
Z = dat.picks.time*u/2./1e6
# spreading correction for a spherical wave
spherical_loss = (2.*Z)**2.
q = np.ones_like(Z)
if len(d_eps) > 0:
if d_eps[0] != 0:
raise KeyError('The first depth needs to be 0.')
# correct for focusing from air to firn
if h_aircraft > 0.:
qadd = refractive_focusing(h_aircraft,2.*(Z+h_aircraft),1.,eps[0])
q*=qadd
# correct for focusing within the firn
for i in range(len(eps)-1):
qadd = refractive_focusing(d_eps[i],2.*Z,eps[i],eps[i+1])
q*=qadd
# power correction including spreading and refractive gains
dat.picks.corrected_power = dat.picks.power * spherical_loss/q
# ----------------------------------------------------------------------------
def refractive_focusing(z1,z2,eps1,eps2):
"""
Refractive focusing at an interface
Bogorodsky et al., 1985; equation 3.8
Parameters
---------
z1: float
Thickness above interface (m)
z2: float
Thickness below interface (m)
eps1: float
Permittivity above interface (relative)
eps2: float
Permittivity below interface (relative)
Output
---------
q: float
refractive focusing coefficient
"""
q = ((z1+z2)/(z1+z2*np.sqrt(eps1/eps2)))**2.
if hasattr(q,'__len__'):
q[z2 <= z1] = 1.
else:
if z2 <= z1:
q = 1.
return q
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3 license.
"""
Roughness calculation for picked layers.
Author:
Benjamin Hills
bhills@uw.edu
University of Washington
Earth and Space Sciences
Sept 26 2019
"""
import numpy as np
from scipy.signal import detrend,medfilt
from scipy.special import i0
def kirchhoff_roughness(dat,picknum,freq,filt_n=101,eps=3.15):
"""
Roughness by Kirchhoff Theory
Christianson et al. (2016), equation C2
Parameters
----------
picknum: int
pick number for the reflector on which to calculate roughness
freq: float
antenna frequency
filt_n: int; optional
number of traces included in the median filter
eps: float; optional
relative permittivity of ice
"""
if 'interp' not in vars(dat.flags):
raise KeyError('Do interpolation before roughness calculation.')
# calculate the speed and wavelength
eps0 = 8.8541878128e-12 # vacuum permittivity
mu0 = 1.25663706212e-6 # vacuum permeability
u = 1./np.sqrt(eps*eps0*mu0) # speed of light in ice
lam = u/freq # wavelength m
# get a pick depth
if 'z' in vars(dat.picks):
Z = dat.picks.z
else:
print('Warning: setting pick depth for constant velocity in ice.')
Z = dat.picks.time*u/2/1e6
# Find window size based on the width of the first Fresnel zone
D1 = np.sqrt(2.*lam*(np.nanmean(Z)/np.sqrt(eps))) # Width of Fresnel zone
dx = dat.trace_int[0] # m spacing between traces
N = int(round(D1/(2.*dx))) # number of traces in the Fresnel window
# -----------------------------------------------------------------------------
# Define the bed geometry
bed_raw = dat.elev - Z[picknum]
bed_filt = medfilt(bed_raw,filt_n)
# RMS bed roughness; Christianson et al. (2016) equation C2
ED1 = np.nan*np.empty((len(bed_filt),))
for n in range(N,len(bed_filt)-N+1):
b = bed_filt[n-N:n+N].copy()
b = b[np.where(~np.isnan(b))]
if len(b) <= 1:
ED1[n] = np.nan
else:
b_ = detrend(b)
b_sum = 0.
for i in range(len(b)):
b_sum += (b_[i])**2.
ED1[n] = np.sqrt((1/(len(b)-1.))*b_sum)
# Find the power reduction by Kirchhoff theory
# Christianson et al. (2016), equation C1
g = 4.*np.pi*ED1/lam
b = (i0((g**2.)/2.))**2.
pn = np.exp(-(g**2.))*b
return ED1,pn
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL-3.0 license.
"""
An alternative ImpDAR class for ApRES data.
This should be considered separate from impulse data.
This class has a different set of loading and filtering scripts.
Author:
Benjamin Hills
bhills@uw.edu
University of Washington
Earth and Space Sciences
Sept 24 2019
"""
import datetime
import numpy as np
from scipy.io import loadmat
from .ApresFlags import ApresFlags
from .ApresHeader import ApresHeader
from ..ImpdarError import ImpdarError
class ApresData(object):
"""A class that holds the relevant information for an ApRES acquisition.
We keep track of processing steps with the flags attribute.
This base version's __init__ takes a filename of a .mat file in the old StODeep format to load.
"""
#: Attributes that every ApresData object should have and should not be None.
attrs_guaranteed = ['data',
'decday',
'dt',
'snum',
'cnum',
'bnum',
'chirp_num',
'chirp_att',
'chirp_time',
'travel_time',
'frequencies']
#: Optional attributes that may be None without affecting processing.
#: These may not have existed in old StoDeep files that we are compatible with,
#: and they often cannot be set at the initial data load.
#: If they exist, they all have units of meters.
attrs_optional = ['lat',
'long',
'x_coord',
'y_coord',
'elev',
'temperature1',
'temperature2',
'battery_voltage']
# TODO: add imports
#from ._ApresDataProcessing import
#from ._ApresDataSaving import
# Now make some load/save methods that will work with the matlab format
def __init__(self, fn_mat):
if fn_mat is None:
# Write these out so we can document them
# Very basics
self.snum = None #: int number of samples per chirp
self.cnum = None #: int, the number of chirps in a burst
self.bnum = None #: int, the number of bursts
self.data = None #: np.ndarray(snum x tnum) of the actual return power
self.dt = None #: float, The spacing between samples in travel time, in seconds
# Per-trace attributes
#: np.ndarray(tnum,) of the acquisition time of each trace
#: note that this is referenced to Jan 1, 0 CE (matlab datenum)
#: for convenience, use the `datetime` attribute to access a python version of the day
self.decday = None
#: np.ndarray(tnum,) latitude along the profile. Generally not in projected coordinates
self.lat = None
#: np.ndarray(tnum,) longitude along the profile. Generally not in projected coords.
self.long = None
# chirp
self.chirp_num = None #: np.ndarray(cnum,) The 1-indexed number of the chirp
self.chirp_att = None #: np.ndarray(cnum,) Chirp attenuation settings
self.chirp_time = None #: np.ndarray(cnum,) Time at beginning of chirp (serial day)
# Sample-wise attributes
#: np.ndarray(snum,) The two way travel time to each sample, in us
self.travel_time = None
#: np.ndarray(tnum,) Optional. Projected x-coordinate along the profile.
self.x_coord = None
#: np.ndarray(tnum,) Optional. Projected y-coordinate along the profile.
self.y_coord = None
#: np.ndarray(tnum,) Optional. Elevation along the profile.
self.elev = None
# Special attributes
#: impdar.lib.RadarFlags object containing information about the processing steps done.
self.flags = ApresFlags()
self.header = ApresHeader()
self.data_dtype = None
return
# TODO: add a matlab load
mat = loadmat(fn_mat)
for attr in self.attrs_guaranteed:
if mat[attr].shape == (1, 1):
setattr(self, attr, mat[attr][0][0])
elif mat[attr].shape[0] == 1 or mat[attr].shape[1] == 1:
setattr(self, attr, mat[attr].flatten())
else:
setattr(self, attr, mat[attr])
# We may have some additional variables
for attr in self.attrs_optional:
if attr in mat:
if mat[attr].shape == (1, 1):
setattr(self, attr, mat[attr][0][0])
elif mat[attr].shape[0] == 1 or mat[attr].shape[1] == 1:
setattr(self, attr, mat[attr].flatten())
else:
setattr(self, attr, mat[attr])
else:
setattr(self, attr, None)
self.data_dtype = self.data.dtype
self.fn = fn_mat
self.flags = ApresFlags()
self.header = ApresHeader()
self.flags.from_matlab(mat['flags'])
self.check_attrs()
def check_attrs(self):
"""Check if required attributes exist.
This is largely for development only; loaders should generally call this method last,
so that they can confirm that they have defined the necessary attributes.
Raises
------
ImpdarError
If any required attribute is None or any optional attribute is fully absent"""
for attr in self.attrs_guaranteed:
if not hasattr(self, attr):
raise ImpdarError('{:s} is missing. \
It appears that this is an ill-defined ApresData object'.format(attr))
if getattr(self, attr) is None:
raise ImpdarError('{:s} is None. \
It appears that this is an ill-defined ApresData object'.format(attr))
for attr in self.attrs_optional:
if not hasattr(self, attr):
raise ImpdarError('{:s} is missing. \
It appears that this is an ill-defined ApresData object'.format(attr))
if not hasattr(self, 'data_dtype') or self.data_dtype is None:
self.data_dtype = self.data.dtype
return
@property
def datetime(self):
"""A python operable version of the time of acquisition of each trace"""
return np.array([datetime.datetime.fromordinal(int(dd)) + datetime.timedelta(days=dd % 1) - datetime.timedelta(days=366)
for dd in self.decday], dtype=np.datetime64)
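The `decday` convention above (MATLAB datenum, days since Jan 1, 0 CE) can be sanity-checked with a standalone sketch of the same conversion, using a hypothetical datenum value:

```python
import datetime

decday = 737000.5  # hypothetical MATLAB datenum (noon, 1 Nov 2017)

# MATLAB counts days from year 0; Python ordinals start at year 1,
# hence the 366-day shift; the fractional day carries the time of day
dt = (datetime.datetime.fromordinal(int(decday))
      + datetime.timedelta(days=decday % 1)
      - datetime.timedelta(days=366))
```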
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3 license.
"""
Process ApRES data
Author:
Benjamin Hills
bhills@uw.edu
University of Washington
Earth and Space Sciences
Sept 23 2019
"""
import numpy as np
def apres_range(self,p,max_range=4000,winfun='blackman'):
"""
Parameters
---------
self: class
data object
p: int
pad factor, level of interpolation for fft
max_range: float
maximum range (m) to keep after cropping
winfun: str
window function for fft
Output
--------
Rcoarse: array
range to bin centres (m)
Rfine: array
range to reflector from bin centre (m)
spec_cor: array
spectrum corrected. positive frequency half of spectrum with
ref phase subtracted. This is the complex signal which can be used for
cross-correlating two shot segments.
### Original Matlab File Notes ###
Phase sensitive processing of FMCW radar data based on Brennan et al. 2013
Based on Paul's scripts but following the nomenclature of:
"Phase-sensitive FMCW radar imaging system for high precision Antarctic
ice shelf profile monitoring"
Brennan, Lok, Nicholls and Corr, 2013
Summary: converts raw FMCW radar voltages into a range for
Craig Stewart
2013 April 24
Modified frequencies 10 April 2014
"""
if self.flags.range != 0:
raise TypeError('The range filter has already been done on these data.')
# Processing settings
nf = int(np.floor(p*self.snum/2)) # number of frequencies to recover
# window for fft
if winfun not in ['blackman','bartlett','hamming','hanning','kaiser']:
raise TypeError('Window must be in: blackman, bartlett, hamming, hanning, kaiser')
elif winfun == 'blackman':
win = np.blackman(self.snum)
elif winfun == 'bartlett':
win = np.bartlett(self.snum)
elif winfun == 'hamming':
win = np.hamming(self.snum)
elif winfun == 'hanning':
win = np.hanning(self.snum)
elif winfun == 'kaiser':
# np.kaiser requires a shape parameter beta; 14 is assumed here,
# which gives a window comparable to a blackman
win = np.kaiser(self.snum, 14)
# round-trip delay Brennan et al. (2014) eq. 18
tau = np.arange(nf)/(self.header.bandwidth*p)
# Get the coarse range
self.Rcoarse = tau*self.header.ci/2.
# Calculate phase of each range bin center for correction
# Brennan et al. (2014) eq. 17 measured at t=T/2
self.phiref = 2.*np.pi*self.header.fc*tau -(self.header.chirp_grad*tau**2.)/2
# --- Loop through for each chirp in burst --- #
# preallocate
spec = np.zeros((self.bnum,self.cnum,nf)).astype(np.cdouble)
spec_cor = np.zeros((self.bnum,self.cnum,nf)).astype(np.cdouble)
for ib in range(self.bnum):
for ic in range(self.cnum):
# isolate the chirp and preprocess before transform
chirp = self.data[ib,ic,:].copy()
chirp = chirp-np.mean(chirp) # de-mean
chirp *= win # windowed
# fourier transform
fft_chirp = (np.sqrt(2.*p)/len(chirp))*np.fft.fft(chirp,p*self.snum) # fft and scale for padding
fft_chirp /= np.sqrt(np.mean(win**2.)) # scale with rms of window
# output
spec[ib,ic,:] = fft_chirp[:nf] # positive frequency half of spectrum up to (nyquist minus deltaf)
comp = np.exp(-1j*(self.phiref)) # unit phasor with conjugate of phiref phase
spec_cor[ib,ic,:] = comp*fft_chirp[:nf] # positive frequency half of spectrum with ref phase subtracted
self.data = spec_cor.copy()
self.spec = spec.copy()
# precise range measurement
self.Rfine = phase2range(np.angle(self.data),self.header.lambdac,
np.tile(self.Rcoarse,(self.bnum,self.cnum,1)),
self.header.chirp_grad,self.header.ci)
# Crop output variables to useful depth range only
n = np.argmin(self.Rcoarse<=max_range)
self.Rcoarse = self.Rcoarse[:n]
self.Rfine = self.Rfine[:,:,:n]
self.data = self.data[:,:,:n]
self.spec = self.spec[:,:,:n]
self.snum = n
self.flags.range = max_range
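The coarse-range mapping used above (round-trip delay to one-way range) can be illustrated standalone; a sketch assuming the er = 3.18 ice permittivity set elsewhere in this module and a hypothetical bandwidth and pad factor:

```python
import numpy as np

er = 3.18               # relative permittivity of ice (as in the loader)
ci = 3e8 / np.sqrt(er)  # propagation velocity in ice (m/s)
bandwidth = 200e6       # hypothetical chirp bandwidth (Hz)
p = 2                   # pad factor

nf = 16
tau = np.arange(nf) / (bandwidth * p)  # round-trip delay to each bin
Rcoarse = tau * ci / 2.                # one-way range to bin centres (m)
# bin spacing is ci / (2 * bandwidth * p); padding refines the range grid
```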
# --------------------------------------------------------------------------------------------
def phase_uncertainty(self):
"""
Calculate the phase uncertainty using a noise phasor.
Following Kingslake et al. (2014)
Parameters
---------
Output
--------
phase_uncertainty: array
uncertainty in the phase (rad)
r_uncertainty: array
uncertainty in the range (m) calculated from phase uncertainty
"""
if self.flags.range == 0:
raise TypeError('The range conversion has not been done on these data; run it before the uncertainty calculation.')
# Get measured phasor from the data class, and use the median magnitude for noise phasor
meas_phasor = self.data
median_mag = np.nanmedian(abs(meas_phasor))
# Noise phasor with random phase and magnitude equal to median of measured phasor
noise_phase = np.random.uniform(-np.pi,np.pi,np.shape(meas_phasor))
noise_phasor = median_mag*(np.cos(noise_phase)+1j*np.sin(noise_phase))
noise_orth = median_mag*np.sin(np.angle(meas_phasor)-np.angle(noise_phasor))
# Phase uncertainty is the deviation in the phase introduced by the noise phasor when it is oriented perpendicular to the reflector phasor
phase_uncertainty = np.abs(np.arcsin(noise_orth/np.abs(meas_phasor)))
# Convert phase to range
r_uncertainty = phase2range(phase_uncertainty,
self.header.lambdac,
self.Rcoarse,
self.header.chirp_grad,
self.header.ci)
return phase_uncertainty, r_uncertainty
# --------------------------------------------------------------------------------------------
def phase2range(phi,lambdac,rc=None,K=None,ci=None):
"""
Convert phase difference to range for FMCW radar
Parameters
---------
phi: float or array
phase (radians), must be of spectrum after bin center correction
lambdac: float
wavelength (m) at center frequency
rc: float; optional
coarse range of bin center (m)
K: float; optional
chirp gradient (rad/s/s)
ci: float; optional
propagation velocity (m/s)
Output
--------
r: float or array
range (m)
### Original Matlab File Notes ###
Craig Stewart
2014/6/10
"""
if not all([K,ci]) or rc is None:
# First order method
# Brennan et al. (2014) eq 15
r = lambdac*phi/(4.*np.pi)
else:
# Precise
r = phi/((4.*np.pi/lambdac) - (4.*rc*K/ci**2.))
return r
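A quick standalone check of the first-order branch of `phase2range` (eq. 15), with a hypothetical center wavelength:

```python
import numpy as np

lambdac = 0.5  # hypothetical wavelength at the center frequency (m)
phi = np.pi    # half a cycle of phase change

# first-order: r = lambdac * phi / (4*pi),
# so a full 2*pi phase cycle maps to lambdac / 2 of range
r = lambdac * phi / (4. * np.pi)
```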
# --------------------------------------------------------------------------------------------
def range_diff(self,acq1,acq2,win,step,Rcoarse=None,r_uncertainty=None,uncertainty='CR'):
"""
Calculate the vertical motion using a correlation coefficient.
Parameters
---------
self: class
data object
acq1: array
first acquisition for comparison
acq2: array
second acquisition for comparison
win: int
window size over which to do the correlation coefficient calculation
step: int
step size for the window to move between calculations
Rcoarse: array; optional
if an external depth array is desired, input here
r_uncertainty: array; optional
if uncertainty based on the noise phasor is desired, input it here;
this should be the sum of the uncertainties from both acquisitions.
uncertainty: string;
default 'CR' Cramer-Rao bound as in Jordan et al. (2020)
Output
--------
ds: array
depths at which the correlation coefficient is calculated
phase_diff: array
correlation coefficient between acquisitions
amplitude indicates how well reflection packets match between acquisitions
phase is a measure of the vertical motion
range_diff: array
vertical motion in meters
"""
if np.shape(acq1) != np.shape(acq2):
raise TypeError('Acquisition inputs must be of the same shape.')
idxs = np.arange(win//2,(len(acq1)-win//2),step)
if Rcoarse is not None:
ds = Rcoarse[idxs]
else:
ds = self.Rcoarse[idxs]
co = np.empty_like(ds).astype(complex)  # np.complex was removed in NumPy 1.24
for i,idx in enumerate(idxs):
# index two sub_arrays to compare
arr1 = acq1[idx-win//2:idx+win//2]
arr2 = acq2[idx-win//2:idx+win//2]
# correlation coefficient to get the motion
# the amplitude indicates how well the reflections match between acquisitions
# the phase is a measure of the offset
co[i] = np.corrcoef(arr1,arr2)[1,0]
# convert the phase offset to a distance vector
r_diff = phase2range(np.angle(co),
self.header.lambdac,
ds,
self.header.chirp_grad,
self.header.ci)
if uncertainty == 'CR':
# Error from Cramer-Rao bound, Jordan et al. (2020) Ann. Glac. eq. (5)
sigma = (1./abs(co))*np.sqrt((1.-abs(co)**2.)/(2.*win))
# convert the phase offset to a distance vector
r_diff_unc = phase2range(sigma,
self.header.lambdac,
ds,
self.header.chirp_grad,
self.header.ci)
elif uncertainty == 'noise_phasor':
# Uncertainty from Noise Phasor as in Kingslake et al. (2014)
# r_uncertainty should be calculated using the function phase_uncertainty defined in this script
r_diff_unc = np.array([np.nanmean(r_uncertainty[i-win//2:i+win//2]) for i in idxs])
else:
raise ValueError("uncertainty must be one of 'CR' or 'noise_phasor'")
return ds, co, r_diff, r_diff_unc
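The Cramer-Rao phase uncertainty used above depends only on the coherence magnitude and the window size; a standalone sketch with hypothetical values:

```python
import numpy as np

c_mag = 0.95  # hypothetical coherence magnitude between acquisitions
win = 20      # correlation window size (samples)

# Jordan et al. (2020), eq. (5): higher coherence or a longer
# window both reduce the phase uncertainty
sigma = (1. / c_mag) * np.sqrt((1. - c_mag**2.) / (2. * win))
```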
# --------------------------------------------------------------------------------------------
def stacking(self,num_chirps=None):
"""
Stack traces/chirps together to beat down the noise.
Parameters
---------
num_chirps: int
number of chirps to average over
"""
if num_chirps is None:
num_chirps = self.cnum*self.bnum
num_chirps = int(num_chirps)
if num_chirps == self.cnum:
self.data = np.reshape(np.mean(self.data,axis=1),(self.bnum,1,self.snum))
self.cnum = 1
else:
# reshape to jump across bursts
data_hold = np.reshape(self.data,(1,self.cnum*self.bnum,self.snum))
# take only the first set of chirps
data_hold = data_hold[:,:num_chirps,:]
self.data = np.array([np.mean(data_hold,axis=1)])
self.bnum = 1
self.cnum = 1
self.flags.stack = num_chirps
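Stacking suppresses incoherent noise by roughly sqrt(N); a synthetic demonstration of the chirp-axis averaging that `stacking` performs, with made-up signal and noise:

```python
import numpy as np

rng = np.random.default_rng(0)
bnum, cnum, snum = 1, 100, 64
signal = np.sin(2. * np.pi * np.arange(snum) / 16.)
data = signal + rng.normal(0., 1., (bnum, cnum, snum))

# average across the chirp axis, as stacking() does when num_chirps == cnum
stacked = np.reshape(np.mean(data, axis=1), (bnum, 1, snum))
noise_raw = np.std(data[0, 0] - signal)
noise_stacked = np.std(stacked[0, 0] - signal)
# expect roughly a 10x (sqrt(100)) reduction in residual noise
```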
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the MIT license.
"""
"""
import numpy as np
from scipy.io import savemat
from .ApresFlags import ApresFlags
from .ApresHeader import ApresHeader
def save_apres(self, fn):
"""Save the radar data
Parameters
----------
fn: str
Filename. Should have a .mat extension
"""
mat = {}
for attr in self.attrs_guaranteed:
if getattr(self, attr) is not None:
mat[attr] = getattr(self, attr)
else:
# this guards against error in matlab format
mat[attr] = 0
for attr in self.attrs_optional:
if hasattr(self, attr) and getattr(self, attr) is not None:
mat[attr] = getattr(self, attr)
if self.flags is not None:
mat['flags'] = self.flags.to_matlab()
else:
# We want the structure available to prevent read errors from corrupt files
mat['flags'] = ApresFlags().to_matlab()
if self.header is not None:
mat['header'] = self.header.to_matlab()
else:
# We want the structure available to prevent read errors from corrupt files
mat['header'] = ApresHeader().to_matlab()
# Make sure not to expand the size of the data due to type conversion
if hasattr(self, 'data_dtype') and self.data_dtype is not None and self.data_dtype != mat['data'].dtype:
# Be careful not to obliterate NaNs
# We will use singles instead of ints for this guess
if (self.data_dtype in [int, np.int8, np.int16]) and np.any(np.isnan(mat['data'])):
print('Warning: new file is float16 rather than ', self.data_dtype, ' since we now have NaNs')
mat['data'] = mat['data'].astype(np.float16)
elif (self.data_dtype in [np.int32]) and np.any(np.isnan(mat['data'])):
print('Warning: new file is float32 rather than ', self.data_dtype, ' since we now have NaNs')
mat['data'] = mat['data'].astype(np.float32)
elif (self.data_dtype in [np.int64]) and np.any(np.isnan(mat['data'])):
print('Warning: new file is float64 rather than ', self.data_dtype, ' since we now have NaNs')
mat['data'] = mat['data'].astype(np.float64)
else:
mat['data'] = mat['data'].astype(self.data_dtype)
savemat(fn, mat)
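The dtype guard above exists because NaNs cannot survive a cast back to an integer type; a minimal standalone reproduction of the small-int branch:

```python
import numpy as np

data = np.array([1., np.nan, 3.])
orig_dtype = np.int16  # dtype the data had when originally loaded

# mirror the save-time check: small ints with NaNs present are
# promoted to float16 instead of being cast back (which would corrupt NaNs)
if orig_dtype in (int, np.int8, np.int16) and np.any(np.isnan(data)):
    data = data.astype(np.float16)
```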
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Flags to keep track of processing steps
"""
import numpy as np
class ApresFlags():
"""Flags that indicate the processing that has been used on the data.
These are used for figuring out whether different processing steps have been performed. They also contain some information about the input arguments for some (but not all) of the processing steps.
Attributes
----------
batch: bool
Legacy indication of whether we are batch processing. Always False.
agc: bool
Automatic gain control has been applied.
reverse: bool
Data have been reversed.
restack: bool
Data have been restacked.
rgain: bool
Data have a linear range gain applied.
bpass: 3x1 :class:`numpy.ndarray`
Elements: (1) 1 if bandpassed; (2) Low; and (3) High (MHz) bounds
hfilt: 2x1 :class:`numpy.ndarray`
Elements: (1) 1 if horizontally filtered; (2) Filter type
interp: 2x1 :class:`numpy.ndarray`
Elements: (1) 1 if constant distance spacing applied (2) The constant spacing (m)
mig: 2x1 :class: String
None if no migration done, mtype if migration done.
"""
def __init__(self):
self.file_read_code = None
self.range = 0
self.stack = 1
self.attrs = ['file_read_code', 'range', 'stack']
self.attr_dims = [None,None,None]
def to_matlab(self):
"""Convert all associated attributes into a dictionary formatted for use with :func:`scipy.io.savemat`
"""
outmat = {att: getattr(self, att) for att in self.attrs}
return outmat
def from_matlab(self, matlab_struct):
"""Associate all values from an incoming .mat file (i.e. a dictionary from :func:`scipy.io.loadmat`) with appropriate attributes
"""
for attr, attr_dim in zip(self.attrs, self.attr_dims):
setattr(self, attr, matlab_struct[attr][0][0][0])
# Use this because matlab inputs may have zeros for flags that
# were lazily appended to be arrays, but we preallocate
if attr_dim is not None and getattr(self, attr).shape[0] == 1:
setattr(self, attr, np.zeros((attr_dim, )))
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3 license.
"""
Header for ApRES data
This code is based on a series of Matlab scripts from Craig Stewart,
Keith Nicholls, and others.
The ApRES (Automated phase-sensitive Radio Echo Sounder) is a self-contained
instrument from BAS.
Author:
Benjamin Hills
bhills@uw.edu
University of Washington
Earth and Space Sciences
Sept 23 2019
"""
import numpy as np
import re
# --------------------------------------------------------------------------------------------
class ApresHeader():
"""
Class for parameters from the header file.
"""
def __init__(self):
"""Initialize data paramaters"""
self.fsysclk = 1e9
self.fs = 4e4
self.fn = None
self.header_string = None
self.file_format = None
self.noDwellHigh = None
self.noDwellLow = None
self.f0 = None
self.f_stop = None
self.ramp_up_step = None
self.ramp_down_step = None
self.tstep_up = None
self.tstep_down = None
self.snum = None
self.nsteps_DDS = None
self.chirp_length = None
self.chirp_grad = None
self.nchirp_samples = None
self.ramp_dir = None
# --------------------------------------------------------------------------------------------
def read_header(self,fn_apres,max_header_len=2000):
"""
Read the header string, to be partitioned later
Parameters
---------
fn_apres: string
file name to update with
max_header_len: int
maximum length of header to read (can be too long)
Output
---------
"""
self.fn = fn_apres
fid = open(fn_apres,'rb')
self.header_string = str(fid.read(max_header_len))
fid.close()
# --------------------------------------------------------------------------------------------
def get_file_format(self):
"""
Determine fmcw file format from burst header using keyword presence
There are a few different formats through the years.
### Original Matlab script Notes ###
Craig Stewart
2013-10-20
Updated by Keith Nicholls, 2014-10-22: RMB2
"""
if 'SW_Issue=' in self.header_string: # Data from RMB2 after Oct 2014
self.file_format = 5
elif 'SubBursts in burst:' in self.header_string: # Data from after Oct 2013
self.file_format = 4
elif '*** Burst Header ***' in self.header_string: # Data from Jan 2013
self.file_format = 3
elif 'RADAR TIME' in self.header_string: # Data from Prototype FMCW radar (nov 2012)
self.file_format = 2
else:
raise TypeError('Unknown file format - check file')
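The keyword checks above run in newest-to-oldest order, so a header containing several markers resolves to the newest format. A sketch with a hypothetical header string that contains markers for multiple formats:

```python
# hypothetical RMB2-era header containing several format markers
header_string = 'SubBursts in burst: 20\nSW_Issue=101\n*** Burst Header ***'

if 'SW_Issue=' in header_string:               # RMB2, after Oct 2014
    file_format = 5
elif 'SubBursts in burst:' in header_string:   # after Oct 2013
    file_format = 4
elif '*** Burst Header ***' in header_string:  # Jan 2013
    file_format = 3
else:
    raise TypeError('Unknown file format - check file')
```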
def update_parameters(self,fn_apres=None):
"""
Update the parameters with the apres file header
### Original Matlab Notes ###
Extract from the hex codes the actual parameters used by RMB2.
The contents of config.ini are copied into a data header.
Note this script assumes that the hex codes are quoted,
e.g. Reg02="0D1F41C8"
Checks for a sampling frequency of 40 or 80 kHz. Apart from Lai Bun's
variant (WDC + Greenland) it could be hard coded to 40 kHz.
However, there is no check made by the system that the N_ADC_SAMPLES
matches the requested chirp length
NOT COMPLETE - needs a means of checking for profile mode, where multiple sweeps
per period are transmitted- see last line
"""
if self.header_string is None:
if fn_apres is None:
raise TypeError('Must input file name if the header has not been read yet.')
else:
self.read_header(fn_apres)
if self.file_format is None:
self.get_file_format()
loc1 = [m.start() for m in re.finditer('Reg0', self.header_string)]
loc2 = [m.start() for m in re.finditer('="', self.header_string)]
for k in range(len(loc1)):
case = self.header_string[loc1[k]:loc2[k]]
if case == 'Reg01':
# Control Function Register 2 (CFR2) Address 0x01 Four bytes
# Bit 19 (Digital ramp enable)= 1 = Enables digital ramp generator functionality.
# Bit 18 (Digital ramp no-dwell high) 1 = enables no-dwell high functionality.
# Bit 17 (Digital ramp no-dwell low) 1 = enables no-dwell low functionality.
# With no-dwell high, a positive transition of the DRCTL pin initiates a positive slope ramp, which
# continues uninterrupted (regardless of any activity on the DRCTL pin) until the upper limit is reached.
# Setting both no-dwell bits invokes a continuous ramping mode of operation;
loc3 = self.header_string[loc2[k]+2:].find('"')
val = self.header_string[loc2[k]+2:loc2[k]+loc3+2]
val = bin(int(val, 16))
val = val[::-1]
self.noDwellHigh = int(val[18])
self.noDwellLow = int(val[17])
#elif case == 'Reg08':
# # Phase offset word Register (POW) Address 0x08. 2 Bytes dTheta = 360*POW/2^16.
# val = char(reg{1,2}(k));
# H.phaseOffsetDeg = hex2dec(val(1:4))*360/2^16;
elif case == 'Reg0B':
# Digital Ramp Limit Register Address 0x0B
# Digital ramp upper limit 32-bit digital ramp upper limit value.
# Digital ramp lower limit 32-bit digital ramp lower limit value.
loc3 = self.header_string[loc2[k]+2:].find('"')
val = self.header_string[loc2[k]+2:loc2[k]+loc3+2]
self.f0 = int(val[8:], 16)*self.fsysclk/(2**32)
self.f_stop = int(val[:8], 16)*self.fsysclk/(2**32)
elif case == 'Reg0C':
# Digital Ramp Step Size Register Address 0x0C
# Digital ramp decrement step size 32-bit digital ramp decrement step size value.
# Digital ramp increment step size 32-bit digital ramp increment step size value.
loc3 = self.header_string[loc2[k]+2:].find('"')
val = self.header_string[loc2[k]+2:loc2[k]+loc3+2]
self.ramp_up_step = int(val[8:], 16)*self.fsysclk/(2**32)
self.ramp_down_step = int(val[:8], 16)*self.fsysclk/(2**32)
elif case == 'Reg0D':
# Digital Ramp Rate Register Address 0x0D
# Digital ramp negative slope rate 16-bit digital ramp negative slope value that defines the time interval between decrement values.
# Digital ramp positive slope rate 16-bit digital ramp positive slope value that defines the time interval between increment values.
loc3 = self.header_string[loc2[k]+2:].find('"')
val = self.header_string[loc2[k]+2:loc2[k]+loc3+2]
self.tstep_up = int(val[4:], 16)*4/self.fsysclk
self.tstep_down = int(val[:4], 16)*4/self.fsysclk
strings = ['SamplingFreqMode=','N_ADC_SAMPLES=']
output = np.empty((len(strings))).astype(str)
for i,string in enumerate(strings):
if string in self.header_string:
search_start = self.header_string.find(string)
search_end = self.header_string[search_start:].find('\\')
output[i] = self.header_string[search_start+len(string):search_end+search_start]
self.fs = int(output[0])  # SamplingFreqMode; cast from string so the comparison below works
if self.fs == 1: # if self.fs > 70e3:
self.fs = 8e4 # self.fs = 80e3
else: # else
self.fs = 4e4 # self.fs = 40e3
self.snum = int(output[1])
self.nsteps_DDS = round(abs((self.f_stop - self.f0)/self.ramp_up_step)) # abs as ramp could be down
self.chirp_length = int(self.nsteps_DDS * self.tstep_up)
self.nchirp_samples = round(self.chirp_length * self.fs)
# If number of ADC samples collected is less than required to collect
# entire chirp, set chirp length to length of series actually collected
if self.nchirp_samples > self.snum:
self.chirp_length = self.snum / self.fs
self.chirp_grad = 2.*np.pi*(self.ramp_up_step/self.tstep_up) # chirp gradient (rad/s/s)
if self.f_stop > 400e6:
self.ramp_dir = 'down'
else:
self.ramp_dir = 'up'
if self.noDwellHigh and self.noDwellLow:
self.ramp_dir = 'upDown'
self.nchirpsPerPeriod = np.nan # self.nchirpSamples/(self.chirpLength)
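The register parsing above decodes 32-bit DDS frequency tuning words: each word maps to a frequency as word * fsysclk / 2**32, with the upper 8 hex digits of Reg0B holding the stop word and the lower 8 the start word. A sketch with a hypothetical register value:

```python
fsysclk = 1e9  # DDS system clock (Hz), as in ApresHeader

val = '6666666633333333'  # hypothetical Reg0B: stop word, then start word
f0 = int(val[8:], 16) * fsysclk / 2**32      # start frequency, ~200 MHz
f_stop = int(val[:8], 16) * fsysclk / 2**32  # stop frequency, ~400 MHz
```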
# --------------------------------------------------------------------------------------------
def to_matlab(self):
"""Convert all associated attributes into a dictionary formatted for use with :func:`scipy.io.savemat`
"""
outmat = {att: getattr(self, att) for att in vars(self)}
return outmat
def from_matlab(self, matlab_struct):
"""Associate all values from an incoming .mat file (i.e. a dictionary from :func:`scipy.io.loadmat`) with appropriate attributes
"""
# Unlike ApresFlags, this class has no attrs/attr_dims/bool_attrs lists,
# so copy over any incoming values that match existing attributes
for attr in list(vars(self)):
if attr in matlab_struct:
setattr(self, attr, matlab_struct[attr][0][0][0])
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3 license.
"""
Load ApRES data
This code is based on a series of Matlab scripts from Craig Stewart,
Keith Nicholls, and others.
The ApRES (Automated phase-sensitive Radio Echo Sounder) is a self-contained
instrument from BAS.
Author:
Benjamin Hills
bhills@uw.edu
University of Washington
Earth and Space Sciences
Sept 23 2019
"""
import numpy as np
import datetime
import re
from . import ApresData
# -----------------------------------------------------------------------------------------------------
def load_apres(fns_apres,burst=1,fs=40000, *args, **kwargs):
"""Load and concatenate all apres data from several files
Parameters
----------
fns_apres: list of file names for ApresData
each loads object to concatenate
Returns
-------
RadarData
A single, concatenated output.
"""
apres_data = []
for fn in fns_apres:
try:
apres_data.append(load_apres_single_file(fn, burst=burst, fs=fs, *args, **kwargs))
except Exception:
# bare Warning(...) only constructed an exception without raising it
import warnings
warnings.warn('Cannot load file: ' + fn)
from copy import deepcopy
out = deepcopy(apres_data[0])
for dat in apres_data[1:]:
if out.snum != dat.snum:
raise ValueError('Need the same number of vertical samples in each file')
if out.cnum != dat.cnum:
raise ValueError('Need the same number of chirps in each file')
if not np.all(out.travel_time == dat.travel_time):
raise ValueError('Need matching travel time vectors')
if not np.all(out.frequencies == dat.frequencies):
raise ValueError('Need matching frequency vectors')
out.data = np.vstack([[dat.data] for dat in apres_data])
out.chirp_num = np.vstack([[dat.chirp_num] for dat in apres_data])
out.chirp_att = np.vstack([[dat.chirp_att] for dat in apres_data])
out.chirp_time = np.vstack([[dat.chirp_time] for dat in apres_data])
out.time_stamp = np.hstack([dat.time_stamp for dat in apres_data])
out.temperature1 = np.hstack([dat.temperature1 for dat in apres_data])
out.temperature2 = np.hstack([dat.temperature2 for dat in apres_data])
out.battery_voltage = np.hstack([dat.battery_voltage for dat in apres_data])
out.bnum = np.shape(out.data)[0]
return out
def load_apres_single_file(fn_apres,burst=1,fs=40000, *args, **kwargs):
"""
Load ApRES data
This function calls the load_burst function below
Parameters
---------
fn_apres: string
file name
burst: int
number of bursts to load
fs: int
sampling frequency
### Original Matlab Notes ###
Craig Stewart
2013 April 24
2013 September 30 - corrected error in vif scaling
2014/5/20 time stamp moved here from fmcw_derive_parameters (so that this
is not overwritten later)
2014/5/21 changed how radar chirp is defined (now using chirp gradient as
fundamental parameter)
2014/5/22 fixed bug in chirptime
2014/8/21 moved make odd length to external (called from fmcw_range)
2014/10/22 KWN - edited to allow for new headers in RMB2 files
"""
## Load data and reshape array
if fn_apres[-4:] == '.mat':
# TODO: fix this in the __init__ file
apres_data = ApresData(fn_apres)
else:
apres_data = ApresData(None)
apres_data.header.update_parameters(fn_apres)
start_ind,end_ind = load_burst(apres_data, burst, fs)
# Extract just good chirp data from voltage record and rearrange into
# matrix with one chirp per row
# note: you can't just use reshape as we are also cropping the 20K samples
# of sync tone etc which occur after each 40K of chirp.
AttSet = apres_data.header.attenuator1 + 1j*apres_data.header.attenuator2 # unique code for attenuator setting
## Add metadata to structure
# Sampling parameters
if apres_data.header.file_format is None:
raise TypeError("File format is 'None', cannot load")
else:
if apres_data.header.file_format != 5:
raise TypeError('Loading functions have only been written for rmb5 data.\
Look back to the original Matlab scripts if you need to implement earlier formats.')
else:
apres_data.header.f1 = apres_data.header.f0 + apres_data.header.chirp_length * apres_data.header.chirp_grad/2./np.pi
apres_data.header.bandwidth = apres_data.header.chirp_length * apres_data.header.chirp_grad/2/np.pi
apres_data.header.fc = apres_data.header.f0 + apres_data.header.bandwidth/2.
apres_data.dt = 1./apres_data.header.fs
apres_data.header.er = 3.18
apres_data.header.ci = 3e8/np.sqrt(apres_data.header.er)
apres_data.header.lambdac = apres_data.header.ci/apres_data.header.fc
# Load each chirp into a row
data_load = np.zeros((apres_data.cnum,apres_data.snum)) # preallocate array
apres_data.chirp_num = np.arange(apres_data.cnum)
apres_data.chirp_att = np.zeros((apres_data.cnum)).astype(np.cdouble)
apres_data.chirp_time = np.zeros((apres_data.cnum))
chirp_interval = 1.6384/(24.*3600.)  # chirp interval in days; TODO: why is this assigned directly?
for chirp in range(apres_data.cnum):
data_load[chirp,:] = apres_data.data[start_ind[chirp]:end_ind[chirp]]
apres_data.chirp_att[chirp] = AttSet[chirp//apres_data.cnum] # attenuator setting for chirp
apres_data.chirp_time[chirp] = apres_data.decday + chirp_interval*chirp # time of chirp (chirp is 0-indexed, unlike the 1-indexed Matlab original)
apres_data.data = data_load
# Create time and frequency stamp for samples
apres_data.travel_time = apres_data.dt*np.arange(apres_data.snum) # sampling times (rel to first)
apres_data.frequencies = apres_data.header.f0 + apres_data.travel_time*apres_data.header.chirp_grad/(2.*np.pi)
apres_data.travel_time *= 1e6
apres_data.data_dtype = apres_data.data.dtype
return apres_data
# -----------------------------------------------------------------------------------------------------
def load_burst(self,burst=1,fs=40000,max_header_len=2000,burst_pointer=0):
"""
Load bursts from the apres acquisition.
Normally, this should be called from the load_apres function.
Parameters
---------
burst: int
number of bursts to load
fs: int
sampling frequency
max_header_len: int
maximum length to read for header (can be too long)
burst_pointer: int
where to start reading the file for bursts
Output
---------
### Original Matlab Script Notes ###
Read FMCW data file from after Oct 2014 (RMB2b + VAB Iss C, SW Issue >= 101)
Corrected so that Sampling Frequency has correct use (ie, not used in
this case)
"""
if self.header.fn is None:
raise TypeError('Read in the header before loading data.')
if self.header.file_format != 5:
raise TypeError('Loading functions have only been written for rmb5 data.\
Look back to the original Matlab scripts if you need to implement earlier formats.')
try:
fid = open(self.header.fn, 'rb')
except OSError:
# Unknown file
self.flags.file_read_code = 'Unable to read file ' + self.header.fn
raise TypeError('Cannot open file', self.header.fn)
# Get the total length of the file
fid.seek(0,2)
file_len = fid.tell()
burst_count = 1
# --- Read bursts in a loop --- #
while burst_count <= burst and burst_pointer <= file_len - max_header_len:
# Go to burst pointer and read the header for the burst
fid.seek(burst_pointer)
self.header.read_header(self.header.fn,max_header_len)
try:
# Read header values
strings = ['N_ADC_SAMPLES=','NSubBursts=','Average=','nAttenuators=','Attenuator1=',
'AFGain=','TxAnt=','RxAnt=']
output = np.empty((len(strings))).astype(str)
for i,string in enumerate(strings):
if string in self.header.header_string:
search_start = self.header.header_string.find(string)
search_end = self.header.header_string[search_start:].find('\\')
output[i] = self.header.header_string[search_start+len(string):search_end+search_start]
# Write header values to data object
self.snum = int(output[0])
self.n_subbursts = int(output[1])
self.average = int(output[2])
self.header.n_attenuators = int(output[3])
self.header.attenuator1 = np.array(output[4].split(',')).astype(int)[:self.header.n_attenuators]
self.header.attenuator2 = np.array(output[5].split(',')).astype(int)[:self.header.n_attenuators]
self.header.tx_ant = np.array(output[6].split(',')).astype(int)
self.header.rx_ant = np.array(output[7].split(',')).astype(int)
self.header.tx_ant = self.header.tx_ant[self.header.tx_ant==1]
self.header.rx_ant = self.header.rx_ant[self.header.rx_ant==1]
if self.average != 0:
self.cnum = 1
else:
self.cnum = self.n_subbursts*len(self.header.tx_ant)*\
len(self.header.rx_ant)*self.header.n_attenuators
# End of burst
search_string = '*** End Header ***'
search_ind = self.header.header_string.find(search_string)
burst_pointer += search_ind + len(search_string)
except Exception:
# If the burst read is unsuccessful exit with an updated read code
self.flags.file_read_code = 'Corrupt header in burst ' + str(burst_count) + ' for file ' + self.header.fn
self.bnum = burst_count
raise TypeError('Burst Read Failed.')
# Move the burst pointer
if burst_count < burst and burst_pointer <= file_len - max_header_len:
if self.average != 0:
burst_pointer += self.cnum*self.snum*4
else:
burst_pointer += self.cnum*self.snum*2
burst_count += 1
# --- Get remaining information from burst header --- #
# Look for a few different strings and save output
strings = ['Time stamp=','Temp1=','Temp2=','BatteryVoltage=']
output = []
for i,string in enumerate(strings):
if string in self.header.header_string:
search_start = [m.start() for m in re.finditer(string, self.header.header_string)]
search_end = [self.header.header_string[ind:].find('\\') for ind in search_start]
out = [self.header.header_string[search_start[i]+len(string):search_end[i]+search_start[i]] for i in range(len(search_start))]
output.append(out)
if 'Time stamp' not in self.header.header_string:
self.flags.file_read_code = 'Burst ' + str(self.bnum) + ' not found in file ' + self.header.fn
else:
self.time_stamp = np.array([datetime.datetime.strptime(str_time, '%Y-%m-%d %H:%M:%S') for str_time in output[0]])
timezero = datetime.datetime(1, 1, 1, 0, 0, 0)
day_offset = self.time_stamp - timezero
self.decday = np.array([offset.days for offset in day_offset]) + 377. # Matlab compatible
self.temperature1 = np.array(output[1]).astype(float)
self.temperature2 = np.array(output[2]).astype(float)
self.battery_voltage = np.array(output[3]).astype(float)
# --- Read in the actual data --- #
# Go to the end of the header
end_byte = b'*** End Header ***'
data_ind = fid.read(max_header_len).rfind(end_byte) + len(end_byte)
fid.seek(data_ind)
# Check that all the requested bursts were read
if burst_count != burst+1:
# too few bursts in file
self.flags.file_read_code = 'Burst ' + str(self.bnum) + ' not found in file ' + self.header.fn
self.bnum = burst_count - 1
raise TypeError('Burst ' + str(self.bnum) + ' not found in file ' + self.header.fn)
else:
# TODO: Check the other readers for average == 1 or average == 2
if self.average == 2:
self.data = np.fromfile(fid,dtype='uint32',count=self.cnum*self.snum)
elif self.average == 1:
fid.seek(burst_pointer+1)
self.data = np.fromfile(fid,dtype='float32',count=self.cnum*self.snum)  # 4-byte floats; 'float4' is not a valid numpy dtype
else:
self.data = np.fromfile(fid,dtype='uint16',count=self.cnum*self.snum)
if fid.tell()-(burst_pointer-1) < self.cnum*self.snum:
self.flags.file_read_code = 'Corrupt header in burst ' + str(burst_count) + ' for file ' + self.header.fn
self.data[self.data<0] = self.data[self.data<0] + 2**16.
self.data = self.data.astype(float) * 2.5/2**16.
if self.average == 2:
self.data /= (self.n_subbursts*self.header.n_attenuators)
start_ind = np.transpose(np.arange(0,self.snum*self.cnum,self.snum))
end_ind = start_ind + self.snum
self.bnum = burst
fid.close()
# Clean temperature record (wrong data type?)
self.temperature1[self.temperature1>300] -= 512
self.temperature2[self.temperature2>300] -= 512
self.flags.file_read_code = 'Successful Read'
return start_ind,end_ind
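The header parsing in `load_burst` scans the header string for `KEY=` tokens and reads up to the next `\` delimiter. A minimal standalone sketch of that technique (the header fragment and keys below are illustrative, not a complete ApRES header):

```python
def parse_header(header_string, keys, delimiter='\\'):
    """Map each key to the text between 'KEY=' and the next delimiter."""
    out = {}
    for key in keys:
        token = key + '='
        start = header_string.find(token)
        if start == -1:
            continue  # key absent; leave it out rather than guessing
        start += len(token)
        end = header_string.find(delimiter, start)
        out[key] = header_string[start:] if end == -1 else header_string[start:end]
    return out

# Illustrative fragment in the same KEY=value\ form scanned above
header = 'N_ADC_SAMPLES=40000\\NSubBursts=20\\Average=0\\'
vals = parse_header(header, ['N_ADC_SAMPLES', 'NSubBursts', 'Average'])
print(vals)  # {'N_ADC_SAMPLES': '40000', 'NSubBursts': '20', 'Average': '0'}
```

The values come back as strings, mirroring the loader, which casts them (`int`, `np.array(...).astype(int)`) only after extraction.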
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
# Worked from minimal example from scipy-examples.org
"""Fast Migration. Just Kirchhoff for now"""
import sys
cimport numpy as np
np.import_array()
import numpy as np
import time
# cdefine the signature of our c function
# Need this so that the function is recognized
cdef extern from "mig_cython.h":
void mig_kirch_loop (double * migdata, int tnum, int snum, double * dist, double * zs, double * zs2, double * tt_sec, double vel, double * gradD, double max_travel_time, int nearfield)
def migrationKirchoffLoop(np.ndarray[double, ndim=2, mode="c"] migdata not None,
int tnum,
int snum,
np.ndarray[double, ndim=1, mode="c"] dist not None,
np.ndarray[double, ndim=1, mode="c"] zs not None,
np.ndarray[double, ndim=1, mode="c"] zs2 not None,
np.ndarray[double, ndim=1, mode="c"] tt_sec not None,
float vel,
np.ndarray[double, ndim=2, mode="c"] gradD not None,
float max_travel_time,
bint nearfield
):
"""I am not sure if this wrapper is needed, but I think it gives us type checking so I'm leaving it"""
mig_kirch_loop(<double*> np.PyArray_DATA(migdata),
tnum,
snum,
<double*> np.PyArray_DATA(dist),
<double*> np.PyArray_DATA(zs),
<double*> np.PyArray_DATA(zs2),
<double*> np.PyArray_DATA(tt_sec),
vel,
<double*> np.PyArray_DATA(gradD),
max_travel_time,
int(nearfield)
)
def migrationKirchhoff(dat, vel=1.69e8, nearfield=False):
"""Kirchhoff Migration (Berkhout 1980; Schneider 1978; Berryhill 1979)
This migration method uses an integral solution to the scalar wave equation (Yilmaz 2001, eqn 4.5).
The algorithm cycles through every sample in each trace, creating a hypothetical diffraction
hyperbola for that location,
t(x)^2 = t(0)^2 + (2x/v)^2
To migrate, we integrate the power along that hyperbola and assign the solution to the apex point.
The integral solution (Yilmaz 2001, eqn 4.5) has two terms: a far-field term and a
near-field term. Most algorithms ignore the near-field term because it is small. Here there is
an option to include it, but the default is to ignore it.
Parameters
---------
dat: data as a class in the ImpDAR format
vel: wave velocity, default is for ice
nearfield: boolean to indicate whether or not to use the nearfield term in summation
Output
---------
dat: data as a class in the ImpDAR format (with dat.data now being migrated data)
"""
print('Kirchhoff Migration (diffraction summation) of %.0fx%.0f matrix' % (dat.tnum, dat.snum))
# check that the arrays are compatible
_check_data_shape(dat)
# start the timer
start = time.time()
# Calculate the time derivative of the input data
gradD = np.gradient(np.ascontiguousarray(dat.data, dtype=np.float64), dat.travel_time / 1.0e6, axis=0)
# Create an empty array to fill with migrated data
migdata = np.ascontiguousarray(np.zeros_like(dat.data, dtype=np.float64), dtype=np.float64)
# Try to cache some variables that we need lots
tt_sec = dat.travel_time / 1.0e6
max_travel_time = np.max(tt_sec)
# Cache the depths
zs = vel * tt_sec / 2.0
zs2 = zs**2.
migrationKirchoffLoop(migdata,
dat.tnum,
dat.snum,
np.ascontiguousarray(dat.dist, dtype=np.float64) * 1.0e3,
np.ascontiguousarray(zs, dtype=np.float64),
np.ascontiguousarray(zs2, dtype=np.float64),
np.ascontiguousarray(tt_sec, dtype=np.float64),
vel,
np.ascontiguousarray(gradD, dtype=np.float64),
max_travel_time,
nearfield
)
dat.data = migdata.copy()
# print the total time
print('Kirchhoff Migration of %.0fx%.0f matrix complete in %.2f seconds'
% (dat.tnum, dat.snum, time.time() - start))
return dat
def _check_data_shape(dat):
if np.size(dat.data, 1) != dat.tnum or np.size(dat.data, 0) != dat.snum:
raise ValueError('The input array must be of size (tnum,snum)')
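The diffraction hyperbola from the docstring above, t(x)^2 = t(0)^2 + (2x/v)^2, is cheap to evaluate directly. A quick sketch with example values (apex time and offsets are arbitrary; the velocity matches the ice default used by `migrationKirchhoff`):

```python
import numpy as np

v = 1.69e8   # m/s, default ice velocity used by migrationKirchhoff
t0 = 2.0e-6  # s, two-way travel time at the hyperbola apex (example value)
x = np.linspace(-100.0, 100.0, 5)  # m, horizontal offset from the apex

# Diffraction hyperbola: t(x)^2 = t(0)^2 + (2x/v)^2
t = np.sqrt(t0**2 + (2.0 * x / v)**2)
# t is symmetric about x = 0 and equals t0 at the apex
```

The migration loop sums energy along curves of exactly this shape and assigns the result to the apex sample.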
/*
* mig_cython.h
* Copyright (C) 2019 dlilien <dlilien@berens>
*
* Distributed under terms of the GNU GPL3.0 license.
*/
#ifndef MIG_CYTHON_H
#define MIG_CYTHON_H
void mig_kirch_loop (double * migdata, int tnum, int snum, double * dist, double * zs, double * zs2, double * tt_sec, double vel, double * gradD, double max_travel_time, int nearfield);
#endif /* !MIG_CYTHON_H */

0 0 0 0
1 10 0.1 100
2 20 0.2 200
3 30 0.3 300
4 40 0.4 400
5 50 0.5 500
6 60 0.6 600
7 70 0.7 700
8 80 0.8 800
9 90 0.9 900
10 100 1.0 1000
11 110 1.1 1100
12 120 1.2 1200
13 130 1.3 1300
14 140 1.4 1400
15 150 1.5 1500
16 160 1.6 1600
17 170 1.7 1700
18 180 1.8 1800
19 190 1.9 1900

The example SEGY data are taken from https://wiki.seg.org/wiki/2004_BP_velocity_estimation_benchmark_model.
They were modified to reduce file size by keeping only the traces from the first shot.
Other example data were collected by researchers at UW and may similarly have been modified to reduce file size.

0,800
50,900
51,910
100,910
10000,910

2.161803890360850394e+08 4.594172468512858387e+01 7.741986171574338016e+01
1.373590616341005266e+08 1.027629362876940888e+02 1.273498750899812215e+01
1.105551985408406109e+08 1.242613149989473555e+02 3.567996657332622590e+01
1.764799723261733055e+08 9.505746376479456217e+01 1.224484694216248748e+01
2.082878347534067631e+08 1.654637071161263151e+01 3.344817734577820545e+01
1.592992890153085589e+08 2.940348509908498897e+01 6.943723807156430894e+01
1.407290914004876316e+08 9.463773973739745315e+01 7.181624202990849426e+01
1.940109286060774326e+08 4.208701683181728015e+01 6.128101655625451372e+01
2.021185277219191790e+08 5.016492377995131591e+01 9.463437731834360989e+00
1.738347373491538465e+08 6.987523543668119430e+01 7.016725038402061898e+01
1.151745749755704552e+08 8.345194806258928111e+01 8.959924083962857821e+01
1.975055458718187809e+08 1.399411618769065626e+02 6.459802239533919987e+01
1.840599034874652922e+08 1.321869744910018767e+02 7.562508293327745434e+01
1.899529091170004010e+08 1.252186987717308568e+02 4.506342411724115493e+01
1.686450417173107266e+08 4.772966254656429896e+01 8.942740961449356973e+01
1.997074357683942318e+08 8.905282122107591647e+01 3.150531431129083870e+01
1.868645079931488335e+08 7.244317807736813108e+01 6.513397225607646135e+01
1.679368646286510527e+08 4.185022413556256282e+01 6.531932826524197822e+01
1.792355332274949849e+08 6.168705094723811300e+01 9.012750463715173055e+01
1.543890347547339201e+08 1.254830169129547528e+02 7.959919018375428834e+01
1.677e8 0
1.677e8 50
1.2e8 51
2.2e8 100

#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Make sure that we can successfully read BSI input files
"""
import os
import unittest
from impdar.lib.load import load_bsi
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestBSI(unittest.TestCase):
@unittest.skipIf(not load_bsi.H5, 'h5py is not available')
def test_load_bsi(self):
load_bsi.load_bsi(os.path.join(THIS_DIR, 'input_data', 'test_bsi.h5'))
@unittest.skipIf(load_bsi.H5, 'h5py is available')
def test_load_bsi_noh5py(self):
with self.assertRaises(ImportError):
load_bsi.load_bsi(os.path.join(THIS_DIR, 'input_data', 'test_bsi.h5'))
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Test converting between filetypes
"""
import os
import pytest
import unittest
from impdar.lib import convert
from impdar.lib.RadarData._RadarDataSaving import CONVERSIONS_ENABLED
from impdar.lib.load.load_segy import SEGY
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestConvert(unittest.TestCase):
@unittest.skipIf(not CONVERSIONS_ENABLED, 'No GDAL on this version')
def test_guessload2shp(self):
convert.convert(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), 'shp')
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data.shp')))
convert.convert([os.path.join(THIS_DIR, 'input_data', 'test_pe.DT1')], 'shp')
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_pe.shp')))
convert.convert([os.path.join(THIS_DIR, 'input_data', 'test_gssi.DZT')], 'shp')
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_gssi.shp')))
@unittest.skipIf(not CONVERSIONS_ENABLED, 'No GDAL on this version')
def test_knownload2shp(self):
convert.convert(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), 'shp', in_fmt='mat')
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data.shp')))
convert.convert([os.path.join(THIS_DIR, 'input_data', 'test_pe.DT1')], 'shp', in_fmt='pe')
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_pe.shp')))
convert.convert([os.path.join(THIS_DIR, 'input_data', 'test_gssi.DZT')], 'shp', in_fmt='gssi')
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_gssi.shp')))
@unittest.skipIf(not CONVERSIONS_ENABLED, 'No GDAL on this version')
def test_knownload2mat(self):
convert.convert([os.path.join(THIS_DIR, 'input_data', 'test_pe.DT1')], 'mat', in_fmt='pe')
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_pe.mat')))
convert.convert([os.path.join(THIS_DIR, 'input_data', 'test_gssi.DZT')], 'mat', in_fmt='gssi')
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_gssi.mat')))
@unittest.skipIf(SEGY, 'SEGY enabled, this is a failure test')
def test_nosegy(self):
with self.assertRaises(ImportError):
convert.convert([os.path.join(THIS_DIR, 'input_data', 'test_pe.DT1')], 'mat', in_fmt='segy')
with self.assertRaises(ImportError):
convert.convert([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], 'sgy', in_fmt='mat')
@unittest.skipIf(not SEGY, 'SEGY needed for this test')
def test_segy_save(self):
pytest.importorskip('segyio', 'No SEGY on this version')
convert.convert(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), 'sgy', in_fmt='mat')
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data.sgy')))
convert.convert(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200.segy'), 'mat', in_fmt='segy')
def test_badinsout(self):
with self.assertRaises(ValueError):
convert.convert([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], 'dummy')
with self.assertRaises(ValueError):
convert.convert([os.path.join(THIS_DIR, 'input_data', 'small_data.wtf')], 'shp')
def tearDown(self):
for ext in ['shp', 'shx', 'dbf', 'prj', 'sgy']:
for pref in ['small_data', 'test_gssi', 'test_pe']:
if os.path.exists(os.path.join(THIS_DIR, 'input_data', pref + '.' + ext)):
os.remove(os.path.join(THIS_DIR, 'input_data', pref + '.' + ext))
for pref in ['test_gssi', 'test_pe', 'shots0001_0200']:
if os.path.exists(os.path.join(THIS_DIR, 'input_data', pref + '.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', pref + '.mat'))
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Test the gecko file import for data from the St. Olaf HF radar.
Author:
Benjamin Hills
benjaminhhills@gmail.com
University of Washington
Earth and Space Sciences
Mar 28 2019
"""
import sys
import os
import unittest
from impdar.lib.load import load_olaf
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestLoadGecko(unittest.TestCase):
@unittest.skipIf(sys.version_info[0] < 3, 'Bytes are weird in 2')
def test_load_gecko(self):
load_olaf.load_olaf(os.path.join(THIS_DIR, 'input_data', 'test_gecko.gtd'), channel=1)
load_olaf.load_olaf(os.path.join(THIS_DIR, 'input_data', 'test_gecko.gtd'), channel=2)
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Make sure that we can successfully read gprMax input files
"""
import os
import unittest
from impdar.lib.load import load_gprMax
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestGPRMax(unittest.TestCase):
@unittest.skipIf(not load_gprMax.H5, 'No h5py found')
def test_load(self):
load_gprMax.load_gprMax(os.path.join(THIS_DIR, 'input_data', 'rectangle_gprMax_Bscan.h5'))
@unittest.skipIf(load_gprMax.H5, 'h5py is available')
def test_load_noh5py(self):
with self.assertRaises(ImportError):
load_gprMax.load_gprMax(os.path.join(THIS_DIR, 'input_data', 'rectangle_gprMax_Bscan.h5'))
def tearDown(self):
fn = os.path.join(THIS_DIR, 'input_data', 'rectangle_gprMax_Bscan_raw.mat')
if os.path.exists(fn):
os.remove(fn)
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
"""
import sys
import os
import unittest
import numpy as np
from impdar.lib.NoInitRadarData import NoInitRadarData
from impdar.lib import gpslib
if sys.version_info[0] >= 3:
from unittest.mock import patch, MagicMock
else:
from mock import patch, MagicMock
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestGPS(unittest.TestCase):
@patch('impdar.lib.gpslib.kinematic_gps_control')
def test_kinematic_gps_csv(self, mock_kgc):
dats = [NoInitRadarData(big=True)]
gpslib.kinematic_gps_csv(dats, os.path.join(THIS_DIR, 'input_data', 'gps_control.csv'))
self.assertTrue(np.allclose(np.arange(0, 2.0, 0.1), mock_kgc.call_args[0][1]))
self.assertTrue(np.allclose(np.arange(0, 200, 10), mock_kgc.call_args[0][2]))
self.assertTrue(np.allclose(np.arange(0, 2000, 100), mock_kgc.call_args[0][3]))
self.assertTrue(np.allclose(np.arange(0, 20, 1), mock_kgc.call_args[0][4]))
with self.assertRaises(ValueError):
gpslib.kinematic_gps_csv(dats, os.path.join(THIS_DIR, 'input_data', 'gps_control.csv'), names='dumbanddummer')
with self.assertRaises(ValueError):
gpslib.kinematic_gps_csv(dats, os.path.join(THIS_DIR, 'input_data', 'gps_control.csv'), names='dumb,dumb,and,dummer')
def test_kinematic_gps_control(self):
dats = [NoInitRadarData(big=True)]
gpslib.kinematic_gps_control(dats, np.arange(0, 2.0, 0.1), np.arange(40, 60., 1.), np.arange(0., 2000., 100.), np.arange(0., 20., 1.), guess_offset=False)
self.assertTrue(np.allclose(np.arange(0, 2.0, 0.1), dats[0].lat))
self.assertTrue(np.allclose(np.arange(40, 60, 1), dats[0].long))
self.assertTrue(np.allclose(np.arange(0, 2000, 100), dats[0].elev))
with self.assertRaises(ValueError):
gpslib.kinematic_gps_control(dats, np.arange(0, 2.0, 0.1), np.arange(0, 200, 10), np.arange(0, 2000, 100), np.arange(0, 20, 1), guess_offset=True)
dat = NoInitRadarData(big=True)
gpslib.kinematic_gps_control(dat, np.arange(0, 2.0, 0.1), np.arange(40, 60, 1), np.arange(0, 2000, 100), np.arange(0, 20, 1), guess_offset=False)
self.assertTrue(np.allclose(np.arange(0, 2.0, 0.1), dat.lat))
self.assertTrue(np.allclose(np.arange(40, 60, 1), dat.long))
self.assertTrue(np.allclose(np.arange(0, 2000, 100), dat.elev))
# We should be allowed to be off by 360 in longitude
dat = NoInitRadarData(big=True)
gpslib.kinematic_gps_control(dat, np.arange(0, 2.0, 0.1), np.arange(40, 60, 1) - 360., np.arange(0, 2000, 100), np.arange(0, 20, 1), guess_offset=False)
self.assertTrue(np.allclose(np.arange(0, 2.0, 0.1), dat.lat))
self.assertTrue(np.allclose(np.arange(40, 60, 1), dat.long))
self.assertTrue(np.allclose(np.arange(0, 2000, 100), dat.elev))
# and off the other way
dat = NoInitRadarData(big=True)
dat.long = dat.long - 360.
gpslib.kinematic_gps_control(dat, np.arange(0, 2.0, 0.1), np.arange(40, 60, 1), np.arange(0, 2000, 100), np.arange(0, 20, 1), guess_offset=False)
self.assertTrue(np.allclose(np.arange(0, 2.0, 0.1), dat.lat))
self.assertTrue(np.allclose(np.arange(40, 60, 1), dat.long))
self.assertTrue(np.allclose(np.arange(0, 2000, 100), dat.elev))
dat = NoInitRadarData(big=True)
gpslib.kinematic_gps_control(dat, np.arange(-1.0, 3.0, 0.1), np.arange(20, 60, 1), np.arange(-1000, 3000, 100), np.arange(-10, 30, 1), guess_offset=True)
# Multiple inputs
dats = [NoInitRadarData(big=True), NoInitRadarData(big=True)]
gpslib.kinematic_gps_control(dats, np.arange(-1.0, 3.0, 0.1), np.arange(40, 80, 1), np.arange(-1000, 3000, 100), np.arange(-10, 30, 1), guess_offset=True)
# Bad timing
dat = NoInitRadarData(big=True)
dat.decday = dat.decday + 10
with self.assertRaises(ValueError):
gpslib.kinematic_gps_control(dat, np.arange(0, 2.0, 0.1), np.arange(0, 20, 1), np.arange(0, 2000, 100), np.arange(0, 20, 1))
@patch('impdar.lib.gpslib.kinematic_gps_control')
def test_kinematic_gps_mat(self, mock_kgc):
dats = [NoInitRadarData(big=True)]
gpslib.kinematic_gps_mat(dats, os.path.join(THIS_DIR, 'input_data', 'gps_control.mat'))
self.assertTrue(np.allclose(np.arange(0, 2.0, 0.1), mock_kgc.call_args[0][1]))
self.assertTrue(np.allclose(np.arange(0, 200, 10), mock_kgc.call_args[0][2]))
self.assertTrue(np.allclose(np.arange(0, 2000, 100), mock_kgc.call_args[0][3]))
self.assertTrue(np.allclose(np.arange(0, 20, 1), mock_kgc.call_args[0][4]))
with self.assertRaises(ValueError):
gpslib.kinematic_gps_mat(dats, os.path.join(THIS_DIR, 'input_data', 'gps_control_badfields.mat'), extrapolate=False)
@patch('impdar.lib.gpslib.kinematic_gps_mat')
@patch('impdar.lib.gpslib.kinematic_gps_csv')
def test_interp(self, mock_kgc, mock_kgm):
dats = [NoInitRadarData(big=True)]
dats[0].constant_space = MagicMock()
gpslib.interp(dats, 10., fn='dum.csv')
self.assertTrue(len(mock_kgc.mock_calls) > 0)
self.assertTrue(len(dats[0].constant_space.mock_calls) > 0)
dats[0].constant_space = MagicMock()
gpslib.interp(dats, 10., fn='dum.mat')
self.assertTrue(len(mock_kgm.mock_calls) > 0)
self.assertTrue(len(dats[0].constant_space.mock_calls) > 0)
with self.assertRaises(Exception):
gpslib.interp(dats, 10., fn='dum.badext')
dats = [NoInitRadarData(big=True)]
dats[0].constant_space = MagicMock()
gpslib.interp(dats, 10.)
self.assertTrue(len(dats[0].constant_space.mock_calls) > 0)
@unittest.skipIf(not gpslib.conversions_enabled, 'No gdal')
def test_conversions(self):
pts = np.array([[-8., 10.], [-9., 11.], [-10., 12.]])
conv_utm, _ = gpslib.get_utm_conversion(-8.0, 10.0)
proj_pts = conv_utm(pts)
self.assertTrue(np.all(~np.isnan(proj_pts)))
pts = np.array([[-88., 10.], [-89., 11.], [-89.1, 12.]])
conv_sps, _ = gpslib.get_conversion(t_srs='EPSG:3031')
proj_pts = conv_sps(pts)
self.assertTrue(np.all(~np.isnan(proj_pts)))
@unittest.skipIf(gpslib.conversions_enabled, 'GDAL found, this is a failure test')
def test_conversions_off(self):
# we want to be able to import gpslib but later fail
with self.assertRaises(ImportError):
conv_utm = gpslib.get_utm_conversion(-8.0, 10.0)
with self.assertRaises(ImportError):
conv_sps = gpslib.get_conversion(t_srs='EPSG:3031')
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Make sure that we can successfully read gssi input files
"""
import os
import unittest
from impdar.lib.load import load_gssi
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestGSSI(unittest.TestCase):
def test_load4000_withDZG(self):
load_gssi.load_gssi(os.path.join(THIS_DIR, 'input_data', 'test_gssi.DZT'))
def test_load3000(self):
load_gssi.load_gssi(os.path.join(THIS_DIR, 'input_data', 'GSSI_3000.DZT'))
def test_load4000_withoutDZG(self):
load_gssi.load_gssi(os.path.join(THIS_DIR, 'input_data', 'test_gssi_justdzt.DZT'))
def test_load4000_partialDZG(self):
load_gssi.load_gssi(os.path.join(THIS_DIR, 'input_data', 'test_gssi_partialgps.DZT'))
def test_save_withDZG(self):
load_gssi.load_gssi(os.path.join(THIS_DIR, 'input_data', 'test_gssi.DZT')).save(os.path.join(THIS_DIR, 'input_data', 'test_gssi_raw.mat'))
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_gssi_raw.mat'))
def test_save_withoutDZG(self):
load_gssi.load_gssi(os.path.join(THIS_DIR, 'input_data', 'test_gssi_justdzt.DZT')).save(os.path.join(THIS_DIR, 'input_data', 'test_gssi_justdzt_raw.mat'))
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_gssi_justdzt_raw.mat'))
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
"""
import sys
import os
import unittest
import numpy as np
from impdar.lib.RadarData._RadarDataSaving import CONVERSIONS_ENABLED
from impdar.lib.RadarData import RadarData
try:
import matplotlib
matplotlib.use('QT5Agg')
from PyQt5 import QtWidgets, QtCore
from impdar.gui.pickgui import InteractivePicker, VBPInputDialog, CropInputDialog, warn, plt
app = QtWidgets.QApplication(sys.argv)
qt = True
except ImportError:
qt = False
if sys.version_info[0] >= 3:
from unittest.mock import MagicMock, patch
else:
from mock import MagicMock, patch
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class DummyEvent:
def accept(self):
pass
def ignore(self):
pass
@unittest.skipIf(not qt, 'No Qt')
class TestInteractivePicker(unittest.TestCase):
def setUp(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.ip = InteractivePicker(data)
def test_other_lims(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
ip = InteractivePicker(data, xdat='dist')
self.assertEqual(ip.x, 'dist')
with self.assertRaises(ValueError):
ip = InteractivePicker(data, xdat='dum')
ip = InteractivePicker(data, ydat='twtt')
self.assertEqual(ip.y, 'twtt')
ip = InteractivePicker(data, ydat='depth')
self.assertEqual(ip.y, 'depth')
data.nmo_depth = data.travel_time
ip = InteractivePicker(data, ydat='depth')
self.assertEqual(ip.y, 'depth')
with self.assertRaises(ValueError):
ip = InteractivePicker(data, ydat='dum')
with self.assertRaises(ValueError):
ip = InteractivePicker(data, ydat='elev')
data.elevation = np.arange(ip.dat.tnum)
data.flags.elev = True
ip = InteractivePicker(data, x_range=None)
self.assertEqual(ip.x_range, (0, ip.dat.tnum))
def test_PickNum(self):
self.ip.pickNumberBox.setValue(1)
def test_update_polarity(self):
self.assertEqual(self.ip.dat.picks.pickparams.pol, 1)
self.ip.wbw_radio.setChecked(True)
self.assertEqual(self.ip.dat.picks.pickparams.pol, -1)
def test_reverse_color(self):
self.assertEqual(self.ip.im.get_cmap(), plt.cm.get_cmap(self.ip.ColorSelector.currentText()))
self.ip._update_color_reversal(QtCore.Qt.Checked)
self.assertEqual(self.ip.im.get_cmap(), plt.cm.get_cmap(self.ip.ColorSelector.currentText() + '_r'))
def test_select_lines_click(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
self.ip = InteractivePicker(data)
event = DummyEvent()
event.artist = self.ip.cline[0]
self.ip._select_lines_click(event)
self.assertEqual(self.ip.pickNumberBox.value(), 1)
event.artist = 'dumdum'
self.ip._select_lines_click(event)
self.assertEqual(self.ip.pickNumberBox.value(), 1)
event.artist = self.ip.cline[1]
self.ip._select_lines_click(event)
self.assertEqual(self.ip.pickNumberBox.value(), 5)
def test_freq_update(self):
p = self.ip.dat.picks.pickparams.plength
self.ip._freq_update(678)
self.assertEqual(self.ip.dat.picks.pickparams.freq, 678)
self.assertFalse(p == self.ip.dat.picks.pickparams.plength)
def test_add_pick(self):
#blank pick
self.ip._add_pick()
self.assertTrue(self.ip.dat.picks.samp1.shape[0] == 1)
# Overwrite slot
self.ip._add_pick()
self.assertTrue(self.ip.dat.picks.samp1.shape[0] == 1)
# add to existing
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
self.ip = InteractivePicker(data)
self.assertTrue(self.ip.dat.picks.samp1.shape[0] == 2)
self.ip._add_pick()
self.assertTrue(self.ip.dat.picks.samp1.shape[0] == 3)
# Check that we can add a non-blank pick
self.ip._add_pick(snum=10, tnum=2)
self.assertTrue(self.ip.dat.picks.samp1.shape[0] == 3)
self.assertTrue(self.ip.current_pick[1, 2] > 5)
# snum but no tnum
self.ip._add_pick(snum=10, tnum=None)
self.assertTrue(self.ip.current_pick[1, 0] > 5)
def test_color_select(self):
self.ip._color_select('bone')
self.assertTrue(self.ip.im.get_cmap(), 'bone')
self.ip._color_select('CEGSIC')
self.assertTrue(self.ip.im.get_cmap(), 'CEGSIC')
def test_lim_update(self):
self.ip._update_lims(-100, 100)
self.assertEqual(self.ip.im.get_clim(), (-100, 100))
with self.assertRaises(ValueError):
self.ip._update_lims(100, -100)
self.ip.minSpinner.setValue(-999)
self.ip.maxSpinner.setValue(999)
self.assertEqual(self.ip.im.get_clim(), (-999, 999))
self.ip.minSpinner.setValue(1000)
self.assertEqual(self.ip.im.get_clim(), (1000, 1001))
def test_mode_update(self):
self.ip._mode_update()
self.ip._mode_update()
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
ip = InteractivePicker(data)
ip._mode_update()
ip._mode_update()
def test_edit_lines_click_existingline(self):
# First, plain left click
# event has x and y data
event = DummyEvent()
event.xdata = 10.
event.ydata = 1.0e-1
event.button = 1
self.ip._freq_update(800)
# assume we have a pick
self.ip._add_pick(snum=10, tnum=1)
self.ip._add_point_pick = MagicMock()
self.ip.update_lines = MagicMock()
self.ip._edit_lines_click(event)
self.assertTrue(self.ip._add_point_pick.called)
self.assertTrue(self.ip.update_lines.called)
# now nanpick
self.ip._n_pressed = True
self.ip._add_nanpick = MagicMock()
self.ip.update_lines = MagicMock()
# prevent a dialog box
with patch('impdar.gui.pickgui.warn'):
self.ip._edit_lines_click(event)
self.assertTrue(self.ip._add_nanpick.called)
self.assertTrue(self.ip.update_lines.called)
self.ip._n_pressed = False
# now delete pick
event.button = 3
self.ip._delete_picks = MagicMock()
self.ip.update_lines = MagicMock()
self.ip._edit_lines_click(event)
self.assertTrue(self.ip._delete_picks.called)
self.assertTrue(self.ip.update_lines.called)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
def test_edit_lines_click_newline(self):
# First, plain left click
# event has x and y data
event = DummyEvent()
event.xdata = 10.
event.ydata = 1.0e-1
event.button = 1
# assume we have no picks
self.ip._add_pick = MagicMock()
self.ip.update_lines = MagicMock()
self.ip._edit_lines_click(event)
self.assertTrue(self.ip._add_pick.called)
self.assertTrue(self.ip.update_lines.called)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
def test_add_point_pick(self):
# need to mock a lot to not deal with actually doing any picking
with patch('impdar.lib.picklib.packet_pick', return_value=np.zeros((5, ))) as mock1:
with patch('impdar.lib.picklib.pick', return_value=np.zeros((5, self.ip.dat.tnum - 1))) as mock2:
self.ip._add_pick(0, 0)
self.ip._add_point_pick(0, self.ip.dat.tnum - 1)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
def test_add_nan_pick(self):
with patch('impdar.lib.picklib.packet_pick', return_value=np.zeros((5, ))) as mock1:
self.ip._add_pick(0, 0)
self.ip._add_nanpick(1, 10)
self.assertEqual(self.ip.dat.picks.lasttrace.snum[0], 1)
self.assertEqual(self.ip.dat.picks.lasttrace.tnum[0], 10)
@unittest.skipIf(not qt, 'No Qt')
class TestInteractivePickerLoadingSaving(unittest.TestCase):
def setUp(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.ip = InteractivePicker(data)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
@patch('impdar.gui.pickgui.QMessageBox')
def test_save_cancel_closeSAVE(self, patchsave):
patchsave.return_value.exec_.return_value = patchsave.Save
patchsave.return_value.Save = patchsave.Save
self.ip.fn = None
self.ip._save_as = MagicMock(return_value=True)
event = DummyEvent()
event.accept = MagicMock()
self.ip._save_cancel_close(event)
self.assertTrue(self.ip._save_as.called)
self.assertTrue(event.accept.called)
event = DummyEvent()
event.ignore = MagicMock()
self.ip._save_as = MagicMock(return_value=False)
self.ip._save_cancel_close(event)
self.assertTrue(self.ip._save_as.called)
self.assertTrue(event.ignore.called)
self.ip.fn = 'dummy'
self.ip._save = MagicMock()
event.accept = MagicMock()
self.ip._save_cancel_close(event)
self.assertTrue(self.ip._save.called)
self.assertTrue(event.accept.called)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
@patch('impdar.gui.pickgui.QMessageBox')
def test_save_cancel_closeCANCEL(self, patchcancel):
patchcancel.return_value.exec_.return_value = patchcancel.Cancel
patchcancel.return_value.Cancel = patchcancel.Cancel
event = DummyEvent()
event.ignore = MagicMock()
self.ip._save_cancel_close(event)
self.assertTrue(event.ignore.called)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
@patch('impdar.gui.pickgui.QMessageBox')
def test_save_cancel_closeClose(self, patchclose):
patchclose.return_value.exec_.return_value = patchclose.Close
patchclose.return_value.Close = patchclose.Close
event = DummyEvent()
event.accept = MagicMock()
self.ip._save_cancel_close(event)
self.assertTrue(event.accept.called)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
@patch('impdar.gui.pickgui.QFileDialog')
def test_load_cp(self, patchqfd):
patchqfd.getOpenFileName.return_value = ('not_a_file', True)
with self.assertRaises(IOError):
self.ip._load_cp(DummyEvent())
patchqfd.getOpenFileName.return_value = (os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), True)
with patch('impdar.gui.pickgui.warn') as patchwarn:
self.ip._load_cp(DummyEvent())
self.assertTrue(patchwarn.called)
patchqfd.getOpenFileName.return_value = (os.path.join(THIS_DIR, 'input_data', 'cross_picked.mat'), True)
self.ip._load_cp(DummyEvent())
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.ip = InteractivePicker(data, ydat='depth')
self.ip._load_cp(DummyEvent())
def test_save(self):
self.ip.fn = None
with self.assertRaises(AttributeError):
self.ip._save(DummyEvent())
self.ip.fn = os.path.join(THIS_DIR, 'input_data', 'test_out.mat')
self.ip._save(DummyEvent())
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_out.mat')))
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
@patch('impdar.gui.pickgui.QFileDialog')
def test_save_as(self, patchqfd):
patchqfd.getSaveFileName.return_value = (os.path.join(THIS_DIR, 'input_data', 'test_out.mat'), True)
self.ip._save_as(DummyEvent())
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_out.mat')))
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
@patch('impdar.gui.pickgui.QFileDialog')
def test_export_csv(self, patchqfd):
patchqfd.getSaveFileName.return_value = (os.path.join(THIS_DIR, 'input_data', 'test.csv'), True)
self.ip._export_csv(DummyEvent())
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test.csv')))
os.remove(os.path.join(THIS_DIR, 'input_data', 'test.csv'))
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
@unittest.skipIf(not CONVERSIONS_ENABLED, 'No GDAL on this version')
@patch('impdar.gui.pickgui.QFileDialog')
def test_export_shp(self, patchqfd):
patchqfd.getSaveFileName.return_value = (os.path.join(THIS_DIR, 'input_data', 'test.shp'), True)
self.ip._export_shp(DummyEvent())
for ext in ['shp', 'shx', 'prj', 'dbf']:
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test.' + ext)))
os.remove(os.path.join(THIS_DIR, 'input_data', 'test.' + ext))
@unittest.skipIf(not qt, 'No Qt')
class TestInteractivePickerProcessing(unittest.TestCase):
def setUp(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.ip = InteractivePicker(data)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
def test_ahfilt(self):
self.ip.dat.adaptivehfilt = MagicMock()
# takes a dummy event arg
self.ip._ahfilt(DummyEvent())
self.assertTrue(self.ip.dat.adaptivehfilt.called)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
@patch('impdar.gui.pickgui.VBPInputDialog', exec_=lambda x: None, lims=(100, 200))
def test_vbp(self, vbpmock):
self.ip.dat.vertical_band_pass = MagicMock()
self.ip._vbp(DummyEvent())
self.assertTrue(self.ip.dat.vertical_band_pass.called)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
@patch('impdar.gui.pickgui.CropInputDialog', exec_=lambda x: None, top_or_bottom='top', inputtype='twtt')
def test_crop(self, cropinputmock):
self.ip.dat.crop = MagicMock()
self.ip._crop(DummyEvent())
self.assertTrue(self.ip.dat.crop.called)
@unittest.skipIf(sys.version_info[0] < 3, 'Mock is only on 3+')
def test_reverse(self):
self.ip.dat.reverse = MagicMock()
self.ip._reverse(DummyEvent())
self.assertTrue(self.ip.dat.reverse.called)
@unittest.skipIf(not qt, 'No Qt')
class TestVBP(unittest.TestCase):
def test_VBPInputDialog(self):
vbp = VBPInputDialog()
vbp._click_ok()
self.assertTrue(vbp.lims == (50, 250))
self.assertTrue(vbp.accepted)
vbp = VBPInputDialog()
vbp.minspin.setValue(2)
vbp.maxspin.setValue(298)
vbp._click_ok()
self.assertTrue(vbp.lims == (2, 298))
self.assertTrue(vbp.accepted)
vbp = VBPInputDialog()
vbp.minspin.setValue(299)
vbp.maxspin.setValue(298)
# click OK twice since we have bad lims
vbp._click_ok()
vbp._click_ok()
self.assertTrue(vbp.lims == (297, 298))
self.assertTrue(vbp.accepted)
@unittest.skipIf(not qt, 'No Qt')
class TestCrop(unittest.TestCase):
def test_CropInputDialog(self):
cid = CropInputDialog()
cid._click_ok()
self.assertTrue(cid.accepted)
cid = CropInputDialog()
cid.inputtype.setCurrentText('snum')
self.assertTrue(cid.spinnerlabel.text() == 'Cutoff (sample num):')
cid.inputtype.setCurrentText('twtt')
self.assertTrue(cid.spinnerlabel.text() == 'Cutoff in TWTT (usec):')
cid.inputtype.setCurrentText('depth')
self.assertTrue(cid.spinnerlabel.text() == 'Cutoff in depth (m):')
cid._click_ok()
self.assertTrue(cid.accepted)
if __name__ == '__main__':
unittest.main()
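The save/cancel/close tests above all follow the same pattern: patch the Qt dialog class where it is looked up, force `exec_()` to return one of the class's button constants, and check which handler on the close event fired. A minimal, Qt-free sketch of that pattern (with a hypothetical `close_handler` standing in for `_save_cancel_close`):

```python
# Sketch of the dialog-mocking pattern used in TestInteractivePickerLoadingSaving.
# `close_handler` is a hypothetical stand-in for InteractivePicker._save_cancel_close.
from unittest.mock import MagicMock

def close_handler(msgbox_cls, event):
    """Accept the close event only if the user chose Save."""
    box = msgbox_cls()
    if box.exec_() == msgbox_cls.Save:
        event.accept()
    else:
        event.ignore()

fake_box = MagicMock()
# Make exec_() return the same object as the class's Save constant,
# simulating the user clicking "Save".
fake_box.return_value.exec_.return_value = fake_box.Save
event = MagicMock()
close_handler(fake_box, event)
assert event.accept.called and not event.ignore.called
```

Because `fake_box.Save` is a single auto-created child mock, comparing it against itself with `==` is true, which is exactly how the real tests steer the handler down the Save, Cancel, or Close branch.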
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL-3.0 license.
"""
Test the machinery of impdarexec. This is broken up to match where it would likely fail; tests of the process wrappers for individual methods live with the tests of those methods.
"""
import sys
import os
import unittest
from impdar.bin import impdarexec
if sys.version_info[0] >= 3:
from unittest.mock import patch, MagicMock
else:
from mock import patch, MagicMock
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestMain(unittest.TestCase):
@patch('impdar.bin.impdarexec.load.load_and_exit')
def test_load(self, load_patch):
impdarexec.sys.argv = ['dummy', 'load', 'mat', 'fn.mat']
impdarexec.main()
self.assertTrue(load_patch.called)
aca, kwca = load_patch.call_args
self.assertEqual(kwca['fns_in'], ['fn.mat'])
self.assertEqual(kwca['filetype'], 'mat')
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impdarexec.sys.argv = ['dummy', 'load', 'notanintype', 'fn.mat']
impdarexec.main()
@patch('impdar.bin.impdarexec.process.process_and_exit')
def test_process(self, process_patch):
impdarexec.sys.argv = ['dummy', 'proc', '-rev', 'fn.mat']
impdarexec.main()
self.assertTrue(process_patch.called)
aca, kwca = process_patch.call_args
self.assertEqual(kwca['fn'], ['fn.mat'])
self.assertEqual(kwca['rev'], True)
@patch('impdar.bin.impdarexec.plot.plot')
def test_plot(self, plot_patch):
impdarexec.sys.argv = ['dummy', 'plot', 'fn.mat']
impdarexec.main()
self.assertTrue(plot_patch.called)
aca, kwca = plot_patch.call_args
self.assertEqual(kwca['fns'], ['fn.mat'])
@patch('impdar.bin.impdarexec.convert.convert')
def test_convert(self, convert_patch):
impdarexec.sys.argv = ['dummy', 'convert', 'fn.mat', 'shp']
impdarexec.main()
self.assertTrue(convert_patch.called)
aca, kwca = convert_patch.call_args
self.assertEqual(kwca['fns_in'], ['fn.mat'])
self.assertEqual(kwca['out_fmt'], 'shp')
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impdarexec.sys.argv = ['dummy', 'convert', 'fn.mat', 'notanoutput']
impdarexec.main()
if __name__ == '__main__':
unittest.main()
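Each test in `TestMain` drives the CLI the same way: replace `sys.argv`, patch out the worker function, call `main()`, and inspect the parsed arguments via `call_args`. A self-contained sketch of that pattern, using a hypothetical `main`/`run_load` pair in place of `impdarexec.main` and `load.load_and_exit`:

```python
# Sketch of the argparse-CLI testing pattern used throughout these tests.
# `main` and `run_load` are hypothetical stand-ins for the real entry points.
import argparse
import sys
from unittest.mock import MagicMock, patch

run_load = MagicMock()  # stand-in for the patched worker (e.g. load.load_and_exit)

def main():
    parser = argparse.ArgumentParser(prog='dummy')
    parser.add_argument('filetype', choices=['mat'])
    parser.add_argument('fns_in', nargs='+')
    args = parser.parse_args(sys.argv[1:])
    run_load(fns_in=args.fns_in, filetype=args.filetype)

with patch('sys.argv', ['dummy', 'mat', 'fn.mat']):
    main()

# call_args is a (positional, keyword) pair; here everything was passed by keyword.
_, kwargs = run_load.call_args
assert kwargs == {'fns_in': ['fn.mat'], 'filetype': 'mat'}
```

Patching the worker keeps the test fast and file-free: only argument parsing and dispatch are exercised, which is exactly what these `TestMain` cases target.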
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL-3.0 license.
"""
Test the machinery of imppick.
"""
import sys
import unittest
try:
from impdar.bin import imppick
QT = True
except ImportError:
QT = False
if sys.version_info[0] >= 3:
from unittest.mock import patch, MagicMock
else:
from mock import patch, MagicMock
class TestMain(unittest.TestCase):
# mock so that we have no real gui
@unittest.skipIf(not QT, 'No Qt')
@patch('impdar.bin.imppick.QtWidgets.QApplication')
@patch('impdar.bin.imppick.pickgui.InteractivePicker')
def test_badinput(self, pick_patch, qapppatch):
imppick.sys.argv = ['dummy']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(BaseException):
imppick.main()
imppick.sys.argv = ['dummy', '-xd']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(BaseException):
imppick.main()
imppick.sys.argv = ['dummy', '-yd']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(BaseException):
imppick.main()
imppick.sys.argv = ['dummy', '-xd', '-yd']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(BaseException):
imppick.main()
imppick.sys.argv = ['dummy', 'fn', 'fn2']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(BaseException):
imppick.main()
@unittest.skipIf(not QT, 'No Qt')
@patch('impdar.bin.imppick.QtWidgets.QApplication')
@patch('impdar.bin.imppick.pickgui.InteractivePicker')
@patch('impdar.bin.imppick.load.load')
def test_pick_tnumsnum(self, load_patch, pick_patch, qapppatch):
load_patch.return_value = [MagicMock()]
imppick.sys.argv = ['dummy', 'fn']
# this is supposed to exit when finished
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
imppick.main()
self.assertTrue(load_patch.called)
        load_patch.assert_called_with('mat', ['fn'])
self.assertTrue(pick_patch.called)
pick_patch.assert_called_with(load_patch.return_value[0], xdat='tnum', ydat='twtt')
@unittest.skipIf(not QT, 'No Qt')
@patch('impdar.bin.imppick.QtWidgets.QApplication')
@patch('impdar.bin.imppick.pickgui.InteractivePicker')
@patch('impdar.bin.imppick.load.load')
def test_pick_tnumdepth(self, load_patch, pick_patch, qapppatch):
load_patch.return_value = [MagicMock()]
imppick.sys.argv = ['dummy', 'fn', '-yd']
# this is supposed to exit when finished
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
imppick.main()
self.assertTrue(load_patch.called)
        load_patch.assert_called_with('mat', ['fn'])
self.assertTrue(pick_patch.called)
pick_patch.assert_called_with(load_patch.return_value[0], xdat='tnum', ydat='depth')
@unittest.skipIf(not QT, 'No Qt')
@patch('impdar.bin.imppick.QtWidgets.QApplication')
@patch('impdar.bin.imppick.pickgui.InteractivePicker')
@patch('impdar.bin.imppick.load.load')
def test_pick_distsnum(self, load_patch, pick_patch, qapppatch):
load_patch.return_value = [MagicMock()]
imppick.sys.argv = ['dummy', 'fn', '-xd']
# this is supposed to exit when finished
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
imppick.main()
self.assertTrue(load_patch.called)
        load_patch.assert_called_with('mat', ['fn'])
self.assertTrue(pick_patch.called)
pick_patch.assert_called_with(load_patch.return_value[0], xdat='dist', ydat='twtt')
if __name__ == '__main__':
unittest.main()
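A pitfall worth noting in Mock-based tests like these: a misspelled assertion method (e.g. `asseert_called_with`) is plain attribute access on a `MagicMock`, so it silently returns a child mock and checks nothing. A short sketch demonstrating the difference:

```python
# Sketch of the silent-typo pitfall in Mock assertion methods.
from unittest.mock import MagicMock

m = MagicMock()
m('mat', ['fn'])

m.assert_called_with('mat', ['fn'])   # real assertion: passes or raises
m.asseert_called_with('wrong args')   # typo: creates a child mock, checks nothing
assert isinstance(m.asseert_called_with, MagicMock)
```

Recent Python versions guard common prefixes such as `assret_` by raising `AttributeError`, but variants like `asseert_` still slip through, so it is safest to double-check the spelling of every `assert_called_with`.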
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL-3.0 license.
"""
Test the machinery of impplot.
This is broken up to match where it would likely fail;
tests of the plot wrappers for individual methods live with the tests of those methods.
"""
import sys
import unittest
from impdar.bin import impplot
if sys.version_info[0] >= 3:
from unittest.mock import patch, MagicMock
else:
from mock import patch, MagicMock
class TestMain(unittest.TestCase):
# mock so that we have no real processing
@patch('impdar.bin.impplot.plot.plot')
def test_badinput(self, plot_patch):
impplot.sys.argv = ['dummy']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(BaseException):
impplot.main()
impplot.sys.argv = ['dummy', 'dummy']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(BaseException):
impplot.main()
impplot.sys.argv = ['dummy', 'rg']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(BaseException):
impplot.main()
impplot.sys.argv = ['dummy', 'dummy', 'fn']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impplot.main()
@patch('impdar.bin.impplot.plot.plot')
def test_rg(self, plot_patch):
impplot.sys.argv = ['dummy', 'rg', 'fn']
impplot.main()
self.assertTrue(plot_patch.called)
aca, kwca = plot_patch.call_args
self.assertEqual(aca[0], ['fn'])
# we can let these default, but if touched must be None
if 'power' in kwca:
self.assertIsNone(kwca['power'])
if 'tr' in kwca:
self.assertIsNone(kwca['tr'])
@patch('impdar.bin.impplot.plot.plot')
def test_power(self, plot_patch):
impplot.sys.argv = ['dummy', 'power', 'fn', '16']
impplot.main()
self.assertTrue(plot_patch.called)
aca, kwca = plot_patch.call_args
self.assertEqual(aca[0], ['fn'])
self.assertEqual(kwca['power'], 16)
if 'tr' in kwca:
self.assertIsNone(kwca['tr'])
impplot.sys.argv = ['dummy', 'power', 'fn', '16.']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impplot.main()
@patch('impdar.bin.impplot.plot.plot')
def test_traces(self, plot_patch):
impplot.sys.argv = ['dummy', 'traces', 'fn', '8', '16']
impplot.main()
self.assertTrue(plot_patch.called)
aca, kwca = plot_patch.call_args
self.assertEqual(aca[0], ['fn'])
self.assertEqual(kwca['tr'], (8, 16))
if 'power' in kwca:
self.assertIsNone(kwca['power'])
impplot.sys.argv = ['dummy', 'traces', 'fn', '8', '16.']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impplot.main()
impplot.sys.argv = ['dummy', 'traces', 'fn', '8.', '16']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impplot.main()
impplot.sys.argv = ['dummy', 'traces', 'fn', '8']
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(BaseException):
impplot.main()
if __name__ == '__main__':
unittest.main()
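The bad-input tests above repeatedly patch the private `argparse.ArgumentParser._print_message` so the usage text stays out of the test log, while still asserting that argparse bails out with `SystemExit`. A minimal sketch of that mechanism:

```python
# Sketch of the error-suppression pattern used in the bad-input tests:
# invalid CLI input makes argparse raise SystemExit(2), and patching the
# private _print_message keeps the usage message off stderr.
import argparse
from unittest.mock import MagicMock, patch

parser = argparse.ArgumentParser(prog='dummy')
parser.add_argument('count', type=int)

quiet = MagicMock()
code = None
with patch('argparse.ArgumentParser._print_message', quiet):
    try:
        parser.parse_args(['not-an-int'])
    except SystemExit as err:
        code = err.code

assert code == 2     # argparse's usage-error exit status
assert quiet.called  # the error text went to the mock, not to stderr
```

Note that `_print_message` is a private argparse method, so this patch is a pragmatic test trick rather than a supported API; it could break if argparse's internals change.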
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL-3.0 license.
"""
Test the machinery of impproc. This is broken up to match where it would likely fail; tests of the process wrappers for individual methods live with the tests of those methods.
"""
import sys
import os
import unittest
from impdar.bin import impproc
from impdar.lib import NoInitRadarData
if sys.version_info[0] >= 3:
from unittest.mock import patch, MagicMock
else:
from mock import patch, MagicMock
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestMain(unittest.TestCase):
# mock so that we have no real processing
@patch('impdar.bin.impproc.agc')
@patch('impdar.bin.impproc.load')
def test_inputfile(self, load_patch, agc_patch):
load_patch.return_value = [MagicMock()]
impproc.sys.argv = ['dummy', 'agc', os.path.join(THIS_DIR, 'input_data', 'small_data.mat')]
impproc.main()
self.assertTrue(agc_patch.called)
self.assertTrue(load_patch.called)
        load_patch.assert_called_with('mat', [os.path.join(THIS_DIR, 'input_data', 'small_data.mat')])
impproc.sys.argv = ['dummy', 'agc', os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')]
impproc.main()
self.assertTrue(agc_patch.called)
self.assertTrue(load_patch.called)
load_patch.assert_called_with('mat', [os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')])
# mock so that we have no real processing
@patch('impdar.bin.impproc.agc')
@patch('impdar.bin.impproc.load')
def test_outputfile(self, load_patch, agc_patch):
load_patch.return_value = [MagicMock()]
rd_patch = load_patch.return_value
rd_patch[0].save = MagicMock()
impproc.sys.argv = ['dummy', 'agc', '-o', 'dummy', os.path.join(THIS_DIR, 'input_data', 'small_data.mat')]
impproc.main()
self.assertTrue(agc_patch.called)
self.assertTrue(load_patch.called)
        load_patch.assert_called_with('mat', [os.path.join(THIS_DIR, 'input_data', 'small_data.mat')])
self.assertTrue(rd_patch[0].save.called)
rd_patch[0].save.assert_called_with('dummy')
@patch('impdar.bin.impproc.agc')
@patch('impdar.bin.impproc.load')
def test_outputraw(self, load_patch, agc_patch):
load_patch.return_value = [MagicMock()]
rd_patch = load_patch.return_value
for p in rd_patch:
p.save = MagicMock()
impproc.sys.argv = ['dummy', 'agc', os.path.join(THIS_DIR, 'input_data', 'small_data_raw.mat')]
impproc.main()
self.assertTrue(load_patch.called)
        load_patch.assert_called_with('mat', [os.path.join(THIS_DIR, 'input_data', 'small_data_raw.mat')])
for p in rd_patch:
self.assertTrue(p.save.called)
p.save.assert_called_with(os.path.join(THIS_DIR, 'input_data', 'small_data_agc.mat'))
@patch('impdar.bin.impproc.agc')
@patch('impdar.bin.impproc.load')
def test_outputmultiple(self, load_patch, agc_patch):
load_patch.return_value = [MagicMock(), MagicMock()]
rd_patch = load_patch.return_value
for p in rd_patch:
p.save = MagicMock()
impproc.sys.argv = ['dummy', 'agc', '-o', 'dummy', os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')]
impproc.main()
for p in rd_patch:
self.assertTrue(p.save.called)
p.save.assert_called_with(os.path.join('dummy', 'small_data_agc.mat'))
@patch('impdar.bin.impproc.agc')
@patch('impdar.bin.impproc.load')
def test_outputmultipleraw(self, load_patch, agc_patch):
load_patch.return_value = [MagicMock(), MagicMock()]
rd_patch = load_patch.return_value
for p in rd_patch:
p.save = MagicMock()
impproc.sys.argv = ['dummy', 'agc', '-o', 'dummy', os.path.join(THIS_DIR, 'input_data', 'small_data_raw.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data_raw.mat')]
impproc.main()
for p in rd_patch:
self.assertTrue(p.save.called)
p.save.assert_called_with(os.path.join('dummy', 'small_data_agc.mat'))
def test_help(self):
with self.assertRaises(BaseException):
impproc.sys.argv = ['dummy']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'dummy']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'ahfilt']
impproc.main()
class TestInputs(unittest.TestCase):
@patch('impdar.bin.impproc.agc')
@patch('impdar.bin.impproc.load')
def test_agc(self, load_patch, agc_patch):
load_patch.return_value = [MagicMock()]
window = 10
impproc.sys.argv = ['dummy', 'agc', 'dummy.mat', '-window', str(window)]
impproc.main()
self.assertTrue(agc_patch.called)
aca, kwca = agc_patch.call_args
self.assertEqual(kwca['window'], window)
window = 50
impproc.sys.argv = ['dummy', 'agc', 'dummy.mat', '-window', str(window)]
impproc.main()
self.assertTrue(agc_patch.called)
aca, kwca = agc_patch.call_args
self.assertEqual(kwca['window'], window)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'agc', 'dummy.mat', '-window', '10.1']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'agc', 'dummy.mat', '-window', 'badint']
impproc.main()
@patch('impdar.bin.impproc.vbp')
@patch('impdar.bin.impproc.load')
def test_vbp(self, load_patch, vbp_patch):
load_patch.return_value = [MagicMock()]
impproc.sys.argv = ['dummy', 'vbp', '10', '20', 'dummy.mat']
impproc.main()
self.assertTrue(vbp_patch.called)
aca, kwca = vbp_patch.call_args
self.assertEqual(kwca['low_MHz'], 10)
self.assertEqual(kwca['high_MHz'], 20)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'vbp', 'dummy.mat', '10', '20']
impproc.main()
@patch('impdar.bin.impproc.rev')
@patch('impdar.bin.impproc.load')
def test_rev(self, load_patch, rev_patch):
load_patch.return_value = [MagicMock()]
impproc.sys.argv = ['dummy', 'rev', 'dummy.mat']
impproc.main()
self.assertTrue(rev_patch.called)
@patch('impdar.bin.impproc.ahfilt')
@patch('impdar.bin.impproc.load')
def test_ahfilt(self, load_patch, ahfilt_patch):
load_patch.return_value = [MagicMock()]
impproc.sys.argv = ['dummy', 'ahfilt', '1000', 'dummy.mat']
impproc.main()
self.assertTrue(ahfilt_patch.called)
@patch('impdar.bin.impproc.nmo')
@patch('impdar.bin.impproc.load')
def test_nmo(self, load_patch, nmo_patch):
load_patch.return_value = [MagicMock()]
sep = 123.4
impproc.sys.argv = ['dummy', 'nmo', str(sep), 'dummy.mat']
impproc.main()
self.assertTrue(nmo_patch.called)
aca, kwca = nmo_patch.call_args
self.assertEqual(kwca['ant_sep'], sep)
impproc.sys.argv = ['dummy', 'nmo', '--uice', str(10), str(sep), 'dummy.mat']
impproc.main()
aca, kwca = nmo_patch.call_args
self.assertEqual(kwca['ant_sep'], sep)
self.assertEqual(kwca['uice'], 10.)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'nmo', '--uice', 'badvel', str(sep), 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'nmo', '--uair', str(10), str(sep), 'dummy.mat']
impproc.main()
aca, kwca = nmo_patch.call_args
self.assertEqual(kwca['ant_sep'], sep)
self.assertEqual(kwca['uair'], 10.)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'nmo', '--uair', 'badvel', str(sep), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'nmo', 'dummy.mat', str(sep)]
impproc.main()
@patch('impdar.bin.impproc.interp')
@patch('impdar.bin.impproc.load')
def test_interp(self, load_patch, interp_patch):
load_patch.return_value = [MagicMock()]
spacing = 10.
impproc.sys.argv = ['dummy', 'interp', str(spacing), 'dummy.mat']
impproc.main()
self.assertTrue(interp_patch.called)
aca, kwca = interp_patch.call_args
self.assertEqual(kwca['spacing'], spacing)
self.assertEqual(kwca['extrapolate'], False)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'interp', 'dummy.mat', os.path.join(THIS_DIR, 'input_data', 'small_data.mat')]
impproc.main()
impproc.sys.argv = ['dummy', 'interp', '--gps_fn', 'dummy', str(spacing), 'dummy.mat']
impproc.main()
aca, kwca = interp_patch.call_args
self.assertEqual(kwca['spacing'], spacing)
self.assertEqual(kwca['gps_fn'], 'dummy')
impproc.sys.argv = ['dummy', 'interp', '--offset', str(10), str(spacing), 'dummy.mat']
impproc.main()
aca, kwca = interp_patch.call_args
self.assertEqual(kwca['spacing'], spacing)
self.assertEqual(kwca['offset'], 10.)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'interp', '--offset', 'badfloat', str(spacing), 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'interp', '--minmove', str(10), str(spacing), 'dummy.mat']
impproc.main()
aca, kwca = interp_patch.call_args
self.assertEqual(kwca['spacing'], spacing)
self.assertEqual(kwca['minmove'], 10.)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'interp', '--minmove', 'badfloat', str(spacing), 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'interp', '--extrapolate', str(spacing), 'dummy.mat']
impproc.main()
aca, kwca = interp_patch.call_args
self.assertEqual(kwca['spacing'], spacing)
self.assertEqual(kwca['extrapolate'], True)
@patch('impdar.bin.impproc.concat')
@patch('impdar.bin.impproc.load')
def test_cat(self, load_patch, cat_patch):
load_patch.return_value = [MagicMock()]
impproc.sys.argv = ['dummy', 'cat', 'dummy.mat', 'dummy.mat']
impproc.main()
self.assertTrue(cat_patch.called)
@patch('impdar.bin.impproc.elev')
@patch('impdar.bin.impproc.load')
def test_elev(self, load_patch, elev_patch):
load_patch.return_value = [MagicMock()]
impproc.sys.argv = ['dummy', 'elev', 'dummy.mat']
impproc.main()
self.assertTrue(elev_patch.called)
@patch('impdar.bin.impproc.hfilt')
@patch('impdar.bin.impproc.load')
def test_hfilt(self, load_patch, hfilt_patch):
load_patch.return_value = [MagicMock()]
impproc.sys.argv = ['dummy', 'hfilt', '10', '20', 'dummy.mat']
impproc.main()
self.assertTrue(hfilt_patch.called)
aca, kwca = hfilt_patch.call_args
self.assertEqual(kwca['start_trace'], 10)
self.assertEqual(kwca['end_trace'], 20)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'hfilt', '10', 'dummy', 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'hfilt', 'dummy', '10', 'dummy.mat']
impproc.main()
@patch('impdar.bin.impproc.crop')
@patch('impdar.bin.impproc.load')
def test_crop(self, load_patch, crop_patch):
load_patch.return_value = [MagicMock()]
lim = 10
for top_or_bottom in ['top', 'bottom']:
for dimension in ['snum', 'twtt', 'depth', 'pretrig']:
impproc.sys.argv = ['dummy', 'crop', top_or_bottom, dimension, str(lim), 'dummy.mat']
impproc.main()
self.assertTrue(crop_patch.called)
aca, kwca = crop_patch.call_args
self.assertEqual(kwca['top_or_bottom'], top_or_bottom)
self.assertEqual(kwca['dimension'], dimension)
self.assertEqual(kwca['lim'], lim)
# Now bad entries
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'crop', 'top', 'bad', str(lim), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'crop', 'bad', 'snum', str(lim), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'crop', 'top', 'snum', 'notgood', 'dummy.mat']
impproc.main()
@patch('impdar.bin.impproc.hcrop')
@patch('impdar.bin.impproc.load')
def test_hcrop(self, load_patch, hcrop_patch):
load_patch.return_value = [MagicMock()]
lim = 10
for left_or_right in ['left', 'right']:
for dimension in ['tnum', 'dist']:
impproc.sys.argv = ['dummy', 'hcrop', left_or_right, dimension, str(lim), 'dummy.mat']
impproc.main()
self.assertTrue(hcrop_patch.called)
aca, kwca = hcrop_patch.call_args
self.assertEqual(kwca['left_or_right'], left_or_right)
self.assertEqual(kwca['dimension'], dimension)
self.assertEqual(kwca['lim'], lim)
# Now bad entries
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'hcrop', 'left', 'bad', str(lim), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'hcrop', 'bad', 'tnum', str(lim), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'hcrop', 'left', 'tnum', 'notgood', 'dummy.mat']
impproc.main()
@patch('impdar.bin.impproc.restack')
@patch('impdar.bin.impproc.load')
def test_restack(self, load_patch, restack_patch):
load_patch.return_value = [MagicMock()]
interval = 3
impproc.sys.argv = ['dummy', 'restack', str(interval), 'dummy.mat']
impproc.main()
self.assertTrue(restack_patch.called)
aca, kwca = restack_patch.call_args
self.assertEqual(kwca['traces'], interval)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'restack', 'bad', 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'restack', '0.1', 'dummy.mat']
impproc.main()
@patch('impdar.bin.impproc.rgain')
@patch('impdar.bin.impproc.load')
def test_rgain(self, load_patch, rgain_patch):
load_patch.return_value = [MagicMock()]
impproc.sys.argv = ['dummy', 'rgain', 'dummy.mat']
impproc.main()
self.assertTrue(rgain_patch.called)
slope = 10
impproc.sys.argv = ['dummy', 'rgain', '-slope', str(slope), 'dummy.mat']
impproc.main()
aca, kwca = rgain_patch.call_args
self.assertEqual(kwca['slope'], slope)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'rgain', '-slope', 'bad', 'dummy.mat']
impproc.main()
@patch('impdar.bin.impproc.mig')
@patch('impdar.bin.impproc.load')
def test_migrateTypes(self, load_patch, migrate_patch):
load_patch.return_value = [MagicMock()]
impproc.sys.argv = ['dummy', 'migrate', 'dummy.mat']
impproc.main()
self.assertTrue(migrate_patch.called)
# mtype tests
for mtype in ['stolt', 'kirch', 'phsh', 'tk', 'sustolt', 'sumigtk', 'sumigffd']:
impproc.sys.argv = ['dummy', 'migrate', '--mtype', mtype, 'dummy.mat']
impproc.main()
aca, kwca = migrate_patch.call_args
self.assertEqual(kwca['mtype'], mtype)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--mtype', 'bad', 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'migrate', '--nearfield', 'dummy.mat']
impproc.main()
aca, kwca = migrate_patch.call_args
self.assertEqual(kwca['nearfield'], True)
badint = 0.1
goodint = 10
worseint = 'hello'
impproc.sys.argv = ['dummy', 'migrate', '--htaper', str(goodint), 'dummy.mat']
impproc.main()
aca, kwca = migrate_patch.call_args
self.assertEqual(kwca['htaper'], goodint)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--htaper', str(badint), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--htaper', str(worseint), 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'migrate', '--vtaper', str(goodint), 'dummy.mat']
impproc.main()
aca, kwca = migrate_patch.call_args
self.assertEqual(kwca['vtaper'], goodint)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--vtaper', str(badint), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--vtaper', str(worseint), 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'migrate', '--nxpad', str(goodint), 'dummy.mat']
impproc.main()
aca, kwca = migrate_patch.call_args
self.assertEqual(kwca['nxpad'], goodint)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--nxpad', str(badint), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--nxpad', str(worseint), 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'migrate', '--tmig', str(goodint), 'dummy.mat']
impproc.main()
aca, kwca = migrate_patch.call_args
self.assertEqual(kwca['tmig'], goodint)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--tmig', str(badint), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--tmig', str(worseint), 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'migrate', '--verbose', str(goodint), 'dummy.mat']
impproc.main()
aca, kwca = migrate_patch.call_args
self.assertEqual(kwca['verbose'], goodint)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--verbose', str(badint), 'dummy.mat']
impproc.main()
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--verbose', str(worseint), 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'migrate', '--vel', str(goodint), 'dummy.mat']
impproc.main()
aca, kwca = migrate_patch.call_args
self.assertEqual(kwca['vel'], goodint)
argparse_mock = MagicMock()
with patch('argparse.ArgumentParser._print_message', argparse_mock):
with self.assertRaises(SystemExit):
impproc.sys.argv = ['dummy', 'migrate', '--vel', 'dummy', 'dummy.mat']
impproc.main()
impproc.sys.argv = ['dummy', 'migrate', '--vel_fn', str(goodint), 'dummy.mat']
impproc.main()
aca, kwca = migrate_patch.call_args
self.assertEqual(kwca['vel_fn'], str(goodint))
class TestProc(unittest.TestCase):
def setUp(self):
self.data = NoInitRadarData.NoInitRadarDataFiltering()
def test_hfilt(self):
impproc.hfilt(self.data)
def test_ahfilt(self):
impproc.ahfilt(self.data)
def test_rev(self):
impproc.rev(self.data)
def test_elev(self):
self.data.nmo_depth = self.data.travel_time * 1.68e8 / 2.
impproc.elev(self.data)
def test_vbp(self):
impproc.vbp(self.data, 0.1, 100.)
def test_crop(self):
impproc.crop(self.data, 2)
def test_hcrop(self):
impproc.hcrop(self.data, 2)
def test_nmo(self):
impproc.nmo(self.data)
def test_restack(self):
impproc.restack(self.data)
def test_rgain(self):
impproc.rgain(self.data)
def test_agc(self):
impproc.agc(self.data)
def test_mig(self):
impproc.mig(self.data)
if __name__ == '__main__':
unittest.main()
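The bad-argument checks above all use one idiom: point the parser at invalid input, patch `argparse.ArgumentParser._print_message` so the usage text stays quiet, and assert that parsing raises `SystemExit`. A minimal self-contained sketch of that idiom, using a toy parser rather than impdar's real one:

```python
import argparse
from unittest.mock import MagicMock, patch

def build_parser():
    # Toy stand-in for an impproc-style subcommand parser.
    parser = argparse.ArgumentParser(prog='dummy')
    parser.add_argument('interval', type=int, help='restack interval')
    return parser

def parsing_exits(argv):
    """Return True if argparse rejects argv with a SystemExit."""
    # Patching _print_message suppresses the usage/error text that
    # argparse would otherwise write to stderr during the test run.
    with patch('argparse.ArgumentParser._print_message', MagicMock()):
        try:
            build_parser().parse_args(argv)
        except SystemExit:
            return True
    return False
```

On a bad value, argparse calls `sys.exit(2)` internally, which is why the tests assert `SystemExit` rather than `argparse.ArgumentError`.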
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Test the methods on the LastTrace object
"""
import os
import unittest
import numpy as np
from impdar.lib.LastTrace import LastTrace
class TestPickMods(unittest.TestCase):
def test_mod_line(self):
lt = LastTrace()
with self.assertRaises(AttributeError):
lt.mod_line(0, 1, 1)
lt.snum = [0]
lt.tnum = [0]
with self.assertRaises(ValueError):
lt.mod_line(1, 50, 40)
lt.mod_line(0, 50, 40)
self.assertEqual(lt.snum[0], 50)
self.assertEqual(lt.tnum[0], 40)
def test_add_pick(self):
lt = LastTrace()
lt.add_pick(0, 10)
self.assertEqual(len(lt.snum), 1)
self.assertEqual(len(lt.tnum), 1)
self.assertEqual(lt.snum[0], 0)
self.assertEqual(lt.tnum[0], 10)
lt.add_pick(50, 40)
self.assertEqual(len(lt.snum), 2)
self.assertEqual(len(lt.tnum), 2)
self.assertEqual(lt.snum, [0, 50])
self.assertEqual(lt.tnum, [10, 40])
with self.assertRaises(TypeError):
lt.add_pick([12, 15.5], 0)
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
"""
import sys
import os
import unittest
from impdar.lib import load
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestLoad(unittest.TestCase):
def test_loadmat(self):
data = load.load('mat', os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.assertEqual(data[0].data.shape, (20, 40))
def test_loadgssi(self):
data = load.load('gssi', os.path.join(THIS_DIR, 'input_data', 'test_gssi.DZT'))
def test_loadpe(self):
data = load.load('pe', os.path.join(THIS_DIR, 'input_data', 'test_pe.DT1'))
@unittest.skipIf(sys.version_info[0] < 3, 'Bytes are weird in 2')
def test_loadgecko(self):
data = load.load('gecko', os.path.join(THIS_DIR, 'input_data', 'test_gecko.gtd'))
data = load.load('gecko', [os.path.join(THIS_DIR, 'input_data', 'test_gecko.gtd'), os.path.join(THIS_DIR, 'input_data', 'test_gecko.gtd')])
def test_loadbad(self):
with self.assertRaises(ValueError):
data = load.load('bad', os.path.join(THIS_DIR, 'input_data', 'small_data.bad'))
def test_load_and_exitmat(self):
data = load.load_and_exit('mat', os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), o=os.path.join(THIS_DIR, 'input_data', 'small_data_rawrrr.mat'))
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data_rawrrr.mat')))
@unittest.skipIf(sys.version_info[0] < 3, 'Bytes are weird in 2')
def test_load_and_exitgecko(self):
load.load_and_exit('gecko', os.path.join(THIS_DIR, 'input_data', 'test_gecko.gtd'))
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_gecko_raw.mat')))
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_gecko_raw.mat'))
load.load_and_exit('gecko', [os.path.join(THIS_DIR, 'input_data', 'test_gecko.gtd'), os.path.join(THIS_DIR, 'input_data', 'test_gecko.gtd')])
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_gecko_raw.mat')))
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_gecko_raw.mat'))
def test_load_and_exitcustomfn(self):
data = load.load_and_exit('mat', os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data_raw.mat')))
@unittest.skipIf(sys.version_info[0] < 3, 'FileNotFoundError not in 2')
def test_load_and_exiterror(self):
# We don't have an output folder
with self.assertRaises(FileNotFoundError):
load.load_and_exit('mat', [os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], o='dummy')
def tearDown(self):
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data_raw.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'small_data_raw.mat'))
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data_rawrrr.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'small_data_rawrrr.mat'))
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2020 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Testing loading_utils
"""
import unittest
from impdar.lib.load import loading_utils
class TestLoadGecko(unittest.TestCase):
def test_common_start(self):
start = loading_utils.common_start(['abra', 'abracadabra'])
self.assertEqual('abra', start)
start = loading_utils.common_start(['abra', 'abra'])
self.assertEqual('abra', start)
start = loading_utils.common_start(['abra', 'abra', 'abracad'])
self.assertEqual('abra', start)
start = loading_utils.common_start(['abra'])
self.assertEqual('abra', start)
start = loading_utils.common_start(['', 'abra'])
self.assertEqual('', start)
if __name__ == '__main__':
unittest.main()
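The behaviour pinned down above (the longest prefix shared by every string in a list) can be sketched in a couple of lines. This is an illustrative reimplementation consistent with those assertions, not impdar's actual `loading_utils.common_start`:

```python
import os.path

def common_start(strings):
    """Return the longest prefix shared by every string in the list."""
    # os.path.commonprefix does plain character-wise prefix matching,
    # which matches the expectations in the tests above (including the
    # empty-string and single-element cases).
    return os.path.commonprefix(list(strings))
```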
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Make sure that we can successfully read MCoRDS input files
"""
import os
import unittest
import numpy as np
from impdar.lib.load import load_mcords
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestMCoRDS_NC(unittest.TestCase):
@unittest.skipIf(not load_mcords.NC, 'No netcdf on this version')
def test_loadnc(self):
dat = load_mcords.load_mcords_nc(os.path.join(THIS_DIR, 'input_data', 'zeros_mcords.nc'))
self.assertTrue(np.all(dat.data == 0.))
@unittest.skipIf(load_mcords.NC, 'NETCDF on this version')
def test_loadnc_failure(self):
with self.assertRaises(ImportError):
load_mcords.load_mcords_nc(os.path.join(THIS_DIR, 'input_data', 'zeros_mcords.nc'))
class TestMCoRDS_MAT(unittest.TestCase):
def test_loadmat(self):
dat = load_mcords.load_mcords_mat(os.path.join(THIS_DIR,
'input_data',
'zeros_mcords_mat.mat'))
self.assertTrue(np.allclose(dat.data, 0.))
def test_loadbadmat(self):
with self.assertRaises(KeyError):
load_mcords.load_mcords_mat(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
with self.assertRaises(KeyError):
load_mcords.load_mcords_mat(os.path.join(THIS_DIR,
'input_data',
'nonimpdar_matlab.mat'))
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Test the migration routines
Author:
Benjamin Hills
benjaminhhills@gmail.com
University of Washington
Earth and Space Sciences
Mar 12 2019
"""
import sys
import os
import unittest
import pytest
import subprocess as sp
import numpy as np
from impdar.lib import migrationlib
from impdar.lib.migrationlib import mig_python
try:
from impdar.lib.migrationlib import mig_cython
CYTHON = True
except ImportError:
CYTHON = False
from impdar.lib.load import load_segy
from impdar.lib.NoInitRadarData import NoInitRadarData
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
OUT_DIR = os.path.join(THIS_DIR, 'Migration_tests')
OUT_PREFIX = 'rectangle'
# in_file = out_prefix+'_gprMax_Bscan.h5'
class TestMigration(unittest.TestCase):
def test_check_data_shape(self):
data = NoInitRadarData(big=True)
# should pass, i.e. nothing happens
mig_python._check_data_shape(data)
# make it fail
data.data = np.ones((1, 1))
with self.assertRaises(ValueError):
mig_python._check_data_shape(data)
def test_getVelocityProfile(self):
data = NoInitRadarData(big=True)
self.assertEqual(1.68e8, mig_python.getVelocityProfile(data, 1.68e8))
# need reasonable input here for 2d. Needs a different travel time.
data.travel_time = data.travel_time / 10.
mig_python.getVelocityProfile(data, np.genfromtxt(os.path.join(THIS_DIR, 'input_data', 'velocity_layers.txt')))
# this should still work since we are close
data.travel_time = data.travel_time / 10.
twod = np.genfromtxt(os.path.join(THIS_DIR, 'input_data', 'velocity_layers.txt'))
twod = twod * 0.0045 + 1.0e-7 * twod[1]
mig_python.getVelocityProfile(data, twod)
# need reasonable input here for 3d
data = NoInitRadarData(big=True)
mig_python.getVelocityProfile(data, np.genfromtxt(os.path.join(THIS_DIR, 'input_data', 'velocity_lateral.txt')))
# Bad distance with good 3d grid
data.dist = None
with self.assertRaises(ValueError):
mig_python.getVelocityProfile(data, np.genfromtxt(os.path.join(THIS_DIR, 'input_data', 'velocity_lateral.txt')))
data = NoInitRadarData(big=True)
# this should fail on bad z
twod_vel = 1.68e8 * np.ones((10, 2))
twod_vel[:, 1] = 0.
with self.assertRaises(ValueError):
mig_python.getVelocityProfile(data, twod_vel)
# Use some bad x values
with self.assertRaises(ValueError):
mig_python.getVelocityProfile(data, 1.68e8 * np.ones((10, 3)))
# bad z values
threed_vel = 1.68e8 * np.ones((10, 3))
threed_vel[:, -1] = np.arange(10) * 1000.
with self.assertRaises(ValueError):
mig_python.getVelocityProfile(data, threed_vel)
# Make sure we reject bad input shapes
with self.assertRaises(ValueError):
mig_python.getVelocityProfile(data, 1.68e8 * np.ones((8,)))
with self.assertRaises(ValueError):
mig_python.getVelocityProfile(data, 1.68e8 * np.ones((8, 1)))
with self.assertRaises(ValueError):
mig_python.getVelocityProfile(data, 1.68e8 * np.ones((1, 2)))
with self.assertRaises(ValueError):
mig_python.getVelocityProfile(data, 1.68e8 * np.ones((8, 4)))
def test_Stolt(self):
data = NoInitRadarData(big=True)
data = mig_python.migrationStolt(data)
def test_Kirchhoff(self):
data = NoInitRadarData(big=True)
data = mig_python.migrationKirchhoff(data)
def test_TimeWavenumber(self):
data = NoInitRadarData(big=True)
data = mig_python.migrationTimeWavenumber(data)
def test_PhaseShiftConstant(self):
data = NoInitRadarData(big=True)
data = mig_python.migrationPhaseShift(data)
def test_PhaseShiftVariable(self):
data = NoInitRadarData(big=True)
data.travel_time = data.travel_time / 10.
data = mig_python.migrationPhaseShift(data, vel_fn=os.path.join(THIS_DIR, 'input_data', 'velocity_layers.txt'))
data = NoInitRadarData(big=True)
with self.assertRaises(TypeError):
data = mig_python.migrationPhaseShift(data, vel_fn=os.path.join(THIS_DIR, 'input_data', 'notafile.txt'))
def test_PhaseShiftLateral(self):
data = NoInitRadarData(big=True)
data = mig_python.migrationPhaseShift(data, vel_fn=os.path.join(THIS_DIR, 'input_data', 'velocity_lateral.txt'))
@unittest.skipIf(sp.Popen(['which', 'sumigtk']).wait() != 0 or (not load_segy.SEGY) or (sys.version_info[0] < 3), 'SeisUnix not found')
def test_sumigtk(self):
pytest.importorskip('segyio', 'No SEGY on this version')
data = NoInitRadarData(big=True)
data.dt = 1.0e-9
data.travel_time = data.travel_time * 1.0e-9
data.fn = os.path.join(THIS_DIR, 'input_data', 'rectangle_sumigtk.mat')
migrationlib.migrationSeisUnix(data, quiet=True)
@unittest.skipIf(sp.Popen(['which', 'sumigtk']).wait() != 0 or (not load_segy.SEGY) or (sys.version_info[0] < 3), 'SeisUnix not found')
def test_sustolt(self):
pytest.importorskip('segyio', 'No SEGY on this version')
data = NoInitRadarData(big=True)
data.dt = 1.0e-9
data.travel_time = data.travel_time * 1.0e-9
data.fn = os.path.join(THIS_DIR, 'input_data', 'rectangle_sustolt.mat')
migrationlib.migrationSeisUnix(data, quiet=True)
@unittest.skipIf(sp.Popen(['which', 'sustolt']).wait() != 0 or load_segy.SEGY, 'Test of edge case')
def test_sustolt_nosegy(self):
data = NoInitRadarData(big=True)
data.dt = 1.0e-9
data.travel_time = data.travel_time * 1.0e-9
data.fn = os.path.join(THIS_DIR, 'input_data', 'rectangle_sustolt.mat')
with self.assertRaises(ImportError):
migrationlib.migrationSeisUnix(data)
@unittest.skipIf(sp.Popen(['which', 'sustolt']).wait() == 0, 'Test for no SeisUnix')
def test_sustolt_seisunix(self):
data = NoInitRadarData(big=True)
data.dt = 1.0e-9
data.travel_time = data.travel_time * 1.0e-9
data.fn = os.path.join(THIS_DIR, 'input_data', 'rectangle_sustolt.mat')
with self.assertRaises(Exception):
migrationlib.migrationSeisUnix(data)
def tearDown(self):
for suff in ['PhaseShiftLateral', 'PhaseShiftConstant', 'PhaseShiftVariable', 'Kirchhoff', 'Stolt', 'sumigtk', 'sustolt']:
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'rectangle_' + suff + '.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'rectangle_' + suff + '.mat'))
class TestCythonMig(unittest.TestCase):
@unittest.skipIf(not CYTHON, 'No compiled mig library here')
def test_Kirchoff_cython(self):
data = NoInitRadarData(big=True)
data = mig_cython.migrationKirchhoff(data)
@unittest.skipIf(not CYTHON, 'No compiled mig library here')
def test_compKirchoff_cython(self):
data = NoInitRadarData(big=True)
pdata = NoInitRadarData(big=True)
data = mig_cython.migrationKirchhoff(data)
pdata = mig_python.migrationKirchhoff(pdata)
data.data[np.isnan(data.data)] = 0.
self.assertTrue(np.allclose(data.data, pdata.data))
if __name__ == '__main__':
unittest.main()
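The `CYTHON` flag set at import time above is the standard way to gate tests on an optional compiled extension: attempt the import once, record the result in a module-level flag, and let `skipIf` consult it. A generic sketch of the pattern (the extension name here is hypothetical):

```python
import unittest

try:
    import _not_a_real_accelerator  # hypothetical compiled extension
    HAVE_EXT = True
except ImportError:
    HAVE_EXT = False

class TestAccelerated(unittest.TestCase):
    @unittest.skipIf(not HAVE_EXT, 'No compiled library here')
    def test_fast_path(self):
        # Only runs when the extension imported successfully.
        self.assertTrue(HAVE_EXT)
```

Because the import happens at module load, the skip decision is made once per test session rather than per test.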
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
"""
import sys
sys.modules['segyio'] = None
sys.modules['_segyio'] = None
import os
import unittest
import numpy as np
from impdar.lib import load
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
@unittest.skipIf(sys.version_info[0] < 3, 'Excluding segyio fails')
class TestLoadNoSEGY(unittest.TestCase):
def test_loadmat(self):
# We want normal functionality if not SEGY
data = load.load('mat', os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.assertEqual(data[0].data.shape, (20, 40))
def test_segyimporterror(self):
# Fail if use segy
with self.assertRaises(ImportError):
data = load.load('segy', os.path.join(THIS_DIR, 'input_data', 'small_data.segy'))
if __name__ == '__main__':
unittest.main()
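The `sys.modules['segyio'] = None` lines at the top of this file rely on a documented import-system detail: a `None` entry in `sys.modules` makes any subsequent `import` of that name raise `ImportError`, even if the package is installed. That is how these tests simulate a machine without segyio. A minimal demonstration using a stdlib module:

```python
import sys

# A None entry in sys.modules makes the import system treat the module
# as unavailable, so the import below fails even though json exists.
sys.modules['json'] = None

try:
    import json  # noqa: F401
    blocked = False
except ImportError:
    blocked = True

# Remove the sentinel so later imports behave normally again.
del sys.modules['json']
```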
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Make sure that we can successfully read Pulse EKKO input files
"""
import os
import unittest
from impdar.lib.load import load_pulse_ekko
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestPE(unittest.TestCase):
def test_load_pe(self):
load_pulse_ekko.load_pe(os.path.join(THIS_DIR, 'input_data', 'test_pe.DT1'))
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Test the functions in picklib
"""
import os
import unittest
import numpy as np
from impdar.lib.NoInitRadarData import NoInitRadarData
from impdar.lib import picklib, Picks, RadarData
traces = np.random.random((300, 200))
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class BareRadarData(NoInitRadarData):
def __init__(self):
super(BareRadarData, self).__init__()
self.dt = 1.0e-7
self.data = traces
self.picks = Picks.Picks(self)
class TestPickLib(unittest.TestCase):
def test_midpoint(self):
# Fully test that we find midpoints as expected
self.assertTrue(np.allclose(picklib._midpoint(200, 100, 100), np.ones((200,)) * 100.))
self.assertTrue(np.allclose(picklib._midpoint(200, -9999, 100), np.ones((200,)) * 100.))
self.assertTrue(np.allclose(picklib._midpoint(200, 0, 200), np.arange(200)))
def test_packet_power(self):
with self.assertRaises(ValueError):
picklib.packet_power(traces, 2, 100)
self.assertTrue(len(picklib.packet_power(traces[:, 0], 10, 100)[0]) == 10)
self.assertTrue(len(picklib.packet_power(traces[:, 0], 11, 100)[0]) == 11)
def test_packet_pick(self):
easy_pick_trace = np.zeros((traces.shape[1], ))
cpeak = 100
bpeak = -200
tpeak = -100
easy_pick_trace[101] = cpeak
easy_pick_trace[107] = bpeak
easy_pick_trace[95] = tpeak
data = BareRadarData()
# do something ill-advised where we now have mismatched plength, scst, and FWW
data.picks.pickparams.scst = 200
data.picks.pickparams.FWW = 200
with self.assertRaises(ValueError):
picklib.packet_pick(traces[:, 0], data.picks.pickparams, 100)
# This should also be an error due to mismatched plength, scst, and FWW
data.picks.pickparams.scst = 2
data.picks.pickparams.FWW = 0
with self.assertRaises(ValueError):
picklib.packet_pick(traces[:, 0], data.picks.pickparams, 100)
# We should be able to pick within 3
# for a variety of frequency this is how the discrete rounding works out
data = BareRadarData()
for freq in [0.85, 0.9, 0.95]:
data.picks.pickparams.freq_update(freq)
for pick in [98, 101, 104]:
pickout = picklib.packet_pick(easy_pick_trace, data.picks.pickparams, pick)
self.assertEqual(pickout[0], 95)
self.assertEqual(pickout[1], 101)
self.assertEqual(pickout[2], 107)
# and now we should be able to pick slightly wider
data.picks.pickparams.freq_update(0.8)
for pick in [97, 101, 105]:
pickout = picklib.packet_pick(easy_pick_trace, data.picks.pickparams, pick)
self.assertEqual(pickout[0], 95)
self.assertEqual(pickout[1], 101)
self.assertEqual(pickout[2], 107)
# if our plength is really short we should still hit the middle
# sides are undefined
data.picks.pickparams.freq_update(4.0)
pickout = picklib.packet_pick(easy_pick_trace, data.picks.pickparams, 101)
self.assertEqual(pickout[1], 101)
pickout = picklib.packet_pick(easy_pick_trace, data.picks.pickparams, 102)
self.assertEqual(pickout[1], 101)
def test_pick(self):
easy_pick_traces = np.zeros((traces.shape[1], 10))
cpeak = 100
bpeak = -200
tpeak = -100
easy_pick_traces[101, :] = cpeak
easy_pick_traces[107, :] = bpeak
easy_pick_traces[95, :] = tpeak
data = BareRadarData()
data.picks.pickparams.freq_update(1.0)
# first just do a line across the middle guessing correctly
picks = picklib.pick(easy_pick_traces, 101, 101, data.picks.pickparams)
self.assertTrue(np.all(picks[0, :] == 95))
self.assertTrue(np.all(picks[1, :] == 101))
self.assertTrue(np.all(picks[2, :] == 107))
# now do a line across the middle guessing slanty
picks = picklib.pick(easy_pick_traces, 99, 105, data.picks.pickparams)
self.assertTrue(np.all(picks[0, :] == 95))
self.assertTrue(np.all(picks[1, :] == 101))
def test_intersection(self):
thisdata = RadarData.RadarData(os.path.join(THIS_DIR, 'input_data', 'along_picked.mat'))
thatdata = RadarData.RadarData(os.path.join(THIS_DIR, 'input_data', 'cross_picked.mat'))
nopickdata = RadarData.RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
tnum, sn = picklib.get_intersection(thisdata, thatdata)
self.assertTrue(len(sn) == len(thatdata.picks.picknums))
tnum, sn = picklib.get_intersection(thatdata, thisdata)
self.assertTrue(len(sn) == len(thisdata.picks.picknums))
tnum, sn = picklib.get_intersection(thatdata, thisdata, return_nans=True)
self.assertTrue(len(sn) == len(thisdata.picks.picknums))
with self.assertRaises(AttributeError):
tnum, sn = picklib.get_intersection(thisdata, nopickdata)
if __name__ == '__main__':
unittest.main()
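The `_midpoint` assertions above pin down a simple linear ramp between two trace numbers, with -9999 acting as a "missing endpoint" sentinel that falls back to the other value. An illustrative reimplementation that satisfies those three assertions (not impdar's actual `picklib._midpoint`):

```python
import numpy as np

def midpoint_sketch(tnum, sta, end):
    """Linear ramp of length tnum from sta toward end."""
    # -9999 marks an undefined endpoint; fall back to the other one,
    # which yields a flat line at that value.
    if sta == -9999:
        sta = end
    if end == -9999:
        end = sta
    # Ramp from sta toward end across tnum traces (end is excluded,
    # matching the arange-style expectation in the tests above).
    return sta + np.arange(tnum) * (end - sta) / tnum
```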
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Test the methods on the PickParameters object
"""
import unittest
from impdar.lib import PickParameters, NoInitRadarData
class TestPickParameters(unittest.TestCase):
def test_init(self):
rd = NoInitRadarData.NoInitRadarData()
pick_params = PickParameters.PickParameters(rd)
for attr in pick_params.attrs:
self.assertIsNotNone(getattr(pick_params, attr))
def test_freq_update(self):
rd = NoInitRadarData.NoInitRadarData()
pick_params = PickParameters.PickParameters(rd)
pick_params.freq_update(1000.0)
self.assertEqual(pick_params.FWW, 1)
self.assertEqual(pick_params.plength, 3)
self.assertEqual(pick_params.scst, 1)
rd = NoInitRadarData.NoInitRadarDataFiltering()
pick_params = PickParameters.PickParameters(rd)
pick_params.freq_update(1.0e-8)
self.assertEqual(pick_params.plength, rd.snum)
def test_to_struct(self):
rd = NoInitRadarData.NoInitRadarData()
pick_params = PickParameters.PickParameters(rd)
mat = pick_params.to_struct()
for attr in pick_params.attrs:
self.assertIsNotNone(mat[attr])
pick_params.dt = None
mat = pick_params.to_struct()
for attr in pick_params.attrs:
self.assertIsNotNone(mat[attr])
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Test the methods on the Picks object
"""
import os
import unittest
import numpy as np
from impdar.lib.RadarData import RadarData
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestPickMods(unittest.TestCase):
def test_add_pick_loaded(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data.picks.add_pick(2)
self.assertTrue(data.picks.samp1.shape == (3, data.tnum))
data.picks.samp1[-1, :] = 0
data.picks.samp2[-1, :] = 0
data.picks.samp3[-1, :] = 0
data.picks.add_pick(10)
self.assertTrue(data.picks.samp1.shape == (4, data.tnum))
def test_add_pick_blank(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
data.picks.add_pick(1)
self.assertTrue(data.picks.samp1.shape == (1, data.tnum))
def test_add_pick_badpicknum(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
data.picks.add_pick(1)
# need to do this to prevent overwriting
data.picks.samp1[0, 0] = 1.
with self.assertRaises(ValueError):
data.picks.add_pick(1)
def test_add_pick_overwrite(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
data.picks.add_pick(1)
data.picks.add_pick(2)
self.assertTrue(data.picks.samp1.shape == (1, data.tnum))
def test_update_pick(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
data.picks.add_pick(1)
data.picks.update_pick(1, np.zeros((5, data.tnum)))
self.assertTrue(np.all(data.picks.samp1 == 0))
def test_update_pick_badpick_infoshape(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
data.picks.add_pick(1)
with self.assertRaises(ValueError):
data.picks.update_pick(1, np.zeros((4, 2)))
def test_update_pick_badpicknum(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
data.picks.add_pick(1)
with self.assertRaises(ValueError):
data.picks.update_pick(0, np.zeros((5, data.tnum)))
def test_smooth(self):
# first, no NaNs
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
cache_val = data.picks.samp1.copy()
for attr in ['samp1', 'samp2', 'samp3', 'power']:
val = getattr(data.picks, attr)
val[np.isnan(val)] = 1
setattr(data.picks, attr, val)
data.picks.smooth(4, units='tnum')
# NaNs ends only
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
for attr in ['samp1', 'samp2', 'samp3', 'power']:
val = getattr(data.picks, attr)
val[:, -1] = np.nan
val[:, 0] = np.nan
setattr(data.picks, attr, val)
data.picks.smooth(4, units='tnum')
# Middle Nans
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
for attr in ['samp1', 'samp2', 'samp3', 'power']:
val = getattr(data.picks, attr)
val[:, -1] = np.nan
val[:, 0] = np.nan
val[:, 5] = np.nan
setattr(data.picks, attr, val)
data.picks.smooth(4, units='tnum')
# one row all nans
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
for attr in ['samp1', 'samp2', 'samp3', 'power']:
val = getattr(data.picks, attr)
val[0, :] = np.nan
setattr(data.picks, attr, val)
data.picks.smooth(4, units='tnum')
# Now with dist
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data.flags.interp = [2, 1]
for attr in ['samp1', 'samp2', 'samp3', 'power']:
val = getattr(data.picks, attr)
val[:, -1] = np.nan
val[:, 0] = np.nan
val[:, 5] = np.nan
setattr(data.picks, attr, val)
data.picks.smooth(4, units='dist')
# do not complain if nothing to do
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data.picks.samp1 = None
data.picks.smooth(4, units='tnum')
# fail with dist but no interp
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data.flags.interp = None
with self.assertRaises(Exception):
data.picks.smooth(4, units='dist')
# Fail with elevation
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data.flags.elev = True
with self.assertRaises(Exception):
data.picks.smooth(4, units='dist')
with self.assertRaises(Exception):
data.picks.smooth(4, units='tnum')
# Fail with bad units
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data.flags.interp = [2, 1]
with self.assertRaises(ValueError):
data.picks.smooth(4, 'dum')
# Now make sure we fail with bad wavelengths--too high or too low for both units
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data.flags.interp = [2, 1]
with self.assertRaises(ValueError):
data.picks.smooth(0.5, 'tnum')
with self.assertRaises(ValueError):
data.picks.smooth(data.flags.interp[0] / 2, 'dist')
with self.assertRaises(ValueError):
data.picks.smooth(data.tnum + 2, 'tnum')
with self.assertRaises(ValueError):
data.picks.smooth(data.flags.interp[0] * data.tnum + 2, 'dist')
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Test the machinery of plotting. We will not try the "show" lines.
"""
import sys
import os
import unittest
import numpy as np
from impdar.lib.RadarData import RadarData
from impdar.lib.NoInitRadarData import NoInitRadarData
from impdar.lib.Picks import Picks
from impdar.lib import plot
import matplotlib.pyplot as plt
if sys.version_info[0] >= 3:
from unittest.mock import patch
else:
from mock import patch
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class DummyFig:
# to mock saving
def __init__(self):
self.sfcalled = False
def savefig(self, fn, dpi=None, ftype=None):
self.sfcalled = True
def Any(cls):
# to mock data argument in tests
class Any(cls):
def __init__(self):
pass
def __eq__(self, other):
return True
return Any()
class TestPlot(unittest.TestCase):
@patch('impdar.lib.plot.plt.show')
@patch('impdar.lib.plot.plot_radargram', return_value=[DummyFig(), None])
def test_plotPLOTARGS(self, mock_plot_rad, mock_show):
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')])
mock_plot_rad.assert_called_with(Any(RadarData), xdat='tnum', ydat='twtt', x_range=None, pick_colors=None, clims=None, cmap=Any(object), flatten_layer=None)
mock_plot_rad.reset_mock()
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], xd=True)
mock_plot_rad.assert_called_with(Any(RadarData), xdat='dist', ydat='twtt', x_range=None, pick_colors=None, clims=None, cmap=Any(object), flatten_layer=None)
mock_plot_rad.reset_mock()
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], yd=True)
mock_plot_rad.assert_called_with(Any(RadarData), xdat='tnum', ydat='depth', x_range=None, pick_colors=None, clims=None, cmap=Any(object), flatten_layer=None)
mock_plot_rad.reset_mock()
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], xd=True, yd=True)
mock_plot_rad.assert_called_with(Any(RadarData), xdat='dist', ydat='depth', x_range=None, pick_colors=None, clims=None, cmap=Any(object), flatten_layer=None)
mock_plot_rad.reset_mock()
# Check that we can save
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], xd=True, yd=True, s=True)
mock_plot_rad.assert_called_with(Any(RadarData), xdat='dist', ydat='depth', x_range=None, pick_colors=None, clims=None, cmap=Any(object), flatten_layer=None)
mock_plot_rad.reset_mock()
@patch('impdar.lib.plot.plt.show')
@patch('impdar.lib.plot.plot_traces', return_value=[DummyFig(), None])
def test_plotPLOTTRACES(self, mock_plot_tr, mock_show):
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], tr=0)
mock_plot_tr.assert_called_with(Any(RadarData), 0, ydat='twtt')
@patch('impdar.lib.plot.plt.show')
@patch('impdar.lib.plot.plot_spectrogram', return_value=[DummyFig(), None])
def test_plotPLOTSPECDENSE(self, mock_plot_specdense, mock_show):
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], spectra=(0, 1), window=0, scaling=1)
mock_plot_specdense.assert_called_with(Any(RadarData), (0, 1), window=0, scaling=1)
@patch('impdar.lib.plot.plt.show')
@patch('impdar.lib.plot.plot_ft', return_value=[DummyFig(), None])
def test_plotFT(self, mock_plot_ft, mock_show):
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], ft=True)
mock_plot_ft.assert_called_with(Any(RadarData))
@patch('impdar.lib.plot.plt.show')
@patch('impdar.lib.plot.plot_hft', return_value=[DummyFig(), None])
def test_plotHFT(self, mock_plot_hft, mock_show):
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], hft=True)
mock_plot_hft.assert_called_with(Any(RadarData))
@patch('impdar.lib.plot.plt.show')
@patch('impdar.lib.plot.plot_power', return_value=[DummyFig(), None])
def test_plotPLOTPOWER(self, mock_plot_power, mock_show):
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], power=0)
mock_plot_power.assert_called_with(Any(RadarData), 0)
@patch('impdar.lib.plot.plt.show')
@patch('impdar.lib.plot.plot_radargram', return_value=[DummyFig(), None])
def test_plotLOADGSSI(self, mock_plot_rad, mock_show):
plot.plot([os.path.join(THIS_DIR, 'input_data', 'test_gssi.DZT')], filetype='gssi')
mock_plot_rad.assert_called_with(Any(RadarData), xdat='tnum', ydat='twtt', x_range=None, pick_colors=None, clims=None, cmap=Any(object), flatten_layer=None)
@patch('impdar.lib.plot.plt.show')
@patch('impdar.lib.plot.plot_radargram', return_value=[DummyFig(), None])
def test_plotLOADPE(self, mock_plot_rad, mock_show):
plot.plot([os.path.join(THIS_DIR, 'input_data', 'test_pe.DT1')], filetype='pe')
mock_plot_rad.assert_called_with(Any(RadarData), xdat='tnum', ydat='twtt', x_range=None, pick_colors=None, clims=None, cmap=Any(object), flatten_layer=None)
def test_plotBADINPUT(self):
with self.assertRaises(ValueError):
plot.plot([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], tr=0, power=1)
def tearDown(self):
plt.close('all')
class TestPlotTraces(unittest.TestCase):
@patch('impdar.lib.plot.plt.show')
def test_plot_traces(self, mock_show):
# Only checking that these do not throw errors
dat = NoInitRadarData(big=True)
fig, ax = plot.plot_traces(dat, 0)
fig, ax = plt.subplots()
plot.plot_traces(dat, 0, fig=fig)
plot.plot_traces(dat, 0, fig=fig, ax=ax)
plot.plot_traces(dat, [1, 1], fig=fig, ax=ax)
plot.plot_traces(dat, [1, 18], fig=fig, ax=ax)
with self.assertRaises(ValueError):
plot.plot_traces(dat, np.arange(10), fig=fig, ax=ax)
with self.assertRaises(IndexError):
plot.plot_traces(dat, 999, fig=fig, ax=ax)
# no nmo
plot.plot_traces(dat, 0, ydat='depth', fig=fig, ax=ax)
plot.plot_traces(dat, 0, ydat='dual', fig=fig, ax=ax)
# with nmo
dat.nmo_depth = np.linspace(0, 10, dat.travel_time.shape[0])
plot.plot_traces(dat, 0, ydat='depth', fig=fig, ax=ax)
plot.plot_traces(dat, 0, ydat='dual', fig=fig, ax=ax)
with self.assertRaises(ValueError):
plot.plot_traces(dat, 0, ydat='dum', fig=fig, ax=ax)
# Make sure we handle axes rescaling ok
dat.data[:, 0] = 10
dat.data[:, 1] = -10
plot.plot_traces(dat, (0, 2), fig=fig, ax=ax)
def tearDown(self):
plt.close('all')
class TestPlotPower(unittest.TestCase):
@patch('impdar.lib.plot.plt.show')
def test_plot_power(self, mock_show):
# Only checking that these do not throw errors
dat = NoInitRadarData(big=True)
with self.assertRaises(TypeError):
plot.plot_power(dat, [12, 14])
with self.assertRaises(ValueError):
plot.plot_power(dat, 0)
dat.picks = Picks(dat)
dat.picks.add_pick(10)
dat.picks.power[:] = 10.5
# works with constant power
fig, ax = plot.plot_power(dat, 10)
# works with various inputs
fig, ax = plt.subplots()
plot.plot_power(dat, 10, fig=fig)
plot.plot_power(dat, 10, fig=fig, ax=ax)
plot.plot_power(dat, 10, clims=(-100, 100), fig=fig, ax=ax)
# works with multiple inputs
plot.plot_power([dat, dat], 10, fig=fig, ax=ax)
# works with projected coordinates
dat.x_coord = np.arange(dat.data.shape[1])
dat.y_coord = np.arange(dat.data.shape[1])
plot.plot_power(dat, 10, fig=fig, ax=ax)
plot.plot_power([dat, dat], 10, fig=fig, ax=ax)
with self.assertRaises(ValueError):
plot.plot_power(dat, 0, fig=fig, ax=ax)
# gets ok lims with variable power?
dat.picks.power[:, 0] = 1
plot.plot_power(dat, 10, fig=fig, ax=ax)
def tearDown(self):
plt.close('all')
class TestPlotRadargram(unittest.TestCase):
@patch('impdar.lib.plot.plt.show')
def test_plot_radargram(self, mock_show):
# Only checking that these do not throw errors
dat = NoInitRadarData(big=True)
fig, ax = plot.plot_radargram(dat)
fig, ax = plt.subplots()
plot.plot_radargram(dat, fig=fig, ax=ax)
plot.plot_radargram(dat, fig=fig)
dat.data = dat.data + 1.0j * dat.data
plot.plot_radargram(dat, fig=fig, ax=ax)
# Varying xdata
dat = NoInitRadarData(big=True)
plot.plot_radargram(dat, x_range=None, fig=fig, ax=ax)
plot.plot_radargram(dat, xdat='dist', fig=fig, ax=ax)
with self.assertRaises(ValueError):
plot.plot_radargram(dat, xdat='dummy', fig=fig, ax=ax)
# Varying ydata
dat.nmo_depth = None
plot.plot_radargram(dat, y_range=None, fig=fig, ax=ax)
plot.plot_radargram(dat, ydat='depth', fig=fig, ax=ax)
plot.plot_radargram(dat, ydat='dual', fig=fig, ax=ax)
# with nmo defined, these two are different
dat.nmo_depth = np.linspace(0, 100, dat.travel_time.shape[0])
plot.plot_radargram(dat, ydat='depth', fig=fig, ax=ax)
plot.plot_radargram(dat, ydat='dual', fig=fig, ax=ax)
dat = NoInitRadarData(big=True)
with self.assertRaises(ValueError):
plot.plot_radargram(dat, ydat='dummy', fig=fig, ax=ax)
# Cannot do dist if we have no dist
dat = NoInitRadarData(big=True)
dat.dist = None
with self.assertRaises(ValueError):
plot.plot_radargram(dat, xdat='dist', fig=fig, ax=ax)
# Elevation offsets
dat = NoInitRadarData(big=True)
with self.assertRaises(ValueError):
plot.plot_radargram(dat, ydat='elev', fig=fig, ax=ax)
dat.flags.elev = True
dat.elev = np.zeros(dat.data.shape[1])
dat.elev[1:] = 1
plot.plot_radargram(dat, ydat='elev', fig=fig, ax=ax)
@patch('impdar.lib.plot.plt.show')
def test_plot_radargram_flattenlayer(self, mock_show):
dat = NoInitRadarData(big=True)
dat.picks = Picks(dat)
dat.picks.add_pick(10)
dat.picks.power[:] = 10
dat.picks.samp1[:] = 0
dat.picks.samp2[:] = 1 # make sure no bugs if this is actually constant
dat.picks.samp3[:] = 3
# works with constant power
fig, ax = plot.plot_radargram(dat, flatten_layer=10)
# make sure we can actually follow a variable layer
dat.picks.samp2[:, 1:] = 2
dat.picks.samp2[:, -1] = 4
# works with constant power
fig, ax = plot.plot_radargram(dat, flatten_layer=10)
dat.picks.samp2[:] = 0 # make sure no bugs if this is at the top
fig, ax = plot.plot_radargram(dat, flatten_layer=10)
dat.picks.samp2[:] = dat.data.shape[0] - 1 # make sure no bugs if this is at the bottom
fig, ax = plot.plot_radargram(dat, flatten_layer=10)
dat.picks.samp2[:, 1] = np.nan  # make sure no bugs if the layer contains NaNs
fig, ax = plot.plot_radargram(dat, flatten_layer=10)
with self.assertRaises(ValueError):
fig, ax = plot.plot_radargram(dat, flatten_layer=1)
def tearDown(self):
plt.close('all')
class TestPlotFT(unittest.TestCase):
@patch('impdar.lib.plot.plt.show')
def test_plot_ft(self, mock_show):
# Only checking that these do not throw errors
dat = NoInitRadarData(big=True)
fig, ax = plt.subplots()
plot.plot_ft(dat, fig=fig, ax=ax)
plot.plot_ft(dat, fig=fig)
plot.plot_ft(dat)
def tearDown(self):
plt.close('all')
class TestPlotHFT(unittest.TestCase):
@patch('impdar.lib.plot.plt.show')
def test_plot_hft(self, mock_show):
# Only checking that these do not throw errors
dat = NoInitRadarData(big=True)
plot.plot_hft(dat)
fig, ax = plt.subplots()
plot.plot_hft(dat, fig=fig, ax=ax)
plot.plot_hft(dat, fig=fig)
def tearDown(self):
plt.close('all')
class TestPlotPicks(unittest.TestCase):
@patch('impdar.lib.plot.plt.show')
def test_plot_picks_via_radargram(self, mock_show):
"""We want to be able to call this via plot_radargram"""
dat = NoInitRadarData(big=True)
dat.picks = Picks(dat)
dat.picks.samp1 = np.ones((2, len(dat.lat)))
dat.picks.samp2 = np.ones((2, len(dat.lat)))
dat.picks.samp3 = np.ones((2, len(dat.lat)))
dat.picks.picknums = [0, 9]
plot.plot_radargram(dat, pick_colors='mgm')
@patch('impdar.lib.plot.plt.show')
def test_plot_picks(self, mock_show):
# Only checking that these do not throw errors
dat = NoInitRadarData(big=True)
fig, ax = plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time)
dat.picks = Picks(dat)
dat.picks.picknums = [2, 10]
dat.picks.samp1 = np.ones((2, len(dat.lat)))
dat.picks.samp2 = np.ones((2, len(dat.lat)))
dat.picks.samp3 = np.ones((2, len(dat.lat)))
fig, ax = plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time)
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, fig=fig)
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, fig=fig, ax=ax)
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, colors='g', fig=fig, ax=ax)
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, colors='gmm', fig=fig, ax=ax)
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, colors=['c', 'g'], fig=fig, ax=ax)
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, colors=['cmy', 'brb'], fig=fig, ax=ax)
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, colors=True, fig=fig, ax=ax)
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, colors=False, fig=fig, ax=ax)
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, colors=['c', 'm', 'b'], just_middle=False, fig=fig, ax=ax)
with self.assertRaises(ValueError):
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, colors=['c', 'm', 'b'], just_middle=True, fig=fig, ax=ax)
with self.assertRaises(ValueError):
plot.plot_picks(dat, np.arange(int(dat.tnum)), dat.travel_time, colors=['cm', 'br'], fig=fig, ax=ax)
def tearDown(self):
plt.close('all')
class TestPlotSpectral(unittest.TestCase):
@patch('impdar.lib.plot.plt.show')
def test_plot_spectrogram(self, mock_show):
# Only checking that these do not throw errors
dat = NoInitRadarData(big=True)
dat.picks = Picks(dat)
dat.picks.samp1 = np.ones((2, len(dat.lat)))
dat.picks.samp2 = np.ones((2, len(dat.lat)))
dat.picks.samp3 = np.ones((2, len(dat.lat)))
fig, ax = plot.plot_spectrogram(dat, (0.,5.0))
plot.plot_spectrogram(dat, (0.,5.0), fig=fig)
plot.plot_spectrogram(dat, (0.,5.0), fig=fig, ax=ax)
plot.plot_spectrogram(dat, (0.,5.0), window='hamming')
plot.plot_spectrogram(dat, (0.,5.0), scaling='density')
# no error if freq high
plot.plot_spectrogram(dat, 100)
# freq too low
with self.assertRaises(ValueError):
plot.plot_spectrogram(dat, (0.,-100))
with self.assertRaises(ValueError):
plot.plot_spectrogram(dat, (0.,5), scaling='dummy')
with self.assertRaises(ValueError):
plot.plot_spectrogram(dat, (0.,5), window='dummy')
@unittest.skipIf(sys.version_info[0] < 3, 'AttributeError on 2')
def test_failure_3(self):
dat = NoInitRadarData(big=True)
dat.picks = Picks(dat)
dat.picks.samp1 = np.ones((2, len(dat.lat)))
dat.picks.samp2 = np.ones((2, len(dat.lat)))
dat.picks.samp3 = np.ones((2, len(dat.lat)))
with self.assertRaises(TypeError):
plot.plot_specdense(dat, 'bad')
@unittest.skipIf(sys.version_info[0] >= 3, 'TypeError on 3')
def test_failure_2(self):
dat = NoInitRadarData(big=True)
dat.picks = Picks(dat)
dat.picks.samp1 = np.ones((2, len(dat.lat)))
dat.picks.samp2 = np.ones((2, len(dat.lat)))
dat.picks.samp3 = np.ones((2, len(dat.lat)))
with self.assertRaises(AttributeError):
plot.plot_specdense(dat, 'bad')
def tearDown(self):
plt.close('all')
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL-3.0 license.
"""
Test the machinery of process. This is broken up by where failures are likely; tests of the process wrappers for individual methods live with the tests of those methods.
"""
import sys
import os
import unittest
import numpy as np
from impdar.lib.NoInitRadarData import NoInitRadarData
from impdar.lib.RadarData import RadarData
from impdar.lib import process
if sys.version_info[0] >= 3:
from unittest.mock import MagicMock, patch
else:
from mock import MagicMock, patch
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
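The TestProcess cases below share one pattern: replace a method on the data object with a MagicMock spy, run the dispatcher, and assert the spy was called with the expected arguments. A minimal standalone sketch of that pattern (FakeRadar and dispatch are illustrative names, not impdar API):

```python
from unittest.mock import MagicMock

class FakeRadar:
    """Stand-in for RadarData; only the method we spy on matters."""
    def crop(self, lim, top_or_bottom, dimension):
        raise NotImplementedError  # replaced by a MagicMock below

def dispatch(dat, crop=None):
    # Minimal sketch of the keyword-to-method dispatch that process.process
    # performs; the real function handles many more options.
    if crop is not None:
        dat.crop(crop[0], crop[1], crop[2])
    return True

dat = FakeRadar()
dat.crop = MagicMock()  # swap the method for a spy
assert dispatch(dat, crop=(17, 'bottom', 'snum'))
dat.crop.assert_called_with(17, 'bottom', 'snum')
```

Because the spy records its calls, the test verifies the argument plumbing without ever touching real radar data.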
class TestConcat(unittest.TestCase):
def test_concat_nopicks(self):
dats = process.concat([NoInitRadarData(), NoInitRadarData()])
self.assertTrue(dats[0].data.shape == (2, 4))
with self.assertRaises(ValueError):
d2 = NoInitRadarData()
d2.snum = 3
dats = process.concat([NoInitRadarData(), d2])
with self.assertRaises(ValueError):
d2 = NoInitRadarData()
d2.travel_time = np.array((2, 3))
dats = process.concat([NoInitRadarData(), d2])
def test_concat_picks(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
# no overlapping picks
data_otherp = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data_otherp.picks.picknums = [pn * 10 - 1 for pn in data_otherp.picks.picknums]
# one overlapping pick
data_somepsame = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data_somepsame.picks.picknums = [1, 19]
dats = process.concat([data, data])
for attr in ['samp1', 'samp2', 'samp3', 'power']:
self.assertEqual(getattr(dats[0].picks, attr).shape[1], 2 * getattr(data.picks, attr).shape[1])
self.assertEqual(getattr(dats[0].picks, attr).shape[0], getattr(data.picks, attr).shape[0])
dats = process.concat([data, data_otherp])
for attr in ['samp1', 'samp2', 'samp3', 'power']:
self.assertTrue(getattr(dats[0].picks, attr).shape[1] == 2 * data.picks.samp1.shape[1])
self.assertTrue(getattr(dats[0].picks, attr).shape[0] == 2 * data.picks.samp1.shape[0])
for pn in data.picks.picknums:
self.assertTrue(pn in dats[0].picks.picknums)
for pn in data_otherp.picks.picknums:
self.assertTrue(pn in dats[0].picks.picknums)
dats = process.concat([data, data_somepsame])
for attr in ['samp1', 'samp2', 'samp3', 'power']:
self.assertTrue(getattr(dats[0].picks, attr).shape[1] == 2 * data.picks.samp1.shape[1])
self.assertTrue(getattr(dats[0].picks, attr).shape[0] == 2 * data.picks.samp1.shape[0] - 1)
for pn in data.picks.picknums:
self.assertTrue(pn in dats[0].picks.picknums)
for pn in data_somepsame.picks.picknums:
self.assertTrue(pn in dats[0].picks.picknums)
# no picks
data_np = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_picks.mat'))
data_np.picks.picknums = 0
dats = process.concat([data, data_np])
for attr in ['samp1', 'samp2', 'samp3', 'power']:
self.assertTrue(getattr(dats[0].picks, attr).shape[1] == 2 * data.picks.samp1.shape[1])
self.assertTrue(np.all(np.isnan(getattr(dats[0].picks, attr)[0, data.picks.samp1.shape[1]:])))
for pn in data.picks.picknums:
self.assertTrue(pn in dats[0].picks.picknums)
data_np.picks.picknums = None
dats = process.concat([data, data_np])
for attr in ['samp1', 'samp2', 'samp3', 'power']:
self.assertTrue(getattr(dats[0].picks, attr).shape[1] == 2 * data.picks.samp1.shape[1])
self.assertTrue(np.all(np.isnan(getattr(dats[0].picks, attr)[0, data.picks.samp1.shape[1]:])))
for pn in data.picks.picknums:
self.assertTrue(pn in dats[0].picks.picknums)
data_np.picks = None
dats = process.concat([data, data_np])
for attr in ['samp1', 'samp2', 'samp3', 'power']:
self.assertTrue(getattr(dats[0].picks, attr).shape[1] == 2 * data.picks.samp1.shape[1])
self.assertTrue(np.all(np.isnan(getattr(dats[0].picks, attr)[0, data.picks.samp1.shape[1]:])))
for pn in data.picks.picknums:
self.assertTrue(pn in dats[0].picks.picknums)
class TestProcess_and_exit(unittest.TestCase):
def test_process_and_exitLOADMAT(self):
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')])
def test_process_and_exitCAT(self):
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], cat=True)
def test_process_and_exitPROCESS(self):
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], rev=True)
def test_process_and_exitOUTNAMING(self):
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'data_raw.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], cat=True)
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'data_cat.mat')))
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], cat=True)
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data_cat.mat')))
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), os.path.join(THIS_DIR, 'input_data', 'data_raw.mat')], rev=True)
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data_proc.mat')))
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'test_gssi.DZT')], filetype='gssi', rev=True)
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_gssi_proc.mat')))
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'small_data.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], cat=True, o=os.path.join(THIS_DIR, 'small_data_cat.mat'))
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'small_data_cat.mat')))
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'data_raw.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], cat=True, o=os.path.join(THIS_DIR, 'data_cat.mat'))
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'data_cat.mat')))
process.process_and_exit([os.path.join(THIS_DIR, 'input_data', 'data_raw.mat'), os.path.join(THIS_DIR, 'input_data', 'small_data.mat')], rev=True, o=THIS_DIR)
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'data_proc.mat')))
self.assertTrue(os.path.exists(os.path.join(THIS_DIR, 'small_data_proc.mat')))
def tearDown(self):
if os.path.exists(os.path.join(THIS_DIR, 'small_data_cat.mat')):
os.remove(os.path.join(THIS_DIR, 'small_data_cat.mat'))
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data_cat.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'small_data_cat.mat'))
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'small_data_proc.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'small_data_proc.mat'))
if os.path.exists(os.path.join(THIS_DIR, 'small_data_proc.mat')):
os.remove(os.path.join(THIS_DIR, 'small_data_proc.mat'))
if os.path.exists(os.path.join(THIS_DIR, 'data_proc.mat')):
os.remove(os.path.join(THIS_DIR, 'data_proc.mat'))
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'data_proc.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'data_proc.mat'))
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'data_cat.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'data_cat.mat'))
if os.path.exists(os.path.join(THIS_DIR, 'data_cat.mat')):
os.remove(os.path.join(THIS_DIR, 'data_cat.mat'))
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_gssi_proc.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_gssi_proc.mat'))
class TestProcess(unittest.TestCase):
def setUp(self):
self.data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.data.x_coord = np.arange(40)
self.data.nmo_depth = None
self.data.travel_time = np.arange(0, 0.2, 0.01)
self.data.dt = 1.0e-8
self.data.trig = self.data.trig * 0.
def test_process_Reverse(self):
self.data.reverse = MagicMock()
self.assertTrue(process.process([self.data], rev=True))
self.data.reverse.assert_called_with()
def test_process_Crop(self):
with self.assertRaises(TypeError):
process.process([self.data], crop=True)
with self.assertRaises(ValueError):
process.process([self.data], crop=('ugachacka', 'top', 'snum'))
self.data.crop = MagicMock()
self.assertTrue(process.process([self.data], crop=(17, 'bottom', 'snum')))
self.data.crop.assert_called_with(17, 'bottom', 'snum')
def test_process_Denoise(self):
self.data.denoise = MagicMock()
with self.assertRaises(ValueError):
process.process([self.data], denoise=1)
with self.assertRaises(ValueError):
process.process([self.data], denoise='12')
with self.assertRaises(ValueError):
process.process([self.data], denoise=(1, ))
process.process([self.data], denoise=(1, 2))
self.data.denoise.assert_called_with(1, 2)
@patch('impdar.lib.process.interpdeep')
def test_process_Interp(self, mock_interp):
dl = [self.data]
with self.assertRaises(ValueError):
process.process([self.data], interp=('ba', 2))
with self.assertRaises(ValueError):
process.process([self.data], interp='ba')
with self.assertRaises(ValueError):
process.process([self.data], interp=1)
with self.assertRaises(ValueError):
process.process([self.data], interp=(1, ))
process.process(dl, interp=(1, 2))
mock_interp.assert_called_with(dl, 1.0, 2)
def test_process_hcrop(self):
with self.assertRaises(TypeError):
process.process([self.data], hcrop=True)
with self.assertRaises(ValueError):
process.process([self.data], hcrop=('ugachacka', 'left', 'tnum'))
self.data.hcrop = MagicMock()
self.assertTrue(process.process([self.data], hcrop=(17, 'left', 'tnum')))
self.data.hcrop.assert_called_with(17, 'left', 'tnum')
def test_process_NMO(self):
self.data.nmo = MagicMock()
self.assertTrue(process.process([self.data], nmo=(0., 2.0, 2.0)))
self.data.nmo.assert_called_with(0., 2.0, 2.0)
self.data.nmo = MagicMock()
self.assertTrue(process.process([self.data], nmo=0))
self.data.nmo.assert_called_with(0, 1.6)
self.data.nmo = MagicMock()
self.assertTrue(process.process([self.data], nmo=1.0))
self.data.nmo.assert_called_with(1.0, 1.6)
def test_process_restack(self):
self.data.restack = MagicMock()
self.assertTrue(process.process([self.data], restack=3))
self.data.restack.assert_called_with(3)
self.data.restack = MagicMock()
self.assertTrue(process.process([self.data], restack=[4., 'dummy']))
self.data.restack.assert_called_with(4)
def test_process_vbp(self):
with self.assertRaises(TypeError):
process.process([self.data], vbp=3)
self.data.vertical_band_pass = MagicMock()
self.assertTrue(process.process([self.data], vbp=(3, 4)))
self.data.vertical_band_pass.assert_called_with(3, 4)
def test_migrate(self):
self.data.migrate = MagicMock()
self.assertTrue(process.process([self.data], migrate=True))
self.data.migrate.assert_called_with(mtype='stolt')
def tearDown(self):
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_out.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL-3.0 license.
"""
Test the basics of RadarData
"""
import sys
import os
import unittest
import numpy as np
from impdar.lib.RadarData import RadarData, _RadarDataProcessing
from impdar.lib.Picks import Picks
from impdar.lib.ImpdarError import ImpdarError
if sys.version_info[0] >= 3:
from unittest.mock import patch
else:
from mock import patch
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestRadarDataLoading(unittest.TestCase):
def test_ReadSucceeds(self):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.assertEqual(data.data.shape, (20, 40))
def test_ReadLegacyStodeep(self):
# This one has data and other attrs
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data_otherstodeepattrs.mat'))
self.assertEqual(data.data.shape, (20, 40))
# This one has only other attrs
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_just_otherstodeepattrs.mat'))
self.assertEqual(data.data.shape, (20, 40))
def test_badread(self):
# Data but not other attrs
with self.assertRaises(KeyError):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'nonimpdar_matlab.mat'))
# All other attrs, no data
with self.assertRaises(KeyError):
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'nonimpdar_justmissingdat.mat'))
def tearDown(self):
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_out.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
class TestRadarDataMethods(unittest.TestCase):
def setUp(self):
self.data = RadarData(os.path.join(THIS_DIR, 'input_data', 'small_data.mat'))
self.data.x_coord = np.arange(40)
self.data.nmo_depth = None
self.data.travel_time = np.arange(0, 0.2, 0.01)
self.data.dt = 1.0e-8
self.data.trig = self.data.trig * 0.
def test_Reverse(self):
data_unrev = self.data.data.copy()
self.data.reverse()
self.assertTrue(np.allclose(self.data.data, np.fliplr(data_unrev)))
self.assertTrue(np.allclose(self.data.x_coord, np.arange(39, -1, -1)))
self.data.reverse()
self.assertTrue(np.allclose(self.data.data, data_unrev))
self.assertTrue(np.allclose(self.data.x_coord, np.arange(40)))
def test_CropTWTT(self):
self.data.crop(0.165, 'bottom', dimension='twtt')
self.assertTrue(self.data.data.shape == (17, 40))
self.data.crop(0.055, 'top', dimension='twtt')
self.assertTrue(self.data.data.shape == (11, 40))
# do not fail on bad flags
self.data.flags.crop = False
self.data.crop(0.055, 'top', dimension='twtt')
self.assertTrue(self.data.flags.crop.shape == (3,))
self.assertTrue(self.data.flags.crop[0])
def test_CropErrors(self):
with self.assertRaises(ValueError):
self.data.crop(0.165, 'bottom', dimension='dummy')
with self.assertRaises(ValueError):
self.data.crop(0.165, 'dummy', dimension='twtt')
def test_CropSNUM(self):
self.data.crop(17, 'bottom', dimension='snum')
self.assertTrue(self.data.data.shape == (17, 40))
self.data.crop(6, 'top', dimension='snum')
self.assertTrue(self.data.data.shape == (11, 40))
def test_CropTrigInt(self):
self.data.trig = 2
with self.assertRaises(ValueError):
self.data.crop(17, 'bottom', dimension='pretrig')
self.data.crop(6, 'top', dimension='pretrig')
self.assertTrue(self.data.data.shape == (18, 40))
def test_CropTrigMat(self):
self.data.trig = np.ones((40,), dtype=int)
self.data.trig[20:] = 2
self.data.crop(6, 'top', dimension='pretrig')
self.assertTrue(self.data.data.shape == (19, 40))
def test_CropDepthOnTheFly(self):
self.data.crop(0.165, 'bottom', dimension='depth', uice=2.0e6)
self.assertTrue(self.data.data.shape == (17, 40))
self.data.crop(0.055, 'top', dimension='depth', uice=2.0e6)
self.assertTrue(self.data.data.shape == (11, 40))
def test_CropDepthWithNMO(self):
self.data.nmo(0., uice=2.0e6, uair=2.0e6)
self.data.crop(0.165, 'bottom', dimension='depth')
self.assertTrue(self.data.data.shape == (17, 40))
self.data.crop(0.055, 'top', dimension='depth')
self.assertTrue(self.data.data.shape == (11, 40))
def test_HCropTnum(self):
self.data.hcrop(2, 'left', dimension='tnum')
self.assertTrue(self.data.data.shape == (20, 39))
self.data.hcrop(15, 'right', dimension='tnum')
self.assertTrue(self.data.data.shape == (20, 14))
# Make sure we can ditch the last one
self.data.hcrop(14, 'right', dimension='tnum')
self.assertTrue(self.data.data.shape == (20, 13))
def test_HCropInputErrors(self):
with self.assertRaises(ValueError):
self.data.hcrop(2, 'left', dimension='dummy')
with self.assertRaises(ValueError):
self.data.hcrop(2, 'dummy', dimension='tnum')
def test_HCropBoundsErrors(self):
# There are lots of bad inputs for tnum
with self.assertRaises(ValueError):
self.data.hcrop(44, 'right', dimension='tnum')
with self.assertRaises(ValueError):
self.data.hcrop(-44, 'right', dimension='tnum')
with self.assertRaises(ValueError):
self.data.hcrop(0, 'right', dimension='tnum')
with self.assertRaises(ValueError):
self.data.hcrop(1, 'right', dimension='tnum')
with self.assertRaises(ValueError):
self.data.hcrop(-1, 'right', dimension='tnum')
with self.assertRaises(ValueError):
self.data.hcrop(41, 'right', dimension='tnum')
# Fewer ways to screw up distance
with self.assertRaises(ValueError):
self.data.hcrop(1.6, 'right', dimension='dist')
with self.assertRaises(ValueError):
self.data.hcrop(0, 'right', dimension='dist')
with self.assertRaises(ValueError):
self.data.hcrop(-1, 'right', dimension='dist')
def test_HCropDist(self):
self.data.hcrop(0.01, 'left', dimension='dist')
self.assertTrue(self.data.data.shape == (20, 39))
self.data.hcrop(1.4, 'right', dimension='dist')
self.assertTrue(self.data.data.shape == (20, 38))
def test_agc(self):
self.data.agc()
self.assertTrue(self.data.flags.agc)
def test_rangegain(self):
self.data.rangegain(1.0)
self.assertTrue(self.data.flags.rgain)
self.data.flags.rgain = False
self.data.trig = np.zeros((self.data.tnum, ))
self.data.rangegain(1.0)
self.assertTrue(self.data.flags.rgain)
# A scalar trig is deprecated, but check it anyway
self.data.trig = 0.0
self.data.rangegain(1.0)
self.assertTrue(self.data.flags.rgain)
def test_NMO(self):
# If velocity is 2
self.data.nmo(0., uice=2.0, uair=2.0)
self.assertTrue(np.allclose(self.data.travel_time * 1.0e-6, self.data.nmo_depth))
# shouldn't care about uair if offset=0
self.setUp()
self.data.nmo(0., uice=2.0, uair=200.0)
self.assertTrue(np.allclose(self.data.travel_time * 1.0e-6, self.data.nmo_depth))
self.setUp()
self.data.nmo(0., uice=2.0, uair=200.0)
self.assertEqual(self.data.flags.nmo.shape, (2,))
self.assertTrue(self.data.flags.nmo[0])
self.setUp()
self.data.nmo(0., uice=2.0, uair=2.0, const_firn_offset=3.0)
self.assertTrue(np.allclose(self.data.travel_time * 1.0e-6 + 3.0, self.data.nmo_depth))
self.setUp()
self.data.trig = np.ones((self.data.tnum, ))
with self.assertRaises(ImpdarError):
self.data.nmo(0., uice=2.0, uair=2.0)
# Good rho profile
self.setUp()
self.data.nmo(0., rho_profile=os.path.join(THIS_DIR, 'input_data', 'rho_profile.txt'))
# bad rho profile
self.setUp()
with self.assertRaises(Exception):
self.data.nmo(0., rho_profile=os.path.join(THIS_DIR, 'input_data', 'velocity_layers.txt'))
def test_optimize_moveout_depth(self):
d = _RadarDataProcessing.optimize_moveout_depth(100.0, 100.0 / 1.68e8 * 2., 10.0, np.array([0., 10., 50., 1000.]), np.array([2.5e8, 2.0e8, 1.8e8, 1.68e8]))
self.assertFalse(np.isnan(d))
d = _RadarDataProcessing.optimize_moveout_depth(2000.0, 100.0 / 1.68e8 * 2., 10.0, np.array([0., 10., 50., 1000.]), np.array([2.5e8, 2.0e8, 1.8e8, 1.68e8]))
self.assertFalse(np.isnan(d))
with self.assertRaises(ValueError):
d = _RadarDataProcessing.optimize_moveout_depth(-2000.0, 100.0 / 1.68e8 * 2., 10.0, np.array([0., 10., 50., 1000.]), np.array([2.5e8, 2.0e8, 1.8e8, 1.68e8]))
def test_restack_odd(self):
self.data.restack(5)
self.assertTrue(self.data.data.shape == (20, 8))
def test_restack_even(self):
self.data.restack(4)
self.assertTrue(self.data.data.shape == (20, 8))
def test_elev_correct(self):
self.data.elev = np.arange(self.data.data.shape[1]) * 0.002
with self.assertRaises(ValueError):
self.data.elev_correct()
self.data.nmo(0, 2.0e6)
self.data.elev_correct(v_avg=2.0e6)
self.assertTrue(self.data.data.shape == (27, 40))
def test_constant_space_real(self):
# Basic check where there is movement every step
distlims = (self.data.dist[0], self.data.dist[-1])
space = 100.
targ_size = np.ceil((distlims[-1] - distlims[0]) * 1000. / space)
self.data.constant_space(space)
self.assertTrue(self.data.data.shape == (20, targ_size))
self.assertTrue(self.data.x_coord.shape == (targ_size, ))
self.assertTrue(self.data.y_coord.shape == (targ_size, ))
self.assertTrue(self.data.lat.shape == (targ_size, ))
self.assertTrue(self.data.long.shape == (targ_size, ))
self.assertTrue(self.data.elev.shape == (targ_size, ))
self.assertTrue(self.data.decday.shape == (targ_size, ))
# Make sure we can have some bad values from no movement
# This will delete some distance so, be careful with checks
self.setUp()
self.data.constant_space(space, min_movement=35.)
self.assertEqual(self.data.data.shape[0], 20)
self.assertLessEqual(self.data.data.shape[1], targ_size)
self.assertLessEqual(self.data.x_coord.shape[0], targ_size)
self.assertLessEqual(self.data.y_coord.shape[0], targ_size)
self.assertLessEqual(self.data.lat.shape[0], targ_size)
self.assertLessEqual(self.data.long.shape[0], targ_size)
self.assertLessEqual(self.data.elev.shape[0], targ_size)
self.assertLessEqual(self.data.decday.shape[0], targ_size)
# do not fail because flags structure is weird from matlab
self.setUp()
self.data.flags.interp = False
self.data.constant_space(space)
self.assertTrue(self.data.flags.interp.shape == (2,))
self.assertTrue(self.data.flags.interp[0])
self.assertEqual(self.data.flags.interp[1], space)
# Want to be able to do picks too
self.setUp()
self.data.picks = Picks(self.data)
self.data.picks.samp1 = np.ones((2, self.data.tnum))
self.data.picks.samp2 = np.ones((2, self.data.tnum))
self.data.picks.samp3 = np.ones((2, self.data.tnum))
self.data.picks.power = np.ones((2, self.data.tnum))
self.data.picks.time = np.ones((2, self.data.tnum))
self.data.constant_space(space)
self.assertTrue(self.data.data.shape == (20, targ_size))
self.assertTrue(self.data.x_coord.shape == (targ_size, ))
self.assertTrue(self.data.y_coord.shape == (targ_size, ))
self.assertTrue(self.data.lat.shape == (targ_size, ))
self.assertTrue(self.data.long.shape == (targ_size, ))
self.assertTrue(self.data.elev.shape == (targ_size, ))
self.assertTrue(self.data.decday.shape == (targ_size, ))
self.assertTrue(self.data.picks.samp1.shape == (2, targ_size))
self.assertTrue(self.data.picks.samp2.shape == (2, targ_size))
self.assertTrue(self.data.picks.samp3.shape == (2, targ_size))
self.assertTrue(self.data.picks.power.shape == (2, targ_size))
self.assertTrue(self.data.picks.time.shape == (2, targ_size))
def test_constant_space_complex(self):
# One of the few functions that really differs with complex data.
self.data.data = self.data.data + 1.0j * self.data.data
distlims = (self.data.dist[0], self.data.dist[-1])
space = 100.
targ_size = np.ceil((distlims[-1] - distlims[0]) * 1000. / space)
self.data.constant_space(space)
self.assertTrue(self.data.data.shape == (20, targ_size))
self.assertTrue(self.data.x_coord.shape == (targ_size, ))
self.assertTrue(self.data.y_coord.shape == (targ_size, ))
self.assertTrue(self.data.lat.shape == (targ_size, ))
self.assertTrue(self.data.long.shape == (targ_size, ))
self.assertTrue(self.data.elev.shape == (targ_size, ))
self.assertTrue(self.data.decday.shape == (targ_size, ))
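# The constant_space tests above resample a profile onto a regular along-track
# grid, so every per-trace array (coordinates, elevation, time) shrinks or grows
# to the same target length. A minimal sketch of that resampling using
# np.interp; names and the km-to-m convention are illustrative assumptions,
# not impdar's implementation:

```python
import numpy as np

def constant_space(dist_km, data, spacing_m):
    """Interpolate traces onto a regular along-track grid (spacing in meters)."""
    dist_m = dist_km * 1000.0
    new_dist = np.arange(dist_m[0], dist_m[-1], spacing_m)
    out = np.empty((data.shape[0], len(new_dist)))
    # Interpolate each sample row independently along the distance axis.
    for i in range(data.shape[0]):
        out[i] = np.interp(new_dist, dist_m, data[i])
    return new_dist, out

dist = np.linspace(0.0, 1.0, 40)   # km; need not be evenly spaced in general
data = np.random.rand(20, 40)
new_dist, out = constant_space(dist, data, 100.0)
assert out.shape == (20, 10)       # ceil((1.0 - 0.0) * 1000 / 100) grid points
```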
def test_constant_sample_depth_spacing(self):
# first check that it fails if we are not set up
self.data.nmo_depth = None
with self.assertRaises(AttributeError):
self.data.constant_sample_depth_spacing()
# Spoof variable nmo depths
self.data.nmo_depth = np.hstack((np.arange(self.data.snum // 2),
self.data.snum // 2 + 2. * np.arange(self.data.snum // 2)))
self.data.constant_sample_depth_spacing()
self.assertTrue(np.allclose(np.diff(self.data.nmo_depth), np.ones((self.data.snum - 1,)) * np.diff(self.data.nmo_depth)[0]))
# So now if we call again, it should do nothing and return 1
rv = self.data.constant_sample_depth_spacing()
self.assertEqual(1, rv)
def test_traveltime_to_depth(self):
# We are not constant
depths = self.data.traveltime_to_depth(np.arange(10) - 1., (np.arange(10) + 1) * 91.7)
self.assertFalse(np.allclose(np.diff(depths), np.ones((len(depths) - 1,)) * (depths[1] - depths[0])))
# We are constant
depths = self.data.traveltime_to_depth(np.arange(10) - 1., (np.ones((10,)) * 91.7))
self.assertTrue(np.allclose(np.diff(depths), np.ones((len(depths) - 1,)) * (depths[1] - depths[0])))
# we have negative travel times
self.data.travel_time = self.data.travel_time - 0.01
depths = self.data.traveltime_to_depth(np.arange(10) - 1., (np.arange(10) + 1) * 91.7)
self.assertFalse(np.allclose(np.diff(depths), np.ones((len(depths) - 1,)) * (depths[1] - depths[0])))
def tearDown(self):
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'test_out.mat')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
if __name__ == '__main__':
unittest.main()
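# The NMO tests above rely on the zero-offset identity that depth is two-way
# travel time times wave speed over two: with no antenna separation, there is
# no moveout to correct, so nmo(0., uice=2.0) leaves depth equal to
# travel_time * 1e-6 (microseconds, velocity 2.0). A standalone sketch of that
# relation; the function name is illustrative, not impdar's API:

```python
import numpy as np

def zero_offset_depth(travel_time_s, velocity_m_per_s):
    """Convert two-way travel time (s) to depth (m) at a constant velocity."""
    # The wave travels down and back, so one-way depth is half the path length.
    return travel_time_s * velocity_m_per_s / 2.0

tt_us = np.arange(1.0, 5.0)                     # travel times in microseconds
depths = zero_offset_depth(tt_us * 1.0e-6, 2.0)
assert np.allclose(depths, tt_us * 1.0e-6)      # matches the test's assertion
```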
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 David Lilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
"""
import sys
import unittest
import numpy as np
from impdar.lib.NoInitRadarData import NoInitRadarDataFiltering as NoInitRadarData
from impdar.lib.RadarData import RadarData
from impdar.lib import process
from impdar.lib.ImpdarError import ImpdarError
if sys.version_info[0] >= 3:
from unittest.mock import MagicMock, patch
else:
from mock import MagicMock, patch
data_dummy = np.ones((500, 400))
def Any(cls):
# to mock data argument in tests
class Any(cls):
def __init__(self):
pass
def __eq__(self, other):
return True
return Any()
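# The Any helper above builds an instance that compares equal to everything, so
# assert_called_with(Any(RadarData), ...) passes no matter which data object was
# forwarded to the patched function. A self-contained sketch of the same
# pattern, with a hypothetical Payload class standing in for RadarData:

```python
from unittest.mock import MagicMock

def Any(cls):
    # Subclass instance that claims equality with everything, so it can stand
    # in for "some object of type cls" in mock call assertions.
    class _Any(cls):
        def __init__(self):
            pass
        def __eq__(self, other):
            return True
    return _Any()

class Payload:
    pass

m = MagicMock()
m(Payload(), vel=10.0)
# Passes even though the exact Payload instance is unknown to the caller.
m.assert_called_with(Any(Payload), vel=10.0)
```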
class TestAdaptive(unittest.TestCase):
def test_AdaptiveRun(self):
radardata = NoInitRadarData()
radardata.adaptivehfilt(window_size=radardata.tnum // 10)
self.assertTrue(np.all(radardata.data <= 1.))
# make sure it works with a big window too
radardata = NoInitRadarData()
radardata.adaptivehfilt(window_size=radardata.tnum * 2)
self.assertTrue(np.all(radardata.data <= 1.))
class TestHfilt(unittest.TestCase):
def test_horizontalfilt(self):
radardata = NoInitRadarData()
radardata.horizontalfilt(0, 100)
# We taper in the hfilt, so this is not just zeros
self.assertTrue(np.all(radardata.data == radardata.hfilt_target_output))
class TestHighPass(unittest.TestCase):
def test_highpass_simple(self):
radardata = NoInitRadarData()
# fails without constant-spaced data
radardata.flags.interp = np.ones((2,))
radardata.highpass(radardata.tnum * radardata.flags.interp[1] * 0.8)
# There is no high-frequency variability, so this result should be small
# We only have residual variability from the quality of the filter
self.assertTrue(np.all(np.abs((radardata.data - radardata.data[0, 0])) < 1.0e-3))
def test_highpass_badcutoff(self):
radardata = NoInitRadarData()
# fails without constant-spaced data
radardata.flags.interp = np.ones((2,))
with self.assertRaises(ValueError):
radardata.highpass(radardata.flags.interp[1] * 0.5)
with self.assertRaises(ValueError):
radardata.highpass(radardata.tnum * radardata.flags.interp[1] * 1.5)
def test_highpass_errors(self):
radardata = NoInitRadarData()
with self.assertRaises(ImpdarError):
radardata.highpass(100.0)
# Elevation corrected data should fail
radardata.flags.interp = np.ones((2,))
# make sure this throws no error, then
radardata.highpass(100.0)
with self.assertRaises(ImpdarError):
radardata.flags.elev = True
radardata.highpass(100.0)
class TestHorizontalBandPass(unittest.TestCase):
def test_hbp_simple(self):
radardata = NoInitRadarData()
# fails without constant-spaced data
radardata.flags.interp = np.ones((2,))
radardata.horizontal_band_pass(5., radardata.tnum * radardata.flags.interp[1] * 0.9)
# We cannot really check this since the filter causes some residual variability as an edge effect
def test_hbp_badcutoff(self):
radardata = NoInitRadarData()
# fails without constant-spaced data
radardata.flags.interp = np.ones((2,))
with self.assertRaises(ValueError):
radardata.horizontal_band_pass(0.5, radardata.tnum / 10.)
with self.assertRaises(ValueError):
radardata.horizontal_band_pass(radardata.tnum / 10., radardata.tnum * 2.)
def test_hbp_errors(self):
radardata = NoInitRadarData()
with self.assertRaises(ImpdarError):
# We have a screwed up filter here because of sampling vs. frequency used
radardata.horizontal_band_pass(1000.0, 2000.0)
radardata.flags.interp = np.ones((2,))
# make sure this throws no error, then
radardata.horizontal_band_pass(radardata.tnum / 10., radardata.tnum / 2.)
with self.assertRaises(ValueError):
radardata.horizontal_band_pass(radardata.tnum / 2., radardata.tnum / 10.)
# Elevation corrected data should fail
with self.assertRaises(ImpdarError):
radardata.flags.elev = True
radardata.horizontal_band_pass(radardata.tnum / 10., radardata.tnum / 2.)
class TestLowPass(unittest.TestCase):
def test_lowpass_simple(self):
radardata = NoInitRadarData()
# fails without constant-spaced data
radardata.flags.interp = np.ones((2,))
radardata.lowpass(100.0)
# There is no high-frequency variability, so this result should be small
# We only have residual variability from the quality of the filter
self.assertTrue(np.all(np.abs((radardata.data - radardata.data[0, 0]) / radardata.data[0, 0]) < 1.0e-3))
def test_lowpass_badcutoff(self):
# We have a screwed up filter here because of sampling vs. frequency used
radardata = NoInitRadarData()
# fails without constant-spaced data
radardata.flags.interp = np.ones((2,))
with self.assertRaises(ValueError):
radardata.lowpass(radardata.flags.interp[1] * 0.5)
with self.assertRaises(ValueError):
radardata.lowpass(radardata.tnum * 1.5)
def test_lowpass_errors(self):
radardata = NoInitRadarData()
with self.assertRaises(ImpdarError):
# We have a screwed up filter here because of sampling vs. frequency used
radardata.lowpass(100.0)
# Elevation corrected data should fail
radardata.flags.interp = np.ones((2,))
# make sure this throws no error, then
radardata.lowpass(100.0)
with self.assertRaises(ImpdarError):
radardata.flags.elev = True
radardata.lowpass(100.0)
class TestWinAvgHfilt(unittest.TestCase):
def test_WinAvgExp(self):
radardata = NoInitRadarData()
radardata.winavg_hfilt(11, taper='full')
self.assertTrue(np.all(radardata.data == radardata.hfilt_target_output))
def test_WinAvgExpBadwinavg(self):
# Tests the check on whether win_avg < tnum
radardata = NoInitRadarData()
radardata.winavg_hfilt(data_dummy.shape[1] + 10, taper='full')
self.assertTrue(np.all(radardata.data == radardata.hfilt_target_output))
def test_WinAvgPexp(self):
radardata = NoInitRadarData()
radardata.winavg_hfilt(11, taper='pexp', filtdepth=-1)
self.assertTrue(np.all(radardata.data == radardata.pexp_target_output))
def test_WinAvgbadtaper(self):
radardata = NoInitRadarData()
with self.assertRaises(ValueError):
radardata.winavg_hfilt(11, taper='not_a_taper', filtdepth=-1)
class TestVBP(unittest.TestCase):
def test_vbp_butter(self):
radardata = NoInitRadarData()
radardata.vertical_band_pass(0.1, 100., filttype='butter')
# The filter is not too good, so we have lots of residual
self.assertTrue(np.all(np.abs(radardata.data) < 1.0e-4))
def test_vbp_cheb(self):
radardata = NoInitRadarData()
radardata.vertical_band_pass(0.1, 100., filttype='cheb')
# The filter is not too good, so we have lots of residual
self.assertTrue(np.all(np.abs(radardata.data) < 1.0e-2))
def test_vbp_bessel(self):
radardata = NoInitRadarData()
radardata.vertical_band_pass(0.1, 100., filttype='bessel')
# The filter is not too good, so we have lots of residual
self.assertTrue(np.all(np.abs(radardata.data) < 1.0e-1))
def test_vbp_fir(self):
radardata = NoInitRadarData()
radardata.vertical_band_pass(1., 10., filttype='fir', order=100)
radardata.vertical_band_pass(1., 10., filttype='fir', order=2, fir_window='hanning')
def test_vbp_badftype(self):
radardata = NoInitRadarData()
with self.assertRaises(ValueError):
radardata.vertical_band_pass(0.1, 100., filttype='dummy')
class TestDenoise(unittest.TestCase):
def test_denoise(self):
radardata = NoInitRadarData()
with self.assertRaises(ValueError):
radardata.denoise()
radardata.data = np.random.random(radardata.data.shape)
radardata.denoise()
radardata = NoInitRadarData()
radardata.data = np.random.random(radardata.data.shape)
radardata.denoise(noise=0.1)
def test_denoise_badftype(self):
radardata = NoInitRadarData()
with self.assertRaises(ValueError):
radardata.denoise(ftype='dummy')
class TestRadarDataHfiltWrapper(unittest.TestCase):
def test_adaptive(self):
radardata = NoInitRadarData()
radardata.adaptivehfilt = MagicMock()
radardata.hfilt(ftype='adaptive', window_size=1000)
radardata.adaptivehfilt.assert_called_with(window_size=1000)
def test_horizontalfilt(self):
radardata = NoInitRadarData()
radardata.horizontalfilt = MagicMock()
radardata.hfilt(ftype='hfilt', bounds=(0, 100))
radardata.horizontalfilt.assert_called_with(0, 100)
def test_badfilter(self):
radardata = NoInitRadarData()
with self.assertRaises(ValueError):
radardata.hfilt(ftype='dummy')
class TestProcessWrapper(unittest.TestCase):
def test_process_ahfilt(self):
radardata = NoInitRadarData()
radardata.adaptivehfilt = MagicMock()
process.process([radardata], ahfilt=1000)
radardata.adaptivehfilt.assert_called_with(window_size=1000)
def test_process_hfilt(self):
radardata = NoInitRadarData()
radardata.horizontalfilt = MagicMock()
process.process([radardata], hfilt=(0, 100))
radardata.horizontalfilt.assert_called_with(0, 100)
def test_process_vbp(self):
radardata = NoInitRadarData()
radardata.vertical_band_pass = MagicMock()
process.process([radardata], vbp=(0.1, 100.))
# The filter is not too good, so we have lots of residual
radardata.vertical_band_pass.assert_called_with(0.1, 100.)
class TestMigrationWrapper(unittest.TestCase):
"""This is only to make sure the calls are setup correctly. Actual tests are separate"""
@patch('impdar.lib.migrationlib.migrationKirchhoff')
def test_wrap_kirchhoff(self, patch_ob):
radardata = NoInitRadarData()
radardata.migrate(mtype='kirch', vel=10., nearfield=False)
patch_ob.assert_called_with(Any(RadarData), vel=10., nearfield=False)
@patch('impdar.lib.migrationlib.migrationStolt')
def test_wrap_stolt(self, patch_ob):
radardata = NoInitRadarData()
radardata.migrate(mtype='stolt', htaper=1, vtaper=2, vel=999.)
patch_ob.assert_called_with(Any(RadarData), htaper=1, vtaper=2, vel=999.)
@patch('impdar.lib.migrationlib.migrationPhaseShift')
def test_wrap_phaseshift(self, patch_ob):
radardata = NoInitRadarData()
radardata.migrate(mtype='phsh', vel=1., vel_fn='dummy', htaper=1, vtaper=2)
patch_ob.assert_called_with(Any(RadarData), vel=1., vel_fn='dummy', htaper=1, vtaper=2)
@patch('impdar.lib.migrationlib.migrationTimeWavenumber')
def test_wrap_tk(self, patch_ob):
radardata = NoInitRadarData()
radardata.migrate(mtype='tk', vel=1., vel_fn='dummy', htaper=1, vtaper=2)
patch_ob.assert_called_with(Any(RadarData), vel=1., vel_fn='dummy', htaper=1, vtaper=2)
@patch('impdar.lib.migrationlib.migrationSeisUnix')
def test_wrap_seisunix(self, patch_ob):
radardata = NoInitRadarData()
radardata.migrate(mtype='su_stolt', vtaper=1, htaper=2, tmig=3, vel_fn=None, vel=1.68e7, nxpad=15, verbose=1)
patch_ob.assert_called_with(Any(RadarData), vtaper=1, htaper=2, tmig=3, vel_fn=None, vel=1.68e7, nxpad=15, verbose=1, mtype='su_stolt')
def test_bad_mtype(self):
radardata = NoInitRadarData()
with self.assertRaises(ValueError):
radardata.migrate(mtype='dummy')
# and spoof the checker for seisunix
with self.assertRaises(Exception):
radardata.migrate(mtype='su_dummy')
if __name__ == '__main__':
unittest.main()
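# The vertical_band_pass tests above check that filtering a constant-valued
# section leaves only small residuals, since a constant trace has no in-band
# energy. A minimal standalone sketch of such a Butterworth band-pass down the
# sample axis; the function name, MHz units, and parameters are illustrative
# assumptions, not impdar's signature:

```python
import numpy as np
from scipy import signal

def band_pass_traces(data, low_mhz, high_mhz, dt_s, order=5):
    """Band-pass each trace (column) of a radargram along the sample axis."""
    nyquist_mhz = 1.0 / (2.0 * dt_s) / 1.0e6
    # Second-order sections are better conditioned than (b, a) polynomials.
    sos = signal.butter(order, [low_mhz / nyquist_mhz, high_mhz / nyquist_mhz],
                        btype='band', output='sos')
    # sosfiltfilt gives zero-phase filtering, applied down each column.
    return signal.sosfiltfilt(sos, data, axis=0)

# A constant section has no in-band energy, so the output is near zero.
data = np.ones((500, 40))
out = band_pass_traces(data, 5.0, 100.0, dt_s=1.0e-9)
assert np.all(np.abs(out) < 1.0e-4)
```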
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien@berens>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
"""
import os
import unittest
import numpy as np
from impdar.lib.RadarData import RadarData
from impdar.lib.NoInitRadarData import NoInitRadarData
from impdar.lib.RadarData._RadarDataSaving import CONVERSIONS_ENABLED
from impdar.lib.RadarFlags import RadarFlags
from impdar.lib.Picks import Picks
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestRadarDataSaving(unittest.TestCase):
def test_WriteNoFlags(self):
rd = NoInitRadarData()
rd.save(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
os.remove(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
def testWriteWithFlags(self):
rd = NoInitRadarData()
rd.flags = RadarFlags()
rd.save(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
def testWriteWithPicksBlank(self):
rd = NoInitRadarData()
rd.picks = Picks(rd)
rd.save(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
self.assertTrue(data.picks is not None)
self.assertTrue(data.picks.lasttrace is not None)
self.assertTrue(data.picks.lasttrace.tnum is None)
self.assertTrue(data.picks.samp1 is None)
self.assertTrue(data.picks.samp2 is None)
self.assertTrue(data.picks.samp3 is None)
def testWriteWithPicksFull(self):
rd = NoInitRadarData()
rd.picks = Picks(rd)
rd.picks.add_pick()
rd.save(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
data = RadarData(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
self.assertTrue(data.picks is not None)
self.assertTrue(data.picks.lasttrace is not None)
self.assertTrue(data.picks.samp1 is not None)
self.assertTrue(data.picks.samp2 is not None)
self.assertTrue(data.picks.samp3 is not None)
def test_WriteRead(self):
# We are going to create a really bad file (most info missing) and see if we recover it or get an error
rd = NoInitRadarData()
rd.save(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
RadarData(os.path.join(THIS_DIR, 'input_data', 'test_out.mat'))
def tearDown(self):
for fn in ['test_out.mat', 'test.shp', 'test.shx', 'test.prj', 'test.dbf']:
if os.path.exists(os.path.join(THIS_DIR, 'input_data', fn)):
os.remove(os.path.join(THIS_DIR, 'input_data', fn))
class TestRadarDataExports(unittest.TestCase):
def test__get_pick_targ_infoAutoselect(self):
# Make sure that we are selecting the proper output format
rd = NoInitRadarData()
# With no depth, we should output travel time
rd.nmo_depth = None
out_name, tout = rd._get_pick_targ_info(None)
self.assertEqual(out_name, 'twtt')
self.assertTrue(np.all(tout == rd.travel_time))
# With depth, return depth
rd.nmo_depth = np.arange(len(rd.travel_time)) * 1.1
out_name, tout = rd._get_pick_targ_info(None)
self.assertEqual(out_name, 'depth')
self.assertTrue(np.all(tout == rd.nmo_depth))
def test__get_pick_targ_infoBadSelections(self):
# Make sure that we are selecting the proper output format
rd = NoInitRadarData()
# Try depth when there is no depth
rd.nmo_depth = None
with self.assertRaises(AttributeError):
out_name, tout = rd._get_pick_targ_info('depth')
# Elevation with no depth or elevation
with self.assertRaises(AttributeError):
out_name, tout = rd._get_pick_targ_info('elev')
# Elevation with depth but not elevation
rd.nmo_depth = np.arange(len(rd.travel_time)) * 1.1
with self.assertRaises(AttributeError):
out_name, tout = rd._get_pick_targ_info('elev')
# Now try to pass a bad value for the selection
with self.assertRaises(ValueError):
out_name, tout = rd._get_pick_targ_info('dummy')
with self.assertRaises(ValueError):
out_name, tout = rd._get_pick_targ_info(['dummy', 'snum'])
def test__get_pick_targ_infoGoodSelections(self):
# Make sure that we are selecting the proper output format
rd = NoInitRadarData()
rd.nmo_depth = np.arange(len(rd.travel_time)) * 1.1
rd.elev = np.arange(rd.tnum) * 1001
out_name, tout = rd._get_pick_targ_info('twtt')
self.assertEqual(out_name, 'twtt')
self.assertTrue(np.all(tout == rd.travel_time))
out_name, tout = rd._get_pick_targ_info('depth')
self.assertEqual(out_name, 'depth')
self.assertTrue(np.all(tout == rd.nmo_depth))
out_name, tout = rd._get_pick_targ_info('snum')
self.assertEqual(out_name, 'snum')
self.assertTrue(np.all(tout == np.arange(rd.snum)))
out_name, tout = rd._get_pick_targ_info('elev')
self.assertEqual(out_name, 'elev')
@unittest.skipIf(not CONVERSIONS_ENABLED, 'No GDAL on this version')
def test_output_shp_nolayers(self):
rd = NoInitRadarData()
rd.output_shp(os.path.join(THIS_DIR, 'input_data', 'test.shp'))
@unittest.skipIf(not CONVERSIONS_ENABLED, 'No GDAL on this version')
def test_output_shp_picks(self):
# Make sure that we are selecting the proper output format
rd = NoInitRadarData()
rd.nmo_depth = np.arange(len(rd.travel_time)) * 1.1
rd.elev = np.arange(rd.tnum) * 1001
rd.picks = Picks(rd)
rd.picks.add_pick()
# First, export with NaNs, both with normal field (depth) and elev
rd.picks.samp2[:] = np.nan
rd.output_shp(os.path.join(THIS_DIR, 'input_data', 'test0.shp'))
rd.output_shp(os.path.join(THIS_DIR, 'input_data', 'test1.shp'), target_out='elev')
# Fill in NaNs
rd.picks.samp2[:] = 1
rd.output_shp(os.path.join(THIS_DIR, 'input_data', 'test2.shp'))
rd.output_shp(os.path.join(THIS_DIR, 'input_data', 'test3.shp'), target_out='elev')
# Check geometry
rd.output_shp(os.path.join(THIS_DIR, 'input_data', 'test4.shp'), t_srs='EPSG:3413')
@unittest.skipIf(CONVERSIONS_ENABLED, 'Version has GDAL, just checking we fail without')
def test_output_shp_nolayers_nogdal(self):
rd = NoInitRadarData()
with self.assertRaises(ImportError):
rd.output_shp(os.path.join(THIS_DIR, 'input_data', 'test.shp'))
def test_output_csv(self):
# Make sure that we are selecting the proper output format
rd = NoInitRadarData()
rd.nmo_depth = np.arange(len(rd.travel_time)) * 1.1
rd.elev = np.arange(rd.tnum) * 1001
rd.picks = Picks(rd)
rd.picks.add_pick()
# First, export with NaNs
rd.picks.samp2[:] = np.nan
rd.output_csv(os.path.join(THIS_DIR, 'input_data', 'test.csv'))
with open(os.path.join(THIS_DIR, 'input_data', 'test.csv')) as fin:
lines = fin.readlines()
# we should have four entries: lat, lon, trace, and the one pick in header and data
self.assertEqual(len(lines[0].split(',')), 4)
self.assertEqual(len(lines[1].split(',')), 4)
# we should have a row per trace, plus a header
self.assertEqual(len(lines), rd.tnum + 1)
# The final header should be in terms of depth
self.assertTrue(lines[0].index('depth') > 0)
# Fill in NaNs
rd.picks.samp2[:] = 1
rd.output_csv(os.path.join(THIS_DIR, 'input_data', 'test.csv'))
with open(os.path.join(THIS_DIR, 'input_data', 'test.csv')) as fin:
lines = fin.readlines()
# we should have four entries: lat, lon, trace, and the one pick in header and data
self.assertEqual(len(lines[0].split(',')), 4)
self.assertEqual(len(lines[1].split(',')), 4)
# we should have a row per trace, plus a header
self.assertEqual(len(lines), rd.tnum + 1)
# The final header should be in terms of depth
self.assertTrue(lines[0].index('depth') > 0)
# Check output target for elevation, which is the only weird one
rd.output_csv(os.path.join(THIS_DIR, 'input_data', 'test.csv'), target_out='elev')
with open(os.path.join(THIS_DIR, 'input_data', 'test.csv')) as fin:
lines = fin.readlines()
# we should have four entries: lat, lon, trace, and the one pick in header and data
self.assertEqual(len(lines[0].split(',')), 4)
self.assertEqual(len(lines[1].split(',')), 4)
# we should have a row per trace, plus a header
self.assertEqual(len(lines), rd.tnum + 1)
# The final header should be in terms of elev
self.assertTrue(lines[0].index('elev') > 0)
def test_output_csv_nolayers(self):
rd = NoInitRadarData()
rd.output_csv(os.path.join(THIS_DIR, 'input_data', 'test.csv'))
with open(os.path.join(THIS_DIR, 'input_data', 'test.csv')) as fin:
lines = fin.readlines()
# we should only have three entries: lat, lon, trace in header and data
self.assertEqual(len(lines[0].split(',')), 3)
self.assertEqual(len(lines[1].split(',')), 3)
# we should have a row per trace, plus a header
self.assertEqual(len(lines), rd.tnum + 1)
def tearDown(self):
for i in range(6):
for fn in ['test_out.mat', 'test{:d}.shp'.format(i), 'test{:d}.shx'.format(i), 'test{:d}.prj'.format(i), 'test{:d}.dbf'.format(i), 'test.csv']:
if os.path.exists(os.path.join(THIS_DIR, 'input_data', fn)):
os.remove(os.path.join(THIS_DIR, 'input_data', fn))
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dal22 <dal22@loki.ess.washington.edu>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
"""
import os
import unittest
import numpy as np
from impdar.lib.RadarFlags import RadarFlags
class TestFlags(unittest.TestCase):
def setUp(self):
self.rdf = RadarFlags()
def test_BoolOutputConversion(self):
# Make sure the value is as expected
self.rdf.reverse = False
out = self.rdf.to_matlab()
self.assertFalse(out['reverse'])
self.rdf.rgain = True
out = self.rdf.to_matlab()
self.assertTrue(out['rgain'])
for attr in self.rdf.attrs:
self.assertTrue(attr in out)
def test_InputConversion(self):
in_flags_bad_format = {'agc': 0,
'batch': 0,
'bpass': np.array([0., 0., 0.]),
'crop': np.array([0., 0., 0.]),
'elev': 0,
'hfilt': np.array([0., 0.]),
'interp': np.array([0., 0.]),
'mig': 0,
'nmo': np.array([0., 0.]),
'restack': 0,
'reverse': 0,
'rgain': 0}
# in_flags_random_arg = {'unknown_val': False}
in_flags_bad = {'reverse': True}
with self.assertRaises(KeyError):
self.rdf.from_matlab(in_flags_bad)
with self.assertRaises(TypeError):
self.rdf.from_matlab(in_flags_bad_format)
# self.rdf.from_matlab(in_flags_random_arg)
def tearDown(self):
del self.rdf
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL3.0 license.
"""
Make sure that we can successfully read ramac/mala input files
"""
import os
import unittest
import numpy as np
from impdar.lib.load import load_ramac
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestRAMAC(unittest.TestCase):
def test_load_ramac_withgps(self):
a = load_ramac.load_ramac(os.path.join(THIS_DIR, 'input_data', 'ten_col.rd3'))
b = load_ramac.load_ramac(os.path.join(THIS_DIR, 'input_data', 'ten_col.rad'))
c = load_ramac.load_ramac(os.path.join(THIS_DIR, 'input_data', 'ten_col'))
self.assertTrue(np.all(a.data == b.data))
self.assertTrue(np.all(a.data == c.data))
def test_load_ramac_nogps(self):
load_ramac.load_ramac(os.path.join(THIS_DIR, 'input_data', 'ten_col_nogps.rd3'))
if __name__ == '__main__':
unittest.main()
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2019 dlilien <dlilien90@gmail.com>
#
# Distributed under terms of the GNU GPL-3.0 license.
"""
Test the basics of RadarData
"""
import os
import unittest
import pytest
import numpy as np
from impdar.lib.NoInitRadarData import NoInitRadarData
from impdar.lib.load.load_segy import load_segy, SEGY
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
class TestSEGY(unittest.TestCase):
@unittest.skipIf(not SEGY, 'No SEGY on this version')
def test_ReadSucceeds(self):
pytest.importorskip('segyio')
load_segy(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200.segy'))
@unittest.skipIf(not SEGY, 'No SEGY on this version')
def test_WriteSucceeds(self):
pytest.importorskip('segyio')
data = load_segy(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200.segy'))
data.save_as_segy(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200_resave.segy'))
@unittest.skipIf(not SEGY, 'No SEGY on this version')
def test_ReadWriteRead(self):
pytest.importorskip('segyio')
data = load_segy(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200.segy'))
data.save_as_segy(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200_resave.segy'))
data2 = load_segy(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200_resave.segy'))
self.assertEqual(data.data.shape, data2.data.shape)
self.assertTrue(np.allclose(data.data, data2.data))
@unittest.skipIf(SEGY, 'SEGY on this version, only a graceful failure test')
def test_SaveFails(self):
data = NoInitRadarData()
with self.assertRaises(ImportError):
data.save_as_segy(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200_resave.segy'))
def tearDown(self):
if os.path.exists(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200_resave.segy')):
os.remove(os.path.join(THIS_DIR, 'input_data', 'shots0001_0200_resave.segy'))
if __name__ == '__main__':
unittest.main()
#!/usr/bin/env bash
# script for installing qt5
# pulled from https://gist.github.com/hsercanatli/9773597f58a9961b558b58d2f0cffe6c
sudo add-apt-repository --yes ppa:ubuntu-sdk-team/ppa
sudo apt-get update -qq
# Install Qt5, QtMultimedia and QtSvg
sudo apt-get install -qq qtdeclarative5-dev libqt5svg5-dev qtmultimedia5-dev
export QMAKE=/usr/lib/x86_64-linux-gnu/qt5/bin/qmake
# Library versions
PYQT_VERSION=5.12.8
SIP_VERSION=4.19
# Install sip
wget --retry-connrefused https://sourceforge.net/projects/pyqt/files/sip/sip-$SIP_VERSION/sip-$SIP_VERSION.tar.gz
tar -xzf sip-$SIP_VERSION.tar.gz
cd sip-$SIP_VERSION
python configure.py
make
sudo make install
cd ..
# Install PyQt5
export PYTHONPATH=$PYTHONPATH:$HOME/PyQt5_install-$PYQT_VERSION
python -c 'import PyQt5'
if [ "$?" -eq "0" ]; then
echo "PyQt5 imported"
else
wget --retry-connrefused https://sourceforge.net/projects/pyqt/files/PyQt5/PyQt-$PYQT_VERSION/PyQt5_gpl-$PYQT_VERSION.tar.gz
tar -xzf PyQt5_gpl-$PYQT_VERSION.tar.gz
cd PyQt5_gpl-$PYQT_VERSION
python configure.py --confirm-license --qmake=/usr/lib/x86_64-linux-gnu/qt5/bin/qmake --destdir $HOME/PyQt5_install-$PYQT_VERSION
make
sudo make install
fi
python -c 'from PyQt5 import QtCore'
if [ "$?" -eq "0" ]; then
echo "PyQt5 imported"
fi
#! /bin/sh
#
# install_su.sh
# Copyright (C) 2021 dlilien <dlilien@hozideh>
#
# Distributed under terms of the MIT license.
#
sudo apt-get install gfortran
export thisdir=$PWD
cd ..
export CWPROOT=$PWD/SeisUnix
git clone https://github.com/JohnWStockwellJr/SeisUnix.git
cd $CWPROOT/src
mv configs/Makefile.config_Linux_x86_64 Makefile.config
touch LICENSE_44R14_ACCEPTED
touch MAILHOME_44R14
echo y | make install
cd $thisdir
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
Some portions of this code (the C sources for the migration routines) are distributed under
a different license. That code is modified from SeisUnix
(https://github.com/JohnWStockwellJr/SeisUnix);
its license is reproduced below:
This file is property of the Colorado School of Mines.
Copyright 2008, Colorado School of Mines,
All rights reserved.
Redistribution and use in source and binary forms, with or
without modification, are permitted provided that the following
conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of the Colorado School of Mines nor the names of
its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
Warranty Disclaimer:
THIS SOFTWARE IS PROVIDED BY THE COLORADO SCHOOL OF MINES AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COLORADO SCHOOL OF MINES OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Export Restriction Disclaimer:
We believe that CWP/SU: Seismic Un*x is a low technology product that does
not appear on the Department of Commerce CCL list of restricted exports.
Accordingly, we believe that our product meets the qualifications of
an ECCN (export control classification number) of EAR99 and we believe
it fits the qualifications of NRR (no restrictions required), and
is thus not subject to export restrictions of any variety.
Approved Reference Format:
In publications, please refer to SU as per the following example:
Cohen, J. K. and Stockwell, Jr. J. W., (200_), CWP/SU: Seismic Un*x
Release No. __: an open source software package for seismic
research and processing,
Center for Wave Phenomena, Colorado School of Mines.
Articles about SU in peer-reviewed journals:
Saeki, T., (1999), A guide to Seismic Un*x (SU)(2)---examples of data processing (part 1), data input and preparation of headers, Butsuri-Tansa (Geophysical Exploration), vol. 52, no. 5, 465-477.
Stockwell, Jr. J. W. (1999), The CWP/SU: Seismic Un*x Package, Computers and Geosciences, May 1999.
Stockwell, Jr. J. W. (1997), Free Software in Education: A case study of CWP/SU: Seismic Un*x, The Leading Edge, July 1997.
Templeton, M. E., Gough, C.A., (1998), Web Seismic Un*x: Making seismic reflection processing more accessible, Computers and Geosciences.
Acknowledgements:
SU stands for CWP/SU:Seismic Un*x, a processing line developed at Colorado
School of Mines, partially based on Stanford Exploration Project (SEP)
software.
nose
matplotlib>=2.0.0
numpy>=1.12.0
scipy>=0.19.1
segyio
h5py