hiPhive - pypi Package Compare versions

Comparing version 1.0 to 1.1 (+22 / -8)
.gitlab-ci.yml

@@ -1,2 +0,2 @@

image: $CI_REGISTRY/materials-modeling/$CI_PROJECT_NAME/ci
image: $CI_REGISTRY/materials-modeling/$CI_PROJECT_NAME/cicd

@@ -51,8 +51,6 @@ variables:

test_examples:
.test_examples:
stage: test
tags:
- linux
only:
- schedules
needs:

@@ -75,13 +73,29 @@ - build:linux

test_notebooks:
test_examples:manual:
extends: .test_examples
when: manual
test_examples:schedules:
extends: .test_examples
only:
- schedules
.test_notebooks:
stage: test
tags:
- linux
only:
- schedules
needs:
- build:linux
script:
- pytest --nbmake $(find examples/ -name '*.ipynb')
- pytest --nbmake --nbmake-timeout=3600 $(find examples/ -name '*.ipynb')
test_notebooks:manual:
extends: .test_notebooks
when: manual
test_notebooks:schedules:
extends: .test_notebooks
only:
- schedules
style_check:

@@ -88,0 +102,0 @@ stage: test

@@ -36,4 +36,19 @@ .. _cutoffs_and_cluster_filters:

by the number of unique lattice sites in the cluster.
In order to use a cutoff matrix in hiphive one must use the :class:`Cutoffs <hiphive.cutoffs.Cutoffs>` object.
A simple example is shown below, in which a third-order :class:`ClusterSpace <hiphive.ClusterSpace>` is constructed that includes only two-body terms.
>>> from ase.build import bulk
>>> from hiphive import ClusterSpace
>>> from hiphive.cutoffs import Cutoffs
>>> prim = bulk('Al', a=4.0)
>>> cutoff_matrix = [
... [5.0, 5.0], # 2-body
... [0.0, 0.0]] # 3-body
>>> cutoffs = Cutoffs(cutoff_matrix)
>>> cs = ClusterSpace(prim, cutoffs)
While the majority of parameters in a higher-order FCP are typically associated with three-body and four-body interactions, these interactions are usually much weaker and less relevant than the two-body ones.
The usage of a cutoff matrix provides a more fine-grained approach and can thus greatly reduce the number of parameters in a :class:`ForceConstantPotential <hiphive.ForceConstantPotential>`, simplifying model construction and improving computational performance.
Cluster filters

@@ -46,4 +61,4 @@ ---------------

cluster `(i,j,k,...)` is to return True or False depending on whether
the cluster should be kept or not. The ``BaseClusterFilter`` code
looks like this::
the cluster should be kept or not.
The :class:`BaseClusterFilter <hiphive.cutoffs.BaseClusterFilter>` code looks like this::

@@ -90,4 +105,3 @@ class BaseClusterFilter:

Please note that the cutoffs from the ``Cutoffs`` object are always
enforced first, *after* which the cluster filter is applied.
Please note that the cutoffs from the :class:`Cutoffs <hiphive.cutoffs.Cutoffs>` object are always enforced first, *after* which the cluster filter is applied.
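The section above describes writing a custom cluster filter that receives a cluster `(i,j,k,...)` and returns True or False. A minimal self-contained sketch of that pattern follows; the base class here is a plain-Python stand-in for :class:`BaseClusterFilter <hiphive.cutoffs.BaseClusterFilter>`, whose ``setup``/``__call__`` interface is assumed, not quoted from the library:

```python
class BaseClusterFilter:
    """Stand-in base class: keeps every cluster by default."""

    def setup(self, atoms):
        # In hiphive the filter is handed the structure before use.
        self.atoms = atoms

    def __call__(self, cluster):
        # cluster is a tuple of site indices; return True to keep it.
        return True


class PairsOnlyFilter(BaseClusterFilter):
    """Keep only clusters involving at most two distinct sites."""

    def __call__(self, cluster):
        return len(set(cluster)) <= 2


f = PairsOnlyFilter()
print(f((0, 3)))     # True: a pair is kept
print(f((0, 1, 2)))  # False: three distinct sites are discarded
```

Remember that, as stated above, the filter only ever sees clusters that already passed the ``Cutoffs`` object.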

@@ -94,0 +108,0 @@

@@ -6,2 +6,7 @@ .. index:: FAQ

Here are answers to some frequently asked questions.
Feel free to also read through previously asked questions by users on `matsci.org <https://matsci.org/hiphive>`_ and in the `gitlab issue tracker <https://gitlab.com/materials-modeling/hiphive/-/issues?sort=updated_desc&state=all&label_name[]=User>`_.
Failing tests

@@ -96,3 +101,3 @@ --------------

from ase.io import read
from hiphive import ClusterSpace, ForceConstantPotential
from hiphive import ClusterSpace, ForceConstantPotential, enforce_rotational_sum_rules
from hiphive.cutoffs import estimate_maximum_cutoff

@@ -99,0 +104,0 @@ from hiphive.utilities import extract_parameters

@@ -8,2 +8,4 @@ Input/output and logging

Input and output
----------------

@@ -19,5 +21,16 @@ .. index::

.. index::
single: Function reference; Input/output GPUMD
.. automodule:: hiphive.input_output.gpumd
:members:
:undoc-members:
:noindex:
.. index::
single: Function reference; Logging
Logging
--------
.. autofunction:: hiphive.input_output.logging_tools.set_config
:noindex:

@@ -23,11 +23,9 @@ Other functions

Utilities
---------
MD tools
--------
.. automodule:: hiphive.utilities
:members:
:undoc-members:
.. autofunction:: hiphive.md_tools.spectral_energy_density.compute_sed
:noindex:
Enforcing rotational sum rules

@@ -38,1 +36,9 @@ ------------------------------

:noindex:
Utilities
---------
.. automodule:: hiphive.utilities
:members:
:undoc-members:

@@ -68,3 +68,4 @@ .. _tutorial_prepare_reference_data:

with randomized displacements. A more detailed discussion of this subject can
be found :ref:`here <advanced_topics_structure_generation>`.
be found :ref:`here <advanced_topics_structure_generation>`, and we recommend
looking into the phonon-rattle approach for generating large but physical displacements.

@@ -71,0 +72,0 @@ .. warning::

@@ -5,2 +5,3 @@ # Base image

# Base packages
# git is used in hiphive-testing and is included here to avoid maintaining two near-identical images
RUN \

@@ -10,2 +11,3 @@ apt-get update -qy && \

apt-get install -qy \
git \
graphviz \

@@ -17,3 +19,3 @@ pandoc \

RUN \
conda install -c conda-forge phono3py
conda install -c conda-forge phono3py=2.0.0

@@ -29,2 +31,3 @@ RUN \

pytest \
setuptools_scm \
twine \

@@ -38,5 +41,5 @@ xdoctest

h5py \
numba \
'numpy<1.21' \
sympy \
numba>=0.55 \
'numpy<1.22' \
sympy>=1.1 \
&& \

@@ -46,3 +49,3 @@ pip3 install \

scikit-learn \
scipy \
scipy>=1.0.0 \
spglib

@@ -58,4 +61,8 @@

cloud_sptheme \
nbsphinx
nbsphinx \
&& \
pip3 install --upgrade \
jinja2==3.0.3
# Packages for running examples

@@ -62,0 +69,0 @@ RUN \

@@ -12,7 +12,5 @@ """

from hiphive.structure_generation import generate_mc_rattled_structures
from hiphive.utilities import prepare_structures
# parameters
structures_fname = 'rattled_structures.extxyz'
n_structures = 5

@@ -24,10 +22,15 @@ cell_size = 4

# setup
atoms_ideal = bulk('Ni', cubic=True).repeat(cell_size)
calc = EMT()
prim = bulk('Ni', cubic=True)
atoms_ideal = prim.repeat(cell_size)
# generate structures
structures = generate_mc_rattled_structures(atoms_ideal, n_structures, rattle_std, minimum_distance)
structures = prepare_structures(structures, atoms_ideal, calc)
for atoms in structures:
atoms.calc = EMT()
atoms.get_forces()
# save structures
write(structures_fname, structures)
write('prim.extxyz', prim)
write('supercell_ideal.extxyz', atoms_ideal)
write('supercells_rattled.extxyz', structures)

@@ -9,10 +9,14 @@ """

from hiphive import ClusterSpace, StructureContainer, ForceConstantPotential
from hiphive.utilities import prepare_structures
from trainstation import Optimizer
# read structures containing displacements and forces
structures = read('rattled_structures.extxyz@:')
prim = read('prim.extxyz')
atoms_ideal = read('supercell_ideal.extxyz')
rattled_structures = read('supercells_rattled.extxyz', index=':')
# set up cluster space
cutoffs = [5.0, 4.0, 4.0]
cs = ClusterSpace(structures[0], cutoffs)
cs = ClusterSpace(prim, cutoffs)
print(cs)

@@ -22,2 +26,3 @@ cs.print_orbits()

# ... and structure container
structures = prepare_structures(rattled_structures, atoms_ideal)
sc = StructureContainer(cs)

@@ -24,0 +29,0 @@ for structure in structures:

Metadata-Version: 2.1
Name: hiphive
Version: 1.0
Version: 1.1
Summary: High-order force constants for the masses

@@ -5,0 +5,0 @@ Home-page: http://hiphive.materialsmodeling.org/

ase
h5py
numba
numpy<1.21,>=1.12
numba>=0.55
numpy<1.22,>=1.18
pandas
scipy
scipy>=1.0.0
scikit-learn

@@ -8,0 +8,0 @@ spglib

@@ -5,3 +5,2 @@ .coveragerc

.gitlab-ci.yml
CHANGELOG
CONTRIBUTING.md

@@ -11,3 +10,2 @@ Dockerfile

README.rst
mypkg.sh
setup.py

@@ -96,3 +94,2 @@ .gitlab/issue_templates/Bug.md

doc/source/tutorial/_static/phonon_frequencies_gamma_nickel.svg
examples/zbl.py
examples/advanced_topics/anharmonic_energy_surface/1_prepare_data.py

@@ -99,0 +96,0 @@ examples/advanced_topics/anharmonic_energy_surface/2_setup_containers.py

@@ -22,3 +22,3 @@ """

'Paul Erhart']
__version__ = '1.0'
__version__ = '1.1'
__all__ = ['ClusterSpace',

@@ -25,0 +25,0 @@ 'StructureContainer',

@@ -176,25 +176,27 @@ """

# TODO: Fix tolerance
# Loop over all sites in the basis
for site, base in enumerate(basis):
# If the scaled position belongs to this site, the offset is the
# difference in scaled coordinates and should be integer
offset = np.subtract(spos, base)
# The diff is the difference between the offset vector and the nearest
# integer vector.
diff = offset - np.round(offset, 0)
# It should be close to the null vector if this is the correct site.
if np.linalg.norm(diff) < tol:
# If the difference is less than the tol make the offset integers
offset = np.round(offset, 0).astype(int)
# This should be the correct atom
atom = Atom(site, offset)
# Just to be sure we check that the atom actually produces the
# input spos given the input basis
s = ('Atom=[{},{}] with basis {} != {}'
.format(atom.site, atom.offset, basis, spos))
assert np.linalg.norm(spos - atom_to_spos(atom, basis)) < tol, s
return atom
# If no atom was found we throw an error
s = '{} not compatible with {} and tolerance {}'.format(spos, basis, tol)
# TODO: Should we throw more explicit error?
raise Exception(s)
# If needed, convert inputs to arrays to make use of numpy vectorization
spos = np.asarray(spos)
basis = np.asarray(basis)
# If the scaled position belongs to this site, the offset is the
# difference in scaled coordinates and should be integer
offsets = spos - basis
# The diff is the difference between the offset vector and the nearest
# integer vector.
diffs = offsets - np.round(offsets, 0)
# It should be close to the null vector if this is the correct site.
match_indices = np.nonzero(np.linalg.norm(diffs, axis=1) < tol)[0]
# If no atom was found, or more than one atom was found, we throw an error
if len(match_indices) != 1:
raise ValueError(f'{spos} not compatible with {basis} and tolerance {tol}')
# This should be the correct atom
site = match_indices[0]
# If the difference is less than the tol make the offset integers
offset = np.rint(offsets[site])
atom = Atom(site, offset)
# Just to be sure we check that the atom actually produces the
# input spos given the input basis
s = ('Atom=[{},{}] with basis {} != {}'
.format(atom.site, atom.offset, basis, spos))
assert np.linalg.norm(spos - atom_to_spos(atom, basis)) < tol, s
return atom
"""
Functionality for enforcing rotational sum rules
"""
import numpy as np
from typing import List
from sklearn.linear_model import Ridge
import numpy as np
from scipy.sparse import coo_matrix
from .utilities import SparseMatrix
from scipy.sparse import coo_matrix, lil_matrix
from ..cluster_space import ClusterSpace
def enforce_rotational_sum_rules(cs, parameters, sum_rules=None, alpha=1e-6, **ridge_kwargs):
def enforce_rotational_sum_rules(cs: ClusterSpace,
parameters: np.ndarray,
sum_rules: List[str] = None,
alpha: float = 1e-6,
**ridge_kwargs: dict) -> np.ndarray:
""" Enforces rotational sum rules by projecting parameters.

@@ -19,13 +24,13 @@

----------
cs : ClusterSpace
cs
the underlying cluster space
parameters : numpy.ndarray
parameters
parameters to be constrained
sum_rules : list(str)
sum_rules
type of sum rules to enforce; possible values: 'Huang', 'Born-Huang'
alpha : float
alpha
hyperparameter to the ridge regression algorithm; keyword argument
passed to the optimizer; larger values specify stronger regularization,
i.e. less correction but higher stability [default: 1e-6]
ridge_kwargs : dict
i.e., less correction but higher stability [default: 1e-6]
ridge_kwargs
kwargs to be passed to sklearn Ridge
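The projection performed by ``enforce_rotational_sum_rules`` can be illustrated on a toy problem: treat the sum rules as a linear constraint matrix and find a small ridge-regularized correction to the parameters. Everything below (``A``, ``x0``, ``alpha``) is an illustrative stand-in; the real function builds the constraint matrix from the ClusterSpace and delegates to sklearn's Ridge.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 5))   # two toy "sum rules"; A @ x = 0 for compliant x
x0 = rng.normal(size=5)       # unconstrained parameters

alpha = 1e-6                  # regularization strength (cf. the alpha argument)
# ridge solution of  min_d ||A (x0 + d)||^2 + alpha ||d||^2
d = -np.linalg.solve(A.T @ A + alpha * np.eye(5), A.T @ (A @ x0))
x = x0 + d

print(np.abs(A @ x0).max())   # rule violation before the correction
print(np.abs(A @ x).max())    # nearly zero afterwards
```

A larger ``alpha`` shrinks the correction ``d`` at the expense of a larger residual rule violation, which matches the "less correction but higher stability" wording in the docstring.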

@@ -112,8 +117,2 @@

for i, M in enumerate(Ms):
row, col, data = [], [], []
for r, c, v in M.row_list():
row.append(r)
col.append(c)
data.append(np.float64(v))
M = coo_matrix((data, (row, col)), shape=M.shape)
M = M.dot(cvs_trans)

@@ -151,3 +150,3 @@ M = M.toarray()

m = SparseMatrix(3**4, parameter_map[-1][-1] + 1, 0)
m = np.zeros((parameter_map[-1][-1] + 1, 3**4))

@@ -171,4 +170,5 @@ def R(i, j):

Cij -= Cij.transpose([2, 3, 0, 1])
for k in range(3**4):
m[k, et_index] += Cij.flat[k]
m[et_index] += Cij.flat
m = coo_matrix(m.transpose())
return m

@@ -179,6 +179,8 @@

constraints = []
# Use scipy list-of-lists sparse matrix for good tradeoff between
# restructuring, indexing/slicing and memory footprint
M = lil_matrix((len(prim) * 3**3, parameter_map[-1][-1] + 1))
for i in range(len(prim)):
m = SparseMatrix(3**3, parameter_map[-1][-1] + 1, 0)
# Use smaller numpy arrays for speedy arithmetic
m = np.zeros((parameter_map[-1][-1] + 1, 3**3))
for j in range(len(atom_list)):

@@ -195,7 +197,6 @@ ets, orbit_index = lookup.get(tuple(sorted((i, j))), (None, None))

tmp -= tmp.transpose([0, 2, 1])
for k in range(3**3):
m[k, et_index] += tmp.flat[k]
constraints.append(m)
m[et_index] += tmp.flat
M[i*3**3:(i+1)*3**3, :] = m.transpose()
M = SparseMatrix.vstack(*constraints)
return M
# Convert lil_matrix to coo_matrix
return M.tocoo()

@@ -8,3 +8,2 @@ """

from collections import defaultdict
from sympy.core import S
from ..input_output.logging_tools import Progress

@@ -108,2 +107,4 @@ from ase import Atoms

""" This is a slightly patched version which also uses the sparse rref
and is faster due to up-front creation of empty SparseMatrix
vectors instead of conversion of the finished vectors
"""

@@ -121,9 +122,12 @@ if (max(*self.shape) < 10): # If matrix small use the dense version

# to 0. Then, we will use back substitution to solve the system
vec = [S.Zero]*self.cols
vec[free_var] = S.One
# initialize each vector as an empty SparseMatrix
vec = self._new(self.cols, 1, 0)
vec[free_var] = 1
for piv_row, piv_col in enumerate(pivots):
vec[piv_col] -= reduced[piv_row, free_var]
basis.append(vec)
return [self._new(self.cols, 1, b) for b in basis]
return basis
def to_array(self):

@@ -130,0 +134,0 @@ """ Cast SparseMatrix instance to numpy array """

@@ -313,3 +313,6 @@ """

def get_fcs_sensing(self, fcs: ForceConstants) -> Tuple[np.ndarray, np.ndarray]:
def get_fcs_sensing(self,
fcs: ForceConstants,
sparse: bool = False) \
-> Union[Tuple[np.ndarray, np.ndarray], Tuple[coo_matrix, np.ndarray]]:
""" Creates a fit matrix from force constants directly.

@@ -353,3 +356,5 @@

M = vstack(M)
M = M.dot(self.cs._cvs).toarray()
M = M.dot(self.cs._cvs)
if not sparse:
M = M.toarray()
F = np.concatenate(F)

@@ -356,0 +361,0 @@ return M, F

@@ -256,5 +256,9 @@ """

atoms. """
for cluster in fc_dict.keys():
if not all(0 <= i < self.n_atoms for i in cluster):
raise ValueError('Cluster {} not in supercell'.format(cluster))
# Use flat numpy array for fast check
cluster_indices = np.concatenate([cluster for cluster in fc_dict.keys()])
if not np.all((0 <= cluster_indices) & (cluster_indices < self.n_atoms)):
# If the check failed, do slower list processing to find the erring cluster
for cluster in fc_dict.keys():
if not all(0 <= i < self.n_atoms for i in cluster):
raise ValueError('Cluster {} not in supercell'.format(cluster))
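The vectorized fast path above can be seen in isolation: flatten all cluster index tuples into one array and bounds-check them in a single numpy expression, falling back to per-cluster Python only on failure. ``fc_dict`` and ``n_atoms`` below are illustrative stand-ins:

```python
import numpy as np

fc_dict = {(0, 1): 'fc_01', (2, 3, 1): 'fc_231'}  # cluster -> force constant
n_atoms = 4

# Concatenate every index from every cluster key, then test bounds at once.
cluster_indices = np.concatenate([list(c) for c in fc_dict])
in_supercell = bool(np.all((0 <= cluster_indices) & (cluster_indices < n_atoms)))
print(in_supercell)  # True: every index lies in [0, n_atoms)
```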

@@ -261,0 +265,0 @@ @classmethod

@@ -124,3 +124,7 @@ import numpy as np

def write_fcp_txt(fname, path, n_types, max_order):
def write_fcp_txt(fname: str,
path: str,
n_types: int,
max_order: int,
heat_current_order: int = 2):
""" Write driver potential file for GPUMD.

@@ -130,23 +134,25 @@

----------
fname : str
fname
file name
path : str
path
path to directory with force constant file
n_types : int
n_types
number of atom types
max_order : int
max_order
maximum order of the force constant potential
heat_current_order
heat current order used in thermal conductivity
Format
------
Format is a simple file containing the following
fcp number_of_atom_types
highest_order
highest_force_order heat_current_order
path_to_force_constant_files
which in practice for a binary system with a sixth order model would mean
which in practice, for a binary system with a sixth-order model and
third-order heat currents, would mean
fcp 2
6
6 3
/path/to/your/folder

@@ -157,3 +163,3 @@ """

f.write('fcp {}\n'.format(n_types))
f.write('{}\n'.format(max_order))
f.write('{} {}\n'.format(max_order, heat_current_order))
f.write('{}'.format(path.rstrip('/'))) # without a trailing '/'
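The three-line layout documented above is simple enough to sketch standalone. The real writer is hiphive's ``write_fcp_txt``; ``fcp_driver_text`` below is a hypothetical helper that only reproduces the format for illustration:

```python
def fcp_driver_text(n_types, max_order, heat_current_order, path):
    """Return the GPUMD driver-file content described in the docstring."""
    return '\n'.join([
        'fcp {}'.format(n_types),
        '{} {}'.format(max_order, heat_current_order),
        path.rstrip('/'),  # no trailing '/', as in write_fcp_txt
    ])


# binary system, sixth-order model, third-order heat currents
print(fcp_driver_text(2, 6, 3, '/path/to/your/folder/'))
```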

@@ -160,0 +166,0 @@

@@ -52,2 +52,38 @@ """

def _get_forces_from_atoms(atoms: Atoms, calc=None) -> np.ndarray:
""" Try to get forces from an atoms object """
# Check if two calculators are available
if atoms.calc is not None and calc is not None:
raise ValueError('Atoms.calc is not None and calculator was provided')
# If calculator is provided as argument
if calc is not None:
atoms_tmp = atoms.copy()
atoms_tmp.calc = calc
forces_calc = atoms_tmp.get_forces()
if 'forces' in atoms.arrays:
if not np.allclose(forces_calc, atoms.get_array('forces')):
raise ValueError('Forces in atoms.arrays are different from the calculator forces')
return forces_calc
# If calculator is attached
if atoms.calc is not None:
if not isinstance(atoms.calc, SinglePointCalculator):
raise ValueError('atoms.calc is not a SinglePointCalculator')
forces_calc = atoms.get_forces()
if 'forces' in atoms.arrays:
if not np.allclose(forces_calc, atoms.get_array('forces')):
raise ValueError('Forces in atoms.arrays are different from the calculator forces')
return forces_calc
# No calculator attached or provided as argument, forces should therefore be in atoms.arrays
if 'forces' in atoms.arrays:
forces = atoms.get_array('forces')
else:
raise ValueError('Unable to find forces')
return forces
def prepare_structure(atoms: Atoms,

@@ -77,14 +113,4 @@ atoms_ideal: Atoms,

"""
# get forces
if 'forces' in atoms.arrays:
forces = atoms.get_array('forces')
elif calc is not None:
atoms_tmp = atoms.copy()
atoms_tmp.calc = calc
forces = atoms_tmp.get_forces()
elif isinstance(atoms.calc, SinglePointCalculator):
forces = atoms.get_forces()
else:
raise ValueError('Unable to find forces')
forces = _get_forces_from_atoms(atoms, calc=calc)

@@ -171,7 +197,6 @@ # setup new atoms

assert np.linalg.norm(atoms.cell - atoms_ref.cell) < 1e-6
dist_matrix = get_distances(
atoms.positions, atoms_ref.positions, cell=atoms_ref.cell, pbc=True)[1]
permutation = []
for i in range(len(atoms_ref)):
dist_row = dist_matrix[:, i]
dist_row = get_distances(
atoms.positions, atoms_ref.positions[i], cell=atoms_ref.cell, pbc=True)[1][:, 0]
permutation.append(np.argmin(dist_row))
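The row-by-row rewrite above trades one big N x N distance matrix for N small distance vectors, cutting peak memory. A plain-numpy sketch of the same matching logic, with periodic boundary conditions omitted (ASE's ``get_distances`` handles cell and pbc in the real code):

```python
import numpy as np

positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [2.0, 0.0, 0.0]])
reference = np.array([[2.1, 0.0, 0.0],
                      [0.1, 0.0, 0.0],
                      [1.1, 0.0, 0.0]])

permutation = []
for ref_pos in reference:
    # one distance vector per reference atom, never the full matrix
    dist_row = np.linalg.norm(positions - ref_pos, axis=1)
    permutation.append(int(np.argmin(dist_row)))
print(permutation)  # [2, 0, 1]: each reference atom maps to its nearest atom
```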

@@ -273,6 +298,7 @@

cs: ClusterSpace,
sanity_check: bool = True) -> Tuple[np.ndarray, np.ndarray, int, np.ndarray]:
sanity_check: bool = True,
lstsq_method: str = 'numpy') \
-> Tuple[np.ndarray, np.ndarray, int, np.ndarray]:
""" Extracts parameters from force constants.
TODO: Rename this function with more explanatory name?

@@ -282,3 +308,8 @@ This function can be used to extract parameters to create a

The return values come from NumPy's `lstsq function
<https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html>`_.
<https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html>`_
or from SciPy's `sparse lsqr function
<https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.lsqr.html>`_.
Using `lstsq_method='scipy'` might be faster and have a smaller memory footprint for large
systems, at the expense of some accuracy. This is due to the use of sparse matrices
and an iterative solver.

@@ -294,2 +325,5 @@ Parameters

the input fcs and the output fcs
lstsq_method
method to use when making a least squares fit of a ForceConstantModel to the given fcs,
allowed values are 'numpy' for `np.linalg.lstsq` or 'scipy' for `scipy.sparse.linalg.lsqr`

@@ -304,6 +338,17 @@ Returns

from .force_constant_potential import ForceConstantPotential
from scipy.sparse.linalg import lsqr
if lstsq_method not in ['numpy', 'scipy']:
raise ValueError('lstsq_method must be either numpy or scipy')
# extract the parameters
fcm = ForceConstantModel(fcs.supercell, cs)
parameters = np.linalg.lstsq(*fcm.get_fcs_sensing(fcs), rcond=None)[0]
# If the cluster space is large, a sparse least-squares solver is faster
if lstsq_method == 'numpy':
A, b = fcm.get_fcs_sensing(fcs, sparse=False)
parameters = np.linalg.lstsq(A, b, rcond=None)[0]
elif lstsq_method == 'scipy':
A, b = fcm.get_fcs_sensing(fcs, sparse=True)
# set minimal tolerances to maximize iterative least squares accuracy
parameters = lsqr(A, b, atol=0, btol=0, conlim=0)[0]
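The two ``lstsq_method`` branches can be compared on a toy system; ``A`` and ``b`` below are random stand-ins for the sensing matrix and target forces, not hiphive objects. For a small, well-conditioned problem the iterative sparse solver reproduces the dense solution closely:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 8))  # stand-in sensing matrix
b = rng.normal(size=50)       # stand-in force vector

# dense path, as with lstsq_method='numpy'
x_dense = np.linalg.lstsq(A, b, rcond=None)[0]
# iterative sparse path, as with lstsq_method='scipy'; minimal tolerances
# push lsqr as close as possible to the exact least-squares solution
x_sparse = lsqr(csr_matrix(A), b, atol=0, btol=0, conlim=0)[0]

print(np.linalg.norm(x_dense - x_sparse) < 1e-6)
```

For genuinely large, sparse sensing matrices the ``lsqr`` path avoids the dense conversion entirely, which is where the memory savings mentioned above come from.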

@@ -310,0 +355,0 @@ # calculate the relative force constant error

Metadata-Version: 2.1
Name: hiphive
Version: 1.0
Version: 1.1
Summary: High-order force constants for the masses

@@ -5,0 +5,0 @@ Home-page: http://hiphive.materialsmodeling.org/

@@ -36,6 +36,6 @@ #!/usr/bin/env python3

'h5py',
'numba',
'numpy>=1.12,<1.21',
'numba>=0.55',
'numpy>=1.18,<1.22', # imposed by numba
'pandas',
'scipy',
'scipy>=1.0.0', # imposed by numba
'scikit-learn',

@@ -42,0 +42,0 @@ 'spglib',

@@ -9,3 +9,4 @@ import numpy as np

def test_fcs_sensing():
tol = 1e-12
tols = [1e-12, 1e-10]
methods = ['numpy', 'scipy']

@@ -22,4 +23,4 @@ cutoffs = [5.0, 4.0]

fcs = fcp.get_force_constants(ideal)
fitted_parameters = extract_parameters(fcs, cs)
assert np.linalg.norm(fitted_parameters - parameters) < tol
for tol, method in zip(tols, methods):
fitted_parameters = extract_parameters(fcs, cs, lstsq_method=method)
assert np.linalg.norm(fitted_parameters - parameters) < tol

@@ -105,2 +105,17 @@ import unittest

# with forces as arrays, but atoms object is permuted
atoms = atoms_ideal.copy()
atoms.rattle(0.1)
forces_ref = np.random.random((N, 3))
atoms.new_array('forces', forces_ref)
p = np.arange(0, N, 1)
np.random.shuffle(p)
atoms = atoms[p]
atoms = prepare_structure(atoms, atoms_ideal)
self.assertIn('displacements', atoms.arrays)
np.testing.assert_almost_equal(atoms.arrays['forces'], forces_ref)
np.testing.assert_almost_equal(atoms.positions, atoms_ideal.positions)
# with SinglePointCalculator

@@ -126,2 +141,60 @@ atoms = atoms_ideal.copy()

# With both calculator attached and forces as array and they are the same
atoms = atoms_ideal.copy()
atoms.rattle(0.1)
forces_ref = np.random.random((N, 3))
spc = SinglePointCalculator(atoms, forces=forces_ref)
atoms.calc = spc
atoms.set_array('forces', forces_ref)
atoms = prepare_structure(atoms, atoms_ideal)
self.assertIn('displacements', atoms.arrays)
np.testing.assert_almost_equal(atoms.arrays['forces'], forces_ref)
np.testing.assert_almost_equal(atoms.positions, atoms_ideal.positions)
# With both calculator provided and forces as array and they are the same
atoms = atoms_ideal.copy()
atoms.rattle(0.1)
atoms.calc = EMT()
forces_ref = atoms.get_forces()
atoms.calc = None
atoms.set_array('forces', forces_ref)
atoms = prepare_structure(atoms, atoms_ideal, calc=EMT())
self.assertIn('displacements', atoms.arrays)
np.testing.assert_almost_equal(atoms.arrays['forces'], forces_ref)
np.testing.assert_almost_equal(atoms.positions, atoms_ideal.positions)
# check that error is raised if two calculators are given
atoms = atoms_ideal.copy()
atoms.rattle(0.1)
atoms.calc = EMT()
atoms.get_forces()
with self.assertRaises(ValueError):
prepare_structure(atoms, atoms_ideal, calc=EMT())
# check that error is raised if calculator attached and forces as array differ
atoms = atoms_ideal.copy()
atoms.rattle(0.1)
forces_ref = np.random.random((N, 3))
spc = SinglePointCalculator(atoms, forces=forces_ref)
atoms.calc = spc
atoms.set_array('forces', forces_ref + 0.1)
with self.assertRaises(ValueError):
prepare_structure(atoms, atoms_ideal)
# check that error is raised if calculator provided and forces as array differ
atoms = atoms_ideal.copy()
atoms.rattle(0.1)
atoms.calc = EMT()
forces_ref = atoms.get_forces()
atoms.calc = None
atoms.set_array('forces', forces_ref + 0.1)
with self.assertRaises(ValueError):
prepare_structure(atoms, atoms_ideal, calc=EMT())
# check that errors are raised if no forces are given
atoms = atoms_ideal.copy()
atoms.rattle(0.1)
with self.assertRaises(ValueError):
prepare_structure(atoms, atoms_ideal)
def test_find_permutation(self):

@@ -128,0 +201,0 @@ """ Test find_permutation function. """

0.7
---
* additional fitting methods (OMP, adaptive-LASSO, split-Bregman with pre-conditioning) !206 !223 !225
* support for modeling selection via AIC and BIC !225
* support for GPUMD format !200 !230
* refactored and improved handling of constraints !224
* changed the training set size default fraction from 0.75 to 0.9 !226
* improved documentation !207 !209 !219
* improved tests !211 !216 !217 !218 !229
* updated dependencies and compatibility with external libraries !210 !212 !220
* various fixes and smaller improvements !199 !205 !221 !222 !228 !231 !232
0.6
---
* added line search option for lambda-parameter in ARDR (default is no line search)
* added read/write functions for optimizer
* target values are now normalized during fitting when standardize=True
* adjusted to changes in ase 3.18 concerning Cell object
0.5
---
* IO functions for Optimizer
* updated rotational sum rules example
* metadata added to ForceConstantsPotential
* various small fixes to documentation
0.4.1
-----
* functionality for reconstructing force constants obtained from e.g., phonopy
* native rattle module
* ForceConstants object now supports read and write
* improved documentation
* improved Cutoffs object
* improved support for recursive feature elimination (RFE)
* code clean up
* overnight builds
0.4
---
* added cluster filter functionality
* fixed and improved IO functions for force constants parsing from phonopy, phono3py, and ShengBTE
* improved interface for optimization with recursive feature elimination
* ForceConstants API
* extended additional topics (including interfacing DFT codes with hiphive)
* bug fixes (related to numerical tolerance and others)
0.3
---
* large speed up of sensing matrix calculation
* self-consistent phonons
* spectral energy density
* new advanced tutorial topics
* smaller bug fixes and improvements
0.2
---
* much improved performance
* improved numerical stability
* improved memory management
* structure container storage uses data compression
* additional unit tests; increased code coverage
* improved code quality thanks to extensive refactoring
* bug fixes
from ase.build import bulk
from ase.md.verlet import VelocityVerlet
from ase.md.velocitydistribution import ZeroRotation, Stationary, MaxwellBoltzmannDistribution
from ase import units
from hiphive.calculators.zbl import ZBLCalculator
T = 300
dt = 0.1
size = 3
a = 1.0
cutoff = 3
skin = 1
steps = 10000
traj_name = 'zbl.traj'
atoms = bulk('H', crystalstructure='fcc', a=a, cubic=True).repeat(size)
atoms.calc = ZBLCalculator(cutoff=cutoff, skin=skin)
MaxwellBoltzmannDistribution(atoms, temperature_K=2*T, force_temp=True)
ZeroRotation(atoms)
Stationary(atoms)
dyn = VelocityVerlet(atoms, timestep=0.1*units.fs, logfile='-', trajectory=traj_name)
dyn.run(steps)
# More information available on
# https://packaging.python.org/tutorials/packaging-projects/
# https://packaging.python.org/guides/distributing-packages-using-setuptools/#packaging-your-project
# https://docs.python.org/3/extending/building.html
# clean up
rm -fr build/ dist/ *.egg-info/ tmp/
# prepare source distribution and wheel
python3 setup.py sdist
python3 setup.py bdist_wheel --universal
# FIRST TESTING
# upload to test.pypi.org
twine upload --repository-url https://test.pypi.org/legacy/ dist/*
# test install
python3 -m pip install \
--index-url https://test.pypi.org/simple/ \
--extra-index-url https://pypi.org/simple \
hiphive
# AFTER TESTING
# upload to actual index
#twine upload dist/*
