Running and writing tests¶
Every new function added to HyperSpy needs to be tested and documented.
Tests are short functions, found in hyperspy/tests, that call your functions under known conditions and check the outputs against known values. They should depend on as few other features as possible so that, when they break, we know exactly what caused it. Ideally, tests should be written at the same time as the code itself, as they are very convenient for checking outputs while coding. Writing tests can seem laborious, but you'll probably soon find that they are very important, as they force you to sanity-check everything you do.
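As a minimal sketch of what such a test looks like (the `rescale` function here is a made-up example, not part of HyperSpy):

```python
import numpy as np
import numpy.testing as npt


# Hypothetical function under test (not part of HyperSpy)
def rescale(data):
    """Scale an array linearly onto the [0, 1] range."""
    data = np.asarray(data, dtype=float)
    return (data - data.min()) / (data.max() - data.min())


# A short test: call the function under known conditions and
# check the output against known values
def test_rescale():
    npt.assert_allclose(rescale([0, 5, 10]), [0.0, 0.5, 1.0])
```

pytest discovers any function whose name starts with `test_` and reports a failure if an assertion inside it fails.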
HyperSpy uses the pytest library for testing. The tests reside in the hyperspy/tests folder.
First ensure pytest and its plugins are installed:
# If using a standard hyperspy install
pip install hyperspy[test]

# Or, from a hyperspy local development directory
pip install -e .[test]

# Or just installing the dependencies using conda
conda install -c conda-forge pytest pytest-mpl
To run them:
pytest --mpl --pyargs hyperspy
Or, from HyperSpy's project folder, simply:

pytest
Useful hints on testing:
When comparing integers, it's fine to use
==. When comparing floats, be sure to use numpy.testing.assert_allclose().
numpy.testing.assert_equal() is convenient to compare numpy arrays.
hyperspy.misc.test_utils.py contains a few useful functions for testing.
@pytest.mark.parametrize() is a very convenient decorator to test several parameters of the same function without having to write too much repetitive code, which is often error-prone. See the pytest documentation for more details.
After creating new tests, it is good to check that they do not use too much memory. If you need to explicitly delete your objects and free memory, you can do the following to release the memory associated with the
s object, for example:

import gc

# Remove the object
del s
# Run the garbage collector to release the memory
gc.collect()
Once you have pushed your PR to the official HyperSpy repository, it can be useful to check the coverage of your tests using the coveralls.io check of your PR. There should be a link to it at the bottom of your PR on the GitHub PR page. This service can help you find how well your code is being tested and exactly which parts are not currently tested.
pytest-sugar can be installed for a nicer look and feel of pytest in the console (encoding issues have been reported in the Windows console).
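To illustrate the hints above, here is a sketch combining @pytest.mark.parametrize() with numpy.testing.assert_allclose() (the `normalize` function is a made-up example, not a HyperSpy API):

```python
import numpy as np
import numpy.testing as npt
import pytest


# Hypothetical function under test (not part of HyperSpy)
def normalize(values):
    """Divide an array by its sum so the result sums to 1."""
    values = np.asarray(values, dtype=float)
    return values / values.sum()


# One test covers several inputs without repetitive code
@pytest.mark.parametrize(
    "values, expected",
    [
        ([1, 1], [0.5, 0.5]),
        ([2, 6], [0.25, 0.75]),
        ([4], [1.0]),
    ],
)
def test_normalize(values, expected):
    # Floating-point comparison with a tolerance, not ==
    npt.assert_allclose(normalize(values), expected)
```

pytest runs the test once per parameter tuple and reports each case separately.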
Plotting is tested using the
@pytest.mark.mpl_image_compare decorator of
the pytest-mpl plugin. This
decorator uses reference images to compare with the output generated during the
tests. The reference images are located in the folder defined by the
baseline_dir argument of the @pytest.mark.mpl_image_compare decorator.
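A plotting test might look like the following sketch (the figure content and the plot_test_dir baseline folder are illustrative):

```python
import matplotlib.pyplot as plt
import pytest


# A sketch of a plotting test; the baseline images would live in plot_test_dir
@pytest.mark.mpl_image_compare(baseline_dir="plot_test_dir")
def test_plot_line():
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    # pytest-mpl compares the returned figure against the reference image
    return fig
```

The test function must return the matplotlib figure so that pytest-mpl can compare it with the reference image.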
To run the plotting tests, you simply need to add the --mpl option:

pytest --mpl

If you don't use the --mpl option, the code of the tests will be executed, but the
images will not be compared to the reference images.
If you need to add or change some plots, follow the workflow below:
Write the tests using an appropriate decorator, such as @pytest.mark.mpl_image_compare.
If you need to generate a new reference image in the folder
plot_test_dir, for example, run:

pytest --mpl-generate-path=plot_test_dir
Run the tests again; this time they should pass.
Use git add to put the new file in the git repository.
When the plotting tests are failing, it is possible to download the figure comparison images generated by pytest-mpl in the artifacts tab of the corresponding build.
The plotting tests need matplotlib > 3.0.0, since small changes in the way
matplotlib generates the figure can make the tests fail.
On Travis and AppVeyor, the matplotlib backend is set to
agg by setting the
MPLBACKEND environment variable to
agg. At the first import of
matplotlib.pyplot, matplotlib will read this
variable and set the backend accordingly.
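This mechanism can be checked directly; as a sketch (the exact casing of the backend name returned by get_backend() may vary between matplotlib versions):

```python
import os

# MPLBACKEND must be set before matplotlib is first imported
os.environ["MPLBACKEND"] = "agg"

import matplotlib

# matplotlib reads the environment variable and selects the backend
print(matplotlib.get_backend())
```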
See pytest-mpl for more details.
Exporting pytest results as HTML¶
Using pytest-html, it is possible to export the results of running pytest
for easier viewing. It can be installed by conda:
conda install pytest-html
and run by:
pytest --mpl --html=report.html