
hyperspy.api#

hyperspy.api.get_configuration_directory_path()

Return the configuration directory path.

hyperspy.api.interactive(f[, event, ...])

Chainable operations on Signals that update on events.

hyperspy.api.load([filenames, signal_type, ...])

Load potentially multiple supported files into HyperSpy.

hyperspy.api.print_known_signal_types()

Print all known signal_types

hyperspy.api.set_log_level(level)

Convenience function to set the log level of all hyperspy modules.

hyperspy.api.stack(signal_list[, axis, ...])

Concatenate the signals in the list over a given axis or a new axis.

hyperspy.api.transpose(*args[, signal_axes, ...])

Transposes all passed signals according to the specified options.

All public packages, functions and classes are available in this module.

When starting HyperSpy using the hyperspy script (e.g. by executing hyperspy in a console, using the context menu entries or using the links in the Start Menu), the api package is imported in the user namespace as hs, i.e. by executing the following:

>>> import hyperspy.api as hs

(Note that code snippets are indicated by three greater-than signs)

We recommend importing the HyperSpy API as above also when doing it manually. The docstring examples assume that hyperspy.api has been imported as hs, numpy as np and matplotlib.pyplot as plt.

Functions:

get_configuration_directory_path()

Return the configuration directory path.

interactive()

Define operations that are automatically recomputed on event changes.

load()

Load data into BaseSignal instances from supported files.

preferences

Preferences class instance to configure the default values of different parameters. It has a CLI and a GUI that can be started by executing its gui method, i.e. preferences.gui().

print_known_signal_types()

Print all known signal_types.

set_log_level()

Convenience function to set HyperSpy's log level.

stack()

Stack several signals.

transpose()

Transpose a signal.

The api package contains the following submodules/packages:

signals

Signal classes, which are the core of HyperSpy. Use this module to create Signal instances manually from numpy arrays. Note that it is more convenient to use the load function to load data from supported file formats.

model

Components that can be used to create a model for curve fitting.

plot

Plotting functions that operate on multiple signals.

data

Synthetic datasets.

roi

Regions of interest (ROIs) that operate on BaseSignal instances and include widgets for interactive operation.

samfire

SAMFire utilities (strategies, Pool, fit convergence tests)

For more details see their docstrings.

hyperspy.api.get_configuration_directory_path()#

Return the configuration directory path.

hyperspy.api.interactive(f, event='auto', recompute_out_event='auto', *args, **kwargs)#

Chainable operations on Signals that update on events. The operation result will be updated when a given event is triggered.

Parameters:
f : callable

A function that returns an object and that optionally can place the result in an object given through the out keyword.

event : (list of) Event, str ("auto") or None

Update the result of the operation when the event is triggered. If "auto" and f is a method of a Signal class instance its data_changed event is selected if the function takes an out argument. If None, update is not connected to any event. The default is "auto". It is also possible to pass an iterable of events, in which case all the events are connected.

recompute_out_event : (list of) Event, str ("auto") or None

Optional argument. If supplied, this event causes a full recomputation of a new object. Both the data and axes of the new object are then copied over to the existing out object. Only useful for signals or other objects that have an attribute axes_manager. If "auto" and f is a method of a Signal class instance its AxesManager any_axis_changed event is selected. Otherwise, the signal data_changed event is selected. If None, recompute_out is not connected to any event. The default is "auto". It is also possible to pass an iterable of events, in which case all the events are connected.

*args

Arguments to be passed to f.

**kwargs : dict

Keyword arguments to be passed to f.

Returns:
BaseSignal or subclass

Signal updated with the operation result when a given event is triggered.
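The update mechanism can be sketched in plain Python: an event keeps a list of callbacks, and the interactive wrapper re-runs f into out each time the event is triggered. The Event class and interactive function below are a hypothetical illustration of this pattern, not HyperSpy's actual Event or hs.interactive implementation:

```python
# Illustrative sketch of event-driven recomputation (not HyperSpy's code).

class Event:
    """A minimal event: callbacks are run each time it is triggered."""

    def __init__(self):
        self._callbacks = []

    def connect(self, callback):
        self._callbacks.append(callback)

    def trigger(self):
        for callback in self._callbacks:
            callback()


def interactive(f, event, out):
    """Recompute f into out each time event is triggered."""
    def update():
        out["result"] = f()
    event.connect(update)
    update()  # compute the initial result
    return out


data = [1, 2, 3]
data_changed = Event()
out = interactive(lambda: sum(data), data_changed, {})
print(out["result"])  # 6

data.append(4)
data_changed.trigger()  # the connected update re-runs f
print(out["result"])  # 10
```

In HyperSpy itself, the out object is a signal and the events come from the signal's events attribute (e.g. data_changed), but the connect-and-recompute flow is the same.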

hyperspy.api.load(filenames=None, signal_type=None, stack=False, stack_axis=None, new_axis_name='stack_element', lazy=False, convert_units=False, escape_square_brackets=False, stack_metadata=True, load_original_metadata=True, show_progressbar=None, **kwds)#

Load potentially multiple supported files into HyperSpy.

Supported formats: hspy (HDF5), msa, Gatan dm3, Ripple (rpl+raw), Bruker bcf and spx, FEI ser and emi, SEMPER unf, EMD, EDAX spd/spc, CEOS prz, tif, and a number of image formats.

Depending on the number of datasets to load in the file, this function will return a HyperSpy signal instance or list of HyperSpy signal instances.

Any extra keywords are passed to the corresponding reader. For available options, see their individual documentation.

Parameters:
filenames : None, (list of) str or (list of) pathlib.Path, default None

The filename to be loaded. If None, a window will open to select a file to load. If a valid filename is passed, that single file is loaded. If multiple file names are passed in a list, a list of objects or a single object containing multiple datasets, a list of signals or a stack of signals is returned. This behaviour is controlled by the stack parameter (see below). Multiple files can be loaded using simple shell-style wildcards, e.g. 'my_file*.msa' loads all the files that start with 'my_file' and have the '.msa' extension. Alternatively, regular-expression-style character classes can be used (e.g. [a-z] matches lowercase letters). See also the escape_square_brackets parameter.

signal_type : None, str, default None

The acronym that identifies the signal type. May be any signal type provided by HyperSpy or by installed extensions, as listed by hs.print_known_signal_types(). The value provided determines the Signal subclass assigned to the data. If None (default), the value is read/guessed from the file; any other value overrides the value potentially stored in the file. For example, for electron energy-loss spectroscopy use 'EELS'. If '' (empty string), the value is not read from the file and is considered undefined.

stack : bool, default False

If True and multiple filenames are passed, stacking all the data into a single object is attempted. All files must match in shape. If each file contains multiple (N) signals, N stacks will be created, with the requirement that each file contains the same number of signals.

stack_axis : None, int or str, default None

If None (default), the signals are stacked over a new axis. The data must have the same dimensions. Otherwise, the signals are stacked over the axis given by its integer index or its name. The data must have the same shape, except in the dimension corresponding to axis.

new_axis_name : str, optional

The name of the new axis (default ‘stack_element’), when axis is None. If an axis with this name already exists, it automatically appends ‘-i’, where i are integers, until it finds a name that is not yet in use.

lazy : bool, default False

Open the data lazily - i.e. without actually reading the data from the disk until required. Allows opening arbitrary-sized datasets.

convert_units : bool, default False

If True, convert the units using the convert_to_units method of the axes_manager. If False, does nothing.

escape_square_brackets : bool, default False

If True, and filenames is a str containing square brackets, then square brackets are escaped before wildcard matching with glob.glob(). If False, square brackets are used to represent character classes (e.g. [a-z] matches lowercase letters).

stack_metadata : bool or int

If an integer, this value defines the index of the signal in the signal list from which the metadata and original_metadata are taken. If True, the original_metadata and metadata of each signal are stacked and saved in original_metadata.stack_elements of the returned signal; in this case, the metadata are copied from the first signal in the list. If False, the metadata and original_metadata are not copied.

show_progressbar : None or bool

If True, display a progress bar. If None, the default from the preferences settings is used. Only used with stack=True.

load_original_metadata : bool, default True

If True, all metadata contained in the input file will be added to original_metadata. This does not affect parsing the metadata to metadata.

reader : None, str or module, optional

Specify the file reader to use when loading the file(s). If None (default), will use the file extension to infer the file type and appropriate reader. If str, will select the appropriate file reader from the list of available readers in HyperSpy. If module, it must implement the file_reader function, which returns a dictionary containing the data and metadata for conversion to a HyperSpy signal.

print_info : bool, optional

For SEMPER unf and EMD (Berkeley) files. If True, additional information read during loading is printed for a quick overview. Default is False.

downsample : int (1–4095), optional

For Bruker bcf files. If set to an integer >= 2 (default 1), the bcf is parsed into a down-sampled array by the given factor: the values of multiple original bcf pixels are summed to form each downsampled pixel. This improves signal and conserves memory at the cost of lower spatial resolution.

cutoff_at_kV : None, int, float, optional

For Bruker bcf and JEOL files. If set to a numerical value (default None), the hypermap is parsed into an array with the depth cut off at the given energy value. This conserves memory by cutting off unused spectral tails, or forces enlargement of the spectrum size. The Bruker bcf reader accepts additional string values for semi-automatic cutoff: 'zealous' truncates at the last non-zero channel (this option should not be used for stacks, as low-beam-current EDS can have a different last non-zero channel per slice); 'auto' truncates channels at the SEM/TEM acceleration voltage or at the energy of the last channel, whichever is smaller. If the acceleration-voltage information is missing or the HV is off (0 kV), it falls back to the full channel range.

select_type : 'spectrum_image', 'image', 'single_spectrum' or None, optional

If None (default), all data are loaded. For Bruker bcf and Velox emd files: if one of ‘spectrum_image’, ‘image’ or ‘single_spectrum’, the loader returns either only the spectrum image, only the images (including EDS map for Velox emd files), or only the single spectra (for Velox emd files).

first_frame : int, optional

Only for Velox emd files: load only the data acquired after the specified frame. Default is 0.

last_frame : None or int, optional

Only for Velox emd files: load only the data acquired up to the specified frame. If None (default), load the data up to the end.

sum_frames : bool, optional

Only for Velox emd files: if False, load each EDS frame individually. Default is True.

sum_EDS_detectors : bool, optional

Only for Velox emd files: if True (default), the signals from the different detectors are summed. If False, a distinct signal is returned for each EDS detector.

rebin_energy : int, optional

Only for Velox emd files: rebin the energy axis by the integer provided during loading in order to save memory space. Needs to be a multiple of the length of the energy dimension (default 1).

SI_dtype : numpy.dtype or None, optional

Only for Velox emd files: set the dtype of the spectrum image data in order to save memory space. If None, the default dtype from the Velox emd file is used.

load_SI_image_stack : bool, optional

Only for Velox emd files: if True, load the stack of STEM images acquired simultaneously with the EDS spectrum image. Default is False.

dataset_path : None, str or list of str, optional

For filetypes which support several datasets in the same file, this will only load the specified dataset. Several datasets can be loaded by using a list of strings. Only for EMD (NCEM) and hdf5 (USID) files.

stack_group : bool, optional

Only for EMD NCEM files. Stack datasets of groups with a common name. Relevant for emd file versions >= 0.5, where groups can be named 'group0000', 'group0001', etc.

ignore_non_linear_dims : bool, optional

Only for HDF5 USID files: if True (default), parameters that were varied non-linearly in the desired dataset will result in exceptions. Otherwise, all such non-linearly varied parameters will be treated as linearly varied parameters and a Signal object will be generated.

only_valid_data : bool, optional

Only for FEI emi/ser files in case of series or linescan with the acquisition stopped before the end: if True, load only the acquired data. If False, fill empty data with zeros. Default is False and this default value will change to True in version 2.0.
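The pixel summation performed by the downsample option above can be illustrated in plain numpy. This is only a sketch of the summing behaviour described for the bcf reader, assuming a (y, x, energy) hypermap layout; it is not the reader's implementation:

```python
import numpy as np

def downsample_by_summing(hypermap, factor):
    """Sum blocks of factor x factor pixels of a (y, x, energy) array."""
    y, x, e = hypermap.shape
    # Trim so the spatial dimensions divide evenly by the factor.
    y, x = y - y % factor, x - x % factor
    trimmed = hypermap[:y, :x]
    # Group rows and columns into blocks, then sum within each block.
    blocks = trimmed.reshape(y // factor, factor, x // factor, factor, e)
    return blocks.sum(axis=(1, 3))

hypermap = np.ones((4, 4, 8))          # 4 x 4 pixels, 8 energy channels
small = downsample_by_summing(hypermap, 2)
print(small.shape)     # (2, 2, 8)
print(small[0, 0, 0])  # 4.0 -- four original pixels summed per output pixel
```

Because the pixels are summed rather than averaged, counts are conserved, which is why the docstring describes the option as improving signal at the cost of resolution.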

Returns:
(list of) BaseSignal or subclass

Examples

Loading a single file providing the signal type:

>>> d = hs.load('file.dm3', signal_type="EDS_TEM") 

Loading multiple files:

>>> d = hs.load(['file1.hspy','file2.hspy']) 

Loading multiple files matching the pattern:

>>> d = hs.load('file*.hspy') 

Loading multiple files containing square brackets in the filename:

>>> d = hs.load('file[*].hspy', escape_square_brackets=True) 

Loading multiple files containing character classes (regular expression):

>>> d = hs.load('file[0-9].hspy')  

Loading (potentially larger than the available memory) files lazily and stacking:

>>> s = hs.load('file*.blo', lazy=True, stack=True) 

Specify the file reader to use

>>> s = hs.load('a_nexus_file.h5', reader='nxs') 

Loading a file containing several datasets:

>>> s = hs.load("spameggsandham.nxs") 
>>> s 
[<Signal1D, title: spam, dimensions: (32,32|1024)>,
 <Signal1D, title: eggs, dimensions: (32,32|1024)>,
 <Signal1D, title: ham, dimensions: (32,32|1024)>]

Use list indexing to access a single signal:

>>> s[0] 
<Signal1D, title: spam, dimensions: (32,32|1024)>
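The wildcard and character-class matching of filenames, and the effect of escape_square_brackets, follow glob semantics and can be previewed with the stdlib glob module. This is a sketch of the matching rules only, not of load itself:

```python
import glob
import os
import tempfile

# Create files whose names contain digits and literal square brackets.
tmp = tempfile.mkdtemp()
for name in ("file1.hspy", "file2.hspy", "file[1].hspy"):
    open(os.path.join(tmp, name), "w").close()

# Character class: [0-9] matches a single digit.
matched = sorted(
    os.path.basename(p)
    for p in glob.glob(os.path.join(tmp, "file[0-9].hspy"))
)
print(matched)  # ['file1.hspy', 'file2.hspy']

# Escaping the brackets (what escape_square_brackets=True does internally)
# makes the pattern match the literal file name 'file[1].hspy' instead.
pattern = os.path.join(tmp, glob.escape("file[1]") + ".hspy")
print([os.path.basename(p) for p in glob.glob(pattern)])  # ['file[1].hspy']
```

Without escaping, "file[1].hspy" would be interpreted as the character class [1] and match "file1.hspy" rather than the bracketed file name.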

hyperspy.api.print_known_signal_types()#

Print all known signal_types

This includes signal_types from all installed packages that extend HyperSpy.

Examples

>>> hs.print_known_signal_types() 
+--------------------+---------------------+--------------------+----------+
|    signal_type     |       aliases       |     class name     | package  |
+--------------------+---------------------+--------------------+----------+
| DielectricFunction | dielectric function | DielectricFunction |  exspy   |
|      EDS_SEM       |                     |   EDSSEMSpectrum   |  exspy   |
|      EDS_TEM       |                     |   EDSTEMSpectrum   |  exspy   |
|        EELS        |       TEM EELS      |    EELSSpectrum    |  exspy   |
|      hologram      |                     |   HologramImage    | holospy  |
|      MySignal      |                     |      MySignal      | hspy_ext |
+--------------------+---------------------+--------------------+----------+

hyperspy.api.set_log_level(level)#

Convenience function to set the log level of all hyperspy modules.

Note: The log level of all other modules is left untouched.

Parameters:
level : int or str

The log level to set. Any values that logging.Logger.setLevel() accepts are valid. The default options are:

  • ‘CRITICAL’

  • ‘ERROR’

  • ‘WARNING’

  • ‘INFO’

  • ‘DEBUG’

  • ‘NOTSET’

Examples

For normal logging of hyperspy functions, you can set the log level like this:

>>> import hyperspy.api as hs
>>> hs.set_log_level('INFO')
>>> hs.load('my_file.dm3') 
INFO:rsciio.digital_micrograph:DM version: 3
INFO:rsciio.digital_micrograph:size 4796607 B
INFO:rsciio.digital_micrograph:Is file Little endian? True
INFO:rsciio.digital_micrograph:Total tags in root group: 15
<Signal2D, title: My file, dimensions: (|1024, 1024)>

If you need the log output during the initial import of hyperspy, you should set the log level like this:

>>> from hyperspy.logger import set_log_level
>>> set_log_level('DEBUG')
>>> import hyperspy.api as hs 
DEBUG:hyperspy.gui:Loading hyperspy.gui
DEBUG:hyperspy.gui:Current MPL backend: TkAgg
DEBUG:hyperspy.gui:Current ETS toolkit: qt4
DEBUG:hyperspy.gui:Current ETS toolkit set to: null
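Assuming such a helper simply sets the level on the package's root logger (the standard way the stdlib logging hierarchy propagates levels to submodules), its effect can be sketched with logging alone; the set_package_log_level function here is illustrative, not HyperSpy's implementation:

```python
import logging

def set_package_log_level(package, level):
    """Set the log level of `package`; its submodule loggers inherit it."""
    logging.getLogger(package).setLevel(level)

set_package_log_level("hyperspy", "INFO")

# Child loggers such as 'hyperspy.io' inherit the level from 'hyperspy'.
child = logging.getLogger("hyperspy.io")
print(child.getEffectiveLevel() == logging.INFO)  # True

# Loggers outside the 'hyperspy' hierarchy keep their own (unset) level.
print(logging.getLogger("some_other_package").level == logging.NOTSET)  # True
```

This is why set_log_level affects all hyperspy modules at once while leaving the log level of every other package untouched.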

hyperspy.api.stack(signal_list, axis=None, new_axis_name='stack_element', lazy=None, stack_metadata=True, show_progressbar=None, **kwargs)#

Concatenate the signals in the list over a given axis or a new axis.

The title is set to that of the first signal in the list.

Parameters:
signal_list : list of BaseSignal

List of signals to stack.

axis : None, int or str

If None, the signals are stacked over a new axis; the data must have the same dimensions. Otherwise, the signals are stacked over the axis given by its integer index or its name; the data must have the same shape, except in the dimension corresponding to axis. If the stacking axis of the first signal is uniform, it is extended up to the new length; if it is non-uniform, the axis vectors of all signals are concatenated along this direction; if it is a FunctionalDataAxis, it is extended based on the expression of the first signal (and its sub-axis x is handled as above, depending on whether it is uniform or not).

new_axis_name : str

The name of the new axis when axis is None. If an axis with this name already exists, it automatically appends '-i', where i are integers, until it finds a name that is not yet in use.

lazy : bool or None

Returns a LazySignal if True. If None, a lazy result is returned only if at least one of the signals is lazy.

stack_metadata : bool or int

If an integer, this value defines the index of the signal in the signal list from which the metadata and original_metadata are taken. If True, the original_metadata and metadata of each signal are stacked and saved in original_metadata.stack_elements of the returned signal; in this case, the metadata are copied from the first signal in the list. If False, the metadata and original_metadata are not copied.

show_progressbar : None or bool

If True, display a progress bar. If None, the default from the preferences settings is used.

Returns:
BaseSignal

Examples

>>> data = np.arange(20)
>>> s = hs.stack(
...    [hs.signals.Signal1D(data[:10]), hs.signals.Signal1D(data[10:])]
... )
>>> s
<Signal1D, title: Stack of , dimensions: (2|10)>
>>> s.data
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]])
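In terms of the underlying numpy arrays, stacking over a new axis behaves like numpy.stack, while stacking over an existing axis behaves like numpy.concatenate. A plain-numpy sketch of the two cases (data handling only; hs.stack additionally merges axes managers and metadata):

```python
import numpy as np

a = np.arange(10)        # two "signals" of 10 samples each
b = np.arange(10, 20)

# axis=None: stack over a new 'stack_element' axis -> shape (2, 10),
# matching the (2|10) dimensions of the hs.stack example above.
new_axis = np.stack([a, b])
print(new_axis.shape)       # (2, 10)

# axis given: concatenate along an existing axis -> shape (20,).
existing_axis = np.concatenate([a, b])
print(existing_axis.shape)  # (20,)
```

This is also why the new-axis case requires all signals to have identical dimensions, while the existing-axis case only requires them to match outside the stacking axis.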

hyperspy.api.transpose(*args, signal_axes=None, navigation_axes=None, optimize=False)#

Transposes all passed signals according to the specified options.

For parameters see BaseSignal.transpose.

Examples

>>> signal_iterable = [
...    hs.signals.BaseSignal(np.random.random((2,)*(i+1))) for i in range(3)
... ]
>>> signal_iterable
[<BaseSignal, title: , dimensions: (|2)>,
 <BaseSignal, title: , dimensions: (|2, 2)>,
 <BaseSignal, title: , dimensions: (|2, 2, 2)>]
>>> hs.transpose(*signal_iterable, signal_axes=1)
[<Signal1D, title: , dimensions: (|2)>,
<Signal1D, title: , dimensions: (2|2)>,
<Signal1D, title: , dimensions: (2, 2|2)>]