HDF5FileHandler

class btrack.io.HDF5FileHandler(filename: PathLike, read_write: str = 'r', *, obj_type: str = 'obj_type_1')

Bases: object

Generic HDF5 file handler for reading and writing datasets. This is interoperable between segmentation, tracking and analysis code.

Parameters:
filename : str

The filename of the HDF5 file to be used.

read_write : str

A read/write mode for the file, e.g. w, r, a etc.

obj_type : str

The name of the object type. Defaults to obj_type_1. The object type name must start with obj_type_.
Notes

Basic format of the HDF file is:

data.h5/
β”œβ”€ segmentation/
β”‚  β”œβ”€ images                (J x (d) x h x w) uint16 segmentation
β”œβ”€ objects/
β”‚  β”œβ”€ obj_type_1/
β”‚  β”‚  β”œβ”€ coords             (I x 5) [t, x, y, z, object_type]
β”‚  β”‚  β”œβ”€ labels             (I x D) [label, (softmax scores ...)]
β”‚  β”‚  β”œβ”€ map                (J x 2) [start_index, end_index] -> coords array
β”‚  β”‚  β”œβ”€ properties /
β”‚  β”‚  β”‚  β”œβ”€ area            (I x 1) first named property (e.g. `area`)
β”‚  β”‚  β”‚  β”œβ”€ ...
β”‚  β”œβ”€ obj_type_2/
β”‚  β”œβ”€ ...
β”œβ”€ tracks/
β”‚  β”œβ”€ obj_type_1/
β”‚  β”‚  β”œβ”€ tracks             (I x 1) [index into coords]
β”‚  β”‚  β”œβ”€ dummies            similar to objects/coords, but for dummy objects
β”‚  β”‚  β”œβ”€ map                (K x 2) [start_index, end_index] -> tracks array
β”‚  β”‚  β”œβ”€ LBEPRG             (K x 6) [L, B, E, P, R, G]
β”‚  β”‚  β”œβ”€ fates              (K x n) [fate_from_tracker, ...future_expansion]
β”‚  β”œβ”€ obj_type_2/
β”‚  β”œβ”€ ...

Where I is the number of objects, J is the number of frames, and K is the number of tracks. LBEPR is a modification of the LBEP format that also includes the root node of the tree.
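The map dataset is what ties the per-frame layout together: each of its J rows is a [start_index, end_index] pair that slices the flat coords array. A minimal numpy sketch of that indexing scheme (the coordinate values here are invented for illustration):

```python
import numpy as np

# Hypothetical coords array (I x 5): [t, x, y, z, object_type].
# Two objects in frame 0, one object in frame 1.
coords = np.array([
    [0, 10.0, 20.0, 0.0, 1],
    [0, 15.0, 25.0, 0.0, 1],
    [1, 11.0, 21.0, 0.0, 1],
])

# map dataset (J x 2): [start_index, end_index] into coords, one row per frame
frame_map = np.array([[0, 2], [2, 3]])

# recover the slice of coords belonging to frame 1
start, end = frame_map[1]
frame_one = coords[start:end]
print(frame_one.shape)  # (1, 5)
```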

Examples

Read objects from a file:

>>> with HDF5FileHandler('file.h5', 'r') as handler:
...    objects = handler.objects

Use filtering by property for object retrieval:

>>> obj = handler.filtered_objects('flag==1')
>>> obj = handler.filtered_objects('area>100')

Write tracks directly to a file:

>>> handler.write_tracks(tracks)

Attributes:
segmentation : npt.NDArray

A numpy array representing the segmentation data. T(Z)YX.

objects : list [PyTrackObject]

Return the objects in the file.

filtered_objects : list [PyTrackObject]

A filtered list of objects based on metadata.

tracks : list [Tracklet]

Return the tracks in the file.

lbep : npt.NDArray

Return the LBEP data.

Attributes Summary

lbep

Return the LBEP data.

object_type

object_types

objects

Return the objects in the file.

segmentation

tracks

Return the tracks in the file.

Methods Summary

close()

filtered_objects([f_expr, ...])

A filtered list of objects based on metadata.

tree()

Recursively iterate over the H5 file to reveal the tree structure and number of elements within.

write_objects(data)

Write objects to HDF file.

write_properties(data, *[, allow_overwrite])

Write object properties to HDF file.

write_segmentation(segmentation)

Write out the segmentation to an HDF file.

write_tracks(data, *[, f_expr])

Write tracks to HDF file.

Attributes Documentation

lbep

Return the LBEP data.

object_type
object_types
objects

Return the objects in the file.

segmentation
tracks

Return the tracks in the file.

Methods Documentation

close()
filtered_objects(f_expr: str | None = None, *, lazy_load_properties: bool = True, exclude_properties: list[str] | None = None) → list[PyTrackObject]

A filtered list of objects based on metadata.

Parameters:
f_expr : str

A string representing a filtering option. For example, area>100 would filter objects by a property key area where the numerical value of area is greater than 100.

lazy_load_properties : bool

For future expansion, to allow lazy loading of large datasets.

exclude_properties : list or None

A list of property keys to exclude when loading from disk.

Returns:
objects : list

A list of btrack.btypes.PyTrackObject objects.
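The f_expr filter can be pictured as a boolean mask over a stored property array. A minimal numpy sketch of the 'area>100' semantics (the property values here are invented; the real method evaluates the expression against the objects/.../properties datasets):

```python
import numpy as np

# invented per-object values for a property key 'area'
area = np.array([50.0, 120.0, 99.0, 300.0])

# 'area>100' keeps only the objects whose property value exceeds 100
mask = area > 100
kept_indices = np.flatnonzero(mask)
print(kept_indices)  # [1 3]
```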

tree() → None

Recursively iterate over the H5 file to reveal the tree structure and number of elements within.

write_objects(data: list[btypes.PyTrackObject] | BayesianTracker) → None

Write objects to HDF file.

Parameters:
data : list or BayesianTracker instance

Either a list of PyTrackObject to be written, or an instance of BayesianTracker with a .objects property.

write_properties(data: dict[str, Any], *, allow_overwrite: bool = False) → None

Write object properties to HDF file.

Parameters:
data : dict {key: (N, D)}

A dictionary of key-value pairs of properties to be written. The values should be an array equal in length to the number of objects and with D dimensions.

allow_overwrite : bool

Allow deletion of existing property keys from the HDF5 file, overwriting them with new values from the data dict. Defaults to False.
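Since each value must be an array whose first dimension equals the number of objects already in the file, it is worth validating shapes before writing. A minimal sketch (the object count, property keys, and values are invented; the actual write call is shown commented out):

```python
import numpy as np

n_objects = 4  # must equal the number of objects already written to the file

# hypothetical properties dict {key: (N, D)}; N must match n_objects
properties = {
    "area": np.array([50.0, 120.0, 99.0, 300.0]),  # (N,)
    "softmax": np.ones((4, 3)) / 3.0,              # (N, 3)
}

# sanity-check shapes, mirroring the (N, D) requirement described above
ok = all(np.asarray(v).shape[0] == n_objects for v in properties.values())

# with HDF5FileHandler("file.h5", "a") as handler:
#     handler.write_properties(properties, allow_overwrite=True)
```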

write_segmentation(segmentation: ndarray[Any, dtype[_ScalarType_co]]) → None

Write out the segmentation to an HDF file.

Parameters:
segmentation : npt.NDArray

A numpy array representing the segmentation data. T(Z)YX, uint16
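A minimal sketch of constructing a conforming input array, i.e. a T(Z)YX stack of uint16 labels (the dimensions and label values are invented; the write call is shown commented out):

```python
import numpy as np

# a minimal TYX stack: 2 frames of 64 x 64 pixels, uint16 labels as required;
# a 3D stack per frame would add a Z axis, i.e. (T, Z, Y, X)
segmentation = np.zeros((2, 64, 64), dtype=np.uint16)
segmentation[0, 10:20, 10:20] = 1  # one labelled object in frame 0

# with HDF5FileHandler("file.h5", "a") as handler:
#     handler.write_segmentation(segmentation)
```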

write_tracks(data: list[btypes.Tracklet] | BayesianTracker, *, f_expr: str | None = None) → None

Write tracks to HDF file.

Parameters:
data : list of Tracklet or an instance of BayesianTracker

A list of tracklets or an instance of BayesianTracker.

f_expr : str

An expression representing how the objects were filtered prior to tracking, e.g. area>100.0.