segmentation_to_objects

btrack.io.segmentation_to_objects(segmentation: ndarray[Any, dtype[_ScalarType_co]] | Generator, *, intensity_image: ndarray[Any, dtype[_ScalarType_co]] | Generator | None = None, properties: tuple[str, ...] = (), extra_properties: tuple[Callable] | None = None, scale: tuple[float] | None = None, use_weighted_centroid: bool = True, assign_class_ID: bool = False, num_workers: int = 1) list[PyTrackObject]

Convert segmentation to a set of trackable objects.

Parameters:
segmentation : npt.NDArray, dask.array.core.Array or Generator

Segmentation can be provided in several different formats. Arrays should be ordered as T(Z)YX.

intensity_image : npt.NDArray, dask.array.core.Array or Generator, optional

Intensity image with the same size as the segmentation, used to calculate additional properties. See skimage.measure.regionprops for more info.

properties : tuple of str, optional

Properties passed to scikit-image regionprops. These additional properties are added as metadata to the btrack objects. See skimage.measure.regionprops for more info.

extra_properties : tuple of callable, optional

Callable functions to calculate additional properties from the segmentation and intensity image data. See skimage.measure.regionprops for more info.

scale : tuple, optional

A scale for each spatial dimension of the input segmentation. Defaults to one for all axes, and allows scaling for anisotropic imaging data.

use_weighted_centroid : bool, default True

If an intensity image has been provided, default to calculating the weighted centroid. See skimage.measure.regionprops for more info. Note: if measuring additional properties from a multichannel image, use_weighted_centroid must be set to False, otherwise the _props_to_dict function fails to write the output. See the multichannel sketch in the Examples section below.

assign_class_ID : bool, default False

If True, assign a class label to each individual object based on the pixel intensity found in the mask. Requires a semantic segmentation, i.e. objects of type 1 have pixel value 1.

num_workers : int

Number of workers to use while processing the image data.

Returns:
objects : list

A list of btrack.btypes.PyTrackObject() trackable objects.

Notes

If tqdm is installed, a progress bar will be provided.

Examples

>>> objects = btrack.utils.segmentation_to_objects(
...   segmentation,
...   properties=('area', ),
...   scale=(1., 1.),
...   assign_class_ID=True,
...   num_workers=4,
... )
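
Intensity-based measurements can be made by supplying an intensity image alongside the segmentation. The following is a minimal sketch, not a verbatim library example: it assumes intensity_image is a multichannel array aligned with segmentation, and that 'mean_intensity' is a valid skimage.measure.regionprops property name for the installed scikit-image version ('intensity_mean' in newer releases). Note use_weighted_centroid=False, as required for multichannel data (see above):

>>> objects = btrack.utils.segmentation_to_objects(
...   segmentation,
...   intensity_image=intensity_image,
...   properties=('area', 'mean_intensity'),
...   use_weighted_centroid=False,
...   num_workers=4,
... )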

It’s also possible to provide custom analysis functions:

>>> def foo(_mask: npt.NDArray) -> float:
...     return np.sum(_mask)

that can be passed to btrack.utils.segmentation_to_objects():

>>> objects = btrack.utils.segmentation_to_objects(
...   segmentation,
...   extra_properties=(foo, ),
...   num_workers=1,
... )
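
The returned objects are typically appended to a tracker. The sketch below shows that downstream step in outline only; it assumes a recent btrack release and a suitable tracker configuration file (cell_config.json here is a placeholder path, not shipped by this function):

>>> with btrack.BayesianTracker() as tracker:
...     tracker.configure('cell_config.json')  # placeholder config path
...     tracker.append(objects)                # register the trackable objects
...     tracker.track(step_size=100)
...     tracker.optimize()
...     tracks = tracker.tracks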