The 3D geometry of anatomical structures facilitates computer-assisted diagnosis and therapy planning. Medical image data provides the basis for reconstructing such geometries. The wide variety of anatomical shapes and the specifics of different imaging modalities pose a challenge for fully automated segmentation methods. In recent years, good results have been achieved for the segmentation of various individual anatomical structures, both bony (e.g. the knee joint) and soft tissue (e.g. the liver). However, existing algorithms require a substantial amount of tuning for every new structure of interest and every new modality. We aim to combine shape and appearance prior knowledge in novel ways in order to develop data-driven algorithms that generalize well to new, unseen anatomies and modalities with as little adaptation as possible.

Figure: Example segmentations

Model-based Segmentation

Two models are involved in the reconstruction of the geometry. The first describes the shape of the target anatomy and its typical variations with respect to a given population. The second describes the appearance of the target anatomy, i.e. the typical image data around individual points of the anatomy in a given modality. The shape model acts as a regularizer that restricts the space of shapes to anatomically plausible ones, while the appearance model is used to align the reconstructed geometry with the image data. Together, these two goals are formulated as a minimization problem, which is solved iteratively by alternating between matching the shape to the shape model and matching the local image data around the shape to the appearance model.
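This alternating scheme can be sketched with toy data. A minimal sketch, assuming a precomputed mean shape and a single PCA variation mode (both invented for illustration), and with synthetic "best appearance matches" standing in for the appearance model's output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "shape model": mean shape plus one variation mode (assumption:
# both precomputed from reference reconstructions).
mean_shape = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
mode = np.array([[0.1, 0.0], [-0.1, 0.0], [0.1, 0.0], [-0.1, 0.0]])

def project_to_shape_space(points):
    """Regularization step: least-squares fit of points as mean + b * mode."""
    d = (points - mean_shape).ravel()
    b = d @ mode.ravel() / (mode.ravel() @ mode.ravel())
    return mean_shape + b * mode

def appearance_step(points, targets, step=0.5):
    """Data step: move each point toward its best appearance match."""
    return points + step * (targets - points)

# Synthetic appearance matches near a slightly deformed mean shape.
targets = mean_shape + 0.2 * mode + 0.01 * rng.standard_normal(mean_shape.shape)

shape = mean_shape.copy()
for _ in range(10):
    shape = appearance_step(shape, targets)
    shape = project_to_shape_space(shape)
```

After a few iterations the reconstructed shape settles close to the (plausible) deformed shape, while the shape-space projection keeps it from following the noise in the appearance matches.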

Statistical Shape and Appearance Models

The goal is to employ data-driven, statistical models that are created automatically from given reference reconstructions. While statistical shape models have been studied quite extensively, the statistical appearance models employed are often hand-crafted algorithms based on absolute intensity and gradient values. Being tweaked to a specific anatomy and modality, they do not generalize well. In addition, creating a heuristic algorithm that covers structures with a very wide variety of appearances is challenging. Principal component analysis (PCA), although data-driven, has not proven to yield robust appearance models. We aim to analyze other machine-learning-based approaches to build reliable, generic, statistical appearance models.
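For reference, the classical PCA-based statistical shape model can be sketched in a few lines. A minimal sketch, with random vectors standing in for aligned landmark configurations (the dimensions and the number of retained modes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumption: 20 reference reconstructions, each given as 5 corresponding
# landmarks in 2D, flattened to 10-dimensional shape vectors.
shapes = rng.standard_normal((20, 10))

mean = shapes.mean(axis=0)
centered = shapes - mean

# PCA via SVD: the rows of vt are the principal shape variation modes.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt[:3]                            # keep the 3 strongest modes
variances = s[:3] ** 2 / (len(shapes) - 1)

def plausible_shape(coefficients):
    """Generate a new shape inside the learned shape space."""
    return mean + coefficients @ modes
```

Restricting the coefficients (e.g. to a few standard deviations per mode) is what makes such a model act as a regularizer during segmentation.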

Dictionary Learning

Dictionary Learning (DL) does not require any heuristics and is general enough to be applied across anatomies and modalities. DL relies on matrix operations, which can be evaluated efficiently. Given 3D image data and corresponding segmentations of the anatomical structures of interest, rotation-invariant histograms of oriented gradients (HoG) are sampled at the structures' boundaries. These feature samples serve as input for learning a dictionary.
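The dictionary learning step itself can be sketched as an alternation between sparse coding and a least-squares dictionary update. A minimal numpy-only sketch, with random vectors standing in for the rotation-invariant HoG descriptors (all dimensions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for rotation-invariant HoG descriptors sampled at the
# structure boundaries (assumption: 200 samples of dimension 32).
X = rng.standard_normal((200, 32))

n_atoms, sparsity = 16, 3
D = rng.standard_normal((n_atoms, 32))
D /= np.linalg.norm(D, axis=1, keepdims=True)

for _ in range(10):
    # Sparse coding: keep the `sparsity` strongest correlations per sample.
    corr = X @ D.T
    ranks = np.argsort(np.argsort(-np.abs(corr), axis=1), axis=1)
    codes = np.where(ranks < sparsity, corr, 0.0)
    # Dictionary update: least-squares fit, then renormalize the atoms.
    D = np.linalg.lstsq(codes, X, rcond=None)[0]
    D /= np.maximum(np.linalg.norm(D, axis=1, keepdims=True), 1e-12)
```

Each boundary sample is thus approximated by a sparse combination of a few learned appearance atoms; both steps are plain matrix operations.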

Figure: Dictionary learning

A second dictionary is learnt from background image information. The combined dictionary of foreground and background features then acts as an appearance model for image segmentation.
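How such a combined model discriminates foreground from background can be sketched via reconstruction residuals: a feature is assigned to whichever dictionary explains it better. A minimal sketch, using SVD subspaces as stand-in dictionaries and two synthetic feature clusters in place of real foreground and background descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption: two toy feature clusters stand in for foreground (boundary)
# and background HoG descriptors.
fg = rng.standard_normal((100, 16)) + 2.0
bg = rng.standard_normal((100, 16)) - 2.0

def fit_dictionary(X, n_atoms=8):
    """Toy dictionary: leading right-singular vectors of the centered data."""
    _, _, vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    return X.mean(0), vt[:n_atoms]

def residual(x, mean, atoms):
    """Reconstruction error of x in the dictionary's span."""
    d = x - mean
    return np.linalg.norm(d - (d @ atoms.T) @ atoms)

fg_model = fit_dictionary(fg)
bg_model = fit_dictionary(bg)

def classify(x):
    """Foreground if the foreground model explains the feature better."""
    if residual(x, *fg_model) < residual(x, *bg_model):
        return "foreground"
    return "background"
```

During segmentation, the same residual comparison can score candidate boundary positions around the current shape estimate.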

Neural Networks

Neural Networks have the capacity to automatically learn complex, non-linear mappings. This makes them a good candidate for learning the appearance of a structure from reference segmentations. Similarly to dictionary learning, a neural network should learn to discriminate between image data that corresponds to the anatomy and image data that does not. Deep network architectures with high capacity promise to be able to learn a large variety of appearances across the whole shape of the anatomy.
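The discrimination task can be sketched with a small network trained from scratch. A minimal numpy-only sketch of a one-hidden-layer classifier, with synthetic feature clusters standing in for boundary and off-boundary image data (all sizes and the training setup are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: features sampled on the anatomy boundary (label 1)
# vs. features sampled elsewhere (label 0).
X = np.vstack([rng.standard_normal((100, 8)) + 1.5,
               rng.standard_normal((100, 8)) - 1.5])
y = np.r_[np.ones(100), np.zeros(100)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# One hidden layer, trained by plain gradient descent on the logistic loss.
W1 = 0.1 * rng.standard_normal((8, 16)); b1 = np.zeros(16)
w2 = 0.1 * rng.standard_normal(16); b2 = 0.0

for _ in range(300):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = sigmoid(h @ w2 + b2)                  # predicted foreground probability
    g = (p - y) / len(y)                      # gradient of the mean logistic loss
    gh = np.outer(g, w2) * (1.0 - h ** 2)     # backpropagation through tanh
    w2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(0)

accuracy = ((p > 0.5) == (y > 0.5)).mean()
```

The predicted probability plays the same role as the dictionary residual above: it scores how well local image data matches the learned appearance of the structure.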

Figure: Neural network