In minimally invasive surgery, physicians using flexible endoscopes face a complex navigation problem caused by a limited, indirect view, rotating images, disturbed hand-eye coordination, and similar factors.

Experienced endoscopic surgeons are able to mentally assign an anatomical position to the endoscopic image while simultaneously watching the video images acquired at the tip of the instrument and steering the instrument through the lumina of the human body.

Instead of using electromagnetic, mechanical, or optical devices to precisely locate the tip of the instrument, an image-based approach is to be developed that reduces the technical complexity in the operating room.

Our approach assists surgeons in transluminal endoscopic navigation by identifying the position of a flexible endoscope within the lumina of the human body, matching live endoscopic images in situ against previously learnt image sequences. It proceeds as follows:

  • select a limited set of representative images from endoscopic image sequences
  • define image features using an ontology framework
  • convert the set of endoscopic image sequences into sequences of feature vectors
  • set up an atlas of the different anatomical lumina as a graph
  • analyze live endoscopic images and match them against the learnt sequences to estimate the endoscope position, weighting matches by a probability based on adjacent images along the path
  • communicate and visualize the actual position of the endoscope tip with respect to the anatomical reference.
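The atlas-as-graph idea above can be sketched in a few lines. The following is a minimal illustration, not the actual system: the class and function names (`Atlas`, `match_node`), the two-dimensional feature vectors, and the Euclidean distance metric are all assumptions made for the example; the real feature vectors would come from the ontology-defined image features.

```python
import math

class Atlas:
    """Anatomical atlas: nodes carry learnt feature vectors, edges link adjacent lumina."""

    def __init__(self):
        self.features = {}  # node id -> representative feature vector
        self.edges = {}     # node id -> list of adjacent node ids

    def add_node(self, node, feature):
        self.features[node] = feature
        self.edges.setdefault(node, [])

    def add_edge(self, a, b):
        # lumina can be traversed in both directions
        self.edges[a].append(b)
        self.edges[b].append(a)

def distance(u, v):
    # Euclidean distance between two feature vectors (an assumed metric)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

def match_node(atlas, live_feature):
    # return the atlas node whose learnt feature vector is closest to the live image
    return min(atlas.features, key=lambda n: distance(atlas.features[n], live_feature))
```

A live image, once converted to a feature vector, can then be matched against the atlas with `match_node(atlas, live_feature)`; the path probabilities described above would refine this purely local match.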

To compute the most likely position of the endoscope tip, we want to identify the best match between a path in the anatomical atlas and the complete sequence of all feature vectors, as follows:

  • compute the likelihood of a path match as a sum of logarithms
  • compute the best matching path as a shortest path
  • compute the best matching alternatives as the 2-shortest, ..., k-shortest paths
  • use the k-shortest-path algorithm by Eppstein
  • recompute the k shortest paths in real time when new information becomes available
  • calibrate the algorithm with profiles for the surgery, the surgeon, etc.
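The first two steps rest on a standard reduction: maximising the product of per-transition match probabilities along a path is the same as minimising the sum of their negative logarithms, so the most likely path is a shortest path on edge weights -log(p). A minimal sketch of this reduction follows; the graph layout and the probabilities are invented for illustration, and a plain Dijkstra search stands in here for the full k-shortest-path machinery.

```python
import heapq
import math

def most_likely_path(graph, start, goal):
    """Shortest path under weights -log(p); returns (path, likelihood of that path).

    graph: node -> list of (neighbour, transition probability) pairs.
    """
    heap = [(0.0, start, [start])]   # (accumulated -log-likelihood, node, path so far)
    best = {start: 0.0}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return path, math.exp(-cost)
        for nxt, p in graph.get(node, []):
            nc = cost - math.log(p)  # sum of logarithms, negated for minimisation
            if nc < best.get(nxt, float("inf")):
                best[nxt] = nc
                heapq.heappush(heap, (nc, nxt, path + [nxt]))
    return None, 0.0

# Illustrative graph: two routes from A to D with different match probabilities.
graph = {"A": [("B", 0.9), ("C", 0.5)], "B": [("D", 0.8)], "C": [("D", 0.9)]}
path, likelihood = most_likely_path(graph, "A", "D")  # path is ['A', 'B', 'D']
```

Enumerating the 2-shortest, ..., k-shortest paths on the same weighted graph, as with Eppstein's algorithm, then yields the ranked alternative position hypotheses.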