# Adaptive Screen-Space Sampling

**In volume rendering, a standard approach to achieving high-quality images builds on excessive oversampling in image and object space. This results in a large number of sampling points whose evaluation is expensive, especially for large datasets. In this project we analyze an adaptive method adopted from the field of finite element methods (hp-FEM).**

# Introduction

The goal of this work is to reduce the number of sampling points for volume raycasting via error-controlled adaptive screen-space sampling. That is, the sampling frequency is increased in image regions where the color variation across space is high (e.g., at edges), and decreased in homogeneous image regions (Figure 1, Figure 4). At the heart of such methods are criteria to estimate this variation and the resulting image-space error. In contrast to previous work that also deals with adaptive image discretization, the strength of our method lies in an error estimator well founded in FEM theory.
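To make the refinement criterion concrete, the following is a minimal sketch of one simple image-space error estimate: the bilinearly interpolated color at a screen tile's center is compared against an actual sample taken there, and the tile is refined when the difference exceeds a tolerance. The function names (`needs_refinement`, `sample`) and the center-probe criterion are illustrative assumptions, not the estimator used in the project.

```python
# Hypothetical per-tile refinement test (illustrative, not the project's estimator):
# compare the bilinear prediction at a tile's center with an actual sample there.
# A large difference signals high color variation, so the tile should be refined.
def needs_refinement(sample, x0, y0, size, tol):
    c00 = sample(x0, y0)                      # four corner samples of the tile
    c10 = sample(x0 + size, y0)
    c01 = sample(x0, y0 + size)
    c11 = sample(x0 + size, y0 + size)
    center_est = 0.25 * (c00 + c10 + c01 + c11)          # bilinear value at the center
    center_true = sample(x0 + size / 2, y0 + size / 2)   # one extra probe sample
    return abs(center_true - center_est) > tol

# A smooth (in fact bilinear) field passes; a sharp edge triggers refinement.
smooth = lambda x, y: 0.1 * (x + y)
edge = lambda x, y: float(x > 0.3)
```

For `smooth`, the bilinear prediction is exact, so no refinement is requested; for `edge`, the center probe disagrees strongly with the corner average and the tile is subdivided.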

Minimizing the number of sampling points is of particular interest for rendering large datasets, where each evaluation might require an expensive decompression. Furthermore, as screen resolutions increase, our method creates high-resolution images more efficiently.

# h- vs. p- vs. hp-FEM

Assuming a function *u* that needs to be approximated, we distinguish three versions of the finite element method:

*h*-FEM: The convergence of the approximate solution is achieved by increasingly finer grids.

*p*-FEM: The convergence of the approximate solution is achieved by increasing the polynomial degree on a uniform grid of fixed-size finite elements.

*hp*-FEM: The convergence of the approximate solution is improved by combining the advantages of *h*- and *p*-adaptivity. In regions of high frequency, a fine grid and low polynomial degrees are chosen. In regions where *u* is smooth, a coarse grid with high polynomial degrees is preferable. Depending on the regularity of the function *u*, which indicates its smoothness, this leads to a method that converges algebraically, or even exponentially, to the exact solution. For analytic functions with high regularity, *p*-refinement is dominant, so that *hp*-adaptivity leads to faster convergence than *h*-adaptivity alone (Figure 2).
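The contrast between algebraic and exponential convergence can be illustrated numerically. The sketch below, under assumed choices (the analytic model function `f`, equispaced nodes with piecewise-linear interpolation for *h*-refinement, Chebyshev interpolation for *p*-refinement), measures the maximum approximation error as each strategy is refined; it is a one-dimensional illustration, not the project's FEM machinery.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

f = lambda x: np.exp(np.sin(2.0 * np.pi * x))   # smooth (analytic) model function u
xs = np.linspace(0.0, 1.0, 1001)                # grid for measuring the max error

def h_error(n):
    """Piecewise-linear interpolation on n+1 equispaced nodes (h-refinement)."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    return np.max(np.abs(f(xs) - np.interp(xs, nodes, f(nodes))))

def p_error(p):
    """One polynomial of degree p interpolating at Chebyshev nodes (p-refinement)."""
    t = np.cos((2.0 * np.arange(p + 1) + 1.0) * np.pi / (2.0 * (p + 1)))  # nodes in [-1, 1]
    coef = cheb.chebfit(t, f(0.5 * (t + 1.0)), p)          # map [-1, 1] onto [0, 1]
    return np.max(np.abs(f(xs) - cheb.chebval(2.0 * xs - 1.0, coef)))

h_errs = [h_error(n) for n in (4, 8, 16, 32)]   # algebraic decay, O(h^2)
p_errs = [p_error(p) for p in (4, 8, 16, 32)]   # (super)exponential decay
```

With the same number of degrees of freedom, the *p*-errors fall far below the *h*-errors, mirroring the exponential versus algebraic convergence discussed above for smooth *u*.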

In the case of volume rendering, the order of convergence is constrained by the smoothness of the signal that results from mapping the input data to color and opacity values. Real data often contains many discontinuities and thus hardly leads to any *p*-refinement. More sophisticated reconstruction kernels can improve the convergence behavior, as they increase the smoothness of the input data. However, high-frequency transfer functions are often desired to distinguish different materials in the final image. Consequently, *hp*-adaptivity would yield only marginal benefit for highly inhomogeneous data.

Until now, we have focused on *h*-adaptivity using bilinear interpolation for image reconstruction.
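Such *h*-adaptivity in image space can be sketched as a recursive quadtree subdivision: a tile is subdivided where the bilinear reconstruction error is estimated to be large, and otherwise kept as a leaf whose interior is filled by interpolating its corner samples. Everything here is an assumption for illustration: `shade` is a hypothetical analytic stand-in for casting one ray, and the probe-based error estimate is one simple choice, not the project's FEM-based estimator.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Hypothetical stand-in for casting one ray: an analytic screen-space color
# field with a sharp diagonal edge, so refinement concentrates along the edge.
def shade(x, y):
    return sigmoid(40.0 * (x + y - 1.0))

def render_adaptive(x0, y0, size, tol, max_depth, tiles):
    """h-adaptive quadtree subdivision with bilinear reconstruction (a sketch)."""
    c00, c10 = shade(x0, y0), shade(x0 + size, y0)
    c01, c11 = shade(x0, y0 + size), shade(x0 + size, y0 + size)
    h = size / 2.0
    # Estimate the bilinear reconstruction error at the center and edge midpoints.
    probes = [
        (x0 + h, y0 + h, 0.25 * (c00 + c10 + c01 + c11)),  # tile center
        (x0 + h, y0, 0.5 * (c00 + c10)),                   # bottom edge midpoint
        (x0 + h, y0 + size, 0.5 * (c01 + c11)),            # top edge midpoint
        (x0, y0 + h, 0.5 * (c00 + c01)),                   # left edge midpoint
        (x0 + size, y0 + h, 0.5 * (c10 + c11)),            # right edge midpoint
    ]
    err = max(abs(shade(px, py) - est) for px, py, est in probes)
    if err > tol and max_depth > 0:
        for dx in (0.0, h):                                # subdivide into 4 children
            for dy in (0.0, h):
                render_adaptive(x0 + dx, y0 + dy, h, tol, max_depth - 1, tiles)
    else:
        tiles.append((x0, y0, size))  # leaf: interior filled by bilinear interpolation

tiles = []
render_adaptive(0.0, 0.0, 1.0, 1e-3, 6, tiles)
sizes = [s for _, _, s in tiles]
```

The leaf tiles cover the image exactly once; small tiles cluster along the edge, while homogeneous regions remain coarse, which is precisely where the sampling-point savings come from.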

# Preliminary Results

To assess the efficiency of our method, we compared its convergence behavior to that of uniform image subdivision and the adaptive refinement scheme presented in *Levoy, Volume rendering by adaptive refinement, 1990*. For more details, please refer to our technical report.

# ZIB Reports

- Andrea Kratz, Jan Reininghaus, Markus Hadwiger, Ingrid Hotz. Adaptive Screen-Space Sampling for Volume Ray-Casting. ZIB-Report 11-04, 2011.