
Wedgelet approximations

Wedgelet approximations were introduced by Donoho [1] as a means to efficiently approximate piecewise constant images. Generally speaking, these approximations are obtained by adaptively partitioning the image domain into disjoint sets, and then approximating the image on each of these sets. Optimal approximations are defined by means of a functional weighing approximation error against the complexity of the decomposition. The optimization can be imagined as a jigsaw puzzle: the aim is to approximate the image by putting together pieces from a fixed set, using as few pieces as possible.

As can be imagined, the efficient computation of such an optimal approximation is a critical issue, and it depends on the particular class of partitions under consideration. Donoho proposed to use wedges, and to study the associated wedgelet approximations. For notational convenience, we fix the convention that images are understood as elements of the function space $\mathbbm{R}^I$, where $I = \{ 0, \ldots, 2^J -1 \} \times \{ 0, \ldots, 2^J-1 \}$. Other image sizes can be treated by suitable adaptation, at the cost of a more complicated notation.

The wedgelet approach can be described by a two-step procedure:

  1. Decompose the image domain $I$ into a disjoint union of wedge-shaped sets, $I = \bigcup_{w \in {\cal P}} w$.
  2. On each set $w \in {\cal P}$, approximate the image by a constant.
Here the $w$ are required to be elements of a fixed set ${\cal W}$ of wedges. A wedgelet approximation of an image $f$ associated to the regularization parameter $\gamma$ is by definition a minimizer of the functional
\begin{displaymath}
H_{\gamma,f} ( {\cal P},f_{{\cal P}} ) = \gamma \, \vert{\cal P}\vert + \Vert f - f_{{\cal P}} \Vert _2^2 \; . \eqno{(1)}
\end{displaymath}

Here $f_{{\cal P}}$ is a function which is constant on each wedge $w \in {\cal P}$, and $\Vert \cdot \Vert _2$ denotes the $\ell^2$-norm on $\mathbbm{R}^I$,

\begin{displaymath}\Vert g \Vert _2^2 = \sum_{x \in I} \vert g(x)\vert^2  .\end{displaymath}
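To make the objective concrete, the following minimal sketch (in Python; this is our own illustration, not the authors' implementation, and the representation of a partition as a list of boolean masks is an assumption made here for simplicity) evaluates the functional (1) for a given partition:

    import numpy as np

    def wedgelet_functional(f, partition, gamma):
        """Evaluate H_gamma(P, f_P) = gamma * |P| + ||f - f_P||_2^2."""
        # f: 2-d array of pixel values; partition: list of disjoint boolean
        # masks covering the image domain; gamma: regularization parameter
        f_P = np.zeros_like(f, dtype=float)
        for w in partition:           # on each piece the optimal constant
            f_P[w] = f[w].mean()      # is the mean value of f over the piece
        error = np.sum((f - f_P) ** 2)
        return gamma * len(partition) + error

Note that the sketch computes $f_{{\cal P}}$ from the partition alone, via mean values; this anticipates the observation made below that the minimization boils down to finding the optimal partition.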

The regularization parameter $\gamma$ can be interpreted as a scale: As $\gamma$ ranges from $0$ to $\infty$, the minimizer $\widehat{f}_\gamma$ of (1) runs through a stack of images, starting from the data itself (for $\gamma = 0$) and ending at a constant image (for $\gamma = \infty$).

A minimizer $(\widehat{{\cal P}}_\gamma,\widehat{f}_{\gamma} )$ of (1) can be viewed as an optimal approximation to $f$ with a prescribed complexity: If $N = \vert\widehat{{\cal P}}_\gamma\vert$, let $X_N$ denote the set of images in $\mathbbm{R}^I$ which are constant on a partition of $I$ into at most $N$ wedges. Then $\widehat{f}_{\gamma}$ is an element of $X_N$ with minimal $\ell^2$-distance to $f$.

A further useful observation is that, since we approximate by constants, $\widehat{f}_{\gamma}$ is easily obtained from the optimal partition $\widehat{{\cal P}}_\gamma$ by computing the mean value of $f$ over each element $w \in \widehat{{\cal P}}_\gamma$. Hence the minimization procedure boils down to finding the optimal partition.

This allows yet another interpretation of a minimizer $(\widehat{{\cal P}}_\gamma,\widehat{f}_{\gamma} )$: Let $\epsilon = \Vert f - \widehat{f}_{\gamma}\Vert _2$. Then, among all wedge partitions incurring an approximation error of at most $\epsilon$, $\widehat{{\cal P}}_\gamma$ is the one with the smallest number of elements.

Hence, a minimizer of (1) can either be viewed as an optimal approximation with (at most) a fixed number $N(\gamma)$ of pieces, or as a partition with a minimal number of pieces among those partitions that incur (at most) a fixed approximation error $\epsilon(\gamma)$.

Clearly the properties of this scheme depend on the precise definition of the set ${\cal W}$ of admissible wedges. One possible choice is the set ${\cal Q}$ of dyadic squares contained in $I$,

\begin{displaymath}
{\cal Q} = \{ [2^j k, 2^j(k+1)[ \; \times \; [2^j m, 2^j (m+1)[ \; : \; 0 \le j \le J ,\; 0 \le k,m < 2^{J-j} \} \; .
\end{displaymath}
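For illustration, the set ${\cal Q}$ can be enumerated as follows (a sketch with our own naming; each square is encoded by its corner coordinates and side length):

    def dyadic_squares(J):
        """List all dyadic squares of the 2^J x 2^J grid as (k0, m0, side)."""
        squares = []
        for j in range(J + 1):                  # side length 2^j
            side = 2 ** j
            for k in range(2 ** (J - j)):       # horizontal position
                for m in range(2 ** (J - j)):   # vertical position
                    squares.append((side * k, side * m, side))
        return squares

For example, $J = 2$ (a $4 \times 4$ image) yields $16 + 4 + 1 = 21$ squares; under inclusion, they are organized as a quadtree.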

Strictly speaking, the "wedges" appearing in this set do not deserve the name, and approximations based on dyadic squares have been studied for quite some time, usually under the name "quadtree decompositions". The wedgelet construction proposed by Donoho can be seen as a refinement of this: the elements of ${\cal W}$ are obtained by splitting an element $q \in {\cal Q}$ along a suitable straight line, yielding $q = w_1 \cup w_2$. For each dyadic square, the admissible lines are prescribed according to a suitable scheme (more on that in the next section). Figure 1 shows examples of optimal approximations using dyadic squares and using wedges, with the same number of pieces. Since coding a wedge requires more information per piece, the comparison is not entirely fair; but the images convey the motivation behind the construction of wedgelets, which do a much better job of resolving diagonal edges.

Figure 1: IBB North, $640 \times 480$ pixels. (a) Original image, (b) approximation using 1000 dyadic squares, and (c) approximation using 1000 wedges.
[Images: ibbnorth, ibbnorth_640x480_1000sq, ibbnorth_640x480_1000w]
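To make the splitting step concrete, the following sketch (our own illustration; the actual parametrization of the admissible lines is the subject of the next section) splits a square into two wedges along the line through two given points, classifying each pixel by the sign of a cross product:

    import numpy as np

    def wedge_split(side, p0, p1):
        """Split a side x side square along the line through p0 and p1.

        Returns boolean masks w1, w2 with w1 | w2 covering the square and
        w1 & w2 empty, i.e. a partition q = w1 u w2.
        """
        ys, xs = np.mgrid[0:side, 0:side]
        # the sign of the cross product tells on which side of the
        # line p0 -> p1 the pixel (x, y) lies
        s = (p1[0] - p0[0]) * (ys - p0[1]) - (p1[1] - p0[1]) * (xs - p0[0])
        w1 = s >= 0
        return w1, ~w1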

With these definitions, an algorithm for the minimization of (1) can be broken down into the following two steps:

  1. Local minimization: For each dyadic square $q$, determine the wedge split $q = w_1 \cup w_2$ which yields the minimal local approximation error $\Vert f_{q} - \widehat{f}_q \Vert _2^2$. Here $f_q$ is the restriction of $f$ to $q$, and $\widehat{f}_q$ is the function that assigns to each element of $w_i$ the mean value of $f$ on $w_i$. Store the optimal split and the associated approximation error. (A schematic version of this step is sketched after this list.)
  2. Global minimization: Given $\gamma$, compute the optimal wedgelet partition $\widehat{{\cal P}}_\gamma$ from the data stored in the first step.
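In schematic form, the local step might look as follows (our own sketch; candidate_splits stands for whatever line scheme is chosen, and the sums are computed naively here, whereas a fast summation technique is the subject of the remarks below):

    import numpy as np

    def squared_error(values):
        """Squared l^2 error of approximating values by their mean."""
        return float(np.sum((values - values.mean()) ** 2))

    def local_minimization(f_q, candidate_splits):
        """For one dyadic square (array f_q), find the best wedge split.

        candidate_splits: iterable of boolean masks w1 (w2 is the
        complement); both wedges are assumed to be non-empty.
        Returns (error without splitting, best split error, best mask).
        """
        err_const = squared_error(f_q)        # q kept as a single piece
        best_err, best_mask = float("inf"), None
        for w1 in candidate_splits:
            err = squared_error(f_q[w1]) + squared_error(f_q[~w1])
            if err < best_err:
                best_err, best_mask = err, w1
        return err_const, best_err, best_mask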

This way of organizing the minimization procedure is based on the following observations. We refer to [1,2] for more details.

  1. From a computational point of view, local minimization dominates the algorithm performance. We are required to compute mean values and $\ell^2$-errors for a large number of wedges, and a naive approach to this problem results in huge computation times. An effective solution of this problem is one of the main contributions of our implementation.
  2. In view of the previous remarks, it is important to note that the local minimization does not depend on the parameter $\gamma$. The quadtree structure associated to the dyadic squares allows a fast implementation of the global minimization step, so that the minimizer of (1) for an arbitrary $\gamma$ can be retrieved almost in real time; a schematic version of this step is sketched below.
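The global step can be sketched as a bottom-up dynamic program over the quadtree (again our own illustration, with hypothetical table names): for each square one compares keeping it as a single constant piece, applying its best wedge split, or recursing into its four dyadic children. Since the error tables from the local step do not depend on $\gamma$, only this cheap recursion has to be redone when $\gamma$ changes:

    def optimal_cost(q, gamma, err_const, err_wedge, children):
        """Minimal value of the functional (1) restricted to the square q.

        err_const, err_wedge: per-square error tables from the local step;
        children[q]: the four dyadic children of q, or [] for single pixels.
        """
        # one constant piece vs. the best wedge split (two pieces)
        cost = min(gamma + err_const[q], 2 * gamma + err_wedge[q])
        if children[q]:
            # or subdivide: the children's optimal partitions combine freely
            cost = min(cost, sum(optimal_cost(c, gamma, err_const,
                                              err_wedge, children)
                                 for c in children[q]))
        return cost

Keeping track of which alternative attains the minimum at each node recovers the optimal partition $\widehat{{\cal P}}_\gamma$ itself.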

