Iterated Function Systems (IFS)
- Documentation Pulled From: http://links.uwaterloo.ca/ResearchIFSFractalCoding.html
- IFS is the term originally devised by Michael Barnsley and Steven Demko [1] for a collection of contraction mappings over a complete metric space, typically compact subsets of R^n. Systems of contraction mappings had been considered previously by a number of authors for various purposes, but the landmark papers of John Hutchinson [2] and, independently, Barnsley and Demko [1] showed how such systems of mappings with associated probabilities could be used to construct fractal sets and measures: the former from a geometric measure theory setting and the latter from a probabilistic setting. For those who are not familiar with IFS, we provide a simple discussion and example a little later in this section.
- Just as important, however, was the fact that the Barnsley/Demko paper was the first to suggest that IFS could be used to approximate natural objects. This was the seed of the inverse problem of fractal approximation: Given a "target" set S, for example a leaf, can we find an IFS with attractor A (see below) that approximates S to a reasonable degree?
- In a subsequent paper, Barnsley and students [3] showed how the inverse problem of fractal approximation could be reformulated by means of the famous "Collage Theorem": instead of trying to find an IFS whose attractor A would match the target S (a very tedious and difficult problem), one can look for an IFS that maps the target S as close as possible to itself. More on this later.
Fractal Image Coding
- After [3], the next natural question was: "Can we use IFS to approximate images?" The seminal paper [4] by A. Jacquin, then a Ph.D. student of Barnsley at Georgia Tech, provided the basis of block-based fractal image coding, which is still used today. Jacquin's paper launched a period of intense activity in fractal image compression [5-7]. Indeed, in the meantime, Barnsley had left Georgia Tech to found the company Iterated Systems, which was dedicated to the commercialization of fractal image compression.
- In fractal image coding, one again tries to approximate a "target" image y, this time with the fixed point x' of a contractive fractal transform T. In general, however, this direct inverse problem is difficult because of the complex nature of the fractal transform, which maps modified subimages onto other regions. A great simplification is afforded by the Collage Theorem of Ref. [3] above: in collage coding, one looks for a transform T that minimizes the so-called collage distance d(y,Ty), as the bound below makes precise. Most, if not all, fractal image coding methods rely on collage coding.
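The reason collage coding works is a standard consequence of the Banach Fixed Point Theorem, usually stated as follows (here c in [0,1) denotes the contraction factor of T and x' its fixed point):

```latex
% Collage Theorem bound: for a contraction T with factor c and fixed point x',
% and any target y,
\[
  d(y, x') \;\le\; \frac{1}{1-c}\, d(y, Ty).
\]
% Making the collage distance d(y,Ty) small therefore forces the fixed point
% x' of T to be close to the target y.
```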
- Fractal compression methods were actually quite competitive until the appearance of powerful wavelet-based methods and, subsequently, context-based coders. Nevertheless, it has always been the philosophy of the Waterloo research programme that fractal-based methods are interesting mathematically and that they may also be able to provide useful information about images. We are indeed finding that there is much "life after compression".
- Very briefly, block-based fractal image coding is a "local IFS" procedure. Given an image I, one typically seeks to approximate its subimages on n x n range subblocks R_i with contracted (i.e., decimated) and greyscale-modified subimages on 2n x 2n domain subblocks D_j. For each range block R_i, one searches a domain pool for the best domain block D_j. The larger the domain pool, the better the opportunity for a good approximation, but at the expense of longer search times. A small sketch of this procedure follows.
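Here is a minimal sketch of collage coding along these lines, assuming a greyscale image stored as a NumPy array; the block size, the stride of the domain pool, and the affine greyscale fit u -> s*u + o are illustrative choices, not the prescription of any particular paper:

```python
import numpy as np

def decimate(block):
    """Shrink a 2n x 2n block to n x n by averaging 2x2 pixel groups."""
    return 0.25 * (block[0::2, 0::2] + block[0::2, 1::2] +
                   block[1::2, 0::2] + block[1::2, 1::2])

def fit_greyscale(d, r):
    """Least-squares fit of r ~ s*d + o; returns (s, o, squared error)."""
    d, r = d.ravel(), r.ravel()
    var = d.var()
    s = ((d - d.mean()) * (r - r.mean())).sum() / (var * d.size) if var > 0 else 0.0
    s = np.clip(s, -0.9, 0.9)          # keep the greyscale map contractive
    o = r.mean() - s * d.mean()
    err = ((s * d + o - r) ** 2).sum()
    return s, o, err

def collage_code(img, n=8):
    """For each n x n range block, search all 2n x 2n domain blocks in a
    coarse pool for the one whose decimated, greyscale-adjusted copy best
    matches it; return the chosen block positions and parameters."""
    H, W = img.shape
    # Domain pool: 2n x 2n blocks stepping by n pixels (illustrative choice).
    domains = [(i, j) for i in range(0, H - 2*n + 1, n)
                      for j in range(0, W - 2*n + 1, n)]
    code = []
    for i in range(0, H - n + 1, n):
        for j in range(0, W - n + 1, n):
            r = img[i:i+n, j:j+n]
            best = None
            for (di, dj) in domains:
                d = decimate(img[di:di+2*n, dj:dj+2*n])
                s, o, err = fit_greyscale(d, r)
                if best is None or err < best[0]:
                    best = (err, di, dj, s, o)
            code.append(((i, j), best[1:]))
    return code
```

Decoding (not shown) would iterate the stored transform from an arbitrary starting image; roughly speaking, keeping the greyscale scalings below 1 in magnitude is what makes the transform contractive, so the iterates converge to an approximation of the original.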
A Simple Introduction to IFS
- Suppose that w : X -> X is a contraction mapping of a complete metric space (X,d) to itself. Then, by the celebrated Banach Fixed Point Theorem, there is a unique point x' in X such that w(x') = x', the fixed point of w. Moreover, for any starting point x_0 in X, the sequence of points {x_n} defined by the iteration procedure x_{n+1} = w(x_n) converges to x' as n -> infinity. In other words, x' is an attractive fixed point.
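A tiny numerical illustration of this convergence (the particular contraction w(x) = x/2 + 1, with contraction factor 1/2 and fixed point x' = 2, is just a convenient example):

```python
# w is a contraction on the real line with factor 1/2;
# its unique fixed point is x' = 2, since 2 = 2/2 + 1.
def w(x):
    return x / 2 + 1

x = 17.0                 # any starting point x_0 works
for n in range(30):
    x = w(x)             # x_{n+1} = w(x_n)
print(x)                 # ~2.0: the iterates converge to the fixed point
```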
- Now suppose that we have more than one contraction mapping on X, i.e., w_i : X -> X, i = 1,...,N. Each of these maps w_i will have its own fixed point x'_i. And, of course, if we apply only one of these maps repeatedly, say w_P, then the iteration sequence {x_n} will converge to its fixed point x'_P. But what happens if you pick the maps at random? Clearly, each map will be fighting to bring the sequence to its own fixed point!
- Assuming that you pick each map w_i with a nonzero probability p_i (sum of p_i = 1), the sequence will eventually approach a unique set, the "attractor" A of the "iterated function system" W = {w_1, ... , w_N}, performing a random walk over it (or at least getting nearer and nearer to it). The probabilities will play a role in determining how often various regions of the attractor are visited.
Examples on [0,1]:
- Example 1:
w_1(x) = x/3, w_2(x) = x/3 + 2/3
Note that w_1(0) = 0 and w_2(1) = 1. Then the attractor of this IFS is the classical Cantor set on [0,1].
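A minimal chaos-game simulation of the random iteration described above, applied to this IFS (the iteration count and the equal choice probabilities are arbitrary illustrative choices):

```python
import random

# Example 1: w_1(x) = x/3, w_2(x) = x/3 + 2/3.
maps = [lambda x: x / 3, lambda x: x / 3 + 2 / 3]

x = random.random()                 # arbitrary starting point in [0,1]
points = []
for n in range(10000):
    x = random.choice(maps)(x)      # pick a map at random and apply it
    if n > 20:                      # discard transients so points lie near A
        points.append(x)

# Every retained point lies (to machine precision) in the Cantor set.
print(min(points), max(points))
```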
- Example 2:
w_1(x) = x/2, w_2(x) = x/2 + 1/2
Note again that w_1(0) = 0 and w_2(1) = 1. Then the attractor of this IFS is the interval [0,1]. If p_1 = p_2 = 1/2, then the distribution of the random-walking iterates {x_n} over the interval [0,1] is uniform. If p_1 > p_2, then there is more probability of finding the iterates in the interval [0,1/2] than in [1/2,1]. But this means that there will be more probability of finding them in [0,1/4] vs. [1/4,1/2] and more in [1/2,3/4] than [3/4,1], and so on. The result is that the visitation frequencies over small sets demonstrate a complicated fractal-like pattern.
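This effect is easy to observe numerically; a quick sketch (the choice p_1 = 0.8, p_2 = 0.2 and the eight histogram bins are arbitrary):

```python
import random

# Example 2 with unequal probabilities: p_1 = 0.8, p_2 = 0.2.
x, counts = random.random(), [0] * 8      # visitation counts on 8 equal bins
for n in range(100000):
    if random.random() < 0.8:
        x = x / 2                         # w_1, chosen with probability 0.8
    else:
        x = x / 2 + 1 / 2                 # w_2, chosen with probability 0.2
    counts[min(int(8 * x), 7)] += 1

print(counts)   # bins in [0,1/2] are visited far more often than those in
                # [1/2,1], and within each half the same 0.8/0.2 imbalance
                # repeats in miniature
```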
An example in [0,1] x [0,1]:
w_1(x,y)=(x/2,y/2), w_2(x,y)=(x/2+1/2,y/2), w_3(x,y)=(x/2,y/2+1/2)
The fixed points of these maps are, respectively, (0,0), (1,0) and (0,1). The attractor of this IFS is a "rectangular" Sierpinski gasket with corners at these fixed points.
- Note: Let us go back to Franklin Mendivil's beautiful montage of photos presented on the original page. Starting at the upper-left photo (which can be considered the "seed" set), if you scan left to right and then move down and scan left to right again, you have the first six "sets" that are approaching this "rectangular" Sierpinski gasket.
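The chaos game renders this attractor directly; a small sketch of the same random iteration in two dimensions (the matplotlib plot at the end is an optional convenience, not part of the IFS itself):

```python
import random

# The three maps of the example; their fixed points (0,0), (1,0), (0,1)
# are the corners of the "rectangular" Sierpinski gasket.
maps = [lambda x, y: (x/2,       y/2),
        lambda x, y: (x/2 + 1/2, y/2),
        lambda x, y: (x/2,       y/2 + 1/2)]

x, y, pts = 0.5, 0.5, []
for n in range(20000):
    x, y = random.choice(maps)(x, y)    # random walk over the attractor
    if n > 20:                          # skip transients
        pts.append((x, y))

# Plot (requires matplotlib):
import matplotlib.pyplot as plt
plt.scatter(*zip(*pts), s=0.1)
plt.show()
```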
- For a little more discussion on this subject, along with a quite readable introduction to fractal image coding, you may wish to look at E.R. Vrscay's Hitchhiker's Guide to Fractal Image Compression. It was written for a general audience with some undergraduate mathematical knowledge. Ref. [8] below is also a very nice and readable discussion of fractal image coding.
References
- 1. M.F. Barnsley and S. Demko, Iterated function systems and the global construction of fractals, Proc. Roy. Soc. London A399, 243-275 (1985).
- 2. J. Hutchinson, Fractals and self-similarity, Indiana Univ. Math. J. 30, 713-747 (1981).
- 3. M.F. Barnsley, V. Ervin, D. Hardin and J. Lancaster, Solution of an inverse problem for fractals and other sets, Proc. Nat. Acad. Sci. USA 83, 1975-1977 (1986).
- 4. A. Jacquin, Image coding based on a fractal theory of iterated contractive image transformations, IEEE Trans. Image Proc. 1, 18-30 (1992).
- 5. M.F. Barnsley and L.P. Hurd, Fractal Image Compression, A.K. Peters, Wellesley, Mass. (1993).
- 6. Y. Fisher, Fractal Image Compression, Theory and Application, Springer-Verlag, New York (1995).
- 7. N. Lu, Fractal Imaging, Academic Press, New York (1997).
- 8. Y. Fisher, A discussion of fractal image compression, in Chaos and Fractals: New Frontiers of Science, by H.-O. Peitgen, H. Jürgens and D. Saupe, Springer-Verlag, Heidelberg (1994).