Category Archives: Fractals

Fractals without a Computer!

This is really remarkably clever:

Since I can’t stand to just post a video without any explanation:

A fractal is a figure with a self-similar pattern: there is some way of looking at it where a piece of it looks almost the same as the whole thing. In this video, what they’ve done is set up three screens in a triangular pattern, and set them to display the input from a camera. When you point the camera at the screens, what you get is whatever the camera is seeing, repeated three times in a triangular pattern. Since what’s on the screens is what’s being seen by the camera, and what’s seen by the camera is, after a bit of delay, what’s on the screens, you’re getting a self-similar system. If you watch, they’re able to manipulate it to get Julia fractals, Sierpinski triangles, and several other really famous fractals.

It’s very cool – partly because it looks neat, but also partly because it shows you something important about fractals. We tend to think of fractals in computational terms, because in general we generate fractal images using digital computers. But you don’t need to. Fractals are actually fascinatingly ubiquitous, and you can produce them in lots of different ways – not just digitally.

Chaos: Bifurcation and Predictable Unpredictability

800px-LogisticMap_BifurcationDiagram.png

Let’s look at one of the classic chaos examples, which demonstrates just how simple a chaotic system can be. It really doesn’t take much at all to push a system from being nice and smoothly predictable to being completely crazy.

This example comes from mathematical biology, and it generates a graph commonly known as the logistic map. The question behind the graph is: how can I predict what the stable population of a particular species will be over time?

If there was an unlimited amount of food, and there were no predators, then it would be pretty easy. You’d have a pretty straightforward exponential growth curve. You’d have a constant, R, which is the growth rate. R would be determined by two factors: the rate of reproduction, and the rate of death from old age. With that number, you could put together a simple exponential curve – and presto, you’d have an accurate description of the population over time.

But reality isn’t that simple. There’s a finite amount of resources – that is, a finite amount of food for your population to consume. So there’s a maximum number of individuals that could possibly survive – if you get more than that, some will die until the population shrinks below that maximum threshold. Plus, there are factors like predators and disease, which reduce the available population of reproducing individuals. The growth rate only considers “How many children will be generated per member of the population?”; predators cull the population, which effectively reduces the growth rate. But it’s not a straightforward relationship: the number of individuals that will be consumed by predators and disease is related to the size of the population!

Modeling this reasonably well turns out to be really simple. You take the maximum population based on resources, Pmax. You then describe the population at any given point in time as a population ratio: a fraction of Pmax. So if your environment could sustain one million individuals, and the population is really 500,000, then you’d describe the population ratio as 1/2.

Now, you can describe the population at time T with a recurrence relation:

P(t+1)= R × P(t) × (1-P(t))

That simple equation isn’t perfect, but its results are impressively close to accurate. It’s good enough to be very useful for studying population growth.
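If you want to play with it yourself, the recurrence is a one-liner in any language. Here’s a minimal Python sketch (the growth rate and starting ratio below are arbitrary choices of mine, just for illustration):

```python
def logistic_step(p, r):
    """One step of the recurrence: P(t+1) = R * P(t) * (1 - P(t))."""
    return r * p * (1 - p)

def simulate(r, p0=0.5, steps=50):
    """Iterate the logistic map and return the trajectory of population ratios."""
    trajectory = [p0]
    for _ in range(steps):
        trajectory.append(logistic_step(trajectory[-1], r))
    return trajectory

# For R = 2.5, the population settles down to the single stable
# value 1 - 1/R = 0.6, no matter where in (0,1) it starts.
print(simulate(2.5)[-1])  # ≈ 0.6
```

For values of R below 1, the same code shows the population collapsing to zero.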

So, what happens when you look at the behavior of that function as you vary R? You find that below a certain threshold value, it falls to zero. Cross that threshold, and you get a nice increasing curve, which is roughly what you’d expect. Up until you hit R=3. Then it splits, and you get an oscillation between two different values. If you keep increasing R, it will split again – your population will oscillate between 4 different values. A bit farther, and it will split again, to eight values. And then things start getting really wacky – because the curves converge on one another, and even start to overlap: you’ve reached chaos territory. On a graph of the function, at that point, the graph becomes a black blur, and things become almost completely unpredictable. It looks like the beautiful diagram at the top of this post that I copied from wikipedia (it’s much more detailed than anything I could create on my own).
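You can watch the period-doubling happen numerically. Here’s a little Python sketch (the particular R values are my choices, picked to sit safely inside each regime): it iterates past the transient, then counts how many distinct values the population keeps cycling through.

```python
def attractor_values(r, p0=0.5, transient=1000, sample=64):
    """Run the logistic map past its transient, then collect the
    (rounded) values that the population keeps cycling through."""
    p = p0
    for _ in range(transient):
        p = r * p * (1 - p)
    seen = set()
    for _ in range(sample):
        p = r * p * (1 - p)
        seen.add(round(p, 6))
    return sorted(seen)

# One stable value, then 2, then 4, then chaos:
for r in (2.9, 3.2, 3.5, 3.9):
    print(r, len(attractor_values(r)))
```

At R=3.9, the count is essentially as large as the number of samples: the population never settles into a cycle at all.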

But here’s where it gets really amazing.

Take a look at that graph. You can see that it looks fractal. With a graph like that, we can look for something called a self-similarity scaling factor. The idea of an SS-scaling factor is that we’ve got a system with strong self-similarity. If we scale the graph up or down, what’s the scaling factor at which a scaled version of the graph will exactly overlap with the un-scaled graph?

For this population curve, the SSSF turns out to be about 4.669.

What’s the SSSF for the Mandelbrot set? 4.669.

In fact, the SSSF for nearly all of the bifurcating systems that we see, and their related fractals, is the same: approximately 4.669. There’s a basic structure which underlies all systems of this sort.

What’s this sort? Basically, it’s a dynamical system with a quadratic maximum. In other words, if you look at the recurrence relation for the dynamical system, it’s got a quadratic factor, and it’s got a maximum value. The equation for our population system can be written P(t+1) = R×P(t) - R×P(t)², which is obviously quadratic, and it always produces a value between zero and one, so it’s got a fixed maximum. Pick any chaotic dynamical system with a quadratic maximum, and you’ll find this constant in it: any dynamical system with those properties will have a recurrence structure with a scaling factor of 4.669.

That number, 4.669, is called the Feigenbaum constant, after Mitchell Feigenbaum, who first discovered it. Most people believe that it’s a transcendental number, but no one is sure! We’re not really sure of quite where the number comes from, which makes it difficult to determine whether or not it’s really transcendental!

But it’s damned useful. By knowing that a system is subject to recurrence at a rate determined by Feigenbaum’s constant, we know exactly when that system will become chaotic. We don’t need to continue to observe it as it scales up to see when the system will go chaotic – we can predict exactly when it will happen just by virtue of the structure of the system. Feigenbaum’s constant predictably tells us when a system will become unpredictable.
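You can check the number yourself. The period-doubling points of the logistic map are well known (the r values below are the commonly quoted ones from the literature, not something computed here), and the ratios of successive bifurcation intervals close in on Feigenbaum’s constant:

```python
# Commonly quoted period-doubling points of the logistic map:
# period 2 appears at r=3, period 4 at ~3.449490, and so on.
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

# Ratio of each bifurcation interval to the next one.
ratios = [(r[i] - r[i - 1]) / (r[i + 1] - r[i]) for i in range(1, len(r) - 1)]
print(ratios)  # the ratios close in on ~4.669
```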

Strange Attractors and the Structure of Chaos

sage0-1.png

Sorry for the slowness of the blog; I fell behind in writing my book, which is on a rather strict schedule, and until I got close to catching up, I didn’t have time to do the research necessary to write the next chaos article. (And no one has sent me any particularly interesting bad math, so I haven’t had anything to use for a quick rip.)

Anyway… Where we left off last was talking about attractors. The natural question is, why do we really care about attractors when we’re talking about chaos? That’s a question which has two different answers.

First, attractors provide an interesting way of looking at chaos. If you look at a chaotic system with an attractor, it gives you a way of understanding the chaos. If you start with a point in the attractor basin of the system, and then plot it over time, you’ll get a trace that shows you the shape of the attractor – and by doing that, you get a nice view of the structure of the system.

Second, chaotic attractors are strange. In fact, that’s their name – strange attractors: a strange attractor is an attractor whose structure has fractal dimension, and most chaotic systems have fractal-dimension attractors.

Let’s go back to the first answer, to look at it in a bit more depth. Why do we want to look in the basin in order to find the structure of the chaotic system?

If you pick a point in the attractor itself, there’s no guarantee of what it’s going to do. It might jump around inside the attractor randomly; it might be a fixed point which just sits in one place and never moves. But there’s no straightforward way of figuring out what the attractor looks like starting from a point inside of it. To return to (and strain horribly) the metaphor I used in the last post, the attractor is the part of the black hole past the event horizon: nothing inside of it can tell you anything about what it looks like from the outside. What happens inside of a black hole? How are the things that were dragged into it moving around relative to one another, or are they moving around? We can’t really tell from the outside.

But the basin is a different matter. If you start at a point in the attractor basin, you’ve got something that’s basically orbital. You know that every path starting from a point in the basin will, over time, get arbitrarily close to the attractor. It will circle and cycle around. It’s never going to escape from that area around the attractor – it’s doomed to approach it. So if you start at a point in the basin around a strange attractor, you’ll get a path that tells you something about the attractor.

Attractors can also vividly demonstrate something else about chaotic systems: they’re not necessarily chaotic everywhere. Lots of systems have the potential for chaos: that is, they’ve got sub-regions of their phase-space where they behave chaotically, but they also have regions where they don’t. Gravitational dynamics is a pretty good example of that: there are plenty of N-body systems that are pretty much stable. We can computationally roll back the history of the major bodies in our solar system for hundreds of millions of years, and still have extremely accurate descriptions of where things were. But there are regions of the phase space of an N-body system where it’s chaotic. And those regions are the attractors and attractor basins of strange attractors in the phase space.

A beautiful example of this is the first well-studied strange attractor. The guy who invented chaos theory as we know it was named Edward Lorenz. He was a meteorologist who was studying weather using computational fluid flow. He’d implemented a simulation, and when he tried to reproduce a computation but accidentally entered less precise values for the starting conditions, he got dramatically different results. Puzzling out why, he laid the foundations of chaos theory. In the course of studying it, he took the particular equations that he was using in the original simulation, and tried to simplify them to get the simplest system that he could that still showed the non-linear behavior.

The result is one of the most well-known images of modern math: the Lorenz attractor. It’s sort of a bent figure-eight. Its dimensionality isn’t (to my knowledge) known precisely – but it’s a hair above two (the best estimate I could find in a quick search was in the 2.08 range). It’s not a particularly complex system – but it’s fascinating. If you look at the paths in the Lorenz attractor, you’ll see that things follow an orbital path – but there’s no good way to tell when two paths that are very close together will suddenly diverge, and one will pass on the far inside of the attractor basin, and the other will fly to the outer edge. You can’t watch a simulation for long without seeing that happen.
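If you don’t want to install anything, here’s a self-contained Python sketch of the Lorenz system (a basic fourth-order Runge-Kutta integrator with the classic parameter values; the step size and starting points are arbitrary choices of mine). It shows the sensitivity directly: two starting points that differ in the eighth decimal place end up in completely different parts of the attractor.

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step."""
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(nudge(state, k1, dt / 2))
    k3 = lorenz(nudge(state, k2, dt / 2))
    k4 = lorenz(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def trajectory(start, dt=0.01, steps=4000):
    points = [start]
    for _ in range(steps):
        points.append(rk4_step(points[-1], dt))
    return points

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0 + 1e-8, 1.0, 1.0))  # differs in the 8th decimal place
separation = [sum((p - q) ** 2 for p, q in zip(x, y)) ** 0.5
              for x, y in zip(a, b)]
print(separation[0], max(separation))  # microscopic at first, huge later
```

Plot the x and z coordinates of either trajectory and you get the famous bent figure-eight.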

While searching for information about this kind of stuff, I came across a wonderful demo, which relates to something else that I promised to write about. There’s a fantastic open-source mathematical software system called Sage. Sage is sort of like Mathematica, but open-source and based on Python. It’s a really wonderful system, which I really will write about at some point. On the Sage blog, they posted a simple Sage program for drawing the Lorenz attractor. Follow that link, and you can see the code, and experiment with different parameters. It’s a wonderful way to get a real sense of it. The image at the top of this post was generated by that Sage program, with tweaked parameters.

Fast Arithmetic and Fractals

As pointed out by a commenter, there are some really surprising places where fractal patterns can
appear. For example, there was a recent post on the Wolfram mathematica blog by the engineer who writes
the unlimited precision integer arithmetic code.


Fractal Applications: Logistic Maps and Chaos

In the course of the series of posts I’ve been writing on fractals, several people have either emailed or commented, saying something along the lines of “Yeah, that fractal stuff is cool – but what is it good for? Does it do anything other than make pretty pictures?”

That’s a very good question. So today, I’m going to show you an example of a real fractal that
has meaningful applications as a model of real phenomena. It’s called the logistic map.


Fractal Mountains

When you mention fractals, one of the things that immediately comes to mind for most people
is fractal landscapes. We’ve all seen amazing images of mountain ranges, planets, lakes, and things
of that sort that were generated by fractals.

mount2.gif

Seeing a fractal image of a mountain, like the one in this image (which I found
here via a google image search for “fractal mountain”), I expected to find that
it was based on an extremely complicated fractal. But the amazing thing about fractals is how
complexity emerges from simplicity. The basic process for generating a fractal mountain – and many other elements of fractal landscapes – is astonishingly simple.


The Julia Set Fractals

julia2.jpeg

Aside from the Mandelbrot set, the most famous fractals are the Julia sets. You’ve almost definitely seen images of the Julias (like the ones scattered through this post), but what you might not have realized is just how closely related the Julia sets are to the Mandelbrot set.


Fractal Dimension

pink-carpet.png

One of the most fundamental properties of fractals that we’ve mostly avoided so far is the idea of dimension. I mentioned that one of the basic properties of fractals is that their Hausdorff dimension is
larger than their simple topological dimension. But so far, I haven’t explained how to figure out the
Hausdorff dimension of a fractal.

When we’re talking about fractals, the notion of dimension is tricky. There are a variety of different
ways of defining the dimension of a fractal: there’s the Hausdorff dimension; the box-counting dimension; the correlation dimension; and a variety of others. I’m going to talk about the fractal dimension, which is
a simplification of the Hausdorff dimension. If you want to see the full technical definition of
the Hausdorff dimension, I wrote about it in one of my topology posts.


The Sierpinski Gasket by Affine

sier-shear.png

So, in my last post, I promised to explain how the chaos game is an attractor for the Sierpinski triangle. It’s actually pretty simple. First, though, we’ll introduce the idea of an affine transformation. Affine transformations aren’t strictly necessary for understanding the Chaos game, but understanding the Chaos game in terms of affines makes it easier to understand other attractors.
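For anyone who hasn’t seen the chaos game itself, it fits in a few lines of Python. This is just a bare-bones sketch of the game as usually described (the triangle coordinates and sample count are my own arbitrary choices): pick a random starting point, then repeatedly jump halfway toward a randomly chosen vertex.

```python
import random

# Corners of the triangle that the game is played on.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]

def chaos_game(n=10000, seed=42):
    """Jump halfway toward a random vertex, over and over.
    After a brief transient, the points trace out the Sierpinski triangle."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    points = []
    for _ in range(n):
        vx, vy = rng.choice(VERTICES)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

points = chaos_game()
```

Plot those points (discarding the first few dozen) and the gasket appears.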


Iterated Function Systems and Attractors

Most of the fractals that I’ve written about so far – including all of the L-system fractals – are
examples of something called iterated function systems. Speaking informally, an iterated function
system is one where you have a transformation function which you apply repeatedly. Most iterated
function systems work in a contracting mode, where the function is repeatedly applied to smaller
and smaller pieces of the original set.
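To make that concrete, here’s a minimal Python sketch of a contracting iterated function system. The three maps below (my choice of coordinates) each shrink the plane by half toward one corner of a triangle; applying all of them, repeatedly, to any starting set converges on the Sierpinski triangle.

```python
# Three affine contractions, each shrinking by 1/2 toward a triangle corner.
MAPS = [
    lambda p: (p[0] / 2, p[1] / 2),
    lambda p: (p[0] / 2 + 0.5, p[1] / 2),
    lambda p: (p[0] / 2 + 0.25, p[1] / 2 + 0.5),
]

def iterate_ifs(points, depth):
    """Apply every map to every point, `depth` times over."""
    for _ in range(depth):
        points = [m(p) for p in points for m in MAPS]
    return points

# Starting from a single point, depth n gives 3^n points of the gasket.
cells = iterate_ifs([(0.0, 0.0)], 5)
print(len(cells))  # 243
```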

There’s another very interesting way of describing these fractals, and I find it very surprising
that it’s equivalent. It’s the idea of an attractor. An attractor is a shape which a dynamical system, no matter what its starting point, will always evolve towards. Even if you perturb the dynamical system, up to some point, the perturbation will fade away over time, and the system will continue to evolve toward the same target.


L-System Fractals

fb.gif

In the post about Koch curves, I talked about how a grammar-rewrite system could be used to describe fractals. There’s a bit more to the grammar idea than I originally suggested. There’s something called an L-system (short for Lindenmayer system, after Aristid Lindenmayer, who invented it for describing the growth patterns of plants), a variant of the Thue grammar which is extremely useful for generating a wide range of interesting fractals – for describing plant growth, turbulence patterns, and lots of other things.
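The rewriting machinery itself is tiny. Here’s a sketch of a parallel string-rewriting step in Python, using the standard Koch-curve rules as the example (F means “draw forward”; + and - are turns):

```python
def rewrite(axiom, rules, steps):
    """Apply an L-system's rules to every symbol in parallel, `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# The Koch curve as an L-system: every segment becomes four segments.
print(rewrite("F", {"F": "F+F--F+F"}, 1))           # F+F--F+F
print(rewrite("F", {"F": "F+F--F+F"}, 2).count("F"))  # 16
```

Feed the resulting string to a turtle-graphics interpreter and you get the curve itself.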


Fractal Dust and Noise

cantor-dust-tri.png

While reading Mandelbrot’s text on fractals, I found something that surprised me: a relationship between Shannon’s information theory and fractals. Thinking about it a bit, it’s not really that surprising; in fact, it’s more surprising that I’ve managed to read so much about information theory without encountering the fractal nature of noise in more than a cursory way. But noise in a communication channel is fractal – and relates to one of the earliest pathological fractal sets: Cantor’s set, which Mandelbrot elegantly terms “Cantor’s dust”. Since I find that a wonderful, almost poetic way of describing it, I’ll adopt Mandelbrot’s terminology.

Cantor’s dust is a pathological set. It’s caused no small amount of consternation among mathematicians and physicists who find it too strange, too bizarre to accept as anything more than an extreme artifact of logic in the realm of pure math. But it’s a very simple thing. To make it, you start with a line segment. Cut it into three identical parts. Erase the middle one. Then repeat the cutting into thirds and erasing the middle on each of the two remaining segments – and then on the segments remaining from that, and so on. The diagram below shows a few steps in the construction of the Cantor dust.

cantor-dust.png

Why should it be called a dust? Because geometrically, in the limit, it’s got to be a set of completely disconnected points – a scattering of dust across the original line-segment.
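The construction is easy to mimic exactly in code. Here’s a sketch using exact rational arithmetic, so no precision is lost as the segments shrink:

```python
from fractions import Fraction

def cantor_step(segments):
    """Cut each segment into thirds and erase the middle third."""
    out = []
    for a, b in segments:
        third = (b - a) / 3
        out.append((a, a + third))
        out.append((b - third, b))
    return out

segments = [(Fraction(0), Fraction(1))]
for _ in range(4):
    segments = cantor_step(segments)

print(len(segments))                    # 2^4 = 16 segments remain
print(sum(b - a for a, b in segments))  # total length (2/3)^4 = 16/81
```

The total length shrinks by 2/3 at every step, so in the limit it goes to zero – while the number of separate pieces doubles forever.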

It’s so simple – what is pathological about it? Well, it’s clearly 0-dimensional in the pure sense – as I said above, it’s just a collection of points. Topologically, it’s a set of points with empty neighborhoods. And yet – look at it. It’s clearly not zero-dimensional. It’s got a 1-dimensional geometric structure. But naive topology insists that it doesn’t. But there’s worse to it. It’s a similar problem to what we saw in the space-filling curves. Clearly, the dust disappears into nothingness. Every part of it has zero length – it seems like it must converge to something very close to the empty set. And yet it doesn’t: the set of points in the Cantor dust has the same cardinality as the set of points in the original line.

For those of us who came of age as math geeks in the late 20th century, this doesn’t really seem that strange at all. But you’ve got to remember: the 20th century was a time of great turmoil in math. At the beginning of the century, the great work was solving mathematics: turning all of math into a glorious, perfect, clean, rational, elegant edifice. The common belief of the time was that math was beautiful and perfect – that while the real world might be ugly, might have all sorts of imperfections and irrationalities, those real-world flaws could never touch the realm of pure math: math was, in the words of one famous mathematician, “the perfect mind of God”. And then came the crash: the ramifications of Cantor’s set theory, Gödel’s incompleteness, Church and Turing’s uncomputability, fractals, Chaitin’s strange numbers… The edifice collapsed; math was as flawed, imperfect, and incomplete as anything else in the world. It was hugely traumatic, and there was (and in some circles still is) a great deal of resistance to the idea that so much irregularity or even irrationality was a part of the world of math – that the part of math that we can really grasp and use is just an infinitesimal part of the monstrous world of what really exists in our abstractions.

But getting back to the point at hand: what does Cantor’s dust have to do with information and noise?

Imagine that you’re listening to sound through a telephone wire with incredibly precise recording equipment. You’re sending a perfectly clear sine-wave over the line, in order to see how much noise there is.

You start pretty high – you only want to record noises that exceed, say, 20% of the amplitude of the basic sine-wave. You wind up with a pattern of bursts of noise. Those noises are scattered around, temporally. Now, mark off every time period greater than 5 minutes where there is no noise. Those are gaps in the noise – the largest gaps that you’re going to look at. Now look in the bursts of noise – that is, the periods of time where there was no gap in the noise longer than 5 minutes. Look for periods of 1 minute where there wasn’t any noise. In between the 5 minute gaps, you’ll get a collection of smaller 1 minute gaps, separated by smaller bursts of noise. Then look into those 1 minute gaps, for 10 second periods with no noise – and you’ll break the bursts of noise up further, into bursts longer than 10 seconds, but shorter than a minute. Keep doing that, and eventually, you’ll run out of noise. But turn down your noise threshold so that you can hear noise of a smaller amplitude, and you can find more noise, and more gaps, breaking up the bursts.

If you look at the distribution of noise, one thing you’ll notice is that the levels are independent: the length of the longest gaps has no relation to the frequency of smaller gaps between them. And the other thing you’ll notice is that the frequency of gaps is self-similar: the distribution of long gaps relative to sections of the recording of long length are the same as the distribution of short gaps relative to shorter sections of the recording. The noise distribution is fractal! In fact, it’s pretty much a slightly randomized version of Cantor’s dust.

Understanding the structure of noise isn’t just interesting in the abstract: it provides a necessary piece of knowledge, which is used regularly by communication engineers to determine the necessary properties of a communication channel in order to ensure proper transmission and storage of information. Recognizing the fractal nature of noise makes it possible to better predict the properties of that channel, and determine how much information we can safely pump through it, and how much redundancy we need to add to the information to prevent data loss.

Fractal Pathology: Peano’s Space Filling Curve

colored-hilbert.jpg

One of the strangest things in fractals, at least to me, is the idea of space filling curves. A space filling curve is a curve constructed using a Koch-like replacement method, but instead of being self-avoiding, in the limit it passes through every point of a two-dimensional region.

What’s so strange about these things is that they start out as a non-self-contacting curve. Through further steps in the construction process, they get closer and closer to self-contacting, without touching. But in the limit, when the construction process is complete, you have a filled square.

Why is that so odd? Because you’ve basically taken a one-dimensional thing – a line – with no width at all – and by bending it enough times, you’ve wound up with a two-dimensional figure. This isn’t just odd to me – this was considered a crisis by many mathematicians – it seems to break some of the fundamental assumptions of geometry: how did we get width from something with no width? It’s nonsensical!


Fractal Curves and Coastlines

Von_Koch_curve.gif

I just finally got my copy of Mandelbrot’s book on fractals. In his discussion of curve fractals (that is, fractals formed from an unbroken line, isomorphic to the interval (0,1)), he describes them in terms of shorelines rather than borders. I’ve got to admit
that his metaphor is better than mine, and I’ll adopt it for this post.

In my last post, I discussed the idea of how a border (or, better, a shoreline) has
a kind of fractal structure. It’s jagged, and the jags themselves have jagged edges, and *those* jags have jagged edges, and so on. Today, I’m going to show a bit of how to
generate curve fractals with that kind of structure.


Fractal Borders

Part of what makes fractals so fascinating is that in addition to being beautiful, they also describe real things – they’re genuinely useful and important for helping us to describe and understand the world around us. A great example of this is maps and measurement.

Suppose you want to measure the length of the border between Portugal and Spain. How long is it? You’d think that that’s a straightforward question, wouldn’t you?

It’s not. Spain and Portugal have a natural border, defined by geography. And in Portuguese books, the length of that border has been measured as more than 20% longer than it has in Spanish books. This difference has nothing to do with border conflicts or disagreements about where the border lies. The difference comes from the structure of the border, and the way that it gets measured.

Natural structures don’t measure the way that we might like them to. Imagine that you walked the border between Portugal and Spain using a pair of chained flags like the ones they use to mark the downs in football – so you’d be measuring the border in 10 yard line segments. You’ll get one measure of the length of the border; we’ll call it L_yards.

Now, imagine that you did the same thing, but instead of using 10 yard segments, you used 10 foot segments – that is, segments 1/3 the length. You won’t get the same length; you’ll get a different length, L_feet.

Then do it again, but with a rope 10 inches long. You’ll get a *third* length, L_inches.

L_inches will be greater than L_feet, which will be greater than L_yards.

border.jpg

The problem is that the border isn’t smooth, it isn’t a differentiable curve. As you move to progressively smaller scales, the border reveals progressively smaller features. At a 10 mile scale, you’ll be looking at features like valleys, rivers, cliffs, etc, and defining the precise border in terms of those. But when you go to the ten-yard scale, you’ll find that the valleys divide into foothills, and the border line should wind between hills. Get down to the ten-foot scale, and you’ll start noticing boulders, jags in the lines, twists in the river. Go down to the 10-inch scale, and you’ll start noticing rocks, jagged shapes. By this point, rivers will have ceased to appear as lines; they’ll be wide bands, and if you want to find the middle, you’ll need to look at the shapes of the banks, which are irregular and jagged down to the millimeter scale. The diagram above shows a simple example of what I mean – it starts with a real clip taken from a map of the border, and then shows two possible zooms of that showing more detail at smaller scales.
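You can see the same effect with a purely synthetic shoreline. Here’s a Python sketch using the Koch curve as a stand-in for a border: every refinement adds smaller jags, and the measured length grows by a factor of 4/3 each time, without bound.

```python
import cmath

def koch_refine(points):
    """Replace every segment of a polyline with the four shorter
    segments of the Koch construction (a 60-degree bump in the middle)."""
    out = [points[0]]
    for a, b in zip(points, points[1:]):
        d = (b - a) / 3
        peak = a + d + d * cmath.exp(1j * cmath.pi / 3)
        out.extend([a + d, peak, a + 2 * d, b])
    return out

def length(points):
    return sum(abs(b - a) for a, b in zip(points, points[1:]))

pts = [0 + 0j, 1 + 0j]  # start with a straight 'border' of length 1
for depth in range(1, 6):
    pts = koch_refine(pts)
    print(depth, length(pts))  # length is (4/3)^depth: it keeps growing
```

The finer your ruler, the more jags you follow, and the longer the “border” gets – exactly the Spain-and-Portugal problem.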

The border is fractal. If you try to measure its dimension, topologically, it’s one-dimensional – the line of the border. But if you look at its dimension metrically, and compute its Hausdorff dimension, you’ll find that it’s not 2, but it’s a lot more than 1.

Shapes like this really are fractal. To give you an idea – which of the two photos below is real, and which is generated using a fractal equation?

sharpestp.jpg
arizona4_640.jpg

The Mandelbrot Set

800px-Mandelbrot_set_with_coloured_environment.png

The most well-known of the fractals is the infamous Mandelbrot set. It’s one of the first things that was really studied as a fractal. It was discovered by Benoit Mandelbrot during his early study of fractals in the context of the complex dynamics of quadratic polynomials in the 1980s, and studied in greater detail by Douady and Hubbard in the early to mid-80s.

It’s a beautiful example of what makes fractals so attractive to us: it’s got an extremely simple definition; an incredibly complex structure; and it’s a rich source of amazing, beautiful images. It’s also been glommed onto by an amazing number of woo-meisters, who babble on about how it represents “fractal energies” – “fractal” has become a woo-term almost as prevalent as “quantum”, and every woo-site that babbles about fractals invariably uses an image of the Mandelbrot set. It’s also become a magnet for artists – the beauty of its structure, coming from a simple bit of math, captures the interest of quite a lot of folks. Two musical examples are Jonathan Coulton and the post-rock band “Mandelbrot Set”. (If you like post-rock, I definitely recommend checking out MS; and a player for the brilliant Mandelbrot Set song is embedded below.)

So what is the Mandelbrot set?

overall-mandelbrot.gif

Take the set of functions f_C(x) = x^2 + C, where for each f_C, C is a complex constant. That gives an infinite set of simple functions over the complex numbers. For each possible complex number C, you look at the recurrence relation generated by repeatedly applying f_C, starting with x=0:

  1. m(0,C)=f_C(0)
  2. m(i+1,C)=f_C(m(i, C))

If m(i,C) doesn’t diverge (escape) towards infinity as i gets larger, then the complex number C is a member of the Mandelbrot set. That’s it: that simple definition – repeatedly applying f(x)=x^2 + C to complex numbers – produces the astonishing complexity of the Mandelbrot set.
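The definition translates almost verbatim into code. Here’s a minimal sketch; the only practical concession is an iteration cap, since we can’t check “never escapes” in finite time (the test |z| > 2 works because once the orbit gets past 2, it’s guaranteed to diverge):

```python
def escape_count(c, max_iter=200):
    """Iterate z -> z^2 + c from z=0; return the step at which |z| exceeds 2,
    or max_iter if the orbit stays bounded for the whole budget."""
    z = 0
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # past 2, the orbit is guaranteed to diverge
            return i
    return max_iter

def in_mandelbrot(c, max_iter=200):
    return escape_count(c, max_iter) == max_iter

print(in_mandelbrot(0))    # True: 0 -> 0 -> 0 -> ...
print(in_mandelbrot(-1))   # True: the orbit cycles 0, -1, 0, -1, ...
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... escapes
```

That escape_count number is also exactly what the colored-band renderings use: each band is a set of points with the same escape count.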

If we use that definition of the Mandelbrot set, and draw the members of the set in black, we get an image like the one above. That’s nice, but it’s probably not what you expected. We’re all used to the beautiful colored bands and auras around that basic pointy black blob. Those colored regions are not really part of the set.

Mandelbrot1.png

The way we get the colored bands is by considering *how long* it takes for the points to start to diverge. Each color band is an escape interval – that is, some measure of how many iterations it takes for the repeated application of f(x) to diverge. Images like the ones to the right and below are generated using various variants of escape-interval colorings.

images-1.jpg

images-2.jpg

images.jpg

My personal favorite rendering of the Mandelbrot set is an image called the Buddhabrot. In the Buddhabrot, what you do is look at values of C which *aren’t* in the Mandelbrot set. For each point m(i,C) before it escapes, plot a point. That gives you the escape path for the value C. If you take a large number of escape paths for randomly selected values of C, and you plot them so that the brightness of a pixel is determined by the number of escape paths that cross that pixel, you get the Buddhabrot. It’s fascinating because it reveals the structure in a particularly amazing way. If you look at a simple unzoomed image of the Mandelbrot set, what you see is a spiky black blob; the actual complexity of the structure isn’t obvious until you spend some time looking at it. The Buddhabrot is more obvious – you can see the astonishing complexity much more easily.

600px-Buddhabrot-deep.jpg
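Here’s a rough sketch of the Buddhabrot idea in Python – a tiny grid and a small sample count, just to show the path-accumulation mechanism (a real rendering uses millions of samples and a far finer grid; the sampling window and sizes here are my own arbitrary choices):

```python
import random

def escape_path(c, max_iter=200):
    """Return the orbit of 0 under z -> z^2 + c if it escapes, else None."""
    z, path = 0, []
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return path
        path.append(z)
    return None  # stayed bounded: c is (probably) in the set

def buddhabrot(samples=20000, size=64, seed=1):
    """Accumulate the escape paths of random points into a brightness grid."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    for _ in range(samples):
        c = complex(rng.uniform(-2, 1), rng.uniform(-1.5, 1.5))
        path = escape_path(c)
        if path is None:
            continue  # points in the set contribute nothing
        for z in path:
            col = int((z.real + 2) / 3 * size)
            row = int((z.imag + 1.5) / 3 * size)
            if 0 <= col < size and 0 <= row < size:
                grid[row][col] += 1
    return grid

grid = buddhabrot()
```

Map each cell’s count to a pixel brightness and the ghostly seated-figure shape starts to emerge.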

An Introduction to Fractals

gasket.jpg

I thought in addition to the graph theory (which I’m enjoying writing, but doesn’t seem to be all that popular), I’d also try doing some writing about fractals. I know pretty much nothing about fractals, but I’ve wanted to learn about them for a while, and one of the advantages of having this blog is that it gives me an excuse to learn about things that interest me so that I can write about them.

Fractals are amazing things. They can be beautiful: everyone has seen beautiful fractal images – like the ones posted by my fellow SBer Karmen. And they’re also useful: there are a lot of phenomena in nature that seem to involve fractal structures.

But what is a fractal?

The word is a contraction of fractional dimension. The idea is that there are several different ways of measuring the dimensionality of a structure using topology. The structures that we call fractals are things that have a kind of fine structure that gives them a strange kind of dimensionality; their conventional topological dimension is smaller than their Hausdorff dimension. (You can look up details of what topological dimension and Hausdorff dimension mean in one of my topology articles.) The details aren’t all that important here: the key thing to understand is that a fractal is a structure that breaks the usual concept of dimension: its shape has aspects that suggest higher dimensions. The Sierpinski carpet, for example, is topologically one-dimensional. But if you look at it, you have a clear sense of a two-dimensional figure.

carpet.jpg

That’s all frightfully abstract. Let’s take a look at one of the simplest fractals. This is called Sierpinski’s carpet. There’s a picture of a finite approximation of it over to the right. The way that you generate this fractal is to take a square. Divide the square into 9 sub-squares, and remove the center one. Then take each of the 8 squares around the edges, and do the same thing to them: break them into 9, remove the center, then repeat on the even smaller squares. Do that an infinite number of times.

When you look at the carpet, you probably think it looks two dimensional. But topologically, it is a one-dimensional space. The “edges” of the resulting figure are infinitely narrow – they have no width that needs a second dimension to describe. The whole thing is an infinitely complicated structure of lines: the total area covered by the carpet is 0! Since it’s just lines, topologically, it’s one-dimensional.

In fact, it is more than just a one dimensional shape; what it is is a kind of canonical one dimensional shape: any one-dimensional space is topologically equivalent (homeomorphic) to a subset of the carpet.

But when we look at it, we can see it has a clear structure in two dimensions. In fact, it’s a structure which really can’t be described as one-dimensional – we defined it by cutting finite sized pieces from a square, which is a 2-dimensional figure. It isn’t really two dimensional; it isn’t really one dimensional. The best way of describing it is by its Hausdorff dimension, which is 1.89. So it’s almost, but not quite, two dimensional.
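Both of those claims – the vanishing area and the dimension of 1.89 – are quick computations. Each construction step keeps 8 of the 9 sub-squares, each scaled down by 1/3:

```python
from math import log

def carpet_area(steps):
    """Area left after `steps` subdivisions: each step keeps 8/9 of what remains."""
    return (8 / 9) ** steps

# 8 self-similar copies at scale 1/3 gives Hausdorff dimension log 8 / log 3.
print(log(8) / log(3))  # ≈ 1.8928
print(carpet_area(50))  # the area heads to zero
```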

Sierpinski’s carpet is a very typical fractal; it’s got the traits that we use to identify fractals, which are the following:

  1. Self-similarity: a fractal has a structure that repeats itself on ever smaller scales. In the case of the carpet, you can take any non-blank square, and it’s exactly the same as a smaller version of the entire carpet.
  2. Fine structure: a fractal has a fine structure at arbitrarily small scales. In the case of the carpet, no matter how small you get, it’s always got even smaller subdivisions.
  3. Fractional dimension: its Hausdorff dimension is not an integer. Its Hausdorff dimension is also usually larger than its topological dimension. Again looking at the carpet, its topological dimension is 1; its Hausdorff dimension is 1.89.