Composition Algorithm
One of the most important (but also most complex) factors in graphic design is composition: is the image as a whole dynamic or static? Does it express tension or tranquility? Is it trendy or old-fashioned?
We don't have a clear solution for composition at this moment; a few things are getting jumbled up that cannot seem to be separated from each other. There is composition by itself, but also the elements in a composition and the relations between those elements - which is apparently something quite different from composition.
We have three research threads going:
A mediator imposes a grid on the canvas. This is the most straightforward solution. All elements in the composition get stuffed into the grid. The grid allocates more space for important or big elements (we know if elements are big or small by mapping language to formal parameters, in the same fashion as Prism), and as such is growing, or dynamic. However, this makes self-reflection almost impossible, since the elements have no way of discerning for themselves whether they are happy.
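A minimal sketch of the grid-mediator idea, assuming a one-dimensional grid of horizontal bands; the element names and "importance" weights are hypothetical, not part of any existing API:

```python
# The mediator divides the canvas into bands, one per element,
# giving important elements proportionally more space.

def allocate_rows(elements, canvas_height):
    """elements: list of (name, importance) pairs.
    Returns name -> (top edge, height) of each band."""
    total = sum(importance for _, importance in elements)
    y = 0.0
    layout = {}
    for name, importance in elements:
        h = canvas_height * importance / total
        layout[name] = (y, h)
        y += h
    return layout

layout = allocate_rows([("title", 3), ("image", 5), ("caption", 1)], 900.0)
# The "image" element, with the highest importance, gets the tallest band.
```

Note how the grid's "growing" quality falls out of the proportions: adding or reweighting an element reshapes every band, but no element can inspect or veto its own placement.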
Each composition has a direction, a tension, and a gravity. All three are aspects of one supervector that describes the composition. Once the supervector is determined, all elements can be placed according to its path (this would be somewhat similar to superstring theory).
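One possible reading of the supervector idea, sketched under assumed semantics (how exactly direction, tension, and gravity interact is an open question; the formulas below are illustrative only):

```python
import math

# A single (direction, tension, gravity) triple determines every
# element's position along a path through the canvas.

def place_along_supervector(n, direction_deg, tension, gravity, step=100.0):
    """Return n (x, y) points for n elements. Higher tension
    compresses the spacing; gravity pulls later elements
    downward, bending the path."""
    angle = math.radians(direction_deg)
    points = []
    for i in range(n):
        d = i * step / (1.0 + tension)              # tension shortens each step
        x = d * math.cos(angle)
        y = d * math.sin(angle) + gravity * i * i   # gravity bends the path
        points.append((x, y))
    return points

points = place_along_supervector(4, 0.0, 1.0, 2.0)
```

The appeal of this thread is economy: a whole composition is parameterized by three numbers, so tweaking the mood of a layout means tweaking one vector.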
Each element in the composition is an intelligent ant that communicates with the other elements: do we overlap? Are you happy, or should I move a bit more to the left? Possible solutions for this approach are a backtracking algorithm (which would actually make the computer-generated design self-conscious!) and a relational approach, in which composition is described purely in terms of relations between elements; the rest they figure out by themselves.
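A toy version of the ant-negotiation idea, restricted to one horizontal axis; the element representation and the "happiness" test (no overlaps) are assumptions for the sake of the sketch:

```python
# Elements repeatedly check for overlap with their neighbors and
# nudge themselves aside until everyone is "happy" (no overlaps),
# or until we give up after max_rounds.

def negotiate(elements, min_gap=10.0, max_rounds=100):
    """elements: dict name -> [x, width]. Mutates and returns it."""
    names = sorted(elements, key=lambda n: elements[n][0])
    for _ in range(max_rounds):
        happy = True
        for a, b in zip(names, names[1:]):
            ax, aw = elements[a]
            bx, _ = elements[b]
            overlap = (ax + aw + min_gap) - bx
            if overlap > 0:              # b answers: "move me to the right"
                elements[b][0] += overlap
                happy = False
        if happy:
            return elements
    return elements

layout = negotiate({"A": [0.0, 50.0], "B": [30.0, 50.0]})
```

Unlike the grid mediator, there is no central authority here: the final arrangement emerges from pairwise conversations, which is what makes self-reflection possible.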
Our current experiments are dealing with Boids, Craig Reynolds' algorithm for coordinated, leaderless animal movement, e.g. flocking. Right now, Boids seems to be a panacea, since it apparently addresses all of the problems we are dealing with:
- elements keep their distance from each other
- but at the same time follow a shared movement pattern
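The two properties above can be seen in a stripped-down 2-D Boids step. This sketch keeps only separation and cohesion (Reynolds' full model also aligns velocities, and uses steering forces rather than direct position updates); all weights are illustrative:

```python
# One update step for a flock of (x, y) positions:
# cohesion pulls boids toward the flock center, separation
# pushes them away from neighbors that come too close.

def boids_step(positions, sep_dist=20.0, sep_w=0.5, coh_w=0.01):
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    new = []
    for x, y in positions:
        dx = coh_w * (cx - x)            # cohesion: drift toward the center
        dy = coh_w * (cy - y)
        for ox, oy in positions:         # separation: back away from neighbors
            if (ox, oy) != (x, y) and abs(ox - x) + abs(oy - y) < sep_dist:
                dx -= sep_w * (ox - x)
                dy -= sep_w * (oy - y)
        new.append((x + dx, y + dy))
    return new

# Two boids closer than sep_dist spread apart after one step,
# while distant boids drift gently toward the flock center.
```

Mapped to composition, each design element becomes a boid: the separation rule keeps elements from overlapping, the shared center gives the layout a common direction.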
Just as with the colors algorithm, it would make sense to form a semantic bridge to describe relationships. What we have to do here is transform content-sensitive relationships into spatial relationships. In other words, a translation from essential formal content (hard/soft, warm/cold, sharp/thump, ...) to a formal language of composition that expresses the spatial relationships between the elements:
- C from A to B
- B under A
- B at last A
- A smaller than B
- C between A and B
A solver then takes these constraints and finds an optimal arrangement of the elements.
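A hypothetical solver for one kind of constraint from the list above, "B under A", could be a crude relaxation loop (this is an illustration of the declarative idea, not the algorithm an actual constraint solver would use):

```python
# Repeatedly push elements down until every "b under a" constraint
# holds; once a full pass changes nothing, the layout is settled.

def solve(ys, under, gap=20.0, rounds=100):
    """ys: dict name -> y coordinate (larger y = lower on canvas).
    under: list of (b, a) pairs meaning 'b under a'."""
    for _ in range(rounds):
        settled = True
        for b, a in under:
            if ys[b] < ys[a] + gap:      # b must sit at least `gap` below a
                ys[b] = ys[a] + gap
                settled = False
        if settled:
            break
    return ys

layout = solve({"A": 0.0, "B": 0.0, "C": 0.0}, [("B", "A"), ("C", "B")])
```

The point of the exercise: the input says only *what* relations must hold ("B under A, C under B"), and the pixel coordinates fall out of the solver, exactly the declarative semantics the quote below describes.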
Quote from The Art of Unix Programming:
One theme that runs through at least three of the Documenter's Workbench minilanguages is declarative semantics: doing layout from constraints. This is an idea that shows up in modern GUI toolkits as well -- that, instead of giving pixel coordinates for graphical objects, what you really want to do is declare spatial relationships among them ("widget A is above widget B, which is to the left of widget C") and have your software compute a best-fit layout for A, B, and C according to those constraints.
A tool that already does this is the age-old Unix program pic(1). It compiles declarative picture descriptions into troff or TeX output. Amazingly, pic(1) is installed by default on every OS X system.