The most natural solution to many rendering problems is simply to implement a function returning the colour at a 2D image coordinate. However, such image-space implementations tend to be uncompetitive with object-space alternatives, which can exploit the superior coherence of that space (although often at the cost of increased implementation complexity). Voluminium contains some experimental code which attempts to redress the balance by capitalizing on the potential of the image-space approach to deploy a limited budget of samples in a smarter way, reflecting actual scene saliency. Whether it results in something of practical use remains to be seen...

Currently there's nothing properly released, just the code, which includes a few different "smart sampling" algorithms, some interesting "image functions" to render (fractals, volume rendering) and a viewer application to demonstrate them.

To illustrate the general principles, suppose the "Lena" test image required a large amount of time to compute each pixel:
In order to extract the best value from a given compute budget, Voluminium's "scaccarium" (Latin for "chessboard") renderer adaptively adjusts sampling density. This is done without any prior knowledge of the image content or structure, purely in response to the values returned from the image function. The images below show the samples actually made, the resolution domains (green is higher resolution), and the hierarchy:
[Images: Lena samples; Lena resolution map; Lena resolution hierarchy]
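The refinement idea described above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code or API: a cell of the image is subdivided whenever its corner samples disagree by more than a threshold, so samples concentrate where the image function actually varies. The function name, threshold, and depth limit here are all invented for the example.

```python
def render_adaptive(image_fn, budget, threshold=0.1, min_size=1.0 / 64):
    """Adaptively sample image_fn over the unit square.

    Cells whose corner values disagree by more than `threshold`
    are subdivided (down to `min_size`), so the sample budget is
    spent where the image varies most. Returns the (x, y, value)
    samples actually taken. Purely illustrative of the principle.
    """
    samples = {}  # cache: each point is evaluated at most once

    def sample(x, y):
        if (x, y) not in samples:
            samples[(x, y)] = image_fn(x, y)
        return samples[(x, y)]

    # Work queue of square cells: (x0, y0, size), breadth-first.
    cells = [(0.0, 0.0, 1.0)]
    while cells and len(samples) < budget:
        x0, y0, s = cells.pop(0)
        corners = [sample(x0, y0), sample(x0 + s, y0),
                   sample(x0, y0 + s), sample(x0 + s, y0 + s)]
        if max(corners) - min(corners) > threshold and s > min_size:
            h = s / 2  # subdivide into four child cells
            cells += [(x0, y0, h), (x0 + h, y0, h),
                      (x0, y0 + h, h), (x0 + h, y0 + h, h)]
    return [(x, y, v) for (x, y), v in samples.items()]
```

Feeding this a step-edge image function concentrates nearly all samples along the edge, while smooth regions are covered by only a handful of coarse samples; this mirrors the sample-density maps shown above, though the real renderer is of course more sophisticated.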
The result is that a more recognisable image can be generated (left) compared with what can be achieved in the same time using a conventional regular grid of samples (right):
[Images: Lena rendered by scaccarium; Lena rendered by tabula]

This is a SourceForge.net hosted project.