The most natural solution to many rendering problems is simply to implement a function that returns the colour at a 2D image coordinate. However, such image-space implementations tend to be uncompetitive with object-space alternatives, which can exploit the superior coherence of that space (although often at the cost of increased implementation complexity). Voluminum contains some experimental code which attempts to redress the balance by capitalizing on the image-space approach's potential to deploy a limited budget of samples more intelligently, reflecting actual scene saliency. Whether it results in something of practical use remains to be seen...
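To make the idea concrete, here is a minimal sketch of the two ingredients described above: a pure "image function" from 2D coordinates to a value, and an adaptive sampler that spends its sample budget where the image varies most. The names (`image_function`, `adaptive_sample`) and the contrast-driven quadtree refinement are illustrative assumptions, not Voluminum's actual API or algorithm.

```python
import heapq

def image_function(x, y):
    """Example image function: greyscale Mandelbrot escape time
    over the unit square (purely a stand-in for a real renderer)."""
    cr, ci = 3.0 * x - 2.0, 2.0 * y - 1.0
    zr = zi = 0.0
    for i in range(64):
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return i / 64.0
    return 1.0

def adaptive_sample(f, budget):
    """Repeatedly split the square cell whose corner samples disagree
    the most -- a crude proxy for 'scene saliency' -- until the sample
    budget is exhausted or all remaining cells look flat.
    Returns the leaf cells (x0, y0, size) and the number of f calls."""
    calls = 0

    def corners(x0, y0, s):
        nonlocal calls
        calls += 4
        return [f(x0, y0), f(x0 + s, y0), f(x0, y0 + s), f(x0 + s, y0 + s)]

    heap, leaves = [], []
    v = corners(0.0, 0.0, 1.0)
    # heapq pops the smallest key, so store negated contrast.
    heapq.heappush(heap, (min(v) - max(v), 0.0, 0.0, 1.0))
    while heap:
        neg_contrast, x0, y0, s = heapq.heappop(heap)
        # Splitting one cell costs 16 more samples (4 corners x 4 children).
        if neg_contrast == 0.0 or calls + 16 > budget:
            leaves.append((x0, y0, s))
            continue
        h = s / 2.0
        for cx, cy in [(x0, y0), (x0 + h, y0), (x0, y0 + h), (x0 + h, y0 + h)]:
            cv = corners(cx, cy, h)
            heapq.heappush(heap, (min(cv) - max(cv), cx, cy, h))
    return leaves, calls

leaves, calls = adaptive_sample(image_function, 2000)
```

The effect is that flat regions of the image are covered by a few large cells while high-contrast regions (here, the fractal boundary) receive many small ones, so the fixed budget of samples follows the detail rather than being spread uniformly.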
Currently there's nothing properly released, just the code, which includes a few different "smart sampling" algorithms, some interesting "image functions" to render (fractals, volume rendering), and a viewer application to demonstrate them.
To illustrate the general principles (click images for full-resolution):