It has been an excessively long time since I added anything to this blog, but I have in fact been working on various little side projects in my free time – they just haven’t been publicised. Here’s to trying to keep up an online presence in the future. 🍺
The convoluted name above basically summarises the functionality of this app, which was inspired by a fantastic interactive website I stumbled across ages ago but cannot for the life of me find again (and if anyone ever reads this and happens to know, please jump at the opportunity to troll me and then provide a link).
The basic premise is to map the RGB values of each pixel in a provided image to a corresponding XYZ coordinate in 3D-space. Because we are directly mapping values, the objects representing pixels will all fall within X, Y, and Z values of 0 to 255. Additionally, we apply the colour of the pixel that the object represents to the object’s material, and if duplicate colour values are found we increase the scale of the existing object.
The Unity scene is very simple. A planar mesh renderer displays the image that is currently being represented and this is accompanied by loading text above it so that we can monitor the progress of processing. In front of this is the spawning ground of all pixels in the environment – a 255 x 255 x 255 unit volume of space, starting at world origin (0, 0, 0) and extending to (255, 255, 255).
For every pixel of the loaded image, the (R, G, B) value is directly mapped to an (X, Y, Z) coordinate. Unity stores RGB information as float values between 0f and 1f, so each colour value is scaled by 255 to map it into the 0–255 volume described above.
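The mapping can be sketched as follows (a minimal illustration in Python, with a hypothetical helper name – the real project does this in C# with Unity's Color type):

```python
def colour_to_position(r, g, b):
    """Map a normalised (r, g, b) colour, each channel in [0, 1] as Unity
    stores it, to an (x, y, z) coordinate in the 0-255 cube."""
    return (round(r * 255), round(g * 255), round(b * 255))

# e.g. pure magenta lands at one corner of the volume:
colour_to_position(1.0, 0.0, 1.0)  # -> (255, 0, 255)
```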
As the image is processed, each RGB combination is stored as a key that maps to the specific object representing that colour. In this way, when duplicate-coloured pixels are encountered, the existing object at that position has its scale increased instead of a new object being created. The size of each object is thus indicative of how frequently that colour appears in the image.
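A rough sketch of that bookkeeping (illustrative Python, not the project's actual code – the class and function names are my own):

```python
class PixelObject:
    """Stand-in for the scene object representing one colour."""
    def __init__(self, position, colour):
        self.position = position  # (x, y, z) = (r, g, b) in 0-255 space
        self.colour = colour
        self.scale = 1

objects = {}  # (r, g, b) key -> the object representing that colour

def process_pixel(r, g, b):
    key = (r, g, b)
    if key in objects:
        objects[key].scale += 1  # duplicate colour: grow the existing object
    else:
        objects[key] = PixelObject(position=key, colour=key)
```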
Note that each object is responsible for its own scaling: the image processor simply tells an object its target scale and continues processing the image, while the object runs a coroutine to scale itself to the required size.
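The per-object scaling coroutine can be mimicked with a generator (a sketch under my own assumptions – the real version is a Unity coroutine interpolating the transform's scale):

```python
def scale_towards(current, target, step=0.5):
    """Close the gap between current and target scale by one step per
    'frame', yielding the intermediate value each time - as a coroutine
    resumed once per frame would."""
    while abs(target - current) > 1e-6:
        if target > current:
            current = min(current + step, target)
        else:
            current = max(current - step, target)
        yield current
```

The processor never waits on this; it fires off the target and moves to the next pixel.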
Each pixel is represented as a cube object, as I thought this looked best. This is defined by a prefab though and can easily be changed to anything else. Once objects exceed a certain size, I swap the material for a transparent version – this is to avoid large objects obscuring smaller objects which they could envelop.
Firstly, the pixel count (and thus the scene's object count) is massive! The image processor is responsible for analysing a pixel and setting up an object accordingly. As this is quite intensive, the main processing loop runs as a coroutine with a frame-time check, yielding for a frame when the maximum frame time has elapsed. This aims to keep the framerate around 30fps, but consequently slows processing speed considerably.
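The frame-budget idea looks roughly like this (a Python sketch with hypothetical names; in Unity it would be a coroutine yielding `null` each frame):

```python
import time

def process_pixels(pixels, handle_pixel, max_frame_time=1 / 30):
    """Process pixels until the per-frame time budget is spent, then
    yield so the engine can render a frame before resuming."""
    frame_start = time.monotonic()
    for pixel in pixels:
        handle_pixel(pixel)
        if time.monotonic() - frame_start >= max_frame_time:
            yield  # hand a frame back to the engine
            frame_start = time.monotonic()
```

A smaller budget keeps the framerate smoother at the cost of a longer total build time, which is exactly the trade-off described above.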
All objects in the scene are preallocated to avoid runtime creation/deletion overhead (which is very expensive) and are enabled/disabled as required. They also use unlit materials to save rendering cost, only receiving a transparent material when required.
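A minimal object-pool sketch of that pattern (illustrative only – the project does the equivalent with `SetActive` on preallocated GameObjects):

```python
class Pool:
    """Preallocate everything up front; 'spawning' enables a free
    object and 'destroying' just disables it again."""
    def __init__(self, size):
        self.free = [{"enabled": False} for _ in range(size)]
        self.active = []

    def acquire(self):
        obj = self.free.pop()
        obj["enabled"] = True
        self.active.append(obj)
        return obj

    def release(self, obj):
        obj["enabled"] = False
        self.active.remove(obj)
        self.free.append(obj)
```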
As duplicate pixels are to scale existing objects, a Dictionary is used to hash an RGB key to its corresponding object.
Finally, even though it causes some lag, I chose to clear objects in chunks in order to speed up transitions between images. After all, it's perfectly fine watching an image get built smoothly, but watching the process in reverse is tedious.
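Chunked clearing is the same frame-yielding trick with a coarser grain (a sketch, names my own): rather than disabling one object per frame, a whole batch goes each frame, trading a little stutter for a much faster transition.

```python
def clear_in_chunks(objects, chunk_size=500):
    """Disable objects one chunk per 'frame' rather than one at a time."""
    for i in range(0, len(objects), chunk_size):
        for obj in objects[i:i + chunk_size]:
            obj["enabled"] = False
        yield  # one chunk per frame
```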
Gradients look amazing! Monochrome is interesting too.
I would have liked to use true threading, but Unity does not allow many of its method calls to occur off the main thread – things such as game-object operations (creation, deletion, enabling, disabling) and texture queries (width and height), which are core to this program. There are workarounds, but they would not be worth the hassle and mass of multi-threading logic – this is just a technical experiment after all.
Although the code is somewhat optimised, there is definitely room for improvement. This was out of the scope of the project, however, so I decided to leave it in its current state. Note that there is also a memory leak… Drip, drip, drip.
Finally, here is a stupidly large and low-framerate GIF demonstrating some images. Please excuse the shoddy camera animation.