Crossvision

Description

Modern philosophy deems it impossible for humans to ever achieve an objective grasp of reality. Accordingly, much of today's popular culture, television and social media focuses on comparing individual perceptions of shared experiences. Sometimes, however, a reminder is needed that even the impressions we can agree upon are deeply subjective and specific to the human species. The video sequences of the Crossvision project are just as true or false as any conventional video would be. By rearranging the dimensional planes, however, they no longer simulate human sensory perception. Combining the slit-scan capturing technique with a camera slider made it possible to create video art sequences of recognisable yet disorienting sceneries. They are the result of merely rearranging pixel data taken from a tracking shot; no visual information was added or lost in the process.

Interview


What have you made?

I developed a new image capturing technique that uses a self-built camera slider and the slit-scan process.

What gave you the initial inspiration?

By chance I learned about slit-scan photography and through this I came across the works of Adam Magyar and Jay Mark Johnson.

What is the original idea behind this project?

When looking closely at conventional slit-scan photographs captured with a static camera, it is noticeable that moving objects in the background of the image appear compressed. This is because the vanishing point in these images becomes a vanishing horizon. I figured that I could amplify this effect by moving the camera laterally; naturally, the effect is stronger the longer the slider is. Since off-the-shelf camera sliders typically top out at 1.5 meters and are very expensive, I had to build my own 2.5-meter slider.

How does it work?

An Arduino-driven camera slider is used to make a tracking shot of a scene. The footage is then run through a Processing script that extracts the first pixel column of every video frame and aligns these columns to form one large image. This is repeated, moving on to the next pixel column each time. The images generated this way are then used in turn as the frames of the final video sequence.
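The rearrangement described above amounts to swapping the time axis and the horizontal axis of the footage. The project's own script is written in Processing; what follows is merely a minimal sketch of the same idea in Python with NumPy, where the function name and the array layout are my own assumptions:

```python
import numpy as np

def crossvision(frames):
    """Rearrange a tracking shot into slit-scan output frames.

    `frames` is assumed to have shape (time, height, width) or
    (time, height, width, channels). Output frame i is built by taking
    pixel column i from every input frame and laying those columns
    side by side in capture order, so the horizontal axis of each
    output image is time. This is exactly a swap of the time and
    x axes: no pixel data is added or lost.
    """
    frames = np.asarray(frames)
    # (time, height, width, ...) -> (width, height, time, ...)
    return np.swapaxes(frames, 0, 2)
```

Because the operation is a pure axis swap, applying it twice returns the original footage, which matches the claim that no visual information is lost.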

How long did it take to make it real?

About 3 months. The overall concept is relatively simple but getting the slider as precise as I needed was quite tricky.

How did you build it?

The slider is built on two 2.5 m precision rails intended for use in CNC mills. The slider and the rails are held together by thick aluminium plates with CNC-milled mounting holes for precision. Originally the slider was powered by a stepper motor, but this drained the lead-acid battery in just half an hour; the geared DC motor that replaced it allows a battery life of up to 6 hours. The logic consists of an Arduino Uno and a Pololu motor shield. The library that came with the motor shield made programming very easy, so the bigger part of the code deals with the display, the settings menu and the rotary encoder. End-switches tell the Arduino when the sled has reached the end of the rail; their mountings are 3D-printed. It turned out that the cable to the switch on the far end acts as a long antenna and picks up false signals. A lot of experimentation with different pull-up resistors, ferrite rings and software debouncing was needed to solve this problem.
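The software-debouncing part of that fix can be sketched in a few lines. The actual firmware runs on the Arduino in C; the following is only an illustrative Python version of one common debouncing scheme, requiring a number of consecutive identical reads before a state change is accepted (the class name and the threshold are assumptions, not the project's code):

```python
class Debouncer:
    """Accept a switch-state change only after it has been read
    identically `stable_reads` times in a row, filtering out short
    spurious pulses such as those induced on a long end-switch cable."""

    def __init__(self, initial=False, stable_reads=5):
        self.state = initial        # last confirmed switch state
        self._candidate = initial   # pending new state, if any
        self._count = 0             # consecutive reads of the candidate
        self.stable_reads = stable_reads

    def update(self, raw):
        """Feed one raw reading; return the debounced state."""
        if raw == self.state:
            self._count = 0         # no change pending
            return self.state
        if raw == self._candidate:
            self._count += 1        # same new value seen again
        else:
            self._candidate = raw   # new candidate, restart counting
            self._count = 1
        if self._count >= self.stable_reads:
            self.state = raw        # change confirmed
            self._count = 0
        return self.state
```

On the Arduino the same logic would sit in the main loop, polling the end-switch pin each iteration; a brief glitch shorter than `stable_reads` polls never reaches the motor-control code.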