The idea of non-human photography is an interesting one, because at some point in the process a human action must have been involved. For example, I was quite taken with Flusser's assertion that: "The green of a photographed field, for example, is an image of the concept 'green', just as it occurs in chemical theory, and the camera (or rather the film inserted into it) is programmed to translate this concept into the image" (2000: 43). The same could be said of how a digital sensor resolves an image according to its programmed values. Each film emulsion and camera sensor has qualities unique to it, which have been developed by a human. For example, the way a Canon camera resolves skin tones versus the way a Nikon does has a subtle impact on how the image is read; however slight, a bias can be attributed to these differing programmed values.

Figure 1: Nikolaus Studnicka (2004), Overview and detail of the digital elevation model of the Giza Plateau, created from 4 single scans from the top of the Cheops Pyramid and visualised in ArcGIS 8.2.

For my example, I have chosen this laser scan of the pyramids (Fig. 1), a technique that is slowly replacing large format film cameras as a way of recording them. It is a non-lens-based technology that is started and then left to conduct multiple scans of the object (in this case one of the pyramids of Giza), which are then composited together to form the image. AI and computational modelling form part of the process of creating these images. The result is arguably more representative than any traditional form of lens-based capture, since it requires direct contact between the laser and the object to create an image, rather than the photographic reflection of light. Even so, a human will have created both the technology and the program it runs.


Flusser, V. (2000) Towards a Philosophy of Photography. 2018 reprint. London: Reaktion Books.
