Below is a condensed version of an essay about the artist's role in relation to the use and power of networked images.

Drafted in 2018.

Image: Working drawing. S. Oliver Studio 2020

Image datasets are a type of story that we tell to a machine - and so, eventually, to ourselves - about what we see and how we organise our understanding of the visible world, its images and objects. Using images to train machines provides an opportunity to reflect on the power of the networked image, the symbolic world we store inside of language, and our dreams for possible futures.

Image: Bruce Nauman “Tony Sinking into the Floor, Face Up, and Face Down”

I’m interested in the artistic potential of image and video datasets, and in countering the heuristic approach of data science with speculative, practice-led studies of datasets and classes, toward the development of new artistic algorithmic reading strategies. I’m engaging implicitly with the biases and presumptions that underpin our relationship with the networked image, through the production of artworks.

“Algorithms enact theoretical ideas in pragmatic instructions, always leaving a gap between the two in the details of implementation. The implementation gap is the most important thing we need to know, and the thing we most frequently misunderstand, about algorithmic systems. Understanding how we can know that requires the critical methods of the humanities. This is algorithmic reading: a way to contend with both the inherent complexity of computation and the ambiguity that ensues when that complexity intersects with human culture.”

Finn, E. (2017), What Algorithms Want: Imagination in the Age of Computing

These implementation gaps provide a site for my research. Art can open them up, and work playfully against the will of computational logic. What might be derived artistically from training one’s attention on a large-scale image set? And how might an artist conceive of dealing with images at this volume?

Artists have a responsibility to exploit their unique permission to speculate without aim upon the aesthetic, philosophical, political and moral value of images. I deliberately avoid any notion of use value. The conversation in the science and technology industries naturally leans toward application and utility; I try to work in a way which offers up the complexity, otherness and subjective poetry of images.

Current practice in this area is exemplified by filmmaker, writer and artist Hito Steyerl, whose work interrogates the politics of representation (see her book “Duty Free Art”). The free associations and speculative capacity of her essays strike a tone that I hope to approach through image-making.

Image: Hito Steyerl

The Conceptual artists who came to prominence in the middle of the last century offer up some useful examples of deconstructive strategies. John Cage’s experiments with musical structures and Bruce Nauman’s exploration of the points where language becomes unmoored from meaning make for good practical models. How could a machine fully comprehend something as oblique and poetic as Nauman’s “Tony Sinking into the Floor, Face Up, and Face Down” or Cage’s “4′33″”? If we feed Andy Warhol’s 1964 film “Empire” to a machine, will ‘understanding’ alter significantly as notions of endurance, sameness, absurdity, humour and politics creep in… and then back out again?


In a machine learning task, the algorithm works toward a moment whereby context can be inferred at the level of the image-set, rather than that of the individual image. Context is most often provided by an objective text label which answers the question "What's in this image, and where in the image is it located?". I’m interested in exploring how we theorise meaning from images. What we are able to deduce from an image is built out of far more than a dumb recognition of objects and object relationships. We have emotional responses which colour our understanding, memory plays an important role, and of course we can read contradictory or parallel meanings in any image if we look carefully enough. This is a far slower and more nuanced way of registering content, context and a range of visual, conceptual and semiotic elements. This isn’t an anti-technology gesture; it is a type of responsible and curious behaviour that anyone interested in the power of images should be showing.
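To make the "what and where" label concrete, here is a minimal sketch of the kind of annotation record common in object-detection datasets (the field names are illustrative, loosely modelled on COCO-style annotations, and are not taken from the essay):

```python
# Hypothetical annotation for one image: the "objective text label" that
# answers "what's in this image, and where in the image is it located?"
annotation = {
    "image_id": 42,
    "objects": [
        # bbox is [x, y, width, height] in pixels (an illustrative convention)
        {"label": "person", "bbox": [34, 50, 120, 200]},
        {"label": "umbrella", "bbox": [30, 10, 140, 60]},
    ],
}

# The machine-learning task reduces the image to this list of nouns and
# coordinates; the emotional, mnemonic and contradictory readings the essay
# describes fall entirely outside the record.
labels = [obj["label"] for obj in annotation["objects"]]
print(labels)  # ['person', 'umbrella']
```

The flatness of the data structure is the point: everything the paragraph above calls slower and more nuanced has no field to live in.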

Image: Roni Horn "You are the weather"

I’m fascinated by new procedures designed to train machines to generate images independently (see generative adversarial networks, or GANs). Here it is the training materials (image datasets) and the ‘test questions’ (the intentions behind the training) which interest me, far more than the output.

It is through open-ended, playful enquiry that I attempt to expose something of the nature and power of networked images, as well as “the hardwired ideologies of a machinic vision” (Pattern Discrimination, 2018) common amongst all predictive digital procedures. The whole notion of machine learning rests on a misnomer. We are in a world of basic outlines, default characteristics, discrimination and exclusion. It is not excessive to view things in this way, not when outcomes are real, globally reverberant and have biases baked in from the moment we accept any raw data as “Ground Truth”.

© 2020 Steve Oliver Studio
