Soft Evidence (2021)

Soft Evidence, by Ania Catherine and Dejha Ti, is a slow synthetic cinema series depicting events that never happened: nine scenes and a testimony. Using AI, computer vision, machine learning, and deep learning, the artists created synthetic media vignettes (deepfakes) with no visible trace of AV manipulation.

The word deepfake typically conjures up images of face-swapped politicians speaking, celebrities saying ridiculous things, or, in the technique's most widespread and darkest use, personal attacks via non-consensual sexual imagery and pornography. Soft Evidence considers two major prongs of the issue: 1) the ability to depict someone doing something they have never done, being somewhere they have never been, or saying something they have never said, and 2) the introduction of plausible deniability ("the liar's dividend"), which enables people to escape accountability for actions caught on photo or video. The intersection of these new capabilities with existing social power and privilege cannot be overstated.

Soft Evidence offers a way for audiences to learn through feeling states, sensory perception, and experiential modalities as they come up against synthetic audio/visual material. Receiving simultaneously conflicting messages, e.g. hearing a statement while seeing a film clip that negates it, is a strategy the artists use to create what they call productive confusion. Considering deepfakes' potential to further political polarization, the artists strategically depict scenes that are apolitical in content, rendered in a naturalistic, classic cinematic aesthetic. The artists also depict an erotic scene of love between two women; they intentionally use face-swapping, a technique primarily deployed as a tool of sexual violence against women, to depict a moment of consensual queer intimacy.

Audiences engage with the work on Telegram via an unfolding thread of video, image, text, links, and voice notes in a closed group operated by a mix of actors and real people. This digital surface is a conceptual choice, as it draws attention to the rampant distribution of deepfakes and other synthetic media on messaging apps. The work will also manifest physically in an immersive experience culminating in a rich resource and practical takeaway informed by the artists' conversations throughout their process with the world's leading experts in deepfake creation, detection, ethics, and policy. Soft Evidence provides a neutral ground for conversation about AV manipulation: a place to find commonality in learning to navigate a synthetic future.

Numerous ethical issues are typically baked into synthetic media processes, which involve scraping data from the internet and/or relying on existing open data sets that often contain data that was not consensually procured and that encode classificatory violence; open data sets are not inherently free of these problems. To highlight them, the artists created their own data set of a source subject shot over five days in Mexico City, filming art-directed scenes with a body double ("destination" footage) and capturing facial data ("source" footage). Additionally, they engaged in a legal exercise: drafting a "Synthetic and Manipulated Media" contract through which the subject approved the explicit intent to apply face-swapping and other synthetic media techniques. While the artists' synthetic media process does not pose as a solution, it demonstrates the significant labor, time, resources, and consent required when data isn't "free," that is, treated as simply there for the taking despite its ubiquitous availability. To create the final works, the artists use a deepfake algorithm built on convolutional neural networks: an autoencoder takes an input x and learns to reconstruct it as faithfully as possible.
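For readers curious about the mechanics, below is a minimal sketch of the shared-encoder, per-identity-decoder convolutional autoencoder that face-swap deepfakes commonly rely on. The layer sizes, the 64x64 crop resolution, and all variable names are illustrative assumptions for this sketch, not the artists' actual pipeline.

```python
# Minimal sketch (PyTorch) of a face-swap autoencoder: one shared encoder,
# one decoder per identity. All sizes and names here are illustrative
# assumptions, not a description of the artists' production tooling.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),    # 64 -> 32
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),  # 32 -> 16
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.LeakyReLU(0.1), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),  # shared latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),          # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 256, 8, 8)
        return self.net(h)

encoder = Encoder()
decoder_source = Decoder()       # trained on face crops of the source subject
decoder_destination = Decoder()  # trained on face crops from the body-double footage

# Training objective: reconstruct the input as faithfully as possible.
x = torch.rand(1, 3, 64, 64)                 # a dummy 64x64 face crop
reconstruction = decoder_source(encoder(x))
loss = nn.functional.mse_loss(reconstruction, x)

# At swap time, a destination frame is encoded into the shared latent space
# and rendered back out through the *source* decoder, so the output carries
# the source identity on the destination footage.
destination_crop = torch.rand(1, 3, 64, 64)  # dummy frame from the body double
swapped = decoder_source(encoder(destination_crop))
print(reconstruction.shape, loss.item(), swapped.shape)
```

The key design point is that both decoders read from the same latent space, which is why a frame encoded from the destination footage can be decoded into the source subject's likeness.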

Issues considered by Soft Evidence include: the increasingly unstable relationship between image and reality, AV manipulation as a form of gender-based violence, the intersection of plausible deniability with privilege, complications in the justice system's treatment of visual evidence as hard evidence, mis- and disinformation, political polarization, and the regulation of technologies that move faster than legal enforcement mechanisms.

Soft Evidence, supported by Media Futures, has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 951962.
