
DeepFakes

The broad concept of DeepFakes refers to the use of technology to create false representations of a person's actions or statements, or to mislead people with computer-generated content that is disseminated in a manner intended to engender FalseAttribution.

The provenance of these issues pre-dates modern technology; behaviours that seek to evoke a response based upon the false attribution of a statement or direction have a long history, and may in turn be linked to issues such as those noted by TheSecret, alongside others. Yet the emergence of advanced technology, with its capacity to modify, alter and/or generate content computationally, brings about the means to create synthetic content that may be used positively or negatively.

A positive use-case example: a content artifact of a person making a speech could be translated into a different language, with the footage also modified so that the person's lip movements match the translated audio ('lip sync').

Yet there are many negative examples that pose great jeopardy and serious implications, and these require a great deal of WebScience-related consideration in order to form useful recommendations about how solutions may best be employed.

Technically, Human Centric AI ecosystems should provide the capacity for persons to associate themselves with instruments that they control, which can be used to validate content and thereby support 'approved deep-fakes'. They should also provide tools that mitigate some (or many) of the risks arising in circumstances where there would otherwise be no capacity to do anything useful, or where the only tools available to address these problems are coupled to other unwanted qualities that compromise the good purposes of addressing these issues.
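One way such a person-controlled validation instrument might work is content attestation: the person produces a cryptographic tag over an artifact they approve, and anyone holding the right verification material can later check whether the artifact is unaltered. The sketch below is a minimal, hypothetical illustration using HMAC from the Python standard library; the function names `attest` and `verify` and the key material are assumptions, not part of any existing system, and a real ecosystem would use public-key signatures (e.g. Ed25519) so that verification does not require sharing a secret.

```python
import hashlib
import hmac

# Hypothetical sketch of an "approved deep-fake" attestation instrument.
# HMAC is used only because it is in the standard library; a deployed
# system would use asymmetric signatures so anyone can verify.

def attest(content: bytes, personal_key: bytes) -> str:
    """Produce a tag binding the content to the key holder's approval."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(personal_key, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, personal_key: bytes, tag: str) -> bool:
    """Check that the content still matches the approved attestation."""
    expected = attest(content, personal_key)
    return hmac.compare_digest(expected, tag)

key = b"example-personal-key"  # illustrative key material only
original = b"translated, lip-synced speech approved by the speaker"
tag = attest(original, key)

print(verify(original, key, tag))         # unaltered content verifies
print(verify(original + b"!", key, tag))  # altered content fails
```

The design point is that the instrument stays under the person's control: only content they have explicitly attested can verify as 'approved', and any later alteration breaks verification.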

Last updated on 2/9/2023