
This text was written during the public debate on amendments to the Criminal Code. Judging by the draft, the area of digital sexual violence will have to wait for “worse times,” as “revenge porn”[1] and “deepfake porn”[2] do not seem disturbing enough for legislators, though they certainly are for women and girls in Serbia. Let us remind ourselves that despite numerous relevant initiatives, research, public testimonies from women, and evidence of how widespread these forms of violence are, there is no professional or political will to address the issue in a systematic and timely manner.
It seems that the petition launched by the Autonomous Women’s Center to criminalize “revenge porn,” supported by 21,723 signatures; the exposure of the scale of sexual violence in Telegram groups by the group OsnaŽene (Empowered Women); women’s testimonies about experiences of violence under the hashtag #IDidNotReport (#NisamPrijavila); and many other examples are not enough to spark a serious public debate or to prompt a revision of the law. As someone insightfully pointed out, we are fighting the abuse of technology with laws written before that technology existed.
When discussing the phenomenology of deepfakes, the issue could be summarized as: at first we laughed, now we cry. Irony aside, the fact is that part of the public initially followed the development of deepfake technology through cases involving public figures. Somewhat predictably, the fabricated pornography, violence, and shaming that celebrities experienced became normalized with a “the rich cry too” mentality. Few understood that the widespread and unpunished use of deepfake tools could cause serious individual as well as collective harm. As with previous technological innovations, humor and profit were the words that explained everything.
How is the development of technology, particularly artificial intelligence (AI), connected to gender-based violence? Take the example of the DeepNude application. This and similar applications are designed to enable digital undressing; they are sometimes called “undressing bots” and are aimed at women. One of the DeepNude applications is based on pix2pix, open-source code developed by researchers at the University of California, Berkeley, in 2017. The code uses generative adversarial networks (GANs), a machine learning approach in which two neural networks are trained against each other on a large dataset: a generator that produces images and a discriminator that judges how realistic they are, so that the generator’s output steadily improves. In the case of the DeepNude application, 10,000 photographs of naked women were used to train the algorithm to achieve the most realistic results possible. Other applications in this group, such as Undress AI, work similarly.
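For readers who want the mechanics in one formula, the standard GAN objective (Goodfellow et al., 2014) pits a generator $G$ against a discriminator $D$; this is a textbook sketch of the general principle, not DeepNude’s actual code:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

The discriminator $D$ learns to tell real images $x$ from generated ones $G(z)$, while the generator $G$ learns to produce images the discriminator can no longer reject; pix2pix extends this by conditioning both networks on an input photograph, so the generator learns a mapping from one image to a transformed version of it.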
It is important to emphasize that some basic versions of these applications initially worked by being “fed” exclusively photographs of women and girls. The explanation offered was the supposedly greater availability of photographs of women online. As with other digital products and services, the free versions of deepfake tools produce results of limited quality, while paid versions, available at several price points, produce more realistic ones. The use of deepfake tools has been simplified to the point that no special knowledge or skills are required to operate them. Estimates suggest that their use has increased enormously in recent years, and the consequences are noticeable not only for human rights but also for the functioning of political systems, democracy, and even financial institutions.
The trouble with deepfake tools is that they can put any woman or girl on the planet in a position where she becomes a victim of digital sexual violence. When one of the DeepNude applications appeared on the open internet market and was tested, it sparked a wide public debate about the nature of this technology, its regulation, and the protection of the rights of victims. Under public pressure, led predominantly by women’s rights activists, the application’s designer, known as “Alberto,” pulled it from the market, expressing regret that “the world is still not ready for the DeepNude application.” Besides the justified outrage this statement provokes in many, it deserves serious attention and analysis.
What kind of world do the people developing technology envision? Are they concerned with questions of ethics and social responsibility? Do they analyse the effects that could deepen violence, hate speech, marginalisation of vulnerable groups, or fear? Do they consider the universality of human rights, or are values reduced to functionality, profit, and fame? The relevance of these questions was recently confirmed by one of the fathers of the internet, Google’s vice president Vinton Cerf. He, too, believes that engineers and programmers generally do not think about social consequences but focus on making their technology work. The lack of communication between the humanities and the technology sector is a serious obstacle to sound technological development.
From the perspective of protecting women’s human rights, it is important to point out that, unfortunately, many companies and startups in information and communication technologies employ highly problematic development and marketing strategies for their products and services. It is easy to notice that behind the eroticized voices and appearances of robots and chatbots, misogyny and socially acceptable pornography are being promoted. The objectification and exploitation of female sexuality is not a new phenomenon, though with the development of AI and AI deepfake tools it is taking on new dimensions. This view is supported by statistics showing that about 95% of deepfakes are sexually explicit in nature[3], with women and girls as the predominant subjects.
There is still no comprehensive legal framework addressing deepfakes, one that would clarify penal policy and resolve conflicts over jurisdiction and authority. There are, however, examples from various countries that can serve as learning points. Australia and Canada have classified deepfake pornography under the umbrella of non-consensual pornography, which includes “revenge porn,” “upskirting”[4] photographs, and other explicit content recorded without a person’s consent. In most parts of the world, however, victims have essentially no legal protection when their images are used against their will to generate pornographic videos.
Exceptions include the Online Safety Act[5] in the United Kingdom and South Korea’s Act on Special Cases Concerning the Punishment of Sexual Crimes[6], the latter particularly commendable because it does not require proof of the perpetrator’s harmful intent. A debate on the criminalization of deepfakes is currently underway in Germany, where analysts are raising concerns about the negative aspects of criminalization, specifically potential limitations on artistic freedom and critical expression through audiovisual content. Women’s rights activists argue that the question of consent to the creation and/or distribution of content is more important than proving the perpetrator’s intent or motive.
Until legislators reach an agreement, citizens have few options. And with the increasing sophistication of AI technology, it seems it will only become more difficult to distinguish deepfakes from real, truthful content.
From the perspective of victims of deepfake technology, the most important question is how to get deepfake content, as well as revenge porn, removed from the internet. A comprehensive systemic solution has yet to be found, as companies, legislators, and citizens are not speaking the same language. For now, non-profit platforms are available to support victims in removing intimate content from the internet, though even here there are malicious websites and applications that can misuse the uploaded content. For support in removing misused intimate content, victims can contact StopNCII (Stop Non-Consensual Intimate Image Abuse).
Absolute protection for internet users does not exist. What is available, and mostly free, is learning and fostering a critical attitude towards technology through the development of digital literacy and culture. Technology is not a neutral or harmless phenomenon designed solely for entertainment and leisure. It is an important factor shaping social reality and the dynamics to which we are exposed, sometimes willingly and sometimes unwillingly.
Author: Anita Pantelić
[1] The widely used term “revenge porn” refers to the online distribution of photographs or videos containing sexually explicit content without the consent of the person depicted. The perpetrator is often an ex-partner who obtains images or videos during a romantic relationship and aims to publicly shame and humiliate the victim as retaliation for the breakup. However, perpetrators are not necessarily current or former partners, and the motive is not always revenge. Content can also be obtained by hacking into the victim’s computer, social media accounts, or phone, and may aim to cause harm to the target’s life in the “real world” (for example, with the intent to get the person fired, or in some cases, to provoke suicide). – taken from EIGE
[2] Deepfake technology harnesses deep learning to produce audio and audiovisual content that can convincingly show people saying or doing things they never said or did, or create people and events that never existed or occurred. In the strictest sense, a deepfake is a type of synthetic media generated or manipulated using artificial intelligence. – taken from Europol
[3] https://www.techpolicy.press/exploring-legal-approaches-to-regulating-nonconsensual-deepfake-pornography/
[4] The English term “upskirting” refers to the practice of photographing or filming a person wearing a skirt, without their consent or knowledge, in order to capture images of their crotch, underwear, or genitals.
[5] https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer
[6] https://elaw.klri.re.kr/eng_service/lawView.do?hseq=40947&lang=ENG
