Written by Mahratun Samha and Ayesha Noor Fatima (Edited by Jenna Manhau Fung)
Human rights concerns are not limited to the physical world; they are now an issue in cyberspace as well. Basic human rights such as freedom of speech and protection of identity are threatened every day on the internet.
Among the many forms of human rights violations online, cyberbullying and deep-fake technology were chosen as the case topics. One case concerns cyberbullying, a long-standing and common issue on social media, and three cases concern deep-fake technology (DFT), a relatively newer concern in the cyber world. The cases were taken from Bangladesh, India, Pakistan, and Australia.
Cyberbullying
According to UNICEF, cyberbullying is bullying carried out with digital technologies. It is one of the most common rights violations seen on social media. It is very easy to spread fake news about anyone on social media, which may ultimately devastate a company or even a community. Sending vulgar texts, posting unwanted comments, blackmailing, and similar acts affect the mental health of the victim and may even lead them to suicide.
Violation of freedom of speech
A case from Bangladesh was presented in this regard. On November 20, 2021, a BTS fan account was deleted from Twitter over a fake allegation of copyright violation. The allegation came from a group that calls itself Team Copyright but has no official standing. This group takes down any account that conflicts with its ideology.
The popular BTS fan group on Twitter promoted its views on masculinity, gender issues, and LGBTQ rights, positions that go against traditional views of masculinity. Team Copyright, which holds traditional ideologies, targets anyone who promotes "flower" masculinity, and the fan group's advocacy of LGBTQ rights made it a target.
Team Copyright stole a photo from the BTS fan group and uploaded it to their own profile, changing the upload date to one prior to the original upload. They then filed a takedown notice under the Digital Millennium Copyright Act (DMCA), claiming that the fan group had stolen their photo. Twitter reviewed the claim and took down the fan group's account.
There are two aspects to this case. First, both freedom of speech and awareness of sexual freedom were threatened. Second, the case showed how easily a digital law can be manipulated to commit a crime.
This case not only calls for a review of existing cybersecurity laws but also alerts social media authorities to the increasing number of fake accounts and fake reports.
Deep-Fake Technology
"Deep-fake technology hides the liar's creative role. Deep fakes will emerge as powerful mechanisms for some to exploit and sabotage others." (Chesney and Citron, 2018, p. 16)
In simple words, deep-fake technology refers to machine- or AI-generated fake images and videos used to replace real pictures or videos of a particular person. It is next to impossible to tell which photo is real and which is AI-generated. This is a vicious problem in the cyber arena: DFT can do anything from humiliating a person to ending their whole career. It can create fake news and misleading counterfeit videos, which is a major threat to new media journalism. Between 90% and 95% of all online deep-fake videos are non-consensual porn, and around 90% of those feature women.
A threat to Human Rights Activism
Rana Ayyub, a journalist in India, spoke out against the government's response to the rape of an eight-year-old girl. In April 2018, on the BBC and Al Jazeera, she criticized India's stance towards child molestation and the country's shameful behaviour in protecting the perpetrators. To diminish her credibility as a journalist, a prominent political party created a series of fake offensive tweets claiming to be written by her. Moreover, her face was doctored onto a young porn actress's body using DFT. Here DFT created not only an identity threat but also sexual harassment on a huge scale. All of this happened simply because Rana was active against sexual violence. As a consequence of the targeted campaign against her, she became more cautious about expressing her opinions online and limited her activity on social media. This is another example of how technology is being used as an instrument of gender subordination.
A means of sexual harassment
Noelle Martin, an Australian woman, fell victim to the malicious use of deep-fake technology at the age of 17. Pictures taken from her private Facebook account were superimposed onto pornographic images and videos and shared repeatedly on well-known porn websites and forums. Her face was also used as a cover for porn DVDs. When she spoke out against it, the harassers used her private address and contact information to try to silence her. Her struggle with the harassment continued for more than six years. She spoke out in the media, including in a TED talk, to raise public awareness. Her actions helped give birth to laws making the circulation of non-consensual intimate images illegal in Australia. As a result of her efforts and resilience in addressing and countering the malicious use of deep-fake technology, she received an Australian of the Year award in 2019.
A means of defamation
One of the recent DFT crimes involved Senator Azam Swati of Pakistan Tehreek-e-Insaf (PTI) and his wife. Last year, their rivals created obscene videos and AI-generated images of them, which were then used to blackmail the couple. Their political honour was on the verge of ruin. Finally, on 5 November 2022, a press release stated that forensic analysis had found the images and videos to be edited with deep-fake tools. This is another example of how technology is being used for political victimization.
There are a few ways to detect deep fakes: for example, faces that do not blink, an odd blending of two faces, uneven skin tone, and improper lip-syncing. The problem is that as soon as a weakness is revealed, it gets fixed. Advanced AI tools and appropriate legal action have been proposed to address the problem. The European Commission has already updated its code of practice on disinformation: big tech companies such as Meta, Google, and Twitter face huge fines if they do not tackle deep fakes and fake accounts on their platforms. Governments, universities, and tech firms have been urged to conduct further research. The use of blockchain has also been recommended; for instance, digital signatures, standard encryption, and timestamping on a blockchain can help identify deep fakes.
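The blockchain-timestamping idea can be illustrated with a minimal sketch: hash a media file at publication time and record the hash with a timestamp in an append-only ledger, so any later copy, edited or backdated, fails verification. The function names below are illustrative, and a real provenance system would anchor the record on an actual ledger rather than return a dictionary.

```python
import hashlib
import time

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of the media bytes."""
    return hashlib.sha256(data).hexdigest()

def publish_record(data: bytes) -> dict:
    """Record the fingerprint and publication time.
    In a real system this record would be written to an
    append-only ledger (e.g. a blockchain) so it cannot
    be backdated, the trick Team Copyright relied on."""
    return {"sha256": fingerprint(data), "timestamp": time.time()}

def verify(data: bytes, record: dict) -> bool:
    """Check that the bytes match the originally published fingerprint."""
    return fingerprint(data) == record["sha256"]

# Example: an original image versus a tampered copy.
original = b"original image bytes"
record = publish_record(original)

print(verify(original, record))            # True
print(verify(b"deep-faked image", record)) # False
```

Even this toy version captures the key property: the fingerprint commits to the exact bytes, so a doctored image cannot claim the original's publication date.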
People generally do not understand how deep-fake technology works, but they certainly see edited images and videos, and many do not believe they are fake. As a result, the damage from defamation, identity theft, and harassment is very hard to mitigate. Once it happens, there is no way to erase it completely.
References
Asghar, I. (2022, November 5). Video of Swati, wife made using ‘deepfake’ technology: FIA. The Express Tribune. https://tribune.com.pk/story/2384813/private-video-of-swati-wife-made-using-deepfake-technology-fia
Cyberbullying: What is it and how to stop it. (n.d.). UNICEF. Retrieved February 1, 2023, from https://www.unicef.org/end-violence/how-to-stop-cyberbullying
Enerio, D. (2021, November 24). BTS Fan Accounts Suspended Over Fake Copyright Claims Made … International Business Times. https://www.ibtimes.com/bts-fan-accounts-suspended-over-fake-copyright-claims-made-trolls-3344269
Hassan, A. (2022, March 30). Rana Ayyub, journalist and Modi critic, barred from leaving India | Freedom of the Press News. Al Jazeera. https://www.aljazeera.com/news/2022/3/30/rana-ayyub-india-journalist-stopped-from-boarding-london-flight
Sherman, J. (2021, October 14). "Completely horrifying, dehumanizing, degrading": One woman's fight against deepfake porn. CBS News. https://www.cbsnews.com/news/deepfake-porn-woman-fights-online-abuse-cbsn-originals/