Is TikTok truly doing enough to protect its users from hateful content? UC Berkeley plans an investigation

Rhianna Benson
Sep 10, 2021

An artificial intelligence research team at the University of California, Berkeley, has set out plans to analyse the competency of the content-moderation software currently used by the social media platform TikTok.

(image by Yahoo Inc via Creative Commons)

The video-sharing app TikTok has frequently found itself in the firing line of news coverage, amid rising concerns about how the app moderates and censors content and about the types of material being shared.

Although TikTok’s guidelines prohibit the sharing of any dangerous or harmful content, the videos that users post are not manually evaluated. Instead, an algorithm categorises content according to its nature and is meant to filter out harmful material.

An extract from TikTok’s ‘Community Guidelines’ — https://www.tiktok.com/community-guidelines?lang=en

Marc Faddoul, an AI researcher at the University of California, Berkeley, has gathered a team of fellow research scientists with the aim of building an infrastructure to assess the competence of the monitoring and censoring software currently used by TikTok. Originally from Paris, Mr Faddoul, 26, has worked in AI at Berkeley for the last five years, focussing primarily on algorithmic fairness and computational propaganda.

“When it comes to considering how TikTok differs from other social media platforms, especially over the last year since the pandemic, there has been a huge burst in the number of downloads and regular users. TikTok was originally very entertainment-focussed — dancing, lip-syncing, performing, etc. — whereas now, with the flood of new users who have downloaded the app and use it regularly, new topics and areas of interest have emerged. With increased popularity comes increased diversity.

“What concerns myself and my team, however, is the increasing amount of dangerous and pressurising content we’re observing via the ‘fake’ automated accounts that my team set up as part of the initial research stages of our project. We’ve observed more political, religious and socio-economic content being posted over the last year or so, despite none of our accounts intentionally seeking out these areas of interest.”

Given that 41% of TikTok’s global audience falls into the 10–19 age bracket (despite the app’s official age restriction of 13 years old), concerns are being raised that TikTok may not be doing enough to monitor potentially harmful content being shared across the app.

(created on Visme; information from Wallaroo)

“Algorithmic detection is supposed to filter out videos containing nudity, videos endorsing violence, far-right political content and religious protests, but through our own first-hand research as part of this project, we have found that TikTok’s moderating systems and algorithms are not doing enough to prevent such content from seeping through the cracks and being accessed by younger, perhaps more naïve and vulnerable, users.”

Mr Faddoul, who previously worked as an AI scientist at Facebook and as a freelance algorithm designer, acknowledged that harmful content escaping moderation is an issue on other social media platforms too, but his team believes the problem may be heightened on TikTok because of the platform’s heavy reliance on audio-visual material.

“Starting this month, I will be working with a team at Berkeley called ‘Tracking Exposed’ to launch a full-scale investigation into TikTok. We have already begun our project by simulating users and analysing what content is being recommended, as well as what ‘harmful’ material comes our way.”

[left] Marc Faddoul (image courtesy of Mr Faddoul); [right] image by J. Paxon Reyes via Creative Commons

Mr Faddoul claimed that during the initial stages of the project, his team has also noticed a significant number of cases in which individuals from ‘marginalised communities’ have found apparently non-harmful content removed or blocked by TikTok.

“There are countless cases in the press and across social media of content being blocked where it should perhaps not be. In Russia, it seems to be the case that LGBTQ+ content is immediately blocked on the grounds that it is harmful. Equally, we have noticed a lot of cases where individuals from marginalised communities and groups are finding their content removed or blocked by TikTok, particularly in the Black community regarding the ‘Black Lives Matter’ (BLM) movement.”

Reports are emerging that LGBTQ+ content is being banned on TikTok in numerous countries (via Twitter)

In 2020, a blackout protest carried out by US users took place on the app amid accusations of censorship and suppression of Black content.

Many US users claimed that their Black content was being banned with no explanation (via Twitter).

Vanessa Pappas, TikTok US General Manager, publicly apologised for the incident at the time, referring to it as a “technical glitch” that meant content containing ‘#BlackLivesMatter’ and ‘#GeorgeFloyd’ was receiving no views. She said: “First, to our Black community: We want you to know that we hear you and we care about your experiences on TikTok. We acknowledge and apologize to our Black creators and community who have felt unsafe, unsupported, or suppressed. We don’t ever want anyone to feel that way. We welcome the voices of the Black community wholeheartedly.”

In response to the apology from TikTok, Mr Faddoul claims that there are still instances being reported where the “wrong” types of content are being suppressed: “Yes, everyone should feel as though they have the right to freedom of expression, but only as long as nobody is being harmed in the process.”

In recent times, numerous global organisations, including the World Economic Forum, have spoken out against the app, accusing its moderators of not taking a firm enough stance against extremist thoughts and ideologies, political and medical misinformation, and hate speech.

The World Economic Forum has also accused TikTok of not doing enough to protect users (image by World Economic Forum via Creative Commons).

In response to hearing of Mr Faddoul’s upcoming investigation, Laylah Scott, a business analyst at TikTok UK, said: “TikTok’s Community Guidelines are explicitly laid out across our global websites and on the app, and everyone who sets up a profile is expected to abide by them.”

TikTok’s guidelines set out that any content centring on dangerous or harmful subjects must not be posted, uploaded, streamed or shared. They also state that any content exhibiting such material will be removed, and that repeat offenders should expect to have their accounts suspended or ultimately banned.

“We rely on a technological algorithm to automatically ban content that breaches the guidelines; this includes harmful hashtags, sounds and comments,” said Miss Scott, 24. “As well as this, we have human moderators who scan our app for dangerous content that our algorithmic system may not have initially detected.

“We believe that every user has a responsibility to keep other users safe, but as case studies have shown, sometimes this is not good enough. We understand that. We do not deny the suggestion that more could be done — there’s always something that could be done to improve our relationship with social media.

“I’m sure that if a more advanced, sophisticated type of algorithmic technology were recommended to us with the purpose of protecting our users, we wouldn’t ignore it.”
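
For readers wondering what the two-tier system Miss Scott describes might look like in its simplest form, the sketch below is a deliberately simplified, hypothetical illustration: an automated rule pass removes clear breaches and escalates borderline material to a human-review queue. The blocklists, thresholds and function names are invented for this article and are not TikTok’s actual code.

```python
# Hypothetical, simplified sketch of a two-tier moderation pass:
# automated rules first, human review for anything the rules cannot settle.
# The terms in the blocklists below are placeholders, not TikTok's real lists.
from dataclasses import dataclass

BANNED_HASHTAGS = {"#exampleviolence", "#examplehate"}   # placeholder terms
SUSPECT_KEYWORDS = {"weapon", "attack"}                  # placeholder terms


@dataclass
class Post:
    post_id: str
    caption: str
    hashtags: set[str]


@dataclass
class ModerationResult:
    post_id: str
    action: str            # "remove", "human_review" or "allow"
    reason: str = ""


def moderate(post: Post) -> ModerationResult:
    """First tier: automatic rules. Clear breaches are removed outright;
    borderline matches are escalated to human moderators."""
    if post.hashtags & BANNED_HASHTAGS:
        return ModerationResult(post.post_id, "remove", "banned hashtag")
    if any(word in post.caption.lower() for word in SUSPECT_KEYWORDS):
        return ModerationResult(post.post_id, "human_review", "suspect keyword")
    return ModerationResult(post.post_id, "allow")


if __name__ == "__main__":
    example = Post("123", "Testing a weapon prop for a film scene", {"#film"})
    print(moderate(example))   # escalated to human review, not auto-removed
```

A real system would of course rely on machine-learned classifiers over video, audio and text rather than fixed keyword lists, and it is exactly that opacity Mr Faddoul’s team wants to probe.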

Along with monitoring the strengths and failures of the software currently used by TikTok, Mr Faddoul also hopes to analyse potential geographical disparities in which content is permitted or disallowed.

Mr Faddoul’s team also seeks to investigate geographical disparity within the app (image by Savannah River Site via Creative Commons).

“Our plan is to build a full-scale monitoring infrastructure for TikTok’s main recommendation system, as it’s currently the biggest platform. We will be investigating whether there is an intrinsic demotion of certain types of political content — BLM, the CPC [Communist Party of China], vaccine diplomacy, etc. — by seeing how hashtags and content related to these issues spread across the platform, and why other types of content — LGBTQ+ content, for example — are seemingly being suppressed.”

“We understand the challenge ahead of us. The increasingly conflicting human opinions on what should or should not be considered harmful make it difficult, but what we want to know is whether we can design and build a sophisticated piece of infrastructure that will use technology to catalogue content accurately, fairly and with total transparency.”

“This is all aimed at protecting the average TikTok user, particularly young and vulnerable audiences.”
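
To give a sense of the measurement Mr Faddoul outlines, the sketch below shows, in deliberately simplified form, how recommendation logs from simulated accounts could be compared across topics: each simulated account records which videos the app recommends to it, and the researchers then check how often tracked hashtags surface. The data structures and helper names here are invented for illustration; they are not Tracking Exposed’s actual tooling.

```python
# Hypothetical sketch: compare how often tracked hashtags appear in the
# recommendations served to a set of simulated ("sockpuppet") accounts.
# All data below is invented for illustration.
from collections import Counter

# One list of hashtag sets per simulated account, in the order videos were recommended.
recommendation_logs = [
    [{"#dance"}, {"#blacklivesmatter"}, {"#dance", "#comedy"}],
    [{"#comedy"}, {"#dance"}, {"#lgbtq"}],
    [{"#dance"}, {"#vaccine"}, {"#dance"}],
]

TRACKED_TOPICS = {"#blacklivesmatter", "#lgbtq", "#vaccine", "#dance"}


def exposure_rates(logs):
    """Return the share of all recommended videos carrying each tracked hashtag."""
    counts = Counter()
    total = 0
    for account_log in logs:
        for hashtags in account_log:
            total += 1
            for tag in hashtags & TRACKED_TOPICS:
                counts[tag] += 1
    return {tag: counts[tag] / total for tag in sorted(TRACKED_TOPICS)}


if __name__ == "__main__":
    for tag, rate in exposure_rates(recommendation_logs).items():
        print(f"{tag}: appears in {rate:.0%} of recommended videos")
```

In the real study, markedly lower exposure for one topic than for comparable topics, observed across many simulated accounts and regions, would be the kind of signal the team reads as possible demotion.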

Marc Faddoul and his team at Tracking Exposed hope to complete the initial research stage of their investigation in the coming weeks, with plans to put their infrastructure designs into practice before November.
