Abstract
In recent years, social media platforms have quietly adopted a new and powerful method of moderating content without removing it. It relies on reducing the content's exposure by downranking it, limiting the areas of the platform where it is made available, tweaking recommendation systems, and rearranging the platforms' architecture.
This reduction practice can be applied as an independent sanction or in tandem with other sanctions such as annotation (the addition of labels or warnings). It is often described as a tool to curb "borderline content": content that does not violate the platforms' policies but "gets close" to violating them and might be "upsetting" to some people.
The way this practice is applied is deeply troubling. First, it is conducted with very little transparency and accountability. What exactly is considered "borderline", along with other components of the reduction practice, remains undisclosed. Users are often unaware that their content has been restricted and thus cannot appeal the decision (and even if they were aware, platforms such as Facebook and Instagram offer no technical option to appeal this type of sanction). Moreover, unlike other content moderation methods (particularly deletion), reduction is independent of content policies (such as Facebook's "Community Standards"). As such, this backyard practice does not receive the public attention directed at those policies by journalists, academics, civil society organizations, and regulators. Finally, because this practice is entirely AI-driven, it raises many of the difficulties that surround these technologies, including bias, limitations in identifying context, and errors.