Written by Francesca Trevisan
Social media platforms have control over their users’ data and the information users are exposed to. This control is shaped by opaque algorithms driven by business models designed to maximize user engagement, and hence revenue. It is also exercised with a lack of transparency and accountability that allows platforms to avoid responsibility for the risks they pose to users’ fundamental rights. This becomes particularly problematic in the representation of communities that already face systemic discrimination and violence. Against these communities, social media platforms shape new forms of surveillance and discrimination: they reinforce stereotypes and have direct implications for how these marginalized groups are represented and treated by society. Migrants and refugees are among the communities subjected to these new forms of discrimination and surveillance.
In the first Re:framing migrants report, we analyzed how social media contributes to the misrepresentation and silencing of marginalized groups at different stages: content moderation, content selection and targeted advertising. We also explored how migrants and refugees are depicted in a negative light, stereotyped and subjected to social media surveillance, hate speech and disinformation, which reduce the complexity of their stories and dehumanize them. These issues are not effectively addressed by social media platforms, and they must be taken seriously because they violate fundamental rights such as the right to human dignity, privacy and non-discrimination. The recently approved Digital Services Act aims to address some of these concerns. For example, Article 26 provides that very large platforms (platforms with more than 45 million average monthly active users) must identify, analyze and assess any negative effects on the exercise of fundamental rights. The challenge is that the violation of fundamental rights on these platforms hits very specific communities hardest, each with its own issues and consequences.
ETICAS Research and Consulting works to align technology with human rights. It was created in 2012 to study the social, ethical and legal impact of security policies, innovation and tech development. Our contribution focuses on the analysis of the contextual factors that can and should guide tech development. Eticas teams up with organizations to identify black-box algorithmic vulnerabilities and retrains AI-powered technology with better source data and content.
Francesca is a researcher at Eticas, where she explores how society deals with artificial intelligence and tech innovation. She has a PhD in social psychology and likes to apply critical theories to flag injustice and dismantle power structures. She is interested in inequality, education, tech and human rights.