Revista Latina de Comunicación Social. ISSN 1138-5820
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Tamara Vázquez-Barrio
San Pablo CEU University of Madrid. Spain. tamarav@ceu.es
Jacob González-Castro
San Pablo CEU University of Madrid. Spain. jacob.gonzalezcastro@ceu.es
David García-Marín
Rey Juan Carlos University. Spain. david.garciam@urjc.es
Funding: This research has been funded by CEU San Pablo University, CEU Universities, within the framework of the Call for Grants for Recognized Research Groups (GIR, in Spanish). The grant has been awarded to the ThinkOnMedia Recognized Research Group, affiliated with the Faculty of Humanities and Communication Sciences at CEU San Pablo University.
How to cite this article / Standard reference:
Vázquez-Barrio, Tamara; González-Castro, Jacob & García-Marín, David (2026). Online hate: mapping youth symbolic violence in the digital ecosystem. Revista Latina de Comunicación Social, 84, 1-24. https://www.doi.org/10.4185/RLCS-2026-2550
Date of Receipt: Aug. 19, 2025
Date of Acceptance: Nov. 31, 2025
Date of Publication: Jan. 29, 2026
Introduction: Hate speech finds fertile ground for spreading online. The difficulty of identifying the perpetrators, coupled with how quickly messages spread on social media, contributes to the persistence and normalization of this form of violence. Young people are of particular interest for analysis given their high level of exposure and active role in online spaces for interaction. Methodology: This study consisted of a nationally representative survey of 1,205 young people aged 16 to 29. Data were collected between March and April of 2023 through computer-assisted web interviews (CAWI). Results: More than half of the participants reported receiving negative comments related to their physical appearance or clothing. Women are more vulnerable to attacks related to gender and aesthetics. A high prevalence was also observed among those who identify with left-wing ideologies. Eighty-three percent of respondents admitted to practicing self-censorship as a preventive measure to avoid conflicts on social media. Discussion and Conclusions: The study reveals the troubling normalization of hate speech online, where attacks primarily target gender and political ideology. It is urgent to develop prevention strategies and promote critical digital literacy aimed at protecting vulnerable groups and fostering safe participation in the digital space.
Keywords: hate speech, digital violence, exclusion, social media, young people, immigration, gender.
The United Nations (UN, 2019) defines hate speech as any form of communication —oral, written, or behavioral— that attacks a person or group because of their religion, ethnicity, nationality, race, color, ancestry, gender, or other identity factors (p. 3). This definition focuses on elements of symbolic aggression related to a person's identity.
From a more analytical perspective, Kaufman (2015, cited in Bustos Martínez et al., 2019) proposes that speech can be considered hate speech if it meets the following four criteria: (1) it is directed at a historically vulnerable or discriminated group; (2) it humiliates symbols representing that group; (3) it incites the denigration of its members; and (4) it explicitly intends to exclude. This characterization adds an intentional and contextual component to the concept.
However, recent research has shown that these approaches, which focus exclusively on vulnerable groups, are insufficient. Fuentes Osorio (2024) demonstrates that, in about half of the cases examined, hate speech is not directed at traditionally discriminated groups and that ideology is the main driver of its production. This finding suggests that the phenomenon is broader than classic frameworks of discrimination and is articulated around political or ideological conflicts.
In turn, Sponholz (2022) cautions that hate speech should not be understood solely by the vocabulary it uses, but rather by the discriminatory meaning it takes on in a given context. In other words, the words' symbolic weight, the social effect of the communicative action, and their capacity to stigmatize or exclude are all relevant.
Other authors have made contributions in the form of classification proposals that allow for a better understanding of the complexity of the phenomenon. For example, Miró Llinares (2016) distinguishes between discourses with discriminatory aims and those that contain symbolic violence without seeking exclusion, thus considering the origin and motivation of the discourse. He also identifies the most frequent grounds for criminalization: attacks on a person's honor or dignity, denigration for belonging to a particular group, and collective humiliation.
Similarly, Esquivel Alonso (2016) classifies hate speech into three categories: (1) hate based on race or ethnicity, (2) hate based on nationality or religion, and (3) hate based on gender or sexual orientation. Pahor de Maiti et al. (2023) offer a similar classification, analyzing haters —those who generate hate speech— who are primarily motivated by ethnicity, religion, or sexual orientation. Assimakopoulos (2017) and Moreno López and Morales Calvo (2022), on the other hand, argue that cultural or ethnic affiliation and sexual orientation are the main triggers of hate speech. However, they also highlight a significant factor: physical appearance. Within this category, elements that are visually perceptible stand out, such as body features, clothing style, and personal style. These elements influence the social perception of individuals and can make them potential targets of hostility.
These forms of visual and symbolic discrimination are closely related to slut-shaming, which stigmatizes individuals who do not adhere to traditional standards of femininity, especially with regard to sexual behavior. How someone dresses or expresses their sexuality may be seen as a violation of gender norms, leading to social sanctions such as moral judgment, exclusion, or public humiliation. Thus, slut-shaming operates as both a form of control over the body and behavior and a specific manifestation of gender-based hate speech. A recent review based on the PRISMA protocol analyzed 19 studies selected from 585 articles retrieved from the Scopus and Web of Science databases (Miano & Urone, 2023). The qualitative analysis revealed that rigid gender norms and the existence of a double standard are key factors in exposure to slut-shaming. Adolescent girls, young women, and LGBTQI+ individuals are the most vulnerable and experience the most severe consequences. In this context, slut-shaming functions as a mechanism of social control that punishes individuals for adopting sexual behaviors deemed inappropriate for women, such as having multiple partners or displaying an active sexual attitude. This reinforces traditional gender roles and norms. Other studies confirm the impact of slut-shaming on the physical and psychological well-being of girls from adolescence onward (Goblet & Glowacz, 2021).
Hate speech has spread rapidly and effectively via the internet, particularly social media (Igareda González, 2022). This phenomenon is closely linked to the rise of information and communication technologies (ICTs) and, more specifically, to the consolidation of social media as spaces for communication, interaction, and community building over the last decade (Ramírez-García et al., 2022).
Factors that facilitate the spread of hate speech in digital environments include anonymity, which can lead to impunity (Moreno López & Arroyo-López, 2022), and other characteristics inherent to online communication. For instance, disconnection from physical space creates a sense of emotional distance, reducing empathy and encouraging the posting of offensive comments without fully considering the consequences. This psychological distance and the apparent absence of risk make digital platforms seem like safe environments for expressing hate speech (Falxa, 2014). Additionally, the legal complexity of regulating these behaviors in an ethereal, transnational medium like the Internet poses a challenge. As Jubany and Roiha (2018) point out, the lack of clear borders and defined timeframes hinders the application of effective legal norms, thus limiting the ability to address these practices.
Furthermore, support and validation within online communities contribute to the legitimization of these discourses, especially when they are promoted or endorsed by institutional entities. This occurs in Spain and other European countries with far-right or radical right-wing parties regarding racist or xenophobic ideas (Camargo Fernández, 2021; Said-Hung et al., 2023). This phenomenon resembles group cohesion dynamics, wherein the explicit support of certain political forces reduces the perceived severity of these discourses and facilitates their circulation without symbolic penalty (Bustos Martínez et al., 2019, pp. 38-39).
Digital platforms play a key role in the spread of hate speech as distribution channels, especially those with a high level of interaction and little content moderation. These networks amplify discriminatory messages and create environments that normalize and reproduce them systematically. Piñeiro-Otero and Martínez-Rolán (2021) point out that platforms like X (formerly Twitter) have become toxic spaces used by aggressors who intensify their attacks when victims are exposed publicly, generating a more humiliating and public form of violence.
Similarly, according to García-Prieto et al. (2024), TikTok has become another channel most prone to disseminating discriminatory content. The lack of effective regulation and the immaturity of the predominant audience on this platform contribute to its transformation into a space where symbolic violence easily circulates. In this context, even individuals who reject physical violence may tolerate and internalize verbal or symbolic violence, particularly that directed toward physical appearance, cultural identity, or sexual orientation (Moreno López & Arroyo López, 2022). Thus, social networks not only facilitate the circulation of hate speech technically, but also shape the social and emotional frameworks that allow for its acceptance or trivialization. This generates an ecosystem in which symbolic violence is presented as a normalized part of digital exchange.
Age is one of the factors that most affects vulnerability to hate speech. Young people and adolescents are among the most exposed groups. Their exposure is directly related to how much time they spend online and how intensely they use social media. Today, social media is their primary environment for interaction. According to the IAB Spain (2025) study, young people are the main users of these platforms. Meanwhile, the Save the Children report (2024) indicates that nearly 90% of adolescents access the internet several times a day and that 20% are almost constantly connected.
This high digital presence translates into a greater likelihood of experiencing online risks. The Microsoft Online Safety Survey (2025) shows that 66% of young people have experienced some form of digital risk and ranks hate speech as the second most frequent type of risk, surpassed only by disinformation. Teenage girls and LGBTQ+ youth are particularly affected by this type of symbolic violence, as they have higher levels of exposure and are frequent targets of attacks.
However, young people's role in hate speech is not limited to being victims. Several studies also highlight their role in disseminating these messages. Wachs et al. (2022) emphasize that peer pressure is one of the most decisive factors in this behavior. This pressure is particularly intense during adolescence when the desire for recognition or acceptance can prompt young individuals to perpetuate hate speech to fit in socially and avoid isolation. A sense of belonging to a group reinforces these dynamics by justifying behaviors that would otherwise be considered socially unacceptable. In this context, collective action acts as a validation mechanism; those who disseminate hate speech perceive it as generating impact, visibility, and debate, which reinforces their motivation to continue this practice. Adding to this scenario is a limited capacity to manage negative emotions, such as frustration, which often stems from a lack of emotional education. This deficiency can lead to impulsive, aggressive, or vengeful reactions, thus facilitating the spread of hate speech among young people. This highlights the need to address the phenomenon comprehensively, combining digital literacy, emotional education, and promoting a culture of respect and coexistence, especially in spaces where young people develop their identities and social relationships.
Gender is another key factor in differential exposure to hate speech. As Esquivel Alonso (2016) pointed out, a specific category of this discourse targets gender and sexual orientation issues. Several studies support this assertion. According to Pew Research Center data (Duggan, 2017), 20% of women between 18 and 29 years old experienced online harassment, compared to 9% of men in the same age group. The report also reveals that 53% of these women received unsolicited sexually explicit images, compared to 37% of men who reported similar experiences. These practices constitute specific forms of symbolic violence that affect young women disproportionately. One example is the use of intimidating language against women simply because of their physical appearance or clothing choices (Romo Parra et al., 2023). This reality is particularly evident in the online gaming sphere. De Lima-Vélez et al. (2023) found that hate speech is frequent against female gamers due to their appearance and clothing choices while playing.
Victims of this type of harassment often experience psychological consequences or resort to self-censorship; these effects have been documented by Vázquez Barrio et al. (2020) and Martínez-Valerio and Mayagoitia Soria (2021). Self-censorship intensifies in a digital environment dominated by horizontal surveillance, which is exercised by platform users themselves. According to Correcher Mira (2020), this phenomenon, combined with cancel culture, reinforces social control over online self-expression.
In these scenarios, the aggressors' impunity contrasts sharply with the victims' vulnerability. Often, the victims choose to remain silent to avoid social or personal consequences. This dynamic relates directly to the spiral of silence theory, which Noelle-Neumann (1993) formulated to explain how people tend to suppress their opinions when they perceive that these do not align with those of the dominant majority.
According to Hernández Prados et al. (2024), women are more aware of hate speech on social media and the platforms where it proliferates. This heightened awareness is likely due to their historical status as primary targets of discriminatory discourse related to their sexuality, public image, or conformity to contemporary beauty standards.
Esteban-Ramiro and Moreno-López (2023) assert that women and non-heterosexual individuals are far more exposed to hate speech than heterosexual men. This difference may be related to how they engage with digital spaces. Torrecillas-Lacave et al. (2022) point out that women tend to use communication services that allow them to interact, share content, and express themselves publicly. This increases their visibility and, consequently, their exposure to symbolic attacks.
Taken together, these data demonstrate that gender and sexual orientation influence not only the likelihood of being targeted by online hate speech but also how these attacks are experienced, perceived, and managed. Addressing this structural vulnerability requires public policies, regulatory frameworks, and educational strategies that recognize symbolic violence as a real form of harm and promote equitable, safe, and inclusive digital environments for all identities.
Ethnic origin and migrant status are key factors that shape hate speech on social media. Following Elon Musk's acquisition of X (formerly Twitter), Hickey et al. (2025) observed a substantial surge in discriminatory content, particularly racist, homophobic, and transphobic messages. This positioned the LGBTQ+ community and migrants as the primary targets of symbolic aggression. As Rivera-Martín et al. (2022) point out, these aggressions against the LGBTQ+ community manifest not only as insults and threats, but also as rejection of non-normative gender identities. This gives rise to a type of LGBTQ+phobia that delegitimizes their existence under the guise of personal opinion. In line with this, Unlu et al.'s (2025) study focuses on online hate speech in Finland and analyzes messages targeting the LGBTQ+ community on X. The results show that the LGBTQ+ community is portrayed as a threat to traditional norms and values, which reinforces stigmas and promotes exclusionary narratives.
In the case of migrants, data from the Spanish Observatory on Racism and Xenophobia (2024) reveals a similarly troubling situation. The First Annual Report on Monitoring Hate Speech on Social Media revealed that major digital platforms (X, Facebook, Instagram, TikTok, and YouTube) failed to remove over half of the reported hate speech content, even when it was identified as potentially criminal. The report also revealed that people of North African origin and the Muslim community in general were the main groups targeted, with Islamophobia being one of the most frequent forms of aggression. These discourses dehumanize migrants, particularly those of Moroccan origin, associating them with public insecurity and generating a constant threat perception that fosters social polarization.
Hate speech against migrants is not only expressed by anonymous users but also by organized virtual communities that receive institutional validation and support. Contrary to Fluck's (2017) assertion that ideology plays a minor role in shaping hate speech, Wachs et al. (2022) affirm that political and ideological currents influence the perpetration of this discourse. Camargo Fernández (2021), Said-Hung et al. (2023), Matarín Rodríguez-Peral et al. (2025), and Pérez-Escolar et al. (2025) warn of the role of far-right political groups in Spain and Europe in normalizing and disseminating racist and xenophobic messages. These dynamics resemble group cohesion mechanisms where political leaders' explicit support diminishes the seriousness of hate speech and favors its public legitimization (Bustos Martínez et al., 2019, pp. 38-39).
Parties like Vox actively use social media platforms such as X, YouTube, TikTok, and Instagram to portray migrants as a threat to the welfare state. They employ fear as a political mobilization strategy (González-Castro, 2023). According to García González (2022), this narrative dehumanizes migrants by presenting them as responsible for their situation and as a potential burden on receiving countries. Furthermore, these discourses are not uniformly directed at the entire migrant population. As Aranda (2023) explains, Vox primarily directs its hostility toward migrants who do not fit into its concept of the "Hispanosphere," centered on Latin American countries. This allows Vox to construct a specific enemy —Muslim-majority countries and communities— and focus its attacks in a particular direction without openly rejecting migration altogether. Within this framework, religion acts as a filter for acceptance or rejection, becoming the primary criterion for defining the "other."
Fuentes-Lara and Arcila-Calderón (2023) conclude that, in Spain, social acceptance of Islamophobic discourse on social media is often accompanied by other forms of hatred that deepen social polarization. According to the authors, this reality is not reproduced with the same intensity in other European countries, such as France. This phenomenon reflects the specificity of the Spanish context, where digital racism, especially when disguised as concern for security or national identity, enjoys a worrying degree of public legitimacy.
These data show that ethnic origin and migrant status increase vulnerability to hate speech on social media and have become central themes in polarizing narratives with strong ideological overtones. The political exploitation of racism and xenophobia, coupled with the inaction or complicity of certain digital platforms, contributes to the normalization of these messages and reinforces the symbolic exclusion of marginalized groups.
The analyzed theoretical framework allows for the understanding that hate speech on social media does not affect all groups equally. Rather, it is primarily directed against historically marginalized groups, including women, LGBTQ+ individuals, young people, migrants, and individuals of non-majority ethnic origin. These symbolic attacks occur in a digital environment that amplifies their reach and, in many cases, legitimizes and normalizes them due to a lack of effective regulation, social validation among users, or their usage for political purposes. Therefore, it is crucial to adopt a comprehensive approach combining public policies, emotional education, digital literacy, and coordinated institutional actions to create safer, more inclusive, and respectful digital spaces.
This research is guided by one general objective and four specific objectives.
This study employed a nationally representative survey of young people aged 16 to 29 who reside in Spain. Fieldwork was outsourced to the specialized company GAD3, which was responsible for the technical execution of data collection. The sample size was 1,205 and was constructed using quotas for sex and age based on updated July 2021 data from the National Institute of Statistics (INE, in Spanish) to ensure representativeness of the analyzed population.
The sample was balanced in terms of gender (48.7% women and 51.3% men) and age. Thirty-three percent of respondents were between 16 and 19 years old, 32.6% were between 20 and 24 years old, and 34.1% were between 25 and 29 years old. The survey's margin of error is 2.9%, with a 95.5% confidence level (two sigmas) under the assumption of simple random sampling and maximum indeterminacy (P = Q = 0.5).
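The stated figure can be checked with the standard margin-of-error formula for a proportion under simple random sampling, e = z·√(PQ/n). A minimal sketch (the z = 2 "two sigma" value and P = Q = 0.5 assumption are taken from the text above):

```python
import math

def margin_of_error(n: int, z: float = 2.0, p: float = 0.5) -> float:
    """Margin of error for a proportion under simple random sampling
    with maximum indeterminacy (P = Q = 0.5) at z sigmas."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 1,205 respondents, two-sigma (~95.5%) confidence
e = margin_of_error(n=1205)
print(f"{e:.1%}")  # 2.9%, matching the figure reported in the text
```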
Fieldwork was conducted in March and April 2023 using self-administered online interviews via CAWI (Computer Assisted Web Interviewing). The questionnaire included 45 closed-ended questions, and each interview averaged 15 to 20 minutes.
To measure the degree to which the variables (1) age, (2) ideology, and (3) gender influenced the dependent variables, binary logistic regressions were performed. Due to the qualitative and dichotomous nature of the dependent variables, which were reduced to "Yes/No" categories with the "I Don't Know/No Answer" (NS/NC, in Spanish) category assumed as missing, logistic regression calculations were deemed most appropriate. To do so, all independent variables were also converted into dichotomous (Yes/No) variables. The following new variables were created as a result of this process:
The variable relating to the respondent's sex was not modified because it was originally established as a dichotomy (male or female).
In the final drafting phase, a generative artificial intelligence tool based on language models (LLM) was used to improve the clarity, coherence, and readability of the text (ChatGPT, GPT-4 model, used in March 2025). This tool was used similarly to spelling and grammar checkers that are already part of editorial processes. Work was done exclusively on material previously written by the authors. No new content was generated. The authors of the article bear full responsibility for the content, its analysis, and its conclusions.
For the purposes of this research, hate speech is defined as written, verbal, or visual expressions that denigrate, insult, threaten, or attack a person because of their sex, gender, physical appearance, clothing, religious beliefs, ethnic or racial origin, sexual orientation, economic status, opinions on gender equality, or political ideas.
Overall, the data show that nearly 8 out of 10 respondents (79.6%) report having experienced hate speech for at least one of the ten reasons included in the survey (Table 1).
When the data are broken down by sex, it is observed that 76.8% of men acknowledge receiving this type of comment, while 82.5% of women report the same. The nearly six-percentage-point gap between the two groups indicates that women are more exposed to hate speech in digital environments.
Regarding age, the results reveal that the incidence of hate speech increases as the user's age decreases. Young people between 16 and 19 years old are the most affected group, with 81.6% reporting receiving hate messages. This is followed by the 20-24 age group (80.5%) and finally the 25–29 age group (76.9%).
Table 1. General data based on gender and age.

| Descriptive statistics: hate speech | | Yes | No | NS/NC |
|---|---|---|---|---|
| General | | 959 (79.6%) | 229 (19%) | 46 (1.4%) |
| Gender | Man | 76.8% | 21.1% | 2.1% |
| | Woman | 82.5% | 16.8% | 0.7% |
| Age | 16-19 | 81.6% | 16.4% | 2% |
| | 20-24 | 80.5% | 18.2% | 1.3% |
| | 25-29 | 76.9% | 22.1% | 0.9% |

Source: Elaborated by the authors.
Young people primarily identify two reasons for receiving negative comments: physical appearance and clothing. Over half of those surveyed reported being attacked because of their appearance, and 45.5% cited clothing as a reason for aggression. Other reasons of an ideological and identity-related nature follow, with views on gender equality, gender identity, and political beliefs accounting for over 30% of the responses (Figure 1). These data highlight the widespread presence of hate speech and reveal that its main targets are the physical appearance and personal opinions and identities of those who experience it.
Other motivations, such as sexual orientation, religious beliefs, having had multiple partners, or belonging to a particular ethnicity or race, are cited less frequently. This finding is particularly striking when considering that previous research indicates immigrant groups and the LGBTQ+ community are often among the most targeted in digital environments (Hickey et al., 2025). This apparent contradiction could be due to factors such as the underrepresentation of certain groups in the sample or the normalization of certain forms of discrimination, making it difficult for respondents to identify them as hate speech.
Figure 1. Reasons for receiving negative comments

Source: Elaborated by the authors.
Being a woman is a clear vulnerability factor for hate speech in digital environments. According to the collected data, women report being targeted by offensive comments more often than men in all situations considered in the survey, except for three: economic status, sexual orientation, and ethnic or racial origin (see Figure 2).
Figure 2. Conditioning factors based on gender

Source: Elaborated by the authors.
Table 2. Binary logistic regression.

| Dependent variable: Have you ever received negative comments for these reasons? | Predictive factor | Statistics |
|---|---|---|
| Because of my sexual orientation | Being left-wing | B = .913; Exp(B) = 2.491; p < .001 |
| | Gender (being a man) | B = .439; Exp(B) = 1.551; p = .004 |
| Because of my ethnic or racial origin | Gender (being a man) | B = .381; Exp(B) = 1.463; p = .018 |
| Because of my political views | Being center-left | B = -.523; Exp(B) = .593; p = .014 |
| | Being center-right | B = -1.224; Exp(B) = .294; p < .001 |
| Because of the way I dress | Being aged 16-19 | B = .434; Exp(B) = 1.544; p = .003 |
| | Being left-wing | B = .447; Exp(B) = 1.563; p = .022 |
| | Gender (being a woman) | B = -.425; Exp(B) = .653; p < .001 |
| Because of my physical appearance | Being aged 16-19 | B = .349; Exp(B) = 1.418; p = .018 |
| | Being left-wing | B = .622; Exp(B) = 1.862; p = .002 |
| | Being center-left | B = .559; Exp(B) = 1.749; p = .009 |
| | Gender (being a woman) | B = -.514; Exp(B) = .598; p < .001 |
| Because of my religious/spiritual beliefs | Being aged 16-19 | B = .377; Exp(B) = 1.458; p = .036 |
| | Being center-right | B = -.572; Exp(B) = .564; p = .004 |
| Because of my way of thinking about equality between women and men | Being aged 16-19 | B = .548; Exp(B) = 1.730; p < .001 |
| | Being center-right | B = -.573; Exp(B) = .564; p = .001 |
| | Gender (being a woman) | B = -.351; Exp(B) = .704; p = .006 |
| Because of being a woman or a man | Being aged 16-19 | B = .469; Exp(B) = 1.598; p = .003 |
| | Being left-wing | B = .713; Exp(B) = 2.041; p = .001 |
| | Being center-left | B = .603; Exp(B) = 1.827; p = .009 |
| | Gender (being a woman) | B = -1.271; Exp(B) = .280; p < .001 |

Source: Elaborated by the authors.
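As a quick arithmetic check (not part of the authors' analysis), the Exp(B) values in Table 2 are simply the exponentials of the B coefficients, i.e. the odds ratios for each predictor; the percentage changes in the odds discussed below follow directly from them:

```python
import math

# In binary logistic regression, Exp(B) = e^B is the odds ratio for a
# one-unit change in the predictor. Recomputing two coefficients from
# Table 2 (sexual orientation ~ being a man; physical appearance ~
# being a woman):
print(round(math.exp(0.439), 3))   # 1.551, as reported in Table 2
print(round(math.exp(-0.514), 3))  # 0.598, as reported in Table 2

# An odds ratio of 1.551 corresponds to a 55.1% increase in the odds:
print(f"{math.exp(0.439) - 1:.1%}")  # 55.1%
```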
As shown in Figure 2, both men and women most frequently cite physical appearance and clothing as reasons for being victims of hate attacks in digital environments. However, there are significant gender differences. For example, 64% of women report receiving negative comments about their appearance, compared to 49% of men. Similarly, 52% of women report being criticized for their clothing, compared to 39% of men.
Logistic regression tests (Table 2) support this trend and confirm the significant influence of sex on the likelihood of receiving negative comments about physical appearance. Women are more exposed to this type of aggression than men. These findings reinforce the idea that the female body continues to be subject to scrutiny and social control, a dynamic closely related to body shaming. However, to a lesser extent, this social pressure has also extended to men, who are beginning to experience criticism about their physical appearance. This suggests a broadening of the normative body ideal and the mechanisms of aesthetic policing toward men.
Gender also influences the likelihood of receiving negative comments about clothing. The most pronounced difference, however, concerns being attacked simply for being a woman, which reveals a clear component of gender discrimination. More specifically, 47.2% of women have been targeted for this reason, compared to 19.6% of men. This finding can be interpreted as a manifestation of misogyny, in which female identity itself becomes a target of symbolic aggression.
The results also show that sex significantly influences the likelihood of receiving negative comments when expressing opinions about equality between women and men. Women are more likely to be questioned or attacked for their ideas in this area, revealing social resistance to feminist discourse when it is expressed by women. However, a relevant trend was also observed among men. Their views on gender equality were the third most frequent reason they were victims of hate speech. This suggests that expressing egalitarian positions can generate rejection and trigger hostile responses in the public sphere, even for men.
As previously noted, the survey data indicate three reasons for which men report being attacked more often than women: economic status, sexual orientation, and ethnic or racial origin. The statistical analysis, however, qualifies this picture with respect to economic status, since none of the factors related to age, ideology, or sex predicts receiving negative comments on that ground. Regarding ethnic or racial origin, the results show that being male increases the odds of receiving negative comments by 46.3%. Finally, with respect to sexual orientation, being male increases those odds by 55.1%.
Ideology is the second most significant factor influencing exposure to hate speech. The data show that those who identify with the left are the most affected group in several sensitive categories. According to the frequency data (Figure 3), the main reasons these young people report receiving negative comments are related to their sexual orientation (43.2%), political views (41%), and views on equality (38.2%). Logistic regression tests confirm this trend. The results show that being left-wing multiplies the odds of receiving negative comments based on sexual orientation by 2.5 and doubles the odds of receiving hate messages because of one's gender.
Conversely, people who identify as right-wing report receiving fewer negative comments than those in the center or on the left. Among respondents who reported being targeted, the most frequent reasons cited were religious beliefs (29.4%), political views (29.1%), and economic situation (24.5%). However, in all three cases, these figures were lower than those among center-left or left-leaning respondents.
Analyzing the data with logistic regression confirms that identifying as right-wing significantly reduces the likelihood of being a victim of hate speech for religious or gender-related reasons. The lower level of perceived or reported hostility toward those who identify with right-wing ideological positions could be due to their discourse aligning with traditional normative frameworks that retain legitimacy in various social and political contexts. Additionally, it is worth considering whether this trend is reinforced by emerging institutional discourses and the algorithmic logic of digital platforms, which may favor content that does not openly challenge the status quo. However, these interpretations require empirical verification through research that delves deeper into the role of these variables.
In summary, the data reveal an asymmetry in exposure to hate speech: ideological positioning directly influences the intensity and type of symbolic violence young people experience on social media. Further analysis shows that young people who identify as centrist receive the most negative comments in all scenarios, except when criticism is more intensely directed toward those who identify as left-wing. This apparent paradox in the context of current polarization can be explained, in part, by their intermediate position, which exposes them to criticism from both ideological extremes. Both sides may read their lack of definition as lukewarmness, opportunism, equidistance, or a lack of commitment, leading to a high volume of negative interactions.
This scenario suggests that ideological polarization manifests as digital discrimination, where expressing political ideas, social positions, and sexual identity becomes grounds for symbolic aggression. Thus, social networks are not neutral spaces for debate; rather, they reproduce dynamics of ideological exclusion that affect young people differently depending on their political stance and level of exposure.
Figure 3. Conditioning factors based on ideological identities

Source: Elaborated by the authors
A third variable that proves significant in several of the cases analyzed is age. The results indicate that younger people are more likely to receive negative comments about their dress and their opinions on gender equality, and are 45.8% more likely to receive hate for their religious or spiritual beliefs. In addition, younger people are also more likely to be targeted simply for being male or female (being between 16 and 19 years old increases the chances of receiving this type of message by 60%). These data reflect greater vulnerability among younger people, who seem to be more exposed to judgment, criticism, and hate speech about their identity, appearance, and ideas.
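The regression findings above ("46.3% more likely," "increases the likelihood by 2.5 times") are odds-ratio readings of logistic regression coefficients. As a minimal sketch of that interpretation, assuming the standard conversion exp(β) and using illustrative values only (the study's fitted coefficients are not reported in the text):

```python
import math

def odds_ratio(beta: float) -> float:
    """Convert a logistic regression coefficient to an odds ratio."""
    return math.exp(beta)

def pct_change(beta: float) -> float:
    """Express the odds ratio as a percentage change in the odds."""
    return (math.exp(beta) - 1) * 100

# Illustrative values, not the study's actual coefficients:
# an odds ratio of 2.5 ("2.5 times more likely") corresponds to
# beta = ln(2.5); "55.1% more likely" corresponds to OR = 1.551.
beta_left = math.log(2.5)
print(round(odds_ratio(beta_left), 2))         # 2.5
print(round(pct_change(math.log(1.551)), 1))   # 55.1
```

This is why a coefficient and a percentage can describe the same effect: an odds ratio above 1 maps directly to a "% more likely" figure, and one of 2 to a doubling of the odds.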
Self-censorship is one of the most widespread behaviors among young people seeking to avoid tense or controversial situations on social media. An overwhelming 83.3% of respondents admitted to preferring to remain silent or to sidestep certain topics rather than risk controversy, a pattern that relates directly to Elisabeth Noelle-Neumann's (1993) spiral of silence theory. According to this theory, people tend to silence their opinions when they perceive themselves to be in the minority, fearing isolation or social rejection. Digital environments seem to reproduce this same pattern. Thus, social networks, initially conceived as spaces for free expression, can function as mechanisms of social pressure that reinforce conformity.
Self-censorship manifests not only in the omission of opinions but also in the way people present themselves. Nearly half of the respondents (47.3%) say they think twice before posting a photo of themselves, fearing the comments they might receive, and 36% directly state that they do not post what they truly think. Faced with this climate, many young people choose to create closed, trusted spaces: 50% report sharing content only in private chats with friends, where they feel freer to express themselves.
Other behaviors deviate from self-censorship, showing more strategic or defensive responses. For example, 16.6% use anonymous profiles to express themselves more freely without exposing their identities, and 10.9% adopt a provocative attitude by posting comments contrary to the majority in order to generate controversy. Some respondents choose to conform to the dominant discourse by publishing content they know will be socially accepted, even if it doesn't truly reflect their thoughts. This extends the spiral of silence beyond literal silence to include strategically adapted forms of expression.
Figure 4 clearly shows gender differences in self-censorship behaviors and digital identity management. Compared to men, women tend to think carefully before posting a picture of themselves or expressing their opinion on social media. In contrast, men more frequently exhibit two behaviors that deviate from this more restrained pattern. They are more likely to report using anonymous profiles to express themselves on social media and more likely to say they feel comfortable generating controversy or contradicting the dominant discourse. These two behaviors, in which male participation surpasses female participation, could be related to a perception of lower social or personal risk in digital conflict or to socialization that tolerates and sometimes rewards public confrontation as a form of self-affirmation.
Figure 4. Attitudes toward posting on social media based on gender

Source: Elaborated by the authors.
The results of this research fulfill the overall objective by offering a detailed view of how hate speech manifests on social media among young Spanish people, the factors that provoke it, the groups most vulnerable to it, and the strategies users adopt to protect themselves.
Regarding objective 1 (SO1), the data confirm the high prevalence of symbolic aggression in the digital environment: a significant proportion of young people report having experienced hostile or discriminatory comments, pointing to an alarming normalization of hate speech online.
Regarding objective 2 (SO2), the study identified the most common motivations for the attacks. Physical appearance and ideology were the most prevalent, though cases related to gender, sexual orientation, and identity were also observed. However, it is notable that categories typically linked to hate speech, such as sexual orientation and ethnic origin, were mentioned less frequently by participants. This apparent contradiction with previous studies (Fuentes-Lara & Arcila-Calderón, 2023) could be explained by several factors, including the underrepresentation of vulnerable groups in the sample and the difficulty of recognizing subtle or daily discrimination. In this sense, the data invite us to reflect on the limits of individual perception of the phenomenon of online hate and the need to broaden the methodological approach to capture its full complexity.
The third objective (SO3) is addressed by identifying the groups most vulnerable to hate speech. Being a woman significantly increases the likelihood of experiencing aggression. This conclusion coincides with previous research (Hernández Prados et al., 2024), particularly with regard to physical appearance and clothing. This pattern reproduces historical dynamics of social control that have positioned the female body as an object of public scrutiny. In the digital environment, these dynamics manifest as body shaming and slut-shaming. Beyond aesthetics, women are attacked simply for being women or for defending or expressing egalitarian ideas, revealing the existence of misogynistic and antifeminist discourses that are especially hostile toward egalitarian voices from women.
Alongside this observation, another interesting fact emerges: aesthetic pressure no longer affects women exclusively. There has been an increase in criticism of men for reasons related to their appearance. This indicates a broadening of the normative body ideal and the mechanisms of symbolic surveillance toward the male gender.
Additionally, being male is associated with a greater likelihood of receiving negative comments related to sexual orientation, placing non-heterosexual individuals among the most vulnerable groups. This finding aligns with the reviewed literature (Esteban-Ramiro & Moreno-López, 2023).
Finally, age emerges as a significant variable. Younger individuals are more frequently reported to be victims of offensive comments. Their high presence and participation in digital spaces (Save the Children, 2024) can partly explain this greater exposure to hate speech, where they interact with greater intensity and visibility.
Together, these findings help define the precise profile of audiences vulnerable to online hate speech. They also provide a solid empirical basis for designing policies that raise awareness, prevent hate speech, and protect vulnerable groups.
Ideology is the second most influential factor in exposure to hate speech in digital environments. The data show that individuals who identify with the left are more likely to receive negative comments about sensitive topics, such as sexual orientation, physical appearance, and gender identity. This greater vulnerability may be associated with greater public exposure or more active participation in social debates, which often generate polarization.
Conversely, respondents who identify with the conservative spectrum report receiving fewer attacks. When they have been the target of offensive comments, the main reasons are related to their religious beliefs, political opinions, and, to a lesser extent, economic situation. This reduced perception of vulnerability may be due to the greater public acceptance of discourses that were previously socially penalized. The rise of discourses aligned with radical right-wing positions has contributed to the normalization of messages that were previously considered disruptive, marginal, or politically incorrect regarding immigration, feminism, and gender violence (Said-Hung et al., 2023). This has reduced the self-censorship that, according to the spiral of silence theory (Noelle-Neumann, 1993), inhibited the expression of non-majority opinions. Recent research (Velasco & Rodríguez-Alarcón, 2020; Rosenberg, 2022; Fuentes-Lara & Arcila-Calderón, 2023) highlights a reconfiguration of the public sphere, in which some conservative voices have gained discursive ground and legitimacy.
This new communicative context explains the situation of those who position themselves ideologically in the center. Despite their apparent neutrality, this group reports the highest frequency of negative comments in most scenarios, except when criticism is most intensely directed toward young leftists. This finding suggests that centrist discourse may be vulnerable in an increasingly polarized environment where intermediate positions may be criticized from both ideological sides.
Regarding SO4, the analysis reveals the various self-protective strategies adopted by young people. The most common strategy is self-censorship, which does not always manifest as overt silence but rather takes more subtle and complex forms. Some young people choose to withhold controversial opinions and also moderate how they present themselves on social media. This includes publishing content designed to conform to socially acceptable norms, even if it does not accurately reflect one's own ideas or values. Thus, the concept of the spiral of silence (Noelle-Neumann, 1993) expands beyond literal silence to include adaptive expressive strategies where public expression is adjusted according to the prevailing opinion. This conclusion aligns with other recent research demonstrating that members of online communities not only remain silent when prevailing opinions on controversial topics differ from their own but also argue against their own ideas (Haug et al., 2025). This behavior is more common among women and has been identified in previous studies (Vázquez Barrio et al., 2021; Torrecillas-Lacave et al., 2022), which point to greater social pressure on women regarding their physical appearance and online behavior. In this context, female self-censorship emerges as a form of self-protection against potential judgment, unwanted comments, or even harassment.
In contrast, a minority —mostly men— adopts more defensive or strategic responses. Some resort to anonymity so they can freely express controversial opinions without exposing themselves. Others adopt an openly provocative stance, publishing messages contrary to the prevailing opinion in an attempt to challenge it or generate controversy. This divergence in attitudes can be interpreted through the spiral of silence theory. While women tend to internalize the fear of social rejection and moderate their public expression, some men react with confrontation or a search for visibility, even if it means hiding their identity.
Overall, the findings allow for a more precise mapping of the impact of hate speech on Spanish youth and demonstrate how digital environments reproduce —and sometimes intensify— structural inequalities. Thus, the research provides a solid empirical basis for designing preventative interventions, digital literacy programs, and public policies aimed at protecting the most vulnerable groups in the virtual world.
Aranda, G. (2023). De la hispanidad a la hispanoesfera. Conciencia imperial y nacionalismo centrípeto en la derecha voxista (2013-2020). Intus-Legere Historia, 17(2), 206-234. https://intushistoria.uai.cl/index.php/intushistoria/article/view/604
Assimakopoulos, S., Baider, F. H., & Millar, S. (2017). Young People’s Perception of Hate Speech. In Online Hate Speech in the European Union: A Discourse-Analytic Perspective (pp. 53-85). Springer. https://doi.org/10.1007/978-3-319-72604-5_4
Bustos Martínez, L., De Santiago Ortega, P. P., Martínez Miró, M. Á., & Rengifo Hidalgo, M. S. (2019). Discursos de odio: Una epidemia que se propaga en la red. Estado de la cuestión sobre el racismo y la xenofobia en las redes sociales. Mediaciones Sociales, 18, 25-42. https://doi.org/10.5209/meso.64527
Camargo Fernández, L. (2021). El nuevo orden discursivo de la extrema derecha española: de la deshumanización a los bulos en un corpus de tuits de Vox sobre la inmigración. Cultura, Lenguaje y Representación, 26, 63-82. http://dx.doi.org/10.6035/clr.5866
Correcher Mira, J. (2020). Discurso del odio y minorías: redefiniendo la libertad de expresión. Teoría & Derecho. Revista de pensamiento jurídico, 28, 166-191. https://core.ac.uk/download/pdf/491098258.pdf
De Lima-Vélez, V., Puello-Martínez, D., Mendoza-Curvelo, M., & Acevedo-Merlano, Á. (2023). Hipersexualización del personaje femenino en el anime: una mirada desde Latinoamérica. El caso Genshin Impact. Comunicación y Género, 6(1), 1-14. https://doi.org/10.5209/cgen.84885
Duggan, M. (2017, July 11). Online harassment 2017. Pew Research Center. https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/
Esquivel Alonso, Y. (2016). El discurso del odio en la jurisprudencia del Tribunal Europeo de Derechos Humanos. Cuestiones Constitucionales. Revista Mexicana De Derecho Constitucional, 1(35), 3-44. https://doi.org/10.22201/iij.24484881e.2016.35.10491
Esteban-Ramiro, B., & Moreno-López, R. (2023). Nuevas formas de violencia y discursos de odio hacia las mujeres en juegos online multijugador. methaodos. Revista de ciencias sociales, 11(1). http://dx.doi.org/10.17502/mrcs.v11i1.652
Falxa, J. (2014). Redes sociales y discursos de odio: un enfoque europeo. In F. Pérez Álvarez, & L. M. Díaz Cortés (coord.), Moderno discurso penal y nuevas tecnologías (pp. 89-106). Ediciones Universidad de Salamanca.
Fluck, J. (2017). Why do students bully? An analysis of motives behind violence in schools. Youth & Society, 49(5), 567-587. https://doi.org/10.1177/0044118X14547876
Fuentes Osorio, J. L. (2024). Hateful speech. La expansión del discurso de odio. Revista Electrónica de Criminología, 08(02), 1-30. https://hdl.handle.net/10953/2694
Fuentes-Lara, C., & Arcila-Calderón, C. (2023). El discurso de odio islamófobo en las redes sociales. Un análisis de las actitudes ante la islamofobia en Twitter. Revista Mediterránea De Comunicación, 14(1), 225-240. https://doi.org/10.14198/MEDCOM.23044
García González, S. (2022). Necropolítica y discursos de odio. Sentimiento antinmigración, vulnerabilidad y violencia simbólica. Isegoría, 67. https://doi.org/10.3989/isegoria.2022.67.07
García-Prieto, V., Bonilla-del-Río, M., & Figuereo-Benítez, J. C. (2024). Discapacidad, discursos de odio y redes sociales: video-respuestas a los haters en TikTok [Disability, hate speech and social media: video replies to haters on TikTok]. Revista Latina de Comunicación Social, 82, 01-21. https://www.doi.org/10.4185/RLCS-2024-2258
Goblet, M., & Glowacz, F. (2021). Slut Shaming in Adolescence: A Violence against Girls and Its Impact on Their Health. International Journal of Environmental Research and Public Health, 18(12). https://doi.org/10.3390/ijerph18126657
González-Castro, J. (2023). La comunicación del miedo en la política de Vox: elecciones en Castilla y León y Andalucía 2022. Revista ICONO 14. Revista Científica De Comunicación Y Tecnologías Emergentes, 21(1). https://doi.org/10.7195/ri14.v21i1.1912
Haug M., Maier C., Gewald H., & Weitzel T. (2025). Supporting opinions to fit in: a spiral of silence-theoretic explanation for establishing echo chambers and filter bubbles on social media. Internet Research, 35(7), 30-51. https://doi.org/10.1108/INTR-03-2024-0413
Hernández Prados, M. Á., Álvarez Muñoz, J. S., & Pina Castillo, M. (2024). Los mensajes de odio en adolescentes. ¿Una perspectiva de género? Revista Internacional de Educación para la Justicia Social, 13(1), 269-285. https://doi.org/10.15366/riejs2024.13.1.015
Hickey, D., Fessler, D. M. T., Lerman, K., & Burghardt, K. (2025). X under Musk’s leadership: Substantial hate and no reduction in inauthentic activity. PLoS ONE, 20(2). https://doi.org/10.1371/journal.pone.0313293
IAB Spain. (2025). Estudio de las Redes Sociales. https://iabspain.es/estudio/estudio-de-redes-sociales-2025/
Igareda González, N. (2022). El discurso de odio anti-género en las redes sociales como violencia contra las mujeres y como discurso de odio. Dykinson.
Jubany, O., & Roiha, M. (2018). Las palabras son armas: Discurso de odio en la red. Edicions Universitat Barcelona.
Martínez-Valerio, L., & Mayagoitia Soria, A. M. (2021). Influencers y mensajes de odio: jóvenes y consumo de contenidos autocensurados. Prisma Social: revista de investigación social, 34, 4-39. https://dialnet.unirioja.es/servlet/articulo?codigo=8024369
Matarín Rodríguez-Peral, E., Gómez Franco, T., & Rodríguez-Peral Bustos, D. (2025). Propagation of Hate Speech on Social Network X: Trends and Approaches. Social Inclusion, 13. https://doi.org/10.17645/si.9317
Miano, P., & Urone, C. (2023). What the hell are you doing? A PRISMA systematic review of psychosocial precursors of slut-shaming in adolescents and young adults. Psychology & Sexuality, 15(1), 97-113. https://doi.org/10.1080/19419899.2023.2213736
Microsoft. (2025). Global Online Safety Survey. https://www.microsoft.com/en-us/digitalsafety/research/global-online-safety-survey
Miró Llinares, F. (2016). Taxonomía de la comunicación violenta y el discurso del odio en Internet. IDP. Revista de Internet, Derecho y Política, 22, 82-107. https://www.redalyc.org/pdf/788/78846481007.pdf
Moreno López, R., & Arroyo López, C. (2022). Redes, equipos de monitoreo y aplicaciones móvil para combatir los discursos y delitos de odio en Europa. Revista Latina de Comunicación Social, 80, 347-363. https://www.doi.org/10.4185/RLCS-2022-1750
Moreno López, R., & Morales Calvo, S. (2022). Comunicación en redes y discursos de odio en el contexto español. VISUAL REVIEW. International Visual Culture Review Revista Internacional De Cultura Visual, 10(1), 1-9. https://doi.org/10.37467/revvisual.v9.3557
Noelle-Neumann, E. (1993). The spiral of silence: Public opinion--Our social skin. University of Chicago Press.
Pahor de Maiti, K., Franza, J., & Fišer, D. (2023). Haters in the spotlight: gender and socially unacceptable Facebook comments. Internet Pragmatics, 6(2), 173-196. https://doi.org/10.1075/ip.00093.pah
Pérez-Escolar, M., Morejón‐Llamas, N., & Alcaide‐Pulido, P. (2025). Populist Rhetoric and Hate Speech: Analyzing Xenophobic Narratives in Vox’s 2023 Election Campaign. Politics and Governance, 13. https://doi.org/10.17645/pag.9346
Piñeiro-Otero, T., & Martínez-Rolán, X. (2021). Eso no me lo dices en la calle. Análisis del discurso del odio contra las mujeres en Twitter. Profesional de la Información, 30(5). https://www.researchgate.net/publication/354486369_Eso_no_me_lo_dices_en_la_calle_Analisis_del_discurso_del_odio_contra_las_mujeres_en_Twitter_Say_it_to_my_face_Analysing_hate_speech_against_women_on_Twitter
Ramírez-García, A., González-Molina, A., Gutiérrez-Arenas, M. P., & Moyano Pacheco, M. (2022). Interdisciplinariedad de la producción científica sobre el discurso del odio y las redes sociales: Un análisis bibliométrico. Comunicar, 72, 129-140. https://doi.org/10.3916/C72-2022-10
Rivera-Martín, B., Martínez de Bartolomé Rincón, I., & López López, P. J. (2022). Discurso de odio hacia las personas LGTBIQ+: medios y audiencia social. Revista Prisma Social, 39, 213-233. https://revistaprismasocial.es/article/view/4868
Romo Parra, C., Sell Trujillo, L., Vera Balanza, T., & Delgado Peña, J. J. (2023). Identidades y exposición a las violencias online. Aproximación a una clasificación temática de los mensajes de odio. Revista Latina De Comunicación Social, 81, 539-553. https://doi.org/10.4185/rlcs-2023-1998
Rosenberg, N. (2022). La seguridad como eje rector de la política israelí durante la era Netanyahu: implicancias para el conflicto palestino-israelí (2009-2021) [Tesis de grado, Universidad Nacional de Rosario]. Repositorio Institucional UNR. https://rephip.unr.edu.ar/server/api/core/bitstreams/36111f1b-34c9-4ea2-be6e-7425c4d42803/content
Said‐Hung, E., Moreno‐López, R., & Mottareale‐Calvanese, D. (2023). Promotion of hate speech by spanish political actors on twitter. Policy and Internet, 15(4), 665-686. https://doi.org/10.1002/poi3.353
Save the Children (2024). Derechos #sin conexión: un análisis sobre derechos de la infancia y la adolescencia y su protección en el entorno digital. Save the Children España. https://www.savethechildren.es/actualidad/informe-derechos-sin-conexion
Spanish Observatory on Racism and Xenophobia. (2024). Informe Anual de Monitorización del Discurso del Odio en Redes Sociales, 2023 (Informe Nº 121-24-009-4). Secretaría de Estado de Migraciones, del Ministerio de Inclusión, Seguridad Social y Migraciones. https://ciudadaniaexterior.inclusion.gob.es/documents/20121/1419878/Informe+anual+monitorizaci%C3%B3n+2023_v01.07.24.pdf/2a317d3b-dd6d-1934-852e-6609af8f0b43?t=1720096884522
Sponholz, L. (2022). Hate speech and deliberation: overcoming the words that wound trap. In M. Pérez & J. Noguera-Vivó (Eds.), Hate speech and polarization in participatory society (pp. 49-64). Routledge, Taylor and Francis Group. https://dialnet.unirioja.es/servlet/libro?codigo=859034
Torrecillas-Lacave, T., Vázquez-Barrio, T., & Suárez-Álvarez, R. (2022). Experiencias de ciberacoso en adolescentes y sus efectos en el uso de internet. Revista ICONO 14. Revista Científica De Comunicación Y Tecnologías Emergentes, 20(1). https://doi.org/10.7195/ri14.v20i1.1624
United Nations. (2019). La Estrategia y Plan de Acción de las Naciones Unidas para la lucha contra el Discurso de Odio. https://www.un.org/en/genocideprevention/documents/advising-and-mobilizing/Action_plan_on_hate_speech_ES.pdf
Unlu, A., Truong, S., Sawhney, N., Tammi, T., & Kotonen, T. (2025). From prejudice to marginalization: Tracing the forms of online hate speech targeting LGBTQ+ and Muslim communities. New Media & Society. https://doi.org/10.1177/14614448241312900
Vázquez Barrio, T., Sánchez-Valle, M., & Viñarás-Abad, M. (2021). Percepción de las personas con discapacidad sobre su representación en los medios de comunicación. Profesional de la información, 30(1), 1-12, e300106. https://dialnet.unirioja.es/servlet/articulo?codigo=7791121
Vázquez Barrio, T., Torrecillas-Lacave, T., & Suárez-Álvarez, R. (2020). Diferencias de género en las oportunidades de la digitalización para la participación sociopolítica de los adolescentes. Revista Mediterránea De Comunicación, 11(1), 155-168. https://doi.org/10.14198/MEDCOM2020.11.1.10
Velasco, V., & Rodríguez-Alarcón, L. (2020). Nuevas narrativas migratorias para reemplazar el discurso del odio. Narrativas porCausa. https://porcausa.org/wp-content/uploads/2020/02/Dossier_Nuevas-Narrativas-para-reemplazar-el-discurso-del-odio.pdf
Wachs, S., Wettstein, A., Bilz, L., & Gámez-Guadix, M. (2022). Adolescents’ motivations to perpetrate hate speech and links with social norms [Motivos del discurso de odio en la adolescencia y su relación con las normas sociales]. Comunicar, 71, 9-20. https://doi.org/10.3916/C71-2022-01
Authors' Contributions
Conceptualization: Vázquez-Barrio, Tamara. Software: García-Marín, David. Validation: Vázquez-Barrio, Tamara. Formal Analysis: Vázquez-Barrio, Tamara. Data Curation: Vázquez-Barrio, Tamara. Drafting - Preparation of the original draft: Vázquez-Barrio, Tamara and González-Castro, Jacob. Drafting, Revision, and Editing: Vázquez-Barrio, Tamara and González-Castro, Jacob. Visualization: Vázquez-Barrio, Tamara. Supervision: Vázquez-Barrio, Tamara. Project Management: Vázquez-Barrio, Tamara. All authors have read and accepted the published version of the manuscript: Vázquez-Barrio, Tamara; González-Castro, Jacob; and García-Marín, David.
Funding: This research has been funded by CEU San Pablo University, CEU Universities, within the framework of the Call for Grants for Recognized Research Groups (GIR, in Spanish). The grant has been awarded to the ThinkOnMedia Recognized Research Group, affiliated with the Faculty of Humanities and Communication Sciences at CEU San Pablo University.
Conflict of interests: There is no conflict of interest.
Tamara Vázquez-Barrio
San Pablo CEU University
Full professor accredited by ANECA at San Pablo CEU University since 2009. Director of the Master's Program in Corporate Communication, Politics, and Lobbying. She is the Lead Researcher of the Think On Media Research Group and the AlgorLit project. Recognized by CNEAI for two six-year research periods. She is the coordinator of the Doctoral Program in Social Communication at the CEINDO International Doctoral School and the scientific editor of the Doxa Comunicación journal. Since 2004, she has participated in competitive projects, serving as Lead Researcher on three of them. Two of these projects were funded by the Spanish National Plan for Scientific and Technical Research and Innovation. Her research focuses on the uses, risks, and opportunities of the Internet and social media, as well as disinformation and political communication.
H-index: 18
Orcid ID: https://orcid.org/0000-0003-2789-8554
Scopus ID: https://www.scopus.com/authid/detail.uri?authorId=55567519900
Google Scholar: https://scholar.google.es/citations?user=LWsMlhIAAAAJ&hl=es
ResearchGate: https://www.researchgate.net/profile/Tamara-Vazquez-Barrio
Academia.edu: https://uspceu-es.academia.edu/TamaraV%C3%A1zquezBarrio
Jacob González-Castro
San Pablo CEU University
Since 2024, he has been a tenured lecturer at San Pablo CEU University. His research focuses on communication in the public sphere and citizenship. He holds Bachelor's and Master's degrees in Communication and Teacher Training, as well as a Master's degree in Corporate Communication. He is a member of the Think On Media Research Group and is involved in the research project, "Knowledge, Attitudes, and Opinions of the Spanish Population Regarding Internet Algorithms and the Design of Critical Algorithmic Literacies (AlgorLit)." His research analyzes political and social communication. He examines the participants, situations, and elements involved in communication, as well as the impact of messages on various audiences. This contributes to the formation of public opinion.
H-index: 3
Orcid ID: https://orcid.org/0000-0003-2480-5703
Scopus ID: https://www.scopus.com/authid/detail.uri?authorId=57428123200
Google Scholar: https://scholar.google.es/citations?user=YVzt2esAAAAJ&hl=es
ResearchGate: https://www.researchgate.net/profile/Jacob-Gonzalez-Castro?ev=hdr_xprf
Academia.edu: https://uspceu-es.academia.edu/JacobG
David García-Marín
Rey Juan Carlos University
Doctorate in Sociology from UNED, specializing in Media and Knowledge Society. Master's degree in Communication and Education on the Web from UNED. He has a Bachelor's degree in Journalism from the Complutense University of Madrid. Accredited as a Contracted Professor with a doctorate by ANECA. He is an assistant professor with a doctorate at Rey Juan Carlos University, where he teaches "New Technologies and the Information Society," "News Genres in Radio and Television," and "Radio News Program Production." He is a visiting professor in various master's programs focusing on digital communication, new pedagogies, and transmedia journalism at UNED. He has also directed courses on digital media and disinformation at UNED. Previously, he was a professor at Carlos III University, where he taught "Media Theory" in English in the bilingual journalism and cultural studies degree programs. His research focuses on podcasting, digital audio, disinformation, fact-checking, and transmedia journalism.
H-index: 23
Orcid ID: https://orcid.org/0000-0002-4575-1911
Scopus ID: https://www.scopus.com/authid/detail.uri?authorId=57201402902
Google Scholar: https://scholar.google.es/citations?user=DjgRxL4AAAAJ&hl=es&oi=ao
ResearchGate: https://www.researchgate.net/profile/David-Garcia-Marin
[1] Center for University Studies