Opinions, Insults, and Digital Civility: A Growing Challenge in the Age of Social Media

Social media have democratized individual voices and enriched public debate, but they have also created a space where insults, personal attacks, defamation, and targeted misinformation against third parties proliferate. This phenomenon of digital aggression has become a form of cyber-victimization: insults, threats, and social exclusion that, although transmitted through digital means, generate real psychological and social effects on those affected, including persistent anxiety, social isolation, and emotional exhaustion when no effective legal or communicative response exists.

Research on social platforms shows that these contentious interactions are often aggravated by algorithms that prioritize polarizing content, amplifying confrontational messages over thoughtful debate. This intensifies personal attacks rather than fostering informed deliberation, and it raises important questions about platform responsibility and technological design in either encouraging or mitigating these dynamics.

Why Are Current Ethical Norms Not Enough?

Many assume that “netiquette”—an informal set of rules for respectful online behavior—could be sufficient to resolve these tensions. However, netiquette lacks legal force and binding enforcement mechanisms, offering no real protection against systematic aggression or repeated digital harassment.

At the same time, platforms have attempted to regulate behavior through their own community guidelines and internal moderation systems. Yet this creates complex dilemmas:

Who decides what is “acceptable”—the user, the platform, or the State?

To what extent do these internal mechanisms balance freedom of expression with other rights such as honor, privacy, and equality?

In practice, these systems operate according to each platform’s own criteria, leading to inconsistencies and a lack of transparency in content moderation decisions.

What Is the “Digital Security Perimeter”?

The concept of a digital security perimeter refers to a comprehensive set of legal, technological, and social norms and safeguards designed to guarantee the safety, dignity, and rights of individuals in the digital environment. This includes:

Effective protection against cyberbullying, hate speech, and defamation.

Transparency and accountability of platforms hosting user interactions.

Accessible and effective tools for reporting, moderation, and redress.

Clear limits on content when it collides with personal rights, without turning those limits into arbitrary censorship.

Although there is not yet a single universal legal definition of this concept, many experts use it as a framework for policies aimed at balancing freedom of expression with the protection of fundamental rights.

Laws and Regulations Responding to the Phenomenon
🇪🇺 The Digital Services Act (DSA) — The Regulatory Pillar in Europe

The European Union’s Digital Services Act (DSA) is currently the most significant regulation governing online platform content. Its objective is to harmonize standards of responsibility, transparency, and safety across the EU’s digital single market. Among its key elements:

An obligation for platforms to act on notices and remove illegal content expeditiously once they become aware of it.

Clear and accessible tools for users to report problematic content.

Transparency in moderation decisions and algorithmic functioning.

Enhanced obligations for very large online platforms (VLOPs), defined as those with more than 45 million active users in the EU.

As part of its implementation, the EU has also adopted specific codes of conduct to counter illegal hate speech online, providing clearer guidance for coordinated action with national laws.

Complementary Initiatives

The EU is also advancing an Action Plan against Cyberbullying, focusing particularly on minors and incorporating educational and legal measures to strengthen digital protection frameworks by 2026. The plan acknowledges that an increasing number of adolescents experience online harassment and proposes joint strategies for education, prevention, and legal response.

In comparative perspective, countries such as Austria have introduced specific criminal offenses addressing cyberbullying and persistent online harassment, integrating protections related to honor, damages, and non-consensual image dissemination.

Freedom of Expression and the Risks of “Digital Authoritarianism”

Two fundamental constitutional principles intersect here:

The right to express oneself freely, even with strong opinions, harsh criticism, or unpopular views.

The right of others not to be insulted, harassed, or exposed to symbolic or real violence through online conduct.

This is not a simple dichotomy between absolute freedom and total control. It involves recognizing reasonable limits when expression transforms into systematic aggression that violates basic rights. Freedom of expression, as protected in constitutions and international instruments—including the Charter of Fundamental Rights of the European Union—protects criticism and diverse viewpoints, but not conduct constituting crimes such as threats, incitement to violence, or discriminatory speech.

Many free speech advocates warn that vague definitions or overly broad content control mechanisms may lead to disproportionate censorship or arbitrariness, creating perceptions of “digital authoritarianism” if clear safeguards and procedural guarantees are lacking. A recent example of this debate occurred in the United Kingdom during discussions surrounding the Online Safety Act, which faced criticism from civil liberties groups concerned about the scope of regulatory powers granted in the name of online safety.

Should Individuals Be Free to Choose Their Digital Experience?

Beyond law lies social reality:

Individuals should have the freedom to choose where to participate, which groups to follow, and which opinions to engage with.

Polarization does not arise solely from differences of opinion but is reinforced by algorithmic designs and digital dynamics that reward conflict over reflection.

True freedom means coexisting with opinions we dislike without resorting to verbal or emotional violence. Respect is not imposed uniformly; it is a social practice requiring education, community norms, and, when necessary, clear and proportionate legal standards.

Conclusion: More Laws or Social Improvement?

The solution does not rest on a single instrument:

Clear and proportionate laws, such as those in Europe, establish necessary limits without sacrificing fundamental rights.

Specific protections—for example, against gender-based digital violence or persistent harassment—strengthen safeguards for vulnerable groups.

Education, individual responsibility, and digital literacy are essential; law alone is insufficient if digital culture normalizes abuse.

Continuous ethical and social debate, with citizen participation and transparency, is crucial to ensure that technology and regulation do not become tools of arbitrariness or exclusion.

Additionally, legislation may include economic penalties for individuals who seriously violate others’ digital rights, combining effective deterrence with real accountability and reinforcing a culture of digital respect.

Ultimately, the goal is not to impose a single global cultural model, but to build a framework that allows respect, diversity, and genuine freedom of thought and expression—where everyone can participate without fear of aggression and without violating the rights of others.