Opinion | Guest's View

Social Media Conundrum (Part II)

Introduction

This is the second part of the three-part series Social Media Conundrum; the first part, published in Goa Chronicle, is here: https://goachronicle.com/social-media-conundrum-part-i/. The first part delved into the root issue, i.e., censorship of content and users by Social Media companies (especially Twitter and Facebook), the reasons behind it, and why censorship on Social Media is dangerous. In this part we will analyse why the Government of India's (Govt's) new regulation will not address this muddle.

The Govt's new IT regulation for Social Media mandates three things. Firstly, Social Media platforms must set up a grievance redressal and compliance mechanism, which includes appointing a resident grievance officer, a chief compliance officer and a nodal contact person. Secondly, these platforms must submit monthly reports on complaints received from users and the action taken. Thirdly, instant messaging apps must make provisions to track the first originator of a message.

While these new rules look good on the surface, they fail to make the cut when analysed against the fundamental problems they are trying to address, namely the issues explained in Part I of this series: censorship of content and users by the platforms. To that end, let us look at the broad categories of reasons cited by Social Media platforms for censoring content, and why they do not hold water.

Facebook/Twitter deems content to be abusive/offensive to a person/group/community

Abuse is subjective. Firstly, a word or phrase that is abusive or offensive to one person may not sound so to another. While such differences in perception exist among people within the same socio-regional-cultural background, it is obvious that they only get magnified across the span of Social Media, which cuts across a plethora of socio-regional-cultural backgrounds. Secondly, there are many languages in use, and words or phrases with the same translated meaning may not carry the same abusive connotation across languages. Thirdly, transliterated words (e.g., Hindi written in English) extend this complexity further, and there are scores of dialects and slang terms within every language. Then there is the most perplexing angle: custom or personal abbreviations, pet names, fake or mixed-up names, and intentional or unintentional spelling changes. For example, STFU can be read as 'Shut the fuck up' by some, but there is no way to prevent someone from claiming that it meant 'STop further uproar'; abbreviations are subjective and there is no limit to one's creativity. People can also play with spelling (e.g., a$* hole), phonemes or attributes of people/groups to evade identification of a person/community/region, so there is no legal or standardized way for Twitter/Facebook to even establish that the content was abusive to whoever alleges abuse! So, when Social Media companies provide their report with the list of customer complaints, how would the Govt arbitrate? Will a Govt panel review every disputed tweet/post/comment to ascertain whether the content amounts to abuse and, if so, whether it really targets any real person/group/community? Given the unfathomable complexity of the multiple dimensions explained above, it is impossible to arbitrate such disputes in an unbiased and transparent manner. Finally, there is the killer: scale. We are talking about millions of tweets/posts/comments a day.

Facebook/Twitter deems content to be defamatory to a person/group/community

This one is related to the abusive/offensive category; maybe we can call it the rich cousin of that category 😊. Again, this category is extremely subjective and, more importantly, involves a legal angle. An ordinary Twitter user may tweet that a public office bearer is not fit for the job, citing poor performance. For the office bearer this tweet may amount to defamation, but for the citizen/user it may just be a normal assessment. Twitter or Facebook has no locus standi to judge whether the content in question amounts to defamation. The rationale for why it will not be feasible for the Govt to arbitrate this category is the same as for the 'abusive/offensive' category!

Facebook/Twitter deems content to be factually incorrect

Social media is laden with information that is not factually correct; it is meant to be so, as it is only user-generated content (UGC). It is up to the user/consumer to apply due diligence. More importantly, Social Media companies have no basis to verify the factual correctness of a tweet/post/comment. Yet they indulge in this 'fact checking' side business with pomp and fury while shamelessly claiming to be intermediaries (i.e., merely carrying content from users as it is).

Now, with the new regulation, how will the Govt arbitrate complaints from users about their content or accounts being censored due to this fact-check game of Social Media companies? Will a Govt panel perform an independent fact check on each of these disputes, ranging from whether an image was photoshopped to whether it relates to a different time period or event? We have millions of real, life-impacting cases pending in our courts; does the Govt have the wherewithal to take up thousands of such fact-check disputes every month?

Facebook/Twitter deems content to be of a criminal/illegal nature

This pertains to content that is not legal, say content that instigates violence, contains pornography, provokes theft, shares sensitive/secure information pertaining to the Govt, etc. To be clear, tweets/posts/comments that criticize a person, party, official or the Govt are not illegal or criminal; such content would fit under the previous categories, i.e., abuse/defamation. Unlike those categories, the boundaries of what is illegal/criminal are much clearer. But again, Social Media platforms, as intermediaries, have no basis to judge the legality of content. At best the Govt can only ask them to monitor and flag content that falls under the 'crime' purview. Yes, the Govt regulation requiring Social Media platforms to provide a report of user disputes makes sense in this case, as the Govt has a direct role when it comes to crime and law and must arbitrate such disputes either directly or through the judiciary.

So why will the new regulations not help?

As seen above, the new regulations do not address the bulk of the problems around censorship on Social Media platforms, i.e., content that is ostensibly or allegedly abusive or defamatory to a person, group, party, community or political belief. They can help only in one minor category, i.e., content that is illegal/criminal.

So, if these new regulations do not work, what else is needed to address this snowballing issue of censorship of users/content by Social Media platforms? We will see about that in the next part.

About the Author

Sundar Rengarajan is a voracious reader and has a flair for analysing, debating and writing on a variety of topics, including consumer tech and socio-economics. He is an IT leader by profession and currently heads the Data Science practice of a leading FinTech firm: https://www.linkedin.com/in/sundarrengarajan/. He can be reached at [email protected]

 
