Blue Tick for All? JPC Report on Data Protection Bill Can Impact Online Anonymity, Privacy
The JPC report recommends that fake accounts and bots on social media can be stopped only by verification of accounts through simple measures.

One of the bigger achievements of the Winter Session of Parliament has been the tabling of the Joint Parliamentary Committee (JPC) Report on the Data Protection Bill. The report has resolved some issues, while fierce contestation is expected on others. One such issue that needs further deliberation is the provision requiring social media platforms to provide users with a mechanism to voluntarily verify their social media accounts.

While this piece locates the issue in the context of the JPC report and the proposed Data Protection Bill, 2021, it is not limited to them. A similar stipulation exists in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, or the IT Rules, 2021. However, the Data Protection Bill, 2021 provides a good opportunity to discuss this provision, because the JPC report reveals the thinking behind it and allows for a more informed debate.

Voluntary Verification and Inclusion in the Bill

The 2018 draft of the Personal Data Protection Bill (PDP Bill) did not classify regulated entities (data fiduciaries) based on the functions they performed. Social media websites and applications were therefore not defined or identified as a class apart from other data fiduciaries, and were regulated under the same provisions as generic data fiduciaries.

However, the 2019 draft of the PDP Bill delineated “social media intermediaries” as a separate class of data fiduciaries. Along with introducing this concept, Section 28 required social media intermediaries designated as “significant data fiduciaries” to enable users who register for or use their services in India to voluntarily verify their accounts. Further, a user who had voluntarily verified their account was to be provided with a demonstrable mark of verification, visible to all users of that service.

With the notification of the IT Rules, 2021, which include an identical provision, it was surmised that the JPC would omit it from its draft Bill. However, the JPC report did not do so. In fact, the “proliferation of bots and fake accounts” figures among the initial themes that the JPC identifies. It observes that bots and fake accounts can “push a certain agenda or person, carry malicious campaigns, promote digital scams and even conduct organised phishing and blackmailing”. The report recommends that fake accounts and bots on social media can be stopped “only by verification of accounts under standard norms through simple measures like ID verification, submission of proof of identity etc.” This is indicative of the primacy being placed on identity verification to combat disinformation.

Later in the report, while discussing the regulation of social media, the JPC reiterates the importance of verification of social media accounts. Verification is made even more central by making the “safe harbour” protection for social media intermediaries conditional. Safe harbour gives social media intermediaries immunity from liability for the actions of third parties on their platforms, barring certain exceptions. This protection is premised on the idea that social media intermediaries are only a conduit that provides the infrastructure for people to communicate with each other and to create and share content. As such, they do not have control over this content and should not be held responsible for it. In India, this protection is afforded to intermediaries under Section 79(1) of the Information Technology Act, 2000.


The JPC report disagrees with the basis of this protection. It argues that because social media intermediaries can select the receiver of content and control access to content posted on their platforms, their liability should be akin to that of a publisher, not merely that of a conduit. Therefore, they should be liable for the content that they host and publish on their platforms. This may be one of the reasons why the report moves away from the nomenclature of “social media intermediaries” to “social media platforms”, so as to dilute the concept of immunity generally associated with intermediaries. It categorically recommends that “social media platforms, which do not act as intermediaries, will be held responsible for the content from unverified accounts on their platforms”.

Issues in JPC’s Phrasing

This phrasing is problematic on two fronts. The first issue is conceptual. It is increasingly recognised across jurisdictions that social media intermediaries can no longer be treated as “dumb pipes” that carry all content posted on them without any interference. They play an increasingly active role in deciding what content a user sees and how content is ranked, which is crucial to its visibility and accessibility. And while there have been calls for better regulation and greater transparency of these practices and algorithms, it is tenuous to argue that intermediaries should be liable for user content in the same way a publisher is liable for its content. Social media intermediaries have often argued that, given the sheer volume of content shared on their platforms, it is impossible for them to monitor it ex ante.

The second issue arises at the interpretative level of the proposed Section 28 of the Bill. There is a dissonance between what the JPC recommends and how it frames the voluntary verification provision. Section 28 of the Bill is couched as an enabling measure, i.e., social media intermediaries are required to provide a mechanism that allows users to voluntarily verify their identity. However, this differs from the manner in which the JPC has phrased its recommendation, which is that intermediaries should be held responsible for content posted from unverified accounts on their platforms.

The implication seems to be that if an intermediary wants to avoid liability for content, it must verify the accounts of everyone who registers for its services and posts content on its platform. This may lead social media intermediaries to move away from a choice-based model of user verification towards one that makes identity verification almost mandatory, albeit in covert ways. If this is challenged, courts may well look to the report to interpret the provision, given the well-established rule that committee reports are an important external aid to interpretation. Therefore, this gap needs to be clarified in subsequent parliamentary sessions.

Larger Debates around the Issue

If social media intermediaries err on the side of caution to save their skin and attempt to make voluntary verification mandatory, it would strike at the root of the right to online anonymity. While there has been no definitive pronouncement in India on whether this right is recognised, anonymous and pseudonymous speech has been considered an important part of the Right to Freedom of Speech and Expression as well as the Right to Privacy. Social media intermediaries also risk falling into a catch-22: on one hand, mandatory verification could fall foul of the Puttaswamy test, which requires that an intrusion into privacy be “legal, necessary and proportionate” to be justified; on the other, if they do not verify accounts, they risk being held accountable for the content they host.

Further, it needs to be considered whether a data protection legislation is the right place for such provisions. A data protection law should be limited to regulating the personal data collected for the purpose of voluntary verification, rather than deciding whether such verification provisions should exist and the circumstances in which they are triggered. Data protection legislations do not provide the requisite regulatory thinking and paraphernalia for regulating content on social media intermediaries.


The Way Forward

While there is a case to be made for voluntary verification of social media users to promote trust and confidence in the platform, it requires further deliberation. On the one hand, it aids law enforcement, especially given the increasing use of virtual private networks by bad actors; on the other, it poses a risk to users’ online safety by making their identities more accessible.

There are also studies arguing that identity verification is not necessarily effective in curbing illicit online content. Some find that, in certain cases, a badge of verification could in fact lead to the wider spread of fake news by misleading other users into sharing it more vigorously. Therefore, a more nuanced approach to understanding online anonymity and its consequences needs to be considered, and a data protection legislation does not provide adequate context for that discussion.

This is the third in a four-part series on key issues around India’s data policy. You can read the first article and second article here.

Trishee Goyal is a project fellow at the Centre for Applied Law and Technology Research, Vidhi Centre for Legal Policy. The views expressed in this article are personal and do not represent the stand of this publication.
