
Tuesday, April 04, 2023

Revising the Information Technology Act, 2000

by Rishab Bailey, Vrinda Bhandari, Renuka Sane and Karthik Suresh.

The Information Technology Act, 2000 ('IT Act') is a comprehensive law enacted to build trust in the digital ecosystem by regulating e-commerce and the e-filing of documents, and by creating criminal offences applicable to the digital ecosystem. Despite amendments in 2009, the IT Act is widely considered outdated, not least due to the proliferation of the Internet and a range of new technologies (e.g. Bahl, Rahman and Bailey 2020; Nappinai 2017; Nigam et al 2020). Recently, the government has proposed replacing the IT Act with new legislation known as the 'Digital India Act'.

In a new report, Revisiting the Information Technology Act, 2000, we attempt to contribute to the process of revising the IT Act by examining four critical issues pertaining to the online ecosystem. These are:

  1. Censorship: The provisions in the IT Act pertaining to censorship and blocking were framed in an era when the digital ecosystem was not as pervasive as today and before the use of social media platforms exploded. The provisions in the IT Act that empower the government to block content from public access are largely based on Article 19(2) of the Constitution. However, the institutional framework for carrying out blocking suffers from significant lacunae, including a lack of accountability of the relevant oversight institutions. We recommend that appropriate procedural safeguards be introduced through statute, to ensure greater transparency and neutrality in the blocking processes.
  2. Intermediary liability: The IT Act protects intermediaries from prosecution for content posted or transmitted by third parties upon the following three conditions: (a) that they act as passive agents (or distributors) of content, (b) that they disable access to unlawful content upon receiving 'actual knowledge' thereof, and (c) that they observe 'due diligence' conditions laid down by the government. The 'safe harbour' provision was introduced at a time when the digital ecosystem was still nascent. The variety of online harms that have since proliferated raises questions about whether such a system is required. We find that there is value in retaining a safe harbour for intermediaries in contexts where they have played a passive role in the ecosystem. Removing the safe harbour is likely to incentivise greater private censorship, a role that intermediaries are not well positioned to undertake. However, this does not mean that intermediaries should not be responsible for ensuring the safety of the digital ecosystem. Any further obligations (such as greater transparency, the introduction of grievance redress mechanisms, etc.) ought to be implemented outside the safe harbour framework and certainly not as part of amorphous 'due diligence' obligations. We point to how new intermediary rules introduced in 2021 and 2022 have imposed a variety of new and onerous obligations on intermediaries. Many of these obligations, such as the obligation to enable traceability of the originator of information on messaging platforms and the need, in practice, to police a host of proscribed content, should be done away with. Any new obligations must be introduced based on evidence of harm and in a proportionate manner.
  3. Surveillance: The current framework pertaining to interception and monitoring of digital communications was established before the seminal decision of the Supreme Court in Justice K.S. Puttaswamy vs. Union of India, which recognised privacy as a fundamental right. Our report builds on the literature on surveillance reform in India to suggest that significant revision is required in our legal framework. Currently, the executive is provided extremely broad powers with insufficient safeguards to mitigate abuse. Certain surveillance programmes, such as the Centralised Monitoring System, are per se disproportionate as they conduct mass surveillance. Our primary recommendation is therefore to enact a new stand-alone surveillance-related legislation, which could harmonise surveillance processes while ensuring that appropriate procedural and institutional safeguards are implemented. In the alternative, the revised IT Act should narrow the scope of powers given to the executive, while also implementing workable oversight and accountability mechanisms, not least by ensuring judicial review, legislative oversight, and greater accountability of relevant bodies involved in the surveillance apparatus.
  4. Cybersecurity: While the IT Act lays down various offences pertaining to cybersecurity that are broadly in accordance with international standards, we find that there is a significant need for reform of the institutional mechanisms that manage incident reporting and response. We recommend that the revised IT Act clarify the role and powers of CERT-In and NCIIPC --- the two primary cybersecurity-related agencies in India. In particular, their rule-making powers should be clarified and limited. The law should also avoid duplicating functions across the two agencies, and should limit incident reporting requirements to large and systemically important systems and entities, so as to avoid imposing disproportionate costs.

As we move towards an economy that is ever more dependent on the digital ecosystem, it is vital that the law promotes trust in the online ecosystem. This involves finding an appropriate balance between a range of competing interests --- national security and public order, the need to protect fundamental rights, and the need to promote innovation in and development of the digital ecosystem. Finding such a balance will require the government to take a considered stance on several thorny issues. Carrying out detailed and inclusive consultations will also be a vital part of the process towards establishing the digital ecosystem on a sound legal footing.

References

Varun Sen Bahl, Faiza Rahman and Rishab Bailey, Internet intermediaries and online harms: Regulatory Responses in India, Data Governance Network Working Paper no. 6, March 2020.

N S Nappinai, Cyber security and challenges: Why India needs to change IT Act, February 2017.

Aniruddh Nigam, Kadambari Agarwal, Trishi Jindal, Jaai Vipra, Primer for an Information Technology Framework Law, Vidhi Centre for Legal Policy, September 2020.

Rishab Bailey, Vrinda Bhandari, Renuka Sane, Karthik Suresh, Revisiting the Information Technology Act, 2000, XKDR Forum, March 2023.


Rishab Bailey and Karthik Suresh are researchers at XKDR Forum. Vrinda Bhandari is a practising advocate. Renuka Sane is a researcher at TrustBridge.

Monday, May 10, 2021

Backdoors to Encryption: Analysing an Intermediary's Duty to Provide 'Technical Assistance'

by Rishab Bailey, Vrinda Bhandari, and Faiza Rahman.

The rising use of encryption is often said to be problematic for law enforcement agencies (LEAs) in that it directly impacts their ability to collect data required to prosecute online offences. While certainly not a novel issue, the matter has risen to global prominence over the last four or five years, possibly due to the increased usage of privacy enhancing technologies across the digital ecosystem.

While there have been a number of policy proposals that seek to address this perceived impasse, no globally accepted best practice or standard has emerged thus far. In India (as in many other jurisdictions), the government has increasingly sought to regulate the use of encryption. For instance, the recently announced Intermediary Guidelines under the Information Technology Act, 2000 seek to extend the "technical assistance" mandate of certain intermediaries to ensure traceability, by enabling identification of the first originator of information on a computer resource. The scope of the term "technical assistance" has not been clearly defined. However, the provision appears to go well beyond existing mandates in the law that require holders of encryption keys to provide decryption assistance, when called upon to do so, in accordance with due process and based on their capability to decrypt the encrypted information. Courts have also weighed in on this debate, with the Madras High Court and the Supreme Court hearing petitions that seek to create mechanisms whereby LEAs could gain access to content protected by end-to-end encryption (E2E), thereby enabling access to user conversations on popular platforms such as WhatsApp. A Rajya Sabha Ad-hoc Committee Report released in 2020 has also recommended that LEAs be permitted to break or weaken E2E to trace distributors of illegal child sexual abuse content.

Against this background, our recently released paper examines the scope of the obligations that ought to be imposed on intermediaries to provide "technical assistance" to LEAs, and whether these should extend to weakening standards of encryption, for instance through the creation of backdoors. Broadly speaking, the term "backdoors" refers to covert methods of circumventing encryption systems without the consent of the owner or the user. The paper also evaluates, in brief, proposals for alternatives, such as the use of escrow mechanisms and ghost protocols.

We argue that the government should not impose a general mandate for intermediaries to either weaken encryption standards or create backdoors in their products/platforms. Such a mandate would significantly affect the privacy of individuals and would constitute a disproportionate infringement of the right to privacy. It would also likely fail a cost-benefit analysis, not least in view of the possible effects on network security as well as broader considerations such as the growth of the Indian market in security products, geopolitical considerations, etc. This, however, does not mean that law enforcement agencies have no options when faced with the prospect of having to access encrypted digital data. A first step in this regard would be to implement rights-respecting processes to enable law enforcement to access data collected by intermediaries in a timely manner. In addition, there should be greater focus on enhancing government and law enforcement capacities, including by developing hacking capabilities (subject to sufficient oversight and due process checks) and by providing greater funding for research and development efforts in the cybersecurity and cryptography spaces.

This post seeks to throw light on the key issues around the encryption debate, and summarises our main arguments and suggestions on how India should address them.

Understanding the encryption debate

Encryption is the process of using a mathematical algorithm to render plain, understandable text into unreadable letters and numbers (Gill, 2018). Typically, an encryption key is used to carry out this conversion. Reconverting the encrypted text back to plain text also requires a key. Depending on the manner of encryption, the same key may be used to both encrypt and decrypt information, or alternatively, different encryption and decryption keys may be required. Encryption therefore ensures that the message can only be read by the person who has the appropriate decryption key, particularly as newer forms of encryption make it computationally infeasible, if not impossible, to reverse the encryption process without that key (Gill, 2018).
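To make the two cases described above concrete, the following is a minimal, illustrative sketch in Python using the open-source `cryptography` package (the choice of library and all identifiers are ours, purely for illustration; the post does not reference any particular implementation):

```python
# Illustrative sketch only: symmetric vs. asymmetric encryption, using the
# open-source Python 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric encryption: the SAME key both encrypts and decrypts.
shared_key = Fernet.generate_key()
box = Fernet(shared_key)
ciphertext = box.encrypt(b"a private message")
assert box.decrypt(ciphertext) == b"a private message"

# Asymmetric encryption: DIFFERENT keys -- anyone holding the public key can
# encrypt, but only the holder of the private key can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"a private message", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"a private message"

# In both cases, reversing the ciphertext without the relevant key is
# computationally infeasible with modern algorithms -- the property at the
# heart of the policy debate discussed in this post.
```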

Encryption essentially improves the security of information. It secures information against unwarranted access and ensures the confidentiality and integrity of data, thereby fostering trust in the digital ecosystem and protecting the private information of citizens and businesses alike.

However, the use of encryption can also enable criminals to "go dark", making it difficult for LEAs to carry out their functions. For instance, it is estimated that upwards of 22 percent of global communication traffic uses end-to-end encryption (Lewis et al, 2017). This puts roughly a quarter of communications virtually out of reach for LEAs, not least as the use of modern encryption systems makes it harder for LEAs to use the traditional "brute force" method to access encrypted data (Haunts, 2019). LEAs have therefore increasingly called for limitations to be placed on the use of encryption, so as to enable them to access the information they require to pursue their law enforcement functions. They point to the need to ensure accountability for online harms, and therefore argue that intermediaries must provide them with all data relevant to an investigation.

The concerns with the use of encryption are driven by a number of factors, such as the growing incidence of cybercrime, the use of data minimisation practices such as disappearing messages, and the use of encryption by default in various technology products. For instance, WhatsApp and Signal automatically encrypt communications in transit and also give users the option of automatically deleting their messages. Similarly, Apple uses encryption-based authentication on its iPhones, which renders the content accessible only if the appropriate passcode is provided; after a certain number of failed attempts, the content on the phone may even be deleted (Lewis et al., 2017).

These concerns have led to calls for Internet intermediaries to weaken encryption standards or create backdoors in their products/services. These demands are not new. Notably, the 1990s saw the issue being debated in the United States, with the FBI proposing the use of the "Clipper Chip", a mechanism whereby decryption keys would be copied from the devices of users and sent to a trusted third party, where they could be accessed by LEAs on appropriate authorisation. More recently, the FBI has been involved in face-offs with technology companies such as Apple, which refused to provide exceptional access to an iPhone linked to a terrorist. In India too, the government has encountered similar issues - notably forcing BlackBerry's manufacturer to relocate its servers to India and hand over plain text of communications. The government also circulated a draft National Encryption Policy in 2015, which sought to impose obligations involving the registration of encryption software vendors and the storage of plain text of user data by intermediaries. The draft was however withdrawn after much criticism.

In response to such proposals, security researchers, cryptographers and service providers have been near unanimous in pointing out that the creation of backdoors is likely to impose significant costs on the entire digital ecosystem, especially as it exposes the entire population to vulnerabilities and security threats. Indeed, the need for stronger encryption and other security standards to protect user data is only heightened by the numerous and frequent data breaches that have been reported in India. Interestingly, even the Telecom Regulatory Authority of India has adopted a similar position in its Recommendations on Regulatory Framework for OTT Communication Services of 2020.

Even the two commonly discussed methods of a "balanced solution" to the problem - the use of escrow mechanisms and ghost protocols - have faced significant criticism. For instance, the use of escrow mechanisms (which, as with the Clipper Chip system described above, involve storage of the decryption key with a trusted third party, who can then provide it to LEAs when called upon to do so) is likely to lead to significant vulnerabilities being created in computer systems. Not only would such a system require faith in the integrity of the entity holding the decryption key, but that entity would also constitute a single point of failure, which is poor system design (Kaye, 2015). Deployment of complex key recovery infrastructure is also likely to impose huge costs on the ecosystem (Abelson et al., 1997). Similarly, suggestions for using ghost protocols (which would require service providers to secretly add an extra LEA participant to private communications) have also faced significant criticism (Levy and Robinson, 2018). Given that this system would essentially require service providers to convert a private conversation between two individuals into a group chat with a hidden third participant, critics have argued that it is just another form of backdoor. It would erode trust between consumers and service providers, and provide for a "dormant wiretap in every user's pocket" that can be activated at will. It would also require fundamental changes in system architecture, thereby introducing vulnerabilities that create threats for all users on these platforms (Access Now et al., 2019).

Thus, while the use of such methods can enable LEAs to access user data more quickly than is currently possible, there are numerous concerns from civil liberties, economic and technical perspectives. We outline the key concerns in this regard below.

Concerns with mandating backdoors

  • Privacy: In view of the recognition of privacy as a fundamental right, private thoughts and communications are protected from government intrusion subject to satisfaction of tests of necessity and proportionality. Mass surveillance can be considered to be per se disproportionate. It is recognised that government surveillance can lead to unwanted behavioural changes, and create a chilling effect. Encryption therefore serves as a method to protect individual privacy, particularly from government excesses.
  • Security: Creating backdoors can weaken network security as a whole since it can be exploited by governments and hackers alike (Abelson et al., 2015). Backdoors can also lead to increased complexity in systems, which can make them more vulnerable to attack (Abelson et al., 2015).
  • Right against self-incrimination: Mandating decryption of data can arguably also be seen as violating an individual's right against self-incrimination (Gripman, 1999; ACLU and EFF, 2015).
  • Due process requirements: Criminal investigation in general, and surveillance in particular, is not meant to be a frictionless process. The friction this introduces into the functioning of LEAs is part of what separates a democracy from a police state (Richards, 2013; Hartzog and Selinger, 2013). As with due process requirements, encryption creates procedural hurdles, ensuring some checks and balances over the functioning of LEAs and against the possibility of mass surveillance. It therefore helps re-balance the asymmetric power distribution between the State and the citizen.

Scope of "technical assistance": Should it extend to creating backdoors?

Given the aforementioned concerns, the question arises: should the duty of "technical assistance" that intermediaries are required to provide to LEAs extend to the creation of backdoors or to otherwise weakening encryption systems?

We argue that as far as recoverable encryption is concerned, i.e. encryption where a service provider already has a decryption key in the normal course of service provision, there is no requirement for such a mandate. Indian law already requires service providers to decrypt data in such cases, in addition to providing various other forms of assistance. Here, the need is to focus on implementing proper oversight and other procedural frameworks to ensure that LEAs exercise their powers of surveillance or decryption in an appropriate manner. We find, however, that the Indian framework is lacking in this regard. There is no judicial oversight of decryption requests, no proportionality requirement in the law, and no meaningful checks and balances over decryption processes at all. We therefore propose various changes in order to improve the transparency and accountability of the system. Further, research indicates that the primary problem for LEAs in India may relate to the relatively old and slow processes that LEAs must use when accessing data held by intermediaries, particularly those based outside India. This points more to the need for LEA data access processes to be revised and streamlined in accordance with modern needs.

As far as unrecoverable encryption is concerned, i.e. encryption where even the service provider cannot access the content (as with E2E) because the decryption key is retained by the user, the situation is undoubtedly more complex. However, even in such instances, for the reasons elaborated above, we believe that mandating backdoors or weakening encryption is not an appropriate solution.
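To make the recoverable/unrecoverable distinction concrete, the toy sketch below (our own illustration, reusing the same Python `cryptography` package as in the earlier sketch; real E2E systems such as the Signal protocol rely on asymmetric key exchange rather than a pre-shared symmetric key) shows that the question turns entirely on who holds the decryption key:

```python
# Toy illustration (ours, not the paper's): recoverable and unrecoverable
# encryption differ only in WHO holds the decryption key.
from cryptography.fernet import Fernet

# Recoverable encryption: the provider generates and stores the key in the
# normal course of service provision, so it can decrypt when lawfully asked.
provider_key = Fernet.generate_key()          # lives on the provider's servers
stored_blob = Fernet(provider_key).encrypt(b"user content at rest")
plaintext = Fernet(provider_key).decrypt(stored_blob)   # provider CAN assist

# Unrecoverable (end-to-end style) encryption: the key is generated and kept
# only on user devices; the provider merely relays ciphertext it cannot read.
endpoint_key = Fernet.generate_key()          # never leaves the endpoints
relayed_blob = Fernet(endpoint_key).encrypt(b"message in transit")
# The provider holds only `relayed_blob`. Without the endpoint key there is
# nothing it can decrypt, so "technical assistance" cannot yield plaintext
# unless the system itself is redesigned -- i.e. a backdoor is introduced.
```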

Moreover, LEAs already have multiple alternatives to collect information, including by accessing metadata and unencrypted backups of encrypted communications. They can also use targeted surveillance methods to conduct investigations (National Academy of Science, Engineering and Medicine, 2018). Indeed, the current Indian framework - governing telecom service providers in particular, but also other intermediaries - already gives significant and arguably excessive powers to the State. It should also be noted that LEAs in India are already using spying technology, as we saw in the Pegasus case. LEAs also have other covert methods of gathering data - from key-stroke logging programmes to exploiting weaknesses in implementation of encryption systems. While one cannot argue against the use of such systems in appropriate cases, it is clear that such powers must only be exercised through institutionalised processes, and importantly, subject to appropriate regulatory oversight. There is therefore a case for formulating a legal framework in India, along the lines of the US vulnerabilities equities process, to ensure due process even when the government resorts to exploitation of vulnerabilities within information systems for national security and law enforcement purposes.

Accordingly, we point to the need to carry out a more detailed cost-benefit analysis before deciding whether to implement such a mandate (which, unfortunately, was not done in the case of the recent Intermediary Guidelines Rules). Such a cost-benefit analysis should consider:

  • Whether the use of unrecoverable encryption is indeed a significant hurdle for LEAs in collecting relevant information. While no data is available in this context in India, data from the US for the period 2012-2015 indicates that of the 14,500 wiretaps ordered under the Communications Assistance for Law Enforcement Act, only about 0.2 percent (roughly 30) encountered unrecoverable encryption (Lewis et al., 2017). While this share has likely increased in view of the greater use of unrecoverable encryption in the ecosystem, a similar empirical analysis must be conducted in India to understand the impact of such types of encryption.
  • The costs to intermediaries of changing their platform architecture, which are unlikely to be insignificant. It is also worth keeping in mind that intermediaries will often avoid using certain types of encryption purely to stay in the good books of LEAs, in a form of "weakness by design". Notably, companies such as Apple and WhatsApp have dropped plans to encrypt user back-ups stored in the cloud. Such data can therefore be accessed by LEAs without compromising encryption.
  • The risk of such laws getting caught up in global geopolitics. This has been the case for example, with Huawei and ZTE, who have faced significant international pressure in view of the Chinese government's purported ability to access data flowing through their networks.
  • The possible effectiveness of such laws, considering that many criminals may use open source encryption or encryption from platforms that are not amenable to Indian jurisdiction. Further, the pace of technical development is difficult to keep up with from a regulatory perspective. Notably, institutions such as Europol and Interpol are increasingly concerned about the use of steganography (the technique of hiding the very existence of a message) and open source encryption by international criminals and terrorist groups. Therefore, even if there is a bar on using strong encryption, those who want to break this law will continue to do so.

We therefore argue that while a mandate for targeted decryption or technical assistance may be constitutional if backed by a law with sufficient safeguards, a general mandate for the creation of backdoors (or an interpretation of the Intermediary Guidelines' "technical assistance" requirement that extends to such generic obligations) is unlikely to pass constitutional muster, assuming a high intensity of proportionality review is applied. A higher intensity of review would look not just at whether the proposed intervention would substantially improve national security, but would also need to engage with (a) the fact that it would compromise the privacy and security of individuals at all times, regardless of whether there is any evidence of illegal activity on their part, and (b) the existence of alternative means available to LEAs to carry out their investigations. Thus, we believe that a general mandate for creating backdoors will not be the least restrictive measure available.

Conclusions and Recommendations

We argue that a general mandate that requires Internet intermediaries to break encryption, use poor quality encryption, or create backdoors in encryption is not a proportionate policy response given the significant privacy and security concerns, and the relatively less harmful alternatives available to LEAs. Instead, the Indian government should support the development and use of strong encryption systems.

Rather than limiting the use of certain technologies, or mandating significant changes in the platform/network architecture of intermediaries that compromise encryption, the government ought to take a more rights-preserving and long-term view of the issue. This will enable a more holistic consideration of the interests involved, avoid unintended consequences, and limit the costs that come with excessive government interference in the technology space. The focus of the government must be on achieving optimal policy results, while reducing costs to the ecosystem as a whole (including privacy and security costs). A substantive mandate to limit the use of strong encryption would increase costs for the entire ecosystem, without commensurate benefits as far as state security is concerned.

The tussle between LEAs and criminal actors has always been an arms race. Rather than adopting steps that may have significant negative effects on the digital ecosystem, the government could learn from the policies adopted by countries such as Germany, Israel and the USA. This would involve interventions along two axes - legal changes and measures to enhance state capacity.

Legal changes that the government must consider implementing include:

  • Reforming surveillance and decryption processes, to clarify the powers of LEAs, and ensure appropriate transparency, oversight and review. It is also essential to standardise and improve current methods of information access by LEAs at both domestic and international levels. There must be greater transparency in the entire surveillance and information access apparatus, including by casting obligations on intermediaries and the State to make relevant disclosures to the public.
  • Adoption of a Vulnerabilities Equities Process, such as that adopted in the United States, which could enable reasoned decisions to be made by the government about the disclosure of software/network vulnerabilities (thereby allowing these to be patched, in circumstances where this would not significantly affect security interests of the State). Such a process, while not without critics, does chart a path forward and must become central to the Indian conversation around due process in LEA access to personal data.
  • Amending telecom licenses, which currently give excessive leeway for exercise of executive authority, without sufficient checks or safeguards.

Rather than implement ill-thought out policy solutions that would significantly harm the digital ecosystem and user rights, the government could also focus on enhancing its own capacities. This can include measures such as:

  • Developing and enhancing covert hacking capacities (though these must be implemented only subject to appropriate oversight and review processes). To this end, there must be appropriate funding of LEAs, including by hiring security and technical researchers.
  • Investing in academic and industry research into cryptography and allied areas. The government should also aid the development of domestic entities who can participate in the global market for data security related products. Enhancing coordination between industry, academia and the State is essential.
  • Increasing participation in international standard setting and technical development processes.

To conclude, the crux of this issue can be understood using an analogy. Would it be prudent for a government, engaged in a fight against black money, to require all banks to deposit with it a key to their customers' safe deposit boxes? One would venture that this would be an unworkable proposition in a democracy. It would lead to people looking for alternatives to safe deposit boxes, owing to the lack of trust such a system would create. Innocent people would be exposed to increased risks. A preferable solution may be for the government to develop the ability to break into a specific safe deposit box, upon learning of its illegal contents, and subsequent to following due process. This would enable more targeted interventions that preserve the broader privacy interests of innocent customers while protecting banks from increased costs (or loss of business).

References

Gill, 2018: L Gill, Law, Metaphor and the Encrypted Machine, Osgoode Hall L.J. 55(2) 2018, 440-477.

Lewis et al., 2017: James Lewis, Denise Zheng and William Carter, The Effect of Encryption on Lawful Access to Communications and Data, Center for Strategic and International Studies, February 2017.

Haunts, 2019: Stephen Haunts, Applied Cryptography in .Net and Azure Key Vault: A Practical Guide to Encryption in .Net and .Net Core, APress, February 2019.

Kaye, 2015: David Kaye, Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, United Nations, Human Rights Council, May 2015.

Abelson et al., 1997: Hal Abelson, Ross Anderson, Steven Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Peter Neumann, Ronald Rivest, Jeffrey Schiller, and Bruce Schneier, The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption, May 27, 1997.

Levy and Robinson, 2018: Ian Levy and Crispin Robinson, Principles for a More Informed Exceptional Access Debate, LawFare Blog, November 29, 2018.

Cardozo, 2019: Nate Cardozo, Give Up the Ghost: A Backdoor by Another Name, Electronic Frontier Foundation, January 7, 2019.

Access Now et al., 2019: Access Now, Big Brother Watch, Center for Democracy and Technology, et al., Open Letter to GCHQ, May 22, 2019.

Abelson et al., 2015: Harold Abelson, Ross Anderson, Steven Bellovin, Josh Benaloh, et al., Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications, MIT-CSAIL Technical Report, July 6, 2015.

Gripman, 1999: David Gripman, Electronic Document Certification: A Primer on the Technology Behind Digital Signatures, 17 J. Marshall J. Computer and Info. L. 769 (1999).

ACLU and EFF, 2015: American Civil Liberties Foundation of Massachusetts, the American Civil Liberties Union Foundation, and Electronic Frontier Foundation, Brief for Amici Curiae in Support of the Defendant-Appellee in Commonwealth of Massachusetts v. Leon Gelfgatt, 2015

Richards, 2013: Neil Richards, Don't Let US Government Read Your E-Mail, CNN, August 18, 2013.

Hartzog and Selinger, 2013: Woodrow Hartzog and Evan Selinger, Surveillance as Loss of Obscurity, Washington and Lee L.R. 72(3), 2015.

National Academy of Science, Engineering and Medicine, 2018: National Academy of Science, Engineering and Medicine, Decrypting the Encryption Debate: A Framework for Decision Makers, National Academies Press, Washington DC.


Rishab Bailey is a researcher at NIPFP. Vrinda Bhandari is a practising advocate. Faiza Rahman is a PhD candidate at the University of Melbourne.

Sunday, July 12, 2020

Response to the Consultation Whitepaper on 'Strategy for National Open Digital Ecosystems (NODEs)'

by Rishab Bailey, Harleen Kaur, Faiza Rahman, and Renuka Sane.

The Ministry of Electronics and IT, Government of India (MeitY) had sought public comments on a Consultation Whitepaper (CW) titled "Strategy for National Open Digital Ecosystems (NODEs)" earlier this year. NODEs are defined as:

open and secure delivery platforms, anchored by transparent governance mechanisms, which enable a community of partners to unlock solutions and thereby transform social outcomes.

The NODEs framework will allow the opening up and sharing of personal and non-personal data held in various sectors (such as healthcare, agriculture, and skills development). Each NODE will consist of infrastructure developed and operated by the government. The private sector will utilise the common infrastructure and data to provide solutions to the public. Per the CW, this will enable greater intra-government and public-private coordination and create efficiency gains. This framework will promote access to innovative e-governance and other services for citizens while enabling robust governance processes to be implemented.

We wrote a detailed response to MeitY. In our submission, we make suggestions on four key issues with the CW:

  • Role of the state: The CW needs to demonstrate clarity on the need for government intervention on the scale proposed. The market failures that require State intervention must be identified on a sectoral basis.
  • Centralisation of governance and technical systems: The CW envisages establishing monolithic, stack-based digital systems in a variety of sectors. The government would be responsible for establishing and operating the technology infrastructure as well as the governance of such systems. However, excessive centralisation can reduce competition and innovation, and produce insecure systems.
  • Alignment with the existing government policies: The CW needs to consider existing government policies on the adoption of open source software (OSS), open APIs and open standards. Further, the CW needs to account for existing open data and e-governance related initiatives in the identified sectors, and how these would interact with the NODEs framework.
  • Preserving and protecting Constitutional norms: The CW needs to ensure the protection of fundamental rights, democratic accountability, and transparency in the creation and regulation of NODEs. Further, it also needs to account for the federal division of competencies enshrined in the Constitution.

This article summarises our comments and suggestions on the above mentioned issues.

Role of the State

The CW adopts a 'solutionist' approach, in that it does not undertake sufficient analysis of the circumstances and problems in each sector. For instance, the CW identifies two market failures in the skills sector: (i) information asymmetry amongst the stakeholders, and (ii) a lack of trust in the information that is available. It proposes a Talent (Skilling and Job) NODE as a one-stop solution to connect employers, job seekers, counsellors and skilling institutes. Instead of the approach undertaken by the CW, one should consider whether private entities can bridge, or are already innovating to bridge, the information asymmetry and trust issues in the sector, and what policies could provide an environment where such information asymmetry may be reduced. If the problem in the skills sector is a lack of trust, it is unclear why this cannot be solved by interventions such as certification standards.

As a general rule, the State should be involved with building technological systems only for essential state activities (Kelkar and Shah, 2019). It is therefore critical to differentiate between sectors where the State has a legitimate role (say in the provision of its welfare and statutory functions), from sectors where private sector solutions could suffice. For example, the State could have a role in providing access to Public Distribution System (PDS), but need not be a player in building a platform for access to rail reservations.

The responsible ministry should analyse whether a NODE serves a welfare or other essential function of government. Where there is no such element, the government should not spend its finances on creating infrastructure for such a NODE. Such an approach would promote innovation, prevent the emergence of a state-centric technological mono-culture, and allow the private sector to respond appropriately to the requirements of any particular sector. Entities would not be forced to build on top of state-mandated infrastructure, which may not always be necessary or appropriate.

In the context of the NODEs framework, the State should primarily have three roles:

  1. Open up data: The government must focus on building databases and providing access to the public in a non-discriminatory manner. The benefits of enabling free flows of information are well known. That said, it is important to keep in mind the need to ensure non-discriminatory access, to ensure data quality, and to protect against privacy and other downstream harms. For instance, the Delhi government recently shared the locations of COVID-19 relief centres on Google Maps, thereby giving Google a competitive edge over other mapping solutions. We believe that an appropriate approach would involve the Delhi government making the relevant information open. This can be done by providing the geo-tagged locations on its open data governance website. Methods to embed this data in third-party apps and services could be provided to enable non-discriminatory access. Similarly, opening up railways-related data, which is currently monopolised by the IRCTC, could enable the provision of customised travel solutions. Greater linkages could be formed with private players in the hospitality and tourism sectors, leading to mutual benefits for the railways as well as the private sector and consumers.
  2. Implement regulatory frameworks: The government should institute regulatory processes and norms based on the need to protect and promote fundamental rights and correct market failures. Interventions must be designed to (a) promote effective competition and the maintenance of a level playing field, (b) avoid function creep, (c) protect and promote fundamental rights, and (d) ensure appropriate apportioning of functions, obligations and responsibilities/liabilities.
  3. Ensure democratic accountability: It is now well-established that "code is law" (Lessig, 1999). This makes it imperative for the government to establish systems of democratic accountability, transparency and openness in the creation and regulation of public digital systems. Transparency and accountability measures should be implemented both at the conceptualisation stage as well as thereafter. This should involve:
    • An open and transparent consultation process in the design of NODEs, similar to the recommendations for regulation-making in the Report of the Financial Sector Legislative Reforms Commission.

    • A cost-benefit analysis that takes into account the economic costs and benefits of operationalising a NODE within a sector. This would also allow for suitable alternative approaches to be explored.

    • Integration of principles of participatory and democratic governance into the implementation and operation phases. This would promote citizen-centric governance, particularly in the context of privatisation of regulatory functions. For example, the National Payments Corporation of India (NPCI) functions as a quasi-regulatory agency due to the scope of its powers, functions, and de-facto regulatory monopoly. However, being a private entity, it has not been brought under the purview of the Right to Information Act, 2005. This limits citizen engagement with governance processes.

    • Mechanisms to enable allocation of responsibilities and coordination between government entities at different levels (local, state, and central). This is especially important when dealing with common issues (such as tagging of data sets, instituting grievance redress mechanisms, etc.) without usurping constitutional and statutory functions.

Centralisation of governance and technical systems

Allowing the government to pick technological winners and losers, or fostering a technical monoculture, would decrease innovation and competition. It is well-recognised that centralisation can lead to increased security concerns. One must also be wary of the unintended consequences of even the best-planned regulation in the technology space. Technology moves too fast and has multiple possible future use cases. Over-regulation or excessive centralisation could have negative effects on expected outcomes.

In cases where the government is required to create digital systems, these must be federated and decentralised to the extent possible. The creation of monolithic technical architectures, which are often de facto mandatory, must be avoided. For instance, the creation of a centralised identification system - Aadhaar - which was thereafter mandated for use across different sectors, has caused various problems, ranging from exclusions and intrusions into the privacy rights of citizens to the inhibition of innovation (since such a system is preferred over other possible forms of identification that could suffice in any particular use-case). Implementing a centralised system of 'public infrastructure' may therefore not be necessary and may in fact reduce competition and civil liberties protections.

Instead, the focus of the government should be on enabling the private sector to develop relevant platforms and technologies that compete with one another on a level playing field, albeit with due consideration for regulatory, human rights and other problems that may arise in any given context. Such a system would also promote greater security. The use of federated databases, and enabling alternative technical solutions to be built on the data, would mean that the problems associated with having a single source of truth or a single point of failure can be avoided.

Alignment with existing government policies

The CW proposes principles of open and interoperable delivery platforms. There are two concerns in the manner in which these are described in the CW.

  1. The CW does not refer to existing government policies on the use of OSS in e-Governance projects. Various policies specifically deal with the issue at hand (for example, National Policy on IT, 2012, the policy on Adoption of Open Source Software for Government of India, and the policy on Open Standards for e-Governance).

  2. The scope of the word 'open' as used in the CW is vague and appears to conflate the concepts of "open access" and "open source". The CW suggests that each NODE will require a different degree of openness to adhere to its specific objectives and context, or to mitigate potential risks. This approach can dilute existing policies (mentioned above) that contain clear definitions and mandates on the use of open source solutions by the government.

It is imperative that the NODEs framework build on and strengthen existing government commitments towards the use of OSS solutions. This will unlock the benefits of OSS/Open APIs/open standards such as enhanced security and verifiability, no vendor lock-in, etc.

Preserving and promoting constitutional safeguards

The creation of NODEs platforms would significantly impact fundamental rights. We envisage three instances where the NODEs environment needs to be careful about preserving constitutional safeguards.

  1. Right to equality, right to life, and personal liberty: Digitisation at the scale contemplated by the CW may lead to concerns about access to services and possible exclusions therefrom. Ensuring rights protection may be particularly important in the context of the use of AI-based solutions and possible discrimination that may arise as a result. The understanding of what amounts to discrimination must be evolved by each NODE distinctly and will depend on the sector.

  2. Right to privacy: Each of the NODEs will invariably result in the collection and processing of personal data and non-personal data by both government and private entities. The collection and use of personal data by different state entities must necessarily satisfy the tests laid down by the Supreme Court in the Puttaswamy decisions (2017 and 2018). Similarly, principles relating to the use of data by the private sector as laid down in the context of the Aadhaar judgment (Puttaswamy, 2018) must also be adhered to. Due regard must also be given to (the developing) regulatory frameworks concerning personal and non-personal data.

  3. Federal structure: The NODEs framework must also consider the impact on the division of subject matter competencies under the Constitution. One could envisage benefits arising from NODEs in areas such as agriculture, judicial services, healthcare, etc. However, these sectors fall under the State List in the Seventh Schedule to the Constitution. Implementation of NODEs in these sectors should not result in de facto centralisation of federated competencies. Instead, mechanisms to ensure coordination and cooperation between different levels of government must be considered.

We, therefore, recommend that each NODE be backed by an appropriate statute, to the extent possible. This would ensure greater democratic deliberations, prevent excessive and arbitrary executive action, set out the rights of citizens and private entities, and clarify the scope/ limits of any particular project. Providing statutory backing would also limit mission creep, while delineating rights and obligations and governance processes. For instance, despite its various faults, the statutory mandate provided to the Unique Identification Authority of India and the restrictions on data sharing in the Aadhaar Act have proven invaluable in ensuring that biometric and other data is not made freely available for non-Aadhaar purposes by the public sector, including for instance, in criminal investigations. In contrast, projects such as FASTags (which aims to digitise highway toll systems) are being gradually expanded with plans to integrate the system with criminal tracking networks, amongst others.

Conclusion

The CW provides a basic overview of the concept of a NODE and identifies certain sectors in which such a system could lead to gains (such as the skills and health sectors). For various reasons outlined in our submission, our recommendation is to not proceed with implementing the NODEs framework in the manner currently outlined in the CW. We believe that the CW should be seen as an exploratory document. Greater clarity is required on the need for interventions on the scale envisaged in the document, particularly in view of the proposed centralised, stack-based approach. The NODEs framework should consider the need for openness at lower layers of the stack (infrastructural layers), adhere to existing government policies on the use of OSS, Open APIs and Open Standards, and consider policy developments concerning the regulation of personal and non-personal data. The CW should also ensure greater transparency and democratic accountability of governance frameworks and the processes for the creation of a NODE.

References

Bailey et al, 2020: Rishab Bailey, Vrinda Bhandari, Smriti Parsheera and Faiza Rahman, Comments on the draft Personal Data Protection Bill, 2019: Part I, LEAP blog, 2020.

Centre for Digital Built Britain, 2018: A Bolton, M Enzer, J Schooling et al, The Gemini Principles: Guiding values for the national digital twin and information management framework, Centre for Digital Built Britain and Digital Framework Task Group, 2018.

FICCI & KPMG, 2014: FICCI and KPMG, Skilling India: A look back at the progress, challenges and the way forward, 2014.

Kelkar and Shah, 2019: Vijay Kelkar and Ajay Shah, In service of the republic: The art and science of economic policy, Penguin Allen Lane, 2019.

Lessig, 1999: Lawrence Lessig, Code and other laws of cyberspace, Basic Books, 1999.

Puttaswamy, 2018: Justice K.S. Puttaswamy v. Union of India (Aadhaar case), 2019 (1) SCC 1.

Leblanc, 2020: David Leblanc, E-participation: A quick overview of recent qualitative trends, DESA Working Paper No. 163, United Nations Department of Economic and Social Affairs, 2020.

Michealson, 2017: Rosa Michealson, Is Agile the answer? The case of UK universal credit, in Grand Successes and Failures in IT - Public and Private Sector, IFIP Advances in Information and Communication Technology, Springer, 2017.

Ministry of Rural Development, 2013: Ajeevika skills guidelines, Ministry of Rural Development, Government of India, 2013.

Puttaswamy, 2017: Justice K.S. Puttaswamy v. Union of India (Right to privacy case), 2017 (10) SCC 1.

Raghavan and Singh, 2020: Malavika Raghavan and Anubhutie Singh, Building safe consumer data infrastructure in India: Account aggregators in the financial sector - Part I, Dvara Research, 2020.

Steinberg and Castro, 2017: Michael Steinberg and Daniel Castro, The state of open data portals in Latin America, Centre for Data Innovation, 2017.

Zambrano, Lohanto and Cedac, 2009: Raul Zambrano, Ken Lohanto and Pauline Cedac, E-governance and citizen participation in West Africa: Challenges and opportunities, The Panos Institute, West Africa and the United Nations Development Programme, 2009.


The authors are researchers at NIPFP.

Monday, May 25, 2020

Constitutionalism During a Crisis: The Case of Aarogya Setu

by Vrinda Bhandari and Faiza Rahman.

The Aarogya Setu app

Aarogya Setu is a contact tracing app that was launched by the government on April 2, 2020, as a tool to combat the COVID-19 crisis. Although the app was initially meant to be voluntary, some government organisations, state governments, and eventually the Ministry of Home Affairs ("MHA") soon began mandating its installation and use by their employees. In a welcome move, on May 17, 2020, when the MHA issued fresh lockdown guidelines, it changed the directive for downloading the app from mandatory to a "best effort basis". However, there is still some uncertainty about the meaning of these guidelines, since the Indian Railways and the Delhi Metro continue to require residents to download the app in order to use their services. Recent reports also indicate that the installation of Aarogya Setu will be compulsory for all air passengers above the age of 14 years. Therefore, only time will tell whether downloading the app will become de facto mandatory. The Aarogya Setu app provides a good practical framing for thinking deeply about coercion in a liberal democracy during a crisis.

There are four interesting aspects about the Aarogya Setu app.

  1. The use of state coercion. The level of coercion in play has been significantly diluted by the latest MHA guidelines, where the softer words "best effort" are used. However, in the case of air and rail travel, there is uncertainty about whether passengers will be prohibited from travelling if they have not downloaded the app.
  2. The problem of privacy and security. These issues have been discussed extensively in the Indian discourse [privacy, security].
  3. The lack of legislative foundations. A clear and specific legal basis for deploying and using the app - an anchoring legislation, with proper safeguards - would have helped allay some of the privacy and security concerns, and would have provided a proper avenue for grievance redress.
  4. Practical governance considerations. Governance related issues with the design and roll out of the app have come to the fore, especially the problems of lack of post-facto consultation, transparency, and accountability.

The first two problems (state coercion, privacy and security) have been extensively analysed by researchers in recent months. In this article, we focus on the latter two issues, aiming to obtain clarity on the issues and offer constructive policy proposals for the way ahead.

Underpinning all four issues, however, is the foundational problem of executive discretion in a crisis. While it is true that the executive arm of the government has a greater ability to take emergency measures during a pandemic, this does not mean that the role of judicial review is or should be reduced to nought. We start by exploring these foundations.

Principles of evaluating executive action during a crisis

We are in the middle of the COVID-19 pandemic, one of the worst global health crises in a century. More than 60 countries have responded by invoking some form of emergency powers to deal with the crisis. These emergency responses have resulted in hitherto unacceptable restrictions on freedoms and civil liberties and a curtailment of the right to privacy. In India, we have witnessed, among other things, the deployment of drones to monitor people's movements, the publication of the names of individuals on quarantine lists, and the roll out of a centralised contact tracing app. When government actions have been challenged in court, the courts have generally taken the view that "extraordinary situations call for extraordinary measures". This reflects the general belief that the executive should be given more leeway during a crisis.

As plausible as that argument sounds, it is not entirely correct. As Wiley and Vladeck (2020) explain, COVID-19 reinforces the case for "regular" judicial review, and not a suspension of civil liberties in times of crisis. This is for three reasons. First, emergency powers are supposed to be exercised for a crisis that is finite and limited in duration (such as the 2004 tsunami that led to the enactment of the Disaster Management Act, 2005 in India). By its very nature, the COVID-19 crisis, with fears of a second wave, does not lend itself to a near end-point, at least not till a vaccine is developed. A prolonged use of emergency powers risks normalising the centralisation of power and potentially damages the fabric of our democracy in the long run.

Second, there is an assumption (or fear) that if courts were to perform their role of judicially reviewing government action, they would easily strike down executive orders, thus impeding the government's fight against COVID. In a sound liberal democracy, this is not the case. The doctrine of proportionality requires the government to demonstrate, rather than simply assert, its compliance with the four prongs of (a) legality: the existence of a law; (b) suitability: a rational connection between the government measure and the aim of preventing the spread of COVID; (c) necessity: whether there was a less restrictive measure the government could have employed; and (d) balancing: weighing the public interest against the loss of liberty. In times of a public health crisis, a government may well be able to satisfy these tests for the unusual actions that it takes. But in a well functioning liberal democracy, it does need to provide adequate evidence and justification for its actions. Proportionality, and judicial review, thus only ensure that we do not give the government a blank cheque.

Third, the judiciary is the only branch of the Indian state that has the structural power and institutional credibility to protect the Constitution, especially in times of crisis. A robust judicial response can lead to better governmental action and protection of democracy in the long run. For example, after the Kerala High Court stayed a government order on the deferral of salary payments, the Kerala State government brought in an ordinance -- thus achieving the same result, but through a better process.

Absence of a clear and specific law

Our analysis of the Puttaswamy (2017) verdict describes how any valid restriction on the fundamental right to privacy has to satisfy the four-pronged test of legality, legitimate aim, proportionality and procedural safeguards. The first prong of legality demands that any restriction on the right to privacy must be prescribed by a publicly available law. The principle of legality, however, does not mean the mere existence of a law. Especially in the context of communications surveillance, the principle demands that the law meet a standard of clarity and specificity sufficient to guarantee that individuals have advance notice of, and can foresee, the manner in which it will be implemented.

While the issue of mandatory download of the app is behind us, many statutory agencies and private organisations continue to coerce their users or employees to install the app. Hence, the need for a law remains. The collection of an individual's personal data, without their informed consent, undermines the principles of privacy, autonomy, and informational self-determination that have been emphasised in Puttaswamy. The various privacy and security concerns associated with the Aarogya Setu app have been well documented, including by former intelligence officials. Consequently, any direction to mandatorily install the Aarogya Setu app in order to access any service, when it is known that the app continuously collects personal information such as location data through GPS and Bluetooth, has to be traced to a valid law if it is to satisfy the proportionality test.

Drawing a parallel with the Aadhaar experience is useful. Although initially set up on the basis of an executive notification passed by the Planning Commission, the UIDAI was eventually given a statutory basis through the passage of the Aadhaar Act in 2016. The enactment of the Aadhaar Act represents an implied, if belated, admission on the part of the government that citizens' privacy cannot be violated without an enabling legislative framework. At the same time, there is a precedent, in the Aadhaar story, of making Aadhaar de facto mandatory, even though the Aadhaar Act was clear that it was voluntary.

At present, the only possible legal basis for the Aarogya Setu app could come from the issuance of the MHA Guidelines under the Disaster Management Act, 2005 or the issuance of an order under Section 144, Cr.P.C. (as in Noida). However, both these provisions are inadequate and unsatisfactory as legal foundations for the app. Let us analyse each of these.

Is the Disaster Management Act an adequate legal foundation for the app?

The MHA Guidelines draw their authority from Section 10(2)(l) of the Disaster Management Act, 2005. However, this provision cannot satisfy the legality requirement since it is a broad, omnibus provision that simply gives the government the power to "lay down guidelines for, or give directions to, the concerned Ministries or Departments of the Government of India, the State Governments and the State Authorities regarding measures to be taken by them in response to any threatening disaster situation or disaster." As this language shows, the provision confers the power to direct arms of the government, not private actors.

The restriction of fundamental rights must be grounded in a specific legal provision that specifies the conditions under which the right can be infringed and sets out the procedural and substantive safeguards to protect privacy. As Justice Srikrishna has observed, the National Executive Committee set up under the Disaster Management Act, which issued the May 1, 2020 Guidelines directing the installation of Aarogya Setu, is not a statutory body. In the present case, there is no evidence of any specific parliamentary approval having been sought for directing the mandatory installation of the Aarogya Setu app by all smartphone holders (apart from the fact that there is considerable ambiguity around how these mandates will apply to the majority of Indians who do not own a smartphone).

The issue regarding the lack of a legislative basis arose in another context before the Kerala High Court last month. In light of the COVID-19 pandemic, the Kerala Government had issued an executive order deducting the salaries of government employees. When the order was challenged on the ground of legality, the State Government tried to rely on the Disaster Management Act, 2005 as well as the Kerala Ordinance amending the Epidemic Diseases Act, 1897 as providing an adequate legislative basis for the government order. However, the High Court rejected the government's contention on the ground that "the provisions that were read out, specifically Sections 38 and 39 of the Disaster Management Act 2005, do not specify or confer any power upon any Government to defer the salary due to its employees during any kind of disaster. Prima facie, I feel that law is found wanting to justify the issuance of [the order]." The government eventually passed an ordinance to achieve its intended aim.

There is also the issue of excessive delegation. Section 10(2)(l) of the Disaster Management Act does not delegate to the National Executive Committee the power to create a data collecting app, nor does it provide any guidance on the exercise of such powers. For instance, in Malone v. The United Kingdom, the European Court of Human Rights ("ECHR") held that the secret and opaque nature of communications surveillance meant that "it would be contrary to the rule of law for the legal discretion granted to the executive to be expressed in terms of an unfettered power". Consequently, the ECHR held that in order to satisfy the principle of legality, the law must indicate the scope of any such discretion conferred on the competent authorities and the manner of its exercise with sufficient clarity, having regard to the legitimate aim of the measure in question, to give the individual adequate protection against arbitrary interference.

On May 11, 2020, the government released the Aarogya Setu Data Access and Knowledge Sharing Protocol, 2020 ("Protocol") for the "effective implementation" of the MHA Guidelines. This Protocol lays down certain principles regarding the collection, processing, and sharing of personal data. However, the Protocol does not have the status of law, nor can it derive any statutory backing from the Disaster Management Act, 2005. More importantly, it does not seek to confer any legal status on the app itself. There is no mechanism to verify that the app actually works as stated, and nothing prevents a change in the working of the app under conditions of non-transparency. Hence, the release of the Protocol cannot be seen as providing legal foundations for the use and deployment of the Aarogya Setu app.

Is Section 144, Cr.P.C., an adequate foundation for the app?

As an example, the Gautam Budh Nagar (Noida) administration in Uttar Pradesh had earlier passed an order under Section 144 of the Code of Criminal Procedure ("Cr.P.C."), mandating the installation of the Aarogya Setu app for residents of the entire district, under the threat of criminal sanction. In another welcome development, these orders under Section 144, Cr.P.C. eventually lapsed.

It is an interesting intellectual puzzle to analyse the ability of the executive to coerce private persons through this route. Section 144 of the Cr.P.C. authorises the Magistrate to issue an order in urgent cases of nuisance or apprehended danger directing "any person to abstain from a certain act" or to take certain order with respect to certain property in his possession or under his management. The Calcutta High Court, in a series of decisions in the early 1930s, interpreted this provision to mean that a Magistrate is only entitled to make a restrictive order preventing the opposite party from doing an act. It does not enable him to make a mandatory positive order directing an individual to do a particular act. For instance, in Kusum Kumari Debi (1933), an order by the Magistrate directing the Petitioner to fill up an excavation at her own cost was held to be beyond the remit of Section 144, Cr.P.C., and the subsequent proceedings initiated under Section 188, I.P.C. were quashed. Similarly, in B.N. Sasmal (1930), the Magistrate's direction under Section 144, Cr.P.C. directing Sasmal to leave the Midnapur District for two months was quashed since it "was in effect not a direction to abstain from doing anything, but a direction upon a person to remove himself from the district." These judgments have subsequently been cited with approval by various High Courts (Ramanlal Patel (1971); Muzaffarpur Electric (1973)). Thus, any order passed by a Magistrate under Section 144, insofar as it directs individuals to download the Aarogya Setu app, falls foul of the law.

The importance of a law and the process of legislation

In a constitutional democracy, the authority to coerce private individuals can only flow from a law that has been vetted and approved by the democratically elected representatives of the people. While the executive is often charged with filling out the details missing in parliamentary legislation through rules and regulations, the democratic deficit of these instruments is undeniable, i.e., they are drafted and approved by members of the executive, bureaucrats or regulators, and not directly by representatives of the people. In contrast, legislation is often preceded by important deliberations, where elected representatives discuss competing policy choices to decide the best course of action, and negotiate middle roads based on the interests of different social groups.

A contact tracing law would (a) regulate the collection, storage, and use of personal data collected by the app; (b) serve as a check on governmental power; (c) enshrine critical privacy protections; (d) create mechanisms for independent oversight of the functioning of the app; and (e) provide a legislative basis for grievance redressal avenues. These elements are particularly important in India given the absence of a general data protection law. For instance, the Protocol states that any violation "may" lead to prosecution under the Disaster Management Act. However, it does not specify the conditions under which prosecution can take place; nor does it actually set up a complaint mechanism to provide an appropriate forum for grievance redressal (leaving aside the vexed question of how the Disaster Management Act will be used to prosecute privacy violations). Even the privacy policy only designates the Deputy Director General at the National Informatics Centre (NIC) as a grievance officer, without providing any further details or powers. Currently, the privacy protections guaranteed to citizens rest exclusively on the privacy policy, the terms of service of the app, and the new "Protocol". These instruments add up to inadequate protection, can be unilaterally changed by the executive, and lack mechanisms to ensure compliance by the state. This is incompatible with the protection of fundamental rights and the rule of law.

The need for a specific enabling legislative framework for contact tracing has also been reiterated in other countries. In Israel, the Supreme Court recently held that the Israeli Security Agency, the Shin Bet, required a law to continue using emergency powers (granted by the Cabinet) that allowed it to deploy phone location tracking and electronic contact tracing. In reaching its decision, the Court recognised that the State was monitoring individuals without their consent and without any legislative framework in place.

Similarly, in the UK, the Parliamentary Joint Committee on Human Rights (2020) released a report stating that a contact tracing app should not be rolled out nationally "unless the Government is prepared to enshrine [intended privacy] protections in law", in the form of primary legislation. Legislative backing was deemed essential so as to provide the requisite "legal clarity and certainty" regarding the collection, storage, and use of personal data, while simultaneously increasing confidence and trust in the app and thereby its uptake, which could improve its efficacy. Notably, this demand to legislate specifically for contact tracing comes despite the UK having comprehensive data protection legislation.

One way forward: An ordinance

Given that Parliament is not currently in session, and given the ongoing national lockdown and the urgency of the COVID-19 crisis, the Central Government should have used the ordinance-making power under the Constitution, which is provided precisely for such occasions, to set out a legislative framework for the operationalisation of the Aarogya Setu app in India. This would have ensured that the ordinance either received the scrutiny and approval of Parliament when it reconvened, or lapsed if it was not approved. Various states, such as Uttar Pradesh and Kerala, have been taking the ordinance route to address legislative lacunae during the COVID-19 crisis.

Addressing the procedural irregularities and governance related issues

Apart from the legal issues highlighted above, the operationalisation process exhibits a number of procedural irregularities and governance related issues. These can be addressed through the following steps:

  1. Need for public consultation: The conceptualisation, design, and implementation of the Aarogya Setu app was not preceded by public consultation. Given the urgent nature of the COVID-19 crisis, it is understandable that the Central Government was not in a position to hold detailed public consultations before designing and rolling out the app. However, the government should still initiate a formal post facto consultation process to seek comments from civil society, technical experts and other stakeholders regarding, inter alia, the technical and legal framework, and deployment issues with the app. Given low state capacity in India, such consultation processes are particularly valuable in identifying errors and offering solutions.
  2. Enhancing transparency regarding design and deployment choices: So far, the Aarogya Setu app has been accompanied only by (a) terms of service, (b) a privacy policy, and (c) the Aarogya Setu Protocol. There is a foundational problem, located in health policy: What is the overall plan for contact tracing, and what role will the app play in this? Can the complex problem of public administration and state capacity for contact tracing in an epidemic be short-circuited by using an app? How do we know that there are commensurate benefits, for contact tracing, in return for intruding into the lives of private persons? It is not obvious that the app will help improve public health, and the case needs to be made for it, where an intelligent balance is struck between cost and benefit. There is a `technology theatre' streak in Indian public policy, where solving complex problems is avoided by building and exhibiting a piece of software.

    For instance, it is unclear why the makers of Aarogya Setu chose to collect location data through both GPS and Bluetooth when similar apps, built by some of the best technologists in the world, are choosing to use only Bluetooth signals from phones to detect encounters and do not use or store GPS location data (a minimal, purely illustrative sketch of such a Bluetooth-only approach appears after this list). An explanatory memorandum detailing the reasoning behind the various design choices could go a long way in increasing trust in the app and consequently enhancing its uptake.

    A similar trust-building measure, which would show the extent to which the actual operations of the app are aligned with the claims made in documents, would be the release of the source code. In fact, even with the latest revision to its privacy policy, the source code has not been released. As an example, the contact tracing apps being designed by the UK and Singapore have made their source code public, thereby enabling greater scrutiny from the technical community, and building confidence that the high level documents are being adhered to in the implementation.

    Confidence would be enhanced if small pilots were rolled out prior to large-scale deployment, with extensive involvement of researchers in public health, computer engineering, and civil liberties. As an example, the NHS contact tracing app being proposed in the UK is first being trialled on the Isle of Wight on a purely voluntary basis. This has helped identify significant glitches with the app.

  3. Setting in place an open and transparent audit mechanism: Confidence will be enhanced by releasing periodic audit reports detailing key insights obtained from analysis of the data collected by the app. For instance, it will be useful for the public and technologists to know details such as the total number of COVID-19 positive cases detected with the help of the app, the number of false positives or false negatives thrown up by the app, the number and nature of user complaints received, etc. (a small illustrative computation of such error rates follows the Bluetooth sketch below). Publicly available periodic audit reports of this nature will increase confidence in the operation of the app, ensure transparency in its governance, and help evaluate the success or failure of the app.
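
To make the design question in point 2 concrete, here is a minimal sketch, in Python and purely illustrative (the identifiers and structure are our own and are not drawn from Aarogya Setu or any other app's codebase), of how a Bluetooth-only approach can log encounters without touching GPS: each phone broadcasts a short-lived random token, and the app records only the tokens it hears, a timestamp, and the received signal strength, from which proximity can later be estimated.

```python
import os
import time
from dataclasses import dataclass
from typing import List

# Illustrative sketch of Bluetooth-only encounter logging (no GPS involved).
# Names and structure are hypothetical; real apps differ in detail.

@dataclass
class Encounter:
    heard_token: str   # rotating random identifier broadcast by the other phone
    timestamp: float   # when the token was heard
    rssi: int          # received signal strength, a rough proxy for distance


def new_rotating_token() -> str:
    """Generate a short-lived random token; it reveals nothing about identity or location."""
    return os.urandom(16).hex()


def record_encounter(log: List[Encounter], heard_token: str, rssi: int) -> None:
    """Store only what is needed to later notify close contacts: token, time, signal strength."""
    log.append(Encounter(heard_token=heard_token, timestamp=time.time(), rssi=rssi))


def likely_close_contacts(log: List[Encounter], rssi_threshold: int = -65) -> List[Encounter]:
    """Filter encounters whose signal strength suggests sustained physical proximity."""
    return [e for e in log if e.rssi >= rssi_threshold]


if __name__ == "__main__":
    my_token = new_rotating_token()        # broadcast over Bluetooth, rotated periodically
    encounter_log: List[Encounter] = []    # stays on the device unless the user tests positive
    record_encounter(encounter_log, heard_token=new_rotating_token(), rssi=-60)
    record_encounter(encounter_log, heard_token=new_rotating_token(), rssi=-90)
    print(len(likely_close_contacts(encounter_log)))  # -> 1; no GPS coordinate is ever stored
```

The point of the sketch is simply that proximity, which is what contact tracing needs, can be inferred from Bluetooth signal strength alone; collecting GPS coordinates on top of this is an additional intrusion whose necessity has not been explained.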

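As an illustration of the kind of audit metric contemplated in point 3, the following sketch (again purely illustrative; the record structure and numbers are invented) computes false positive and false negative rates by comparing the app's exposure alerts against subsequent test outcomes.

```python
from typing import List, Tuple

# Illustrative only: each record pairs whether the app flagged a person as exposed
# with whether they subsequently tested positive. Field names and data are invented.
Record = Tuple[bool, bool]  # (app_flagged, tested_positive)

def audit_rates(records: List[Record]) -> Tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate) for the app's alerts."""
    flagged_negative = sum(1 for flagged, positive in records if flagged and not positive)
    missed_positive = sum(1 for flagged, positive in records if not flagged and positive)
    total_negative = sum(1 for _, positive in records if not positive) or 1
    total_positive = sum(1 for _, positive in records if positive) or 1
    return flagged_negative / total_negative, missed_positive / total_positive

# Example: 3 people flagged, of whom 1 tested positive; 1 positive case was never flagged.
print(audit_rates([(True, True), (True, False), (True, False), (False, True), (False, False)]))
# -> (approximately 0.67, 0.5)
```
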
Conclusion

Courts of law are more deferential to the executive in an emergency. However, it is also widely known that "temporary" leeway granted to the executive during emergencies has a tendency to become permanent, lasting long beyond the actual duration of the crisis (Harari, 2020). This is because governments often use crises as an opportunity to expand and further centralise their powers. Interestingly, while the Aarogya Setu Protocol has a sunset date, which is subject to extensions, there is no clarity on how long the app itself will remain operational. The Union Minister for Information and Broadcasting has also indicated that the app may continue to function for one or two years. Dangerous precedents occur in dangerous times.

On May 5, 2020, a writ petition was filed before the Kerala High Court challenging the MHA directive mandating the use of Aarogya Setu by public and private employees, on the ground that it was violative of the right to privacy and personal autonomy. While the Kerala High Court declined to grant any interim relief on the plea, it directed the Central Government to file a statement on the measures taken to protect the privacy of persons whose data is collected by the app. While the new MHA guidelines have since moved away from making the app mandatory, news reports suggest that access to important services is increasingly being made contingent on the mandatory installation of the app by users.

When faced with a war, a terrorist attack, or a pandemic, the instinctive response in India is to be deferential to the executive. However, the founders of the Republic did not intend for colonial rule to be replaced by the rule of officials. The Constitution of India does not see liberal democracy as a luxury to be enjoyed in good times. Apart from freedom being valuable in and of itself, there is also a strong pragmatic value in emphasising checks and balances. Under conditions of low state capacity, unchecked power leads to more mistakes. The quality of work in public policy goes up through the operations of checks and balances, and this is even more valuable in difficult times.

References

Paul Daly, The Covid-19 Pandemic and Proportionality: A Framework, Administrative Law Matters (2020).

Sidharth Deb, Privacy prescriptions for technology interventions on Covid-19 in India, IFF Working Paper No. 3/2020 (2020).

Tom Ginsburg and Mila Versteeg, State of Emergencies, Part II, Harvard Law Review Blog (2020).

Oren Gross, Emergency Powers in the Time of Coronavirus ... and Beyond, Just Security (2020).

Yuval Noah Harari, The World After Coronavirus, Financial Times (2020).

Joint Committee on Human Rights, Human Rights and the Government's Response to Covid-19: Digital Contact Tracing, United Kingdom Parliament (2020).

SFLC.in, Our concerns with the Aarogya Setu App (2020).

Joelle Grogan, COVID-19 and States of Emergency: Introduction and List of Countries, Verfassungsblog (2020).

Lindsay Wiley and Steve Vladeck, COVID-19 Reinforces the Argument for "Regular" Judicial Review-Not Suspension of Civil Liberties-In Times of Crisis, Harvard Law Review Blog (2020).

Emperor v. B.N. Sasmal (B.N. Sasmal), ILR (1930) 58 Cal 1037.

Kusum Kumari Debi v. Hem Nalini Debi (Kusum Kumari Debi), AIR 1933 Cal 724.

Muzaffarpur Electric Supply Co. v. State of Bihar (Muzaffarpur Electric), 1973 Crl. L.J. 143 (Patna).

Justice K.S. Puttaswamy v. Union of India (Puttaswamy), 2017 (10) SCC 1.

Ramanlal Bhogilal Patel v. N.H. Sethna (Ramanlal Patel), 1971 Crl. L.J. 435 (Guj).

Malone v. The United Kingdom (Malone), [1984] ECHR 10.

The International Principles on the Application of Human Rights to Communications Surveillance ("the Necessary & Proportionate Principles") (2013).

 

Vrinda Bhandari is a practicing advocate in Delhi. She is involved in the legal challenge to the app before the Kerala High Court. Faiza Rahman is a researcher in the technology policy team at the National Institute of Public Finance & Policy. We thank Ajay Shah, Renuka Sane, and Smriti Parsheera for useful comments.

Friday, April 10, 2020

Comments on the draft Personal Data Protection Bill, 2019: Part II

by Rishab Bailey, Vrinda Bhandari, Smriti Parsheera and Faiza Rahman.

In our previous post, we had discussed some of the concerns arising out of the draft Personal Data Protection Bill, 2019 (the "Bill"), focusing on how the State-citizen relationship is dealt with under the Bill. We examined the provisions granting wide ranging exemptions to the State for surveillance and law enforcement purposes, as well as the problems in the design and functioning of the proposed Data Protection Authority of India (the "DPA"). In this post, we extend our analysis to discuss certain other issues with the Bill, including the provisions on data localisation, processing of children's data, implementation of privacy by design and regulatory sandbox, inclusion of non-personal data, the employment exception, and the research exemption. We argue that these provisions need to be amended in order to provide more effective safeguards for the privacy of individuals.

Cross Border Data Transfer (Data Localisation)

One of the most contentious issues in the drafting of India's privacy law has been the issue of data localisation, or in other words, the nature and scope of restrictions that should be applied to cross-border data transfers.

Section 33 of the Bill permits the transfer of personal data outside India while imposing restrictions on two sub-categories of personal data. The first sub-category consists of sensitive personal data, such as financial data, health data, sexual orientation data, biometric data, etc., which has to be mirrored in the country, i.e. a copy of such data will have to be kept in India. The second sub-category consists of critical personal data, which is barred from being transferred outside India; its constituents have not been defined or identified in the Bill and are left to be notified by the Government at a subsequent stage. While imposing these restrictions, the Bill also specifies (in Section 34) a list of conditions under which a cross-border data transfer can take place. These include a determination by the Government of the adequacy of the laws of another country, or requirements for data processing entities to put in place intra-group schemes or contracts to ensure appropriate standards for the protection of Indian data sent outside the country.

These provisions are significantly more liberal than those proposed in the 2018 version of the draft Data Protection Bill released by the Justice Srikrishna Committee ("PDP Bill, 2018"). The PDP Bill, 2018, required both personal and sensitive personal data to be mirrored in the country, subject to different conditions and exemptions. These provisions attracted significant criticism from dissenting members of the Srikrishna Committee, technology companies (particularly multinationals), and sections of civil society (Basu et al., 2019). We had also argued in our submissions on the PDP Bill, 2018 that these restrictions were overly broad and that the costs of strict localisation measures may outweigh any possible gains.

The move to liberalise these provisions will undoubtedly be welcomed by many stakeholders. The less stringent provisions of the Bill imply that costs to business may be limited, and users will have greater flexibility in choosing where to store their data. Prima facie, the Bill appears to reflect a more proportionate approach to the issue, thereby bringing it within the framework of the Puttaswamy tests of proportionality and necessity (Bhandari et al., 2017). This is achieved by implementing a sliding scale of obligations, ostensibly based on the sensitivity or vulnerability of the data: "critical personal data", being the most vulnerable category, is required to be localised completely, while "personal data", being the broadest category, can be freely taken out of the country. The obligations with respect to "sensitive personal data" lie in between these two.
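
Purely as an illustration of this sliding scale (the category names, the enum and the function below are our own constructs and not the Bill's text; the Section 34 grounds are collapsed into a single flag), the logic described above can be restated as follows:

```python
from enum import Enum, auto

class DataCategory(Enum):
    PERSONAL = auto()            # broadest category: may be transferred freely
    SENSITIVE_PERSONAL = auto()  # must be mirrored in India; transfer needs a Section 34 ground
    CRITICAL_PERSONAL = auto()   # to be notified by the Government; cannot leave India

def transfer_allowed(category: DataCategory, has_section_34_ground: bool) -> bool:
    """Illustrative restatement of the Bill's sliding scale for cross-border transfers.

    `has_section_34_ground` stands in for conditions such as an adequacy determination for
    the destination country or approved intra-group schemes/contracts.
    """
    if category is DataCategory.CRITICAL_PERSONAL:
        return False  # barred from being transferred outside India
    if category is DataCategory.SENSITIVE_PERSONAL:
        return has_section_34_ground  # a copy must, in any case, remain stored in India
    return True  # ordinary personal data faces no localisation requirement

# Example: sensitive personal data may go abroad only if a Section 34 condition is met.
print(transfer_allowed(DataCategory.SENSITIVE_PERSONAL, has_section_34_ground=False))  # False
```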

However, we believe that even the revised provisions of the Bill may not withstand the test of proportionality.

As explained by us previously on this blog, there are broadly three sets of arguments that are advanced in favour of imposing stringent data localisation norms (Bailey and Parsheera, 2018):

  1. Sovereignty and Government functions: The first argument refers to the use of data as a resource to further India's strategic and national interests, and to enable the enforcement of Indian laws and the discharge of other state functions.
  2. Economic benefits: The second claim is that economic benefits will accrue to local industry in terms of creating local infrastructure, employment and aiding development of the artificial intelligence ecosystem.
  3. Civil liberties: The third argument is that local hosting of data will enhance its privacy and security by ensuring Indian law applies to the data and users can access local remedies. It will also protect (Indian) data from foreign surveillance.

If the Bill were localising data for the first two purposes, it would have required that local copies be retained of all categories of personal data, as was the case with the previous draft of the law. On the other hand, if privacy protection is the main consideration, as it now appears given the changes from the PDP Bill, 2018, and the fact that the vulnerability or sensitivity of the data is the differentiating factor in terms of the obligations imposed, we believe that the aims of this provision can be achieved through less intrusive, suitable and equally effective measures. These include requirements for contractual conditions and the use of adequacy tests for the jurisdiction of transfer, as already provided for in Section 34 of the Bill. This is also in line with the position under the European General Data Protection Regulation ("GDPR"). Further, the extra-territorial application of the Bill ensures that the data protection obligations under the law continue to exist even if the data is transferred outside the country.

In case data localisation is meant to serve any of the goals other than privacy, sectoral obligations can be used to meet these specific objectives based on a perceived and specific need. This is already the case in respect of digital payments data, certain types of telecom data and Government data. Any such move would of course have to be preceded by an open and transparent process setting out the problem that is sought to be addressed and assessing the different alternatives before arriving at localisation as a solution.

Given the infirmities in the Bill, particularly concerning the powers of the State, individuals and businesses may well believe that their data would be more secure if stored and processed in jurisdictions with strong data protection laws and a more advanced technical ecosystem. Therefore, assuming that privacy is the primary motivating factor behind design of this provision, it would make sense to allow individuals to store their data in any location of their choice, provided that the specified conditions are being met.

Accordingly, we believe that Section 33 ought to be deleted from the Bill. As an alternative, general restrictions on cross-border transfers may be imposed only for "critical personal data". In this context, it is also important that the Bill should provide a definition of "critical personal data" or at least clarify the grounds on which personal data may be declared as such. This would help limit the otherwise extremely broad powers of the State in this respect.

Children's Data

Section 16 of the Bill contains an enhanced set of obligations for fiduciaries dealing with children's personal data and sensitive personal data. It requires fiduciaries to act in the best interests of the "child", defined to mean a person below 18 years. The provision mandates age verification and parental consent for the processing of such data, which, while well-intentioned, gives rise to some concerns.

For instance, a large part of India's internet-using population comprises young people, including children. Requirements for age verification and parental consent may not be practical for a vast number of children who may not have access to relevant documents, may not receive parental support, or whose parents may not be in a position to engage with the technology and verification system. Such a requirement is also likely to have a disproportionate impact on already vulnerable and marginalised communities, including adolescent girls. Section 16 also leads to a loss of agency for many young internet users, who are often the creators and consumers of online content for educational, recreational, entertainment and other purposes.

The procedure to conduct mandatory age verification is also beset with ambiguity, since any requirement to verify the age of children will effectively amount to the verification of all users, in order to be able to distinguish children from adults. This would clearly be a disproportionate invasion of privacy.

Finally, the Bill does not draw any distinction in the level of protection based on the age of the child, in effect treating children of 5 years and 17 years in the same manner. This, in essence, goes against the UN Convention on the Rights of the Child, to which India is a party. The Convention inter alia recognises that: (a) regulation of children should be in a manner "consistent with the evolving capacities of a child" and that children have a right to engage in play and recreational activities "appropriate to the age of the child" (Articles 5, 14 and 31); (b) children have a right to protection of the law against invasions of privacy and a right to peaceful assembly (Articles 16 and 15); and that (c) access to mass media, particularly from "a diversity of national and international sources" is important for a child's development (Article 17).

In order to allay these concerns, we recommend that the provisions pertaining to parental consent and age verification (Sections 16(2) and 16(3) of the Bill) should be deleted. In the event these provisions are retained, they should be amended to prevent the complete loss of agency for many young internet users; to enable a level of protection that is consistent with the age group of the child; and to ensure that the rights of all individuals to expression and access, including children, are not unduly restricted. Accordingly, Section 16 should lay down that the principle of best interests of the child and the requirement of consent from parents and guardians have to be interpreted "in a manner consistent with the evolving capacities of the child". Further, any requirement of age verification should be limited to guardian data fiduciaries to be classified by the DPA. Finally, the factors to be considered under Section 16(3) while deciding upon the manner of verification, should also include the impact of the verification mechanism on the privacy of other data principals.

Privacy by Design and Sandbox

Section 22(1) of the Bill requires every data fiduciary to prepare a privacy by design ("PBD") policy containing details of the processing practices followed by the fiduciary and the risk-mitigation measures put in place. According to Sections 22(2) and 22(3), the data fiduciary may submit the PBD policy to the proposed DPA for certification, which shall be granted upon satisfaction of the conditions mentioned in Section 22(1). The fiduciary and DPA shall then publish the certified PBD policy on their websites.

Section 22, as it is currently drafted, only requires data fiduciaries to prepare a PBD policy -- it does not require them to implement the same. Without a requirement to implement the PBD Policy, this would remain a mere paper requirement and serve no real privacy enhancing purpose. In contrast, Section 29 of the PDP Bill, 2018, required every data fiduciary to "implement policies and measures to ensure [privacy by design]". Similarly, Article 25 of the GDPR also requires data controllers to "implement appropriate technical and organisational measures" in order to meet the requirements of the regulation.

Further, given the range and scope of duties conferred on the DPA, requiring it to verify and certify every data fiduciary's PBD policy (as an ex-ante measure) could cast an unreasonable burden on the regulator. It must be noted that the scrutiny of a PBD policy will have to take into account each entity's specific business model, and the specific risk mitigation measures proposed to be implemented. This is clearly not an insignificant task. We therefore believe it would be prudent to permit independent data auditors to certify PBD policies, with further review of the certified policies by the DPA in cases where it is assessing the fiduciary's eligibility to participate in the sandbox under Section 40. This would reduce the burden on the DPA while enabling quicker turn-around times for business entities. The DPA could in turn regulate the process of certification by independent auditors through appropriate regulations.

We now move to the issue of the regulatory "sandbox". This is a new concept in the data protection discourse in India, although other sectors, such as finance, have already seen such developments. For instance, the Reserve Bank of India announced the creation of an enabling framework for a regulatory sandbox in 2019. We have also seen international examples that discuss such measures in the data protection context, such as the UK Information Commissioner's sandbox initiative.

Section 40 of the Bill permits the DPA to restrict the application of specific provisions of the Bill to entities that are engaged in developing innovative and emerging technologies in areas such as artificial intelligence and machine-learning. Presumably, the purpose is to enable companies to experiment with new business models without the fear of falling foul of the law (while at the same time enabling supervision by the authorities), in a controlled setting, where exposure to harm can be limited. According to Section 40, the DPA can modify the application of the provisions of the Bill relating to clear and specific purpose for data processing; collection only for a specific purpose; and limited period of data retention for eligible entities. In order to be eligible for the sandbox, an entity should have in place a PBD policy that has been certified by the DPA (Section 22).

The current draft vests significant discretion in the hands of the DPA in deciding which entities will be included in or excluded from the sandbox. Despite this, no clear criteria are provided in Section 40 that would allow the DPA to judge whether an entity should be admitted into the sandbox. We believe that certain criteria, based on the expected level of innovation, public interest, and viability, should be specified in Section 40 itself, to improve transparency and accountability. The provision of specific criteria needs to be accompanied by the requirement of a written, reasoned decision by the DPA, so as to reduce arbitrariness. Apart from this, the DPA should also be empowered to lay down conditions and safeguards for data fiduciaries to follow (with respect to personal data processed while in the sandbox) once they have exited the sandbox. Finally, changes flowing from the proposed revisions to the certification process of the PBD policy (discussed above) will also need to be made to Section 40.

Non-consensual Processing for Employment Purposes

Section 13 of the Bill gives significant leeway to employers for carrying out non-consensual processing of personal data, other than sensitive personal data, that is necessary in the context of employment. Given the inherent inequality in an employer-employee relationship, we believe that the Bill should have greater safeguards to prevent coercive collection or misuse of employees' personal data by employers.

For instance, the present draft of the provision permits non-consensual processing of personal data of an employee if considered necessary for "any other activity relating to the assessment of the performance" of the employee. This phrase is very wide in scope and can be easily misused by the employer, for instance through continuous monitoring and analysis of all activities of the employee, including the time spent in front of a screen, private calls and messages, etc. Given the increasing relevance of remote working arrangements, this sort of monitoring could even be extended outside the office premises.

We have already referred to the significant imbalance of power in the relationship between the employee and employer. There can be many ways in which technology can further tilt the balance of power in favour of the employer. For instance, there has been considerable reporting on the "productivity firings" by Amazon. The company is said to be using "deeply automated tracking and termination processes" to gauge if employees are meeting (very stringent) productivity demands placed on them (Lecher, 2019). Similar stories of management or termination based on algorithmic decision-making are increasingly being heard from many other sectors of the economy. When one considers the advances being made in tracking and privatised surveillance systems, the ability of employers to collect and analyse data of their employees without their consent, can become extremely problematic.

Accordingly, we believe the broad exemption provided to employers should be done away with by deleting this provision. However, if the provision is to be retained, we recommend two amendments. First, the provision should only permit non-consensual processing that is "reasonably expected" by the data principal. Second, any processing under this provision should be proportionate to the interests being achieved.

Exemption for Research, Archiving, or Statistical Purposes

Section 38 permits the DPA to exclude the application of all parts of the law to the processing of personal data that is necessary for research, archiving or statistical purposes, if it satisfies certain prescribed criteria. As highlighted in our earlier submissions, the framing adopted by the provision is very broad, as it extends the exemption to research and archiving conducted for a wide variety of purposes, including situations where this may not be appropriate. This includes research that is predominantly commercial in nature. Market research companies carrying out consumer surveys, focus group discussions, etc., often use intrusive means of data collection and are repositories of large quantities of personal data. We believe that such purposes should not be exempted from the purview of data protection requirements, as doing so would significantly lessen the privacy protections offered to individuals, without any significant public benefit being achieved.

Accordingly, we recommend narrowing the scope of the provision to only the processing of personal data where the purpose is not solely commercial in nature and the activity is being conducted in the public interest. Notably, the GDPR also limits exemptions granted to research purposes to "archiving purposes in the public interest, scientific or historical research purposes or statistical purposes" (Article 89). Further, a somewhat similar approach has been adopted in the Copyright Act, 1957, which in Section 32 provides for the issuance of licenses to produce translations of works, inter alia, for research purposes. Section 32 specifically excludes "industrial research" and "research by bodies corporate" (not being government controlled bodies) "for commercial purposes" from the scope of the law -- thus, the exemptions from copyright protection under the law do not apply to the use of copyrighted material for such categories of research.

In addition, it is unclear why provisions pertaining to transparency, fair and reasonable processing, the deployment of security safeguards, etc., are not made applicable to entities that may avail of the exemption under Section 38, as was suggested in the earlier draft of the PDP Bill, 2018. As mentioned above, commercial research companies collect, process and store large quantities of personal data, thereby making them susceptible to significant breaches of privacy (in the case of data breaches, unauthorised disclosures, etc.). Therefore, we suggest that Section 38 should be revised to ensure that the provisions of the law are exempted only to the extent that they may significantly impair or prevent achieving the relevant purposes. Notably, the UK Data Protection Act, 2018, also follows a similar approach in Schedule 2 (Part 6, paragraphs 27 and 28).

Non-personal Data

Section 91(2) is a new provision that has been introduced in the latest version of the Bill. Under this section, the Central Government may, in consultation with the DPA, direct any data fiduciary or processor to provide any non-personal or personal data that is in an anonymised form. The Government is required to lay down regulations governing this process. This non-personal data is to be used for "better targeting of delivery of services or formulation of evidence-based policies" by the Government.

We find that this provision is misplaced in the Bill and is disproportionate in nature, for the following reasons. First, regulating non-personal data flows is outside the scope of the present law. Notably, the White Paper and Report of the Justice Srikrishna Committee exclusively consider the regulation of personal data, as do the Statement of Objects and Reasons and Recitals to the Bill.

Second, the Government has already constituted a Committee of Experts to examine regulatory issues arising in the context of non-personal data. The inclusion of this provision pre-empts the findings and recommendations of this Committee of Experts.

Third, the provision does not adequately consider and balance all relevant interests, as it provides the State with an omnibus power to call for any non-personal data. This could affect the property rights of data fiduciaries, competition in the digital ecosystem (especially where the State is a market participant), and individual privacy, particularly in situations where unrelated data sets available with the Government could be processed to reveal personally identifiable data. There is significant literature on the possibility of anonymised data sets being re-identified through advanced computing, or through being combined with new information to reveal personal data.
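
The mechanics of such re-identification are simple to illustrate. The hedged sketch below uses toy, invented records (all names and values are our own) to show the classic linkage attack described in that literature: a notionally anonymised dataset is joined with a publicly available auxiliary dataset on quasi-identifiers such as pincode, birth year and gender.

```python
# Toy illustration of a linkage attack on "anonymised" data; all records are invented.
anonymised_health_records = [
    {"pincode": "110001", "birth_year": 1983, "gender": "F", "diagnosis": "diabetes"},
    {"pincode": "560038", "birth_year": 1990, "gender": "M", "diagnosis": "hypertension"},
]

public_voter_style_records = [
    {"name": "A. Sharma", "pincode": "110001", "birth_year": 1983, "gender": "F"},
    {"name": "R. Iyer", "pincode": "560038", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("pincode", "birth_year", "gender")

def link(anon_rows, public_rows):
    """Re-attach identities by matching quasi-identifiers across the two datasets."""
    reidentified = []
    for anon in anon_rows:
        for pub in public_rows:
            if all(anon[k] == pub[k] for k in QUASI_IDENTIFIERS):
                reidentified.append({"name": pub["name"], "diagnosis": anon["diagnosis"]})
    return reidentified

print(link(anonymised_health_records, public_voter_style_records))
# -> [{'name': 'A. Sharma', 'diagnosis': 'diabetes'}, {'name': 'R. Iyer', 'diagnosis': 'hypertension'}]
```

The larger the pool of unrelated datasets held by the Government, the larger the stock of such auxiliary information, and hence the greater the re-identification risk.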

Fourth, calling for data on the ground that it may be used for "evidence-based policy making" is vague, ambiguous and susceptible to arbitrary use. Existing provisions of law allow sectoral regulators and Government agencies to collect relevant data (personal or non-personal) where required for making regulatory or policy interventions. The provision would therefore fail the Puttaswamy tests of ensuring proportionality and being subject to appropriate procedural safeguards.

In the circumstances, we believe the provision must be dropped from the Bill.

Conclusion

In this post, we have highlighted how the Bill offers limited privacy protections for individuals in various contexts, such as in the employer-employee relationship or in the processing of personal data by entities engaged in commercial research and statistical work. At the same time, certain provisions, while they may seem well intentioned, require significant fine-tuning so as to not unduly limit individual rights, such as the requirement for verification of users' age.

We show that by failing to require data fiduciaries to implement a PBD policy, the Bill merely envisages a paper requirement, while at the same time casting a significant burden on the DPA to certify such policies. Similarly, the provision on regulatory sandboxes, while it may not be a bad idea in theory, also requires much more discussion and work. To begin with, we propose modifications to limit the discretionary power available to the DPA, particularly in terms of the selection of entities to take part in the sandbox. Finally, we also explain why the provisions pertaining to data localisation and non-personal data are poorly conceptualised and disproportionate in nature.

Based on the discussions here and in our previous post on the Bill, we conclude that there are a number of areas where the Bill needs further work before it can be said to be providing an appropriate standard of data protection. Further, the introduction of various completely "new" provisions in the Bill at this stage, such as those pertaining to non-personal data, sandboxes, social media intermediaries, and consent managers is less than ideal given the significant public discussion carried out on the draft law over a two year period. In this context, the fact that the Joint Parliamentary Committee that is currently examining the Bill has called for, and is considering, public comments is a positive step.

References

Bailey and Parsheera, 2018: Rishab Bailey and Smriti Parsheera, Data Localisation in India: Questioning the Means and Ends, NIPFP Working Paper No. 242, October 2018.

Basu et al., 2019: Arindrajit Basu, Elonnai Hickok and Aditya Singh Chawla, The Localisation Gambit: Unpacking Policy Measures for Sovereign Control of Data in India, The Centre for Internet and Society, 19 March, 2019.

Bhandari et al, 2017: Vrinda Bhandari, Amba Kak, Smriti Parsheera and Faiza Rahman, An analysis of Puttaswamy: the Supreme Court's privacy verdict, LEAP Blog, September 20, 2017.

Justice K.S. Puttaswamy v. Union of India (Right to privacy case), 2017 (10) SCC 1.

Lecher, 2019: Colin Lecher, How Amazon automatically tracks and fires warehouse workers for 'productivity', The Verge, 25 April, 2019.

 

Rishab Bailey, Smriti Parsheera, and Faiza Rahman are researchers in the technology policy team at the National Institute of Public Finance and Policy. Vrinda Bhandari is a practicing advocate in Delhi. The authors would like to thank Renuka Sane and Trishee Goyal for inputs and valuable discussions.