BLOG

Beyond Borders

Welcome to ELSA Tilburg's Blog on Law and Technology!

"This blog aims on creating a forum where law students can share their opinions

on current legal topics without the delay which typically comes with more formal publications" 

Miriam Rezaeian
Vice President for Academic Activities 2020/2021

The protection of privacy applied to cloud computing services


Vera Ugolini


Cloud computing services and contracts: main functions and obligations


The term cloud computing means "a multiplicity of contractual hypotheses in which, through the relevant contract, a subject (provider) provides another subject (end user) with one or more services".1 These may consist in various services, including the storage, archiving and management of one's data and documents in a so-called cloud system, i.e., a computer system that can be accessed via a telematic connection or remotely.2


In fact, through cloud computing services the user can "use storage spaces, software or development environments without the relevant resources residing in [his] computer systems (...) but by connecting to remote servers managed by third parties".3


This technological innovation has profoundly changed the organization and management of companies, allowing significant development of IT and technological resources. Before the advent of cloud computing, the storage of data, documents, files, programs and operating systems could only take place on a hard disk or on a portable storage device, i.e., always on physical electronic systems. Cloud computing services have made it possible to access digital storage spaces electronically through any type of electronic device, without having to install large mass-storage systems in one's computer or rely on external storage media to save and store one's electronic assets.
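To make the contrast concrete, here is a minimal sketch of what storing and retrieving a document "in the cloud" looks like from the user's side, using Python and the AWS S3 API via the boto3 library purely as an illustration; the bucket name and file paths are hypothetical placeholders, not anything from the contracts discussed here:

```python
# Illustrative only: storing and retrieving a document in a cloud storage
# service (AWS S3 via boto3) instead of on a local hard disk.
# "my-firm-documents" and the file names are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")  # credentials are read from the local AWS configuration

# Upload a local document to remote storage managed by a third party...
s3.upload_file("contract.pdf", "my-firm-documents", "contracts/contract.pdf")

# ...and retrieve it later from any device with a network connection.
s3.download_file("my-firm-documents", "contracts/contract.pdf", "contract_copy.pdf")
```

From the user's perspective the document now "lives" on the provider's servers, which is precisely why the contractual and data protection questions discussed below arise.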


Data protection in cloud computing contracts


Regardless of the cloud service model or the context in which cloud computing is used, the security related to the processing of personal data in cloud computing contracts remains a problem today. The Cloud Security Alliance in 2013 identified the nine biggest threats related to the use of cloud computing services.4


The first major threat concerns the breach of one's data protection, which can take the form of intentional or unintentional disclosure of confidential information in an untrusted context, regarding financial information, personal data, health data, etc. In the event of a data breach, sensitive, protected or confidential personal data can easily be transmitted, copied, viewed, stolen or used by persons not authorized to process them. For example, a malicious hacker could use a side channel to extract private cryptographic keys from other users of the same server. Another important risk in the use of cloud services is the loss of data, covering not only the deletion of data by malicious hackers, but also deletion by careless cloud service providers or loss in human-caused accidents.


Consequently, the use of cloud services involves transferring to the service provider the responsibility for, and control of, personal data and storage systems, which were previously under the direct control of the private entity using the service. With the transfer of information, the risks also shift from the user to the cloud service provider, and the user loses control over the processing of his or her data.


Cloud computing contracts pose a significant privacy problem, since they involve processing operations on a significant amount of personal and non-personal data, including the storage, processing and transmission of such data to users and third parties. For this reason, the activities carried out in execution of cloud computing contracts are also subject to the rules on personal data enshrined in the GDPR. Therefore, cloud computing contracts too must specify the conditions for the processing of personal data, either in the main document or in a separate one, for example a data protection privacy policy or a data processing agreement.5


The application of cloud computing in healthcare and genetic research

Cloud technology has been used in various fields, including healthcare and genetic scientific research. The European Digital Agenda makes explicit reference to the need for improvement and modernization of the health sector, allowing “sustainable medical assistance and the use of Information and Communication Technologies to promote dignity and autonomy”.6


Healthcare providers have therefore begun to adopt cloud systems for the management of healthcare practices, and these have since proved indispensable thanks to their growing potential. In fact, providers of services in the healthcare sector have often limited themselves to investing resources exclusively in digital cloud infrastructures that allow remote access to applications, i.e., SaaS-type cloud systems.7


However, as discussed in the previous paragraphs, the use of cloud systems poses serious problems for the privacy of health service users, since the personal data entered into a cloud application in the health sector are usually particularly sensitive, such as those relating to the state of health. In addition, many subjects provide services in the health sector and therefore have access to the health data included in the cloud system, including suppliers, the administrative staff of hospitals and laboratories and even the patients themselves.8


Bibliography


1 L. Tafaro, Cloud computing: attualità e prospettive, in R. Bellotti (ed.), Il cloud computing nelle imprese e nella Pubblica Amministrazione. Profili giuridico-economici e applicazioni nel campo sanitario, Giuffrè Francis Lefebvre, Milano, 2019, quotation at p. 2.

2 On cloud computing see: M. D'Ambrosio, Cloud computing, in AA.VV., Manuale di diritto dell'informatica, D. Valentino (ed.), Napoli, 2016, pp. 413-421; A.R. Popoli, Il contratto di cloud computing: natura giuridica e clausole limitative di responsabilità, in giustiziacivile.com, 2015, p. 1 f.; D. Mula, Il contratto di archiviazione e gestione da remoto dei documenti informatici. Qualificazione del contratto di cloud services, p. 51.

3 A. Mantelero, Il contratto per l'erogazione alle imprese di servizi di cloud computing, in Contr. impr., 2012, quotation at p. 1216.

4 T. Samson, 9 Top Threats to Cloud Computing Security, in InfoWorld, 25 February 2013.

5 L. Valle, B. Russo, D.M. Locatello, G. Bonzagni, cited work, p. 513 f.

6 COM(2010) 245, 19 May 2010, "A Digital Agenda for Europe".

7 L. Napolitano, cited work, p. 50 f.

8 A.K. Soman, Cloud-Based Solutions for Healthcare IT, CRC Press, 2011, cited in P. Frank, T.A. Ragone, Protecting health privacy in an era of big data processing and cloud computing, in Stanford Technology Law Review, 2014, p. 601.






“The judge of the future”


Are artificially intelligent judicial systems possible and, if so, desirable?


Federica Simonelli, LL.M. Comparative European and Transnational Law, University of Trento - Italy


In the recent past, the legal world has been profoundly altered by the pervasive use of Artificial Intelligence (AI). The private sector, in particular, is increasingly trying to embrace the potential offered by AI-driven technology, with massive investments aimed at keeping up with the AI revolution. Legal tech start-ups are proliferating, offering services such as legal practice management, e-discovery, lawyer marketplaces and dispute resolution platforms to improve, automate and speed up work at law firms. In the public sector, at the same time, the AI revolution has also taken steps forward in the direction of predictive justice.


But what is predictive justice? What are the purpose and the possible applications of artificial intelligence in the field of justice?


Legal practitioners know that endemic and systemic overload in the judiciary is an old plague in many countries. Particularly in civil dockets, the judicial system often leads to delayed justice and reduced access to the courts: this is a common reality in numerous jurisdictions across the world. The unreasonable delay and unbearable costs of both civil and criminal justice have been addressed by legislators in the most diverse jurisdictions.


While alternative dispute resolution methods have gained major importance in easing court caseloads, in both common law and civil law jurisdictions, there could be more innovative solutions to the problem. What if technology could help decision-makers deliver effective, cost-efficient, timely results? Predictive justice and artificial intelligence are at the forefront of the debates on the reform of the judiciary.


The processing of judicial data by so-called artificial intelligence systems, or by methods derived from statistics, aims at improving the transparency of the functioning of justice, focusing in particular on the predictability of the application of the law and the consistency of case law. In fact, predictive justice operates under the assumption that predictable outcomes in judicial adjudication will foster certainty in decision-making.


One of the most interesting applications of AI in the judiciary is JURI SAYS, a prediction algorithm created at the University of Groningen (NL) that is able to predict the outcomes of judgments of the European Court of Human Rights with an overall accuracy of 70.7%. JURI SAYS receives as input the documents and decisions published in previous years in cases judged by the European Court of Human Rights (ECHR), and predicts future court decisions. Every month it learns from its mistakes, using a family of algorithms known as supervised learning algorithms.


JURI SAYS automatically checks new judgments published by the ECHR and compares its predictions to the actual decisions: when a prediction does not match the ECHR's decision, the algorithm learns from its mistake.
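As a rough sketch of this supervised-learning idea, not the actual JURI SAYS code (whose features, model and data handling differ and are simplified away here), a text classifier can be trained on past judgments labelled by outcome and then asked to predict new ones; the case texts and labels below are hypothetical placeholders:

```python
# A minimal sketch of outcome prediction as supervised learning, assuming a
# set of (case_text, outcome) pairs from past judgments. Illustration of the
# general technique only, not the JURI SAYS implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training data: documents of decided cases and their outcomes.
train_texts = ["... facts and arguments of case A ...",
               "... facts and arguments of case B ..."]
train_outcomes = ["violation", "no violation"]

# Turn each case text into word/bigram features and fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_outcomes)

# Predict the outcome of a pending case from its documents.
print(model.predict(["... documents of a pending case ..."]))

# "Learning from its mistakes" then amounts to periodically re-training the
# model on newly published judgments, including the ones it got wrong.
model.fit(train_texts + ["... documents of a newly decided case ..."],
          train_outcomes + ["violation"])
```

The monthly update described above corresponds to the final re-training step: each new batch of published decisions becomes fresh labelled data.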


This intriguing AI application surely raises the question: will, or even should, automatic systems be used instead of human judges? The authors themselves clearly state that this is not the idea behind their prediction algorithm. Nevertheless, in certain jurisdictions there are some minor and tentative applications of these algorithms: Estonia has introduced a so-called robo-judge to resolve small claims in court, and some U.S. courts have introduced risk-assessment tools to determine the amount of bail.


Undeniably, AI systems capable of providing support for legal advice, decision-making assistance or guidance for litigants must operate under conditions of transparency and fair processing. Failure to monitor transparency and fairness accurately may pose ethical threats to adjudication.


What if this quest for time- and cost-efficiency and certainty leads to dehumanized, standardized and biased decision-making? Is there a real danger of a robotization of justice, leaving room for a Frankenstein-like spectre? 


In light of its powerful transformative force, AI has sparked ample debate in the legal community about the principles and values that should guide its development and use. The processing of data must be carried out in compliance with fundamental human rights and the ethical principles embedded in constitutional and international human rights charters, such as the European Convention on Human Rights and the Convention for the Protection of Personal Data.


The Council of Europe (CoE) set up the European Commission for the Efficiency of Justice (CEPEJ), specifically designed to improve the efficiency and functioning of justice in the Member States and to further the implementation of the instruments adopted by the Council of Europe to this end.


In December 2018, at its 31st plenary meeting, the CEPEJ adopted the European Ethical Charter on the use of Artificial Intelligence in judicial systems and their environment, prescribing five core principles to be observed: respect for fundamental rights; non-discrimination; quality and security; transparency, impartiality and fairness; and the "under user control" approach. The CEPEJ monitors progress made in the field of justice, describing particular AI applications and addressing issues of judicial timeframes, quality of justice, evaluation of judicial systems and cooperation among countries.


One fascinating idea would be to build a fair and unbiased decision-making system able to assist judges in their work, or even to replace them in court. The idea of courts operating faster and more transparently is appealing, but most of the technology and algorithms used in legal tech were developed in other domains, and the consequences of using them in the judiciary may be much greater.

The heated debate on artificial intelligence applied to the legal industry is more than a temporary phenomenon: the future will tell how far the implementation of AI in the judiciary will go. Until then, the human being remains the only one capable of bearing the responsibility of making decisions.





Bibliography

Medvedeva, M., Vols, M. & Wieling, M., Using machine learning to predict decisions of the European Court of Human Rights. Artif Intell Law 28, 237–266 (2020) <https://doi.org/10.1007/s10506-019-09255-y>


Medvedeva M., Xiao Xu, Wieling M., Vols M., JURI SAYS: An Automatic Judgement Prediction System for the European Court of Human Rights,
Legal Knowledge and Information Systems 334, 277-280 (2020) <https://ebooks.iospress.nl/volumearticle/56196>


Medvedeva, M., Wieling, M., & Vols, M., The Danger of Reverse-Engineering of Automated Judicial Decision-Making Systems, <https://arxiv.org/pdf/2012.10301.pdf>


Sitography 


JURI SAYS: <https://www.jurisays.com/>

Justice of the future: predictive justice and artificial intelligence <https://www.coe.int/en/web/cepej/justice-of-the-future-predictive-justice-and-artificial-intelligence>

CEPEJ, European Ethical Charter on the use of Artificial Intelligence in judicial systems and their environment 

<https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c> 


FinTech: drive the change


Tommaso Raucci


The pandemic has affected almost every aspect of our daily lives: the restrictive measures rolled out to counter Covid-19 have changed the way we communicate with each other and the way we work. Technology has made it possible to keep up with our previous habits. Indeed, the pandemic has accelerated the digitization process already underway in many economic and productive sectors; concerns arise, however, from the lack of regulation accompanying this process.


FinTech is the prime example of regulation lagging behind in such matters. The word is a union of the two terms "financial" and "technology", and it is really difficult to draw up an accurate definition. Perhaps, as is most widely accepted, it could be defined as "technologically enabled innovation in financial services that could result in new business models, applications, processes or products with an associated material effect on financial markets and institutions and the provision of financial services".1 This is a broad definition that encompasses a large set of financial services underpinned by technology: peer-to-peer lending, crowdfunding, instant payments and even the more fascinating cryptocurrencies.


Nowadays, in the midst of this pandemic, the use of financial and banking apps is stepping up, especially among Generations Z and Y, and the shift is thought to be here to stay:2 every day we make payments through our smartphones, check our bank accounts through online banking apps, take out mortgages through digital platforms, or already hold in our digital wallets some bitcoin, the most popular cryptocurrency.


This move to digital in payments and online banking has so far attracted close attention from institutional investors to the evolution of the financial sector: those actors have already increased their investment in renewing part of their business models.


The growth of digital payments and the declining use of cash have thus prompted central banks around the globe to experiment with digital versions of cash, so-called central bank digital currencies (CBDCs).

Nevertheless, a technological divide still affects the oldest generations, a concern that has become especially acute during this pandemic. The broader adoption of technology in previously unusual ways has required, first of all, a change of mindset, even before an operational one.


Further concerns stem from the entry of Big Tech, the already dominant worldwide actors in the digital sector, into the provision of financial services: by exploiting their leverage over the data flows coming from their broad user bases, they pose a serious threat to the guarantees offered by existing financial intermediaries, the incumbents, especially in emerging and developing markets.3


The potential of this pivotal shift could bring many advantages as well as many concerns: FinTech could increase financial inclusion and create new and more suitable business models in the financial sector, but it could also raise significant concerns over customer privacy and fair competition.


Regulators are moving to tackle these issues.


As noted by the EU authorities, "in the face of this and the challenges brought by big techs, Europe needs a strategic vision to ensure that consumers and companies fully reap the benefits of an integrated market, offering secure, fast, convenient, accessible and affordable payment services".4


The European Commission, in an effort to reduce the regulatory gaps between EU Member States and to lay the ground for the long-awaited Capital Markets Union, published its FinTech Action Plan in March 2018: this document represents a milestone of FinTech development and the strongest indication that technological innovation and disruption will be among the main drivers of the EU's future agenda.


The measures set out aimed to survey current national approaches to FinTech licensing, to support the development of common standards and interoperable solutions for FinTech, to map the innovation facilitators set up by individual Member States and, finally, to assess the fitness of the existing regulatory framework for the use of disruptive technologies.5


Since then, many projects have been launched across Europe in almost every country: regulatory sandboxes and innovation hubs have been established, allowing better development of new FinTech services through innovative projects and facilitating the dialogue between those entities and the competent authorities.

Swiss regulators, among the most prominent on the matter, have already established a list of ten principles6 as the main guidance for proper FinTech regulation: customer protection, FINMA (the Swiss Financial Market Supervisory Authority) authorisation of digital platforms, principle-based regulation and an open-door principle will be the crucial drivers of this ambitious attempt.


The pandemic has played an important role in the uptake of FinTech apps, as noted above, and in their regulation too.


The EU initiated a public consultation entitled "Consultation on a new Digital Finance Strategy" in the early stage of the measures adopted to lessen the effects of the pandemic. This "consultation was designed to gather stakeholders' views on policies to support digital finance".7


Many prominent financial industry actors have expressed support for such a move, but, in their view, it will only be possible with appropriate and clear regulation in place. Recently, the institutional giants Square, Fidelity, Paradigm and Coinbase joined forces and created a cryptocurrency council8 dubbed "The Crypto Council for Innovation".


Change always brings benefits along, but adapting well takes time and resilience. That is the case with the adoption of FinTech, and regulation will be a significant driver of its broader acceptance.






Bibliography

1 Financial Stability Board, FinTech credit: Market structure, business models and financial stability implications, May 2017

2 Fintech is here to stay, says JPMorgan chief Jamie Dimon (finextra.com)

3 Bigtech Firms in Finance in Emerging Market and Developing Economies: Market developments and potential financial stability implications (fsb.org)

4 Consultation on a retail payments strategy for the EU | European Commission (europa.eu)

5 The EU’s FinTech Action Plan: charting the way forward | Fintech | Deloitte Lithuania

6 SFTI-Positionspapier FinTech-Regulierung final (swissfintechinnovations.ch)

7 Consultation on a new digital finance strategy - summary of responses (europa.eu)

8 JPMorgan CEO Calls for More Regulatory Clarity for Cryptocurrencies | Blockchain News






“The Future of Legal Tech”


Conor Parry


The two primary challenges facing the adoption of technology by the world's largest law firms are security and procurement. From the security perspective, the evolution of modern legal technology centres on moving applications from on-premises deployments to a Software-as-a-Service (SaaS) model controlled by the vendor. This requires law firms to send confidential information to the vendor's managed cloud, thus risking the security of sensitive client information. This is, in part, the reason for the slow adoption of legal technology by these firms: the reward does not outweigh the risk, even when they can get client approval to use these new SaaS-oriented solutions run by application vendors.


There is now a new frontier being explored by the legal technology sector that eliminates this risk, enabling a vendor to maintain a common code base for both the SaaS offering and an on-premises deployment, thus creating a win-win model for application vendors and law firms alike. This approach enables them to harness automation and utilise developments in Cloud Native Computing Foundation technologies that are being adopted around the world by major corporations and software developers. This new frontier is based on three technologies: Docker containerized applications, standardized Kubernetes clusters to run the applications, and Helm charts to manage the automated deployment of the containerized legal software.


Put simply, containerization allows applications to be "written once and run anywhere"1. It involves packaging up application code and all of its dependencies, thus isolating the software from its deployment environment and ensuring that it works uniformly and reliably from one computing environment to another2. A law firm can then deploy containerized software to its on-premises infrastructure (aka internal cloud) or to a virtual private cloud (e.g., Microsoft Azure, AWS), ensuring that all sensitive and confidential client information remains within the law firm's virtual private network behind its firewall, which in turn keeps confidential information secure. In essence, it brings the apps to the law firm's data, instead of the other way around.
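As a minimal illustration of the idea, the short Python sketch below starts a containerized application with the Docker SDK for Python; the image name is a hypothetical placeholder, and a real deployment of the kind described here would be orchestrated by Kubernetes and Helm rather than run by hand:

```python
# Minimal illustration of running a containerized application with the Docker
# SDK for Python. The image name "vendor/legal-app:1.0" is a hypothetical
# placeholder; the point is that the same packaged image runs identically on a
# laptop, on-premises servers, or a virtual private cloud.
import docker

client = docker.from_env()  # connects to the locally running Docker daemon

# Pull the vendor's packaged image and run it inside the firm's own
# infrastructure, so client data never has to leave the firm's network.
container = client.containers.run(
    "vendor/legal-app:1.0",
    detach=True,               # run in the background
    ports={"8080/tcp": 8080},  # expose the app on the host
)
print(container.status)
```

Because the image carries its own dependencies, the firm's environment only needs a container runtime, which is what makes the common code base for SaaS and on-premises deployments possible.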


Reynen Court, the company at which I work, enables this. Reynen Court is a platform that can be deployed and installed by law firms themselves; applications from legal technology companies are then deployed through the platform, secured and managed by the firms through automation.


In addition to the technical hesitancy in the legal technology sector, there has long been the issue of procurement: more specifically, how law firms find out about new technologies that solve specific legal challenges, and then figure out which technology best solves the problem their lawyers are facing. Reynen Court also has an answer to this in its Solution Store. This is a curated catalogue of legal technology that aims to become the 'one-stop-shop' for law firms to discover, source and test different solutions. As the Covid pandemic turned the working world on its head in March 2020, many law firms had to adapt overnight to a work-from-home culture. Lawyers are now seeing the value in harnessing technology to aid them when working remotely. Paul Greenwood, the Chief Information Officer at Clifford Chance, stated that "as a result of the Covid pandemic we have seen an explosion in the rates of digitisation in the legal industry and much greater adoption of new technology"3.


The Reynen Court Solution Store works in a similar way to the app store you would find on your phone. It is full of tools for lawyers that are easy for law firms to find, purchase and install – saving considerable time and money4. Imagine a contract hundreds of pages long that you needed translated from German into English: you can find an app for that on the Reynen Court Store (SDL Machine Translation), and you can spin up a 'Test Drive' to see if your lawyers like using it. Or imagine a highly sensitive transactional document that you needed anonymized quickly: there is also an app for that on the Reynen Court Store (this one is called NAIX), which can likewise be tested by lawyers before buying. What once took law firms days, weeks and even months can now take minutes – securely and efficiently. With more than 200 third-party application vendors actively engaged with Reynen Court, 100 of which are under contract with their Solutions published to the Store or working towards publication, the breadth of the catalogue covers a wide range of product categories5.


The ability to Test Drive Solutions found in the Solution Store enables Reynen Court platform users to get immediate hands-on experience of new applications. This one-click Test Drive capability uses sample data, allowing for speedier app evaluation6 and enabling a user to compare Solutions and decide which best meets their requirements. It also enables innovation teams to conduct this comparison without requiring any support from their internal technology departments7. This is of great benefit to the legal technology companies who support this feature on the Reynen Court Store, as it allows a law firm end user to get hands-on experience of their software. Amir Reshef, the CEO of dealcloser, describes how "nothing helps us close sales more than getting users to take control of the mouse and experience the power of our next generation transaction management solution"8. This eliminates the risk of a law firm spending several months negotiating with a legal technology company, only to get the software in front of its lawyers and discover that it doesn't meet their needs.


The law firms backing Reynen Court range from Clifford Chance (USA & UK) to Slaughter & May (UK), and Borden Ladner Gervais (Canada) to Nishimura & Asahi (Japan). This demonstrates the global enthusiasm for legal tech and the opportunity to foster a closer relationship between law and technology. In acting as the facilitator between legal technology companies and law firms, Reynen Court is striving to be on the frontline of the next generation of legal technology and to enable the industry to more easily and securely evolve as a whole.





Bibliography


1 IBM Cloud Education, ‘Containerization’ (IBM Cloud Learn Hub, 15th May 2019) <https://www.ibm.com/cloud/learn/containerization> accessed 1st April 2021

2 Docker, ‘What is a Container?’ (App Containerization | Docker) <https://www.docker.com/resources/what-container> accessed 1st April 2021

3 Sara Merken, ‘Legal tech platforms draw Big Law investments from Orrick, Latham, Clifford Chance’ (Reuters Legal, 29th October 2020) <https://today.westlaw.com/Document/I92cfbb301a2d11eb98a7ba49cf8c648b/View/FullText.html> accessed 3rd April 2021

4 Joris Rietbroek, ‘The top lawyers believe in Reynen Court and Lupl’ (Advocatie, 21st October 2020) <https://www.advocatie.nl/innovatie-en-tech/de-topadvocatuur-gelooft-in-reynen-court-en-lupl/> accessed 4th April 2021

5 Reynen Court, ‘Reynen Court launches one-click test drives’ (Reynen Court Press Release, 14th December 2020) <https://mailchi.mp/d53ff7b466e8/reynen-court-releases-version-4889550> accessed 4th April 2021

6 Sam Skolnik, ‘Legal Tech ‘App Store’ Reynen Court Releases Version 2.0’ (Bloomberg Law, 20th August 2020) <https://news.bloomberglaw.com/business-and-practice/legal-tech-app-store-reynen-court-releases-version-2-0> accessed 6th April 2021

7 Reynen Court, ‘Reynen Court launches one-click test drives’ (Reynen Court Press Release, 14th December 2020) <https://mailchi.mp/d53ff7b466e8/reynen-court-releases-version-4889550> accessed 5th April 2021

8 ibid.





WHERE IS MY MIND?

(And who has a right to it?)

Laura Kata Szegi

During the last few decades, as neuroscience has been rapidly advancing, the idea of mind reading through technological tools has popped up in pop culture, one of the most well-known examples being the 2002 movie Minority Report. We have all seen, or at least heard about, the successful Netflix show Black Mirror as well. In the episode titled Crocodile, the concept of an investigative technique that visualizes the memories of individuals for another party to see is brought up, and even though the episode itself has been critiqued for not exploring this angle more deeply1, the question obviously arises: Who has a right to our mind? Are there any circumstances, for example a criminal investigation, that allow for our thoughts and memories to be monitored or retrieved by the authorities?


It is important to acknowledge that brain scanning technologies are not quite "there yet". There is no way to draw and print out a perfect image of what someone is dreaming about - although advances are being made, and in some studies researchers were able to at least vaguely identify whether the participant was, for example, dreaming about a car.2 It is still not possible to put a suspect into a futuristic machine and read his memories of the incident in question in pursuit of the truth. However, as neuroscientist Jack Gallant has pointed out, it is worth starting to think about these issues so that we have a framework to work in when the technology finally arrives.3 As is well known, the procedures for implementing laws - or fitting a technology into an existing legal framework - work rather slowly; the question of the legal nature of our thoughts from a criminal procedural perspective therefore deserves to be discussed.


One of the most well-known legal phrases, mostly thanks to crime dramas, is the right to remain silent. Strongly connected to this is the privilege against self-incrimination, or nemo tenetur. Article 14(3)(g) of the ICCPR, to which 173 countries are party, specifically states that everyone is entitled "not to be compelled to testify against himself or to confess guilt"4. Obviously, if the suspect does not want to incriminate himself, he has to have a right to remain silent, right? But what about your mind? Can you silence your thoughts? Does the prosecution (of the average Western jurisdiction) have a right to look at images that were taken of your brain, if those images may contain information that you do not want to share?


In its 1996 decision in Saunders v UK, the ECtHR stated that the privilege against self-incrimination "does not extend to the use in criminal proceedings of material which may be obtained from the accused through the use of compulsory powers but which has an existence independent of the will of the suspect such as, inter alia, documents acquired pursuant to a warrant, breath, blood and urine samples and bodily tissue for the purpose of DNA testing"5. Simply put, the court decided that if a specific piece of evidence, such as breath or DNA, exists independently of the will of the suspect, then it can be obtained and used against him at trial. How we define our thoughts from an evidentiary perspective is therefore the main question to answer. For the purposes of this article, I will not examine this from a philosophical perspective, although the question of free will and consciousness begs itself when trying to determine whether our thoughts exist independently of us. Rather, it is worth looking into some comparisons. The aforementioned decision also touches upon documents acquired through a warrant. If we regard the images or files of the brain, or however the future technology will present itself, as such documents, then the information in them, provided it was obtained through a warrant, is legitimate and usable evidence according to the standards set by the ECtHR. However, a warrant can only be obtained if the authorities are certain that such documents exist, and it is safe to say that whether a certain piece of information exists in a person's brain is hardly ever certain.


Another similar, and already existing, tool is the polygraph, commonly known as the lie detector test. These tests bear major similarities to future brain scanning techniques in that both intend to gain information about something that exists "inside the head" of the person undergoing them. It is widely known, however, that polygraphs are not foolproof, and they are therefore generally not admissible in court in Western societies. Furthermore, and connecting this to the concept of nemo tenetur, in most European countries polygraphs are seen as tools which violate the right to remain silent; courts have therefore rejected their use, and policies have been enacted against them.6 It is hard to imagine, then, that brain scanning technologies with far greater potential and accuracy would be seen by courts as the way forward.


When weighing risks and gains, rights and wrongs, it is important to keep in mind that although science is an amazing tool which can be used to help the criminal justice systems of our societies, at the end of the day the decision on guilt or innocence is in the hands of the judge or jury, not the neuroscientist.7 Therefore, instead of developing new legal approaches for the new waves of science to fit into, it might be more desirable to apply our existing frameworks and approaches, which already contain doctrines of fundamental principles, to the new technologies.8 If that is the way forward, then based on the stand Europe took on the polygraph, we might never see the day these brain scan techniques enter a court of law.




Bibliography


1 Wilkinson A, "Black Mirror's 'Crocodile' Plumbs Our Memories and Drags out What Lurks in Them" (Vox, December 29, 2017) <https://www.vox.com/culture/2017/12/29/16808458/black-mirror-crocodile-recap-season-4-review> accessed April 14, 2021

2 Miller G, "Scientists Decode Dreams With Brain Scans" (Wired, June 4, 2017) <https://www.wired.com/2013/04/dream-decoder/> accessed April 14, 2021

3 Miller G, "Scientists Can't Read Your Mind With Brain Scans (Yet)" (Wired) <https://www.wired.com/2014/04/brain-scan-mind-reading/> accessed April 14, 2021

4 UN General Assembly, International Covenant on Civil and Political Rights, 16 December 1966, United Nations, art. 14(3)(g)

5 Saunders v. The United Kingdom, 43/1994/490/672, Council of Europe: European Court of Human Rights, 17 December 1996, para. 69

6 Meijer, Ewout H; van Koppen, Peter J (2017). "Chapter 3. Lie Detectors and the Law: The Use of the Polygraph in Europe". In Canter, David; Žukauskiene, Rita (eds.), Psychology and Law: Bridging the Gap. Routledge. ISBN 9781351907873

7 Kraft, C. J., & Giordano, J. (2017). Integrating Brain Science and Law: Neuroscientific Evidence and Legal Perspectives on Protecting Individual Liberties. Frontiers in Neuroscience, 11, 621. <https://doi.org/10.3389/fnins.2017.00621>

8 Ligthart, S., Douglas, T., Bublitz, C. et al., Forensic Brain-Reading and Mental Privacy in European Human Rights Law: Foundations and Challenges. Neuroethics (2020). <https://doi.org/10.1007/s12152-020-09438-4>






JOMO: JOY OF MISSING OUT -

THE MANTRA EVERY PRIVACY-LOVING INDIVIDUAL SHOULD ADOPT WHILE USING CLUBHOUSE!


Satwik Singh



In this blog post I analyse the privacy policy1 of the Clubhouse app through the prism of the General Data Protection Regulation (GDPR)2 and the e-Privacy Directive (Directive).3 After completing a rigorous course in privacy and data protection, it has become a hobby of mine to examine whether the privacy and cookie policies of different websites are in consonance with the principles of the GDPR and other privacy regulations and directives. The initial aim behind this exercise was to test my own understanding of the subject more than anything else; soon, however, I discovered a disturbing trend: most websites' privacy and cookie policies comply with neither the spirit nor the letter of the GDPR and the Directive. This means either that, even some years after the GDPR entered into force, real understanding of its provisions and how to comply with them is still at a nascent stage, or, the more disturbing conclusion, that companies do not care enough about the personal data they process and are more interested in paying mere lip service to the legislation while hopefully limiting their liability.

Naturally, the chatter about the huge privacy and cybersecurity risks surrounding the new social media sensation Clubhouse caught my attention. For the uninitiated, Clubhouse is the newest kid on the block among social media and networking apps and has seen a meteoric rise in popularity and usage.

The combination of exclusive access and a unique audio-only format drove the app very high up the popularity list. I had no great expectations when I started reading Clubhouse's Privacy Policy; however, I have no hesitation in admitting that I was totally taken aback by the extent of its non-compliance with the GDPR's provisions. The following sections identify the concerning parts of the privacy policy, along with my explanation of why each part is contrary to the provisions of the GDPR.


1. “By visiting Clubhouse's website(s) and all other products, services and applications made available by Clubhouse from time to time (collectively, the "Services"), you acknowledge that you accept the practices and policies outlined in this Privacy Policy. By using the Services, you are consenting to have your personal data transferred to and processed in the United States.”


This is part of the first paragraph of the Privacy Policy of the Clubhouse app, and it immediately raises a big red flag when seen in the context of data processing under the GDPR and the e-Privacy Directive. Calling it just a big red flag is admittedly an understatement: here the app has managed to breach not just a single article of the GDPR but an entire chapter. Chapter V of the GDPR deals with the various situations in which data can be processed or transferred outside the EU, in this case to the United States. Long story short, Articles 45, 46, 47 and 49 provide the framework for how data can be transferred outside EU territory. The basic premise behind this chapter is that data can be transferred outside the EU if and only if it is ascertained that the third country ensures an adequate level of protection. The striking down of the EU-US Privacy Shield in Schrems II was based on the adequacy principle discussed in this chapter. It is very interesting, albeit quite disturbing, to note that Clubhouse manages to completely ignore everything in Chapter V of the GDPR in the very first paragraph of its privacy policy.


2. “Individuals from the European Union ("EU") may only use our Services after providing your freely given, informed consent for Clubhouse to collect, transfer, store, and share your Personal Data, as that term is defined in the EU's General Data Protection Regulation.”

In the very next paragraph, the Privacy Policy of Clubhouse seems to completely ignore the provisions of Article 5(1)(b) and (c) of the GDPR, which provide that the specific and explicit purposes for which data are processed must be stated and that, in line with the principle of data minimisation, only the amount of data required for those purposes should be processed. From the text it is amply clear that the wording has been deliberately drafted vaguely: it is not clear for what purposes consent to the processing of data is being collected. Nor is the consent freely given: the wording plainly says "individuals from the EU may only use services…". A plethora of judgments exists on this issue, in which courts have very strictly interpreted Article 7(4) of the GDPR to hold that consent is freely given only when the provision of a service is not conditional on consent being provided. The reasoning is that if a service is denied to users who do not consent to the processing of their personal data, such consent cannot be construed as "valid consent". The Privacy Policy clearly fails to obtain free consent for the processing of personal data.


3. “Certain information that is collected automatically, such as device ID, IP address and phone number, and browsing information that is associated with a user will be treated as Personal Information.”


Prima facie, the wording accepts in no uncertain terms that users are subject to automatic collection and processing of personal data, and as such it clearly attracts the application of Article 22(1) of the GDPR.4 The Privacy Policy fails to acknowledge the data subject's right to choose not to be subjected to this automatic processing of data. In addition to the above, Clubhouse also accepts that data may be vulnerable to external attacks, yet accepts no liability for the same.

While Clubhouse offers a genuinely unique service and may be really beneficial to new businesses by helping them build a brand name, it is a nightmare in terms of privacy risks. Bearing those risks in mind, it may be advisable to resist the fear of missing out and the bandwagon effect, and instead embrace the joy of missing out when it comes to using Clubhouse, an app which prima facie has no qualms about failing to protect personal data or breaching GDPR principles!




Bibliography


1 The Privacy Policy of the Clubhouse app can be accessed at <https://clubhouse.io/privacy/> accessed 20 March 2021

2 The General Data Protection Regulation can be accessed at <https://gdpr-info.eu/> accessed 20 March 2021

3 The e-Privacy Directive can be accessed at <https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32002L0058> accessed 20 March 2021

4 Article 22(1) of the GDPR reads: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling... or similarly significantly affects him or her".