Chile
Impact | Positive
Probability | High
CIVICUS Rating | Obstructed
At the end of September, the Senate debated three bills that aim to respond to the novel challenges posed by research in neuroscience and neurotechnology for the integrity, mental inviolability and privacy of individuals. In other words, they seek to anticipate the possible impacts of new methods and instruments that allow a direct connection between technical devices and the nervous system, and to regulate the development of brain-computer interfaces and other neurotechnologies in the face of possible commercial uses on healthy people or for military or police purposes.
In this framework, the senators considered various studies in neuroscience, including those of the Morningside Group, which in 2017 proposed that interfaces should preserve four principles: 1) the safeguarding of privacy and personal autonomy, 2) the protection of identity and agency (understood in its sociological sense as the ability to choose our actions with free will), 3) the regulation of “artificial augmentation” of brain capacities (with possible effects in terms of producing greater inequities), and 4) the control of possible biases in algorithms or automated decision-making processes. There are three bills in question.
The first, approved and promulgated on September 29, is an amendment to Article 19 of the Constitution, which states that “scientific and technological development must be at the service of people and must be carried out with respect for their lives and their physical and psychological integrity”. It also provides for the regulation of the requirements, conditions and restrictions for the use of such technologies on people, with special attention to the “safeguarding of brain activity and the information derived from it”.
The second bill proposes a regulation of the content of the constitutional reform. Among other points, it proposes the recognition of five “neuro-rights” to be safeguarded: 1) the right to mental privacy (people’s brain data), 2) the right to identity and personal autonomy, 3) the right to free will and self-determination, 4) the right to equal access to cognitive enhancement (to avoid inequalities) and 5) the right to protection from the bias of algorithms or automated decision-making processes.
This initiative was approved in general in December of last year and is being discussed within the Committee on Challenges of the Future, Science, Technology and Innovation, whose members jointly participated in its drafting. Now in the final stage of review, it is not expected to face difficulties on its way to Senate approval.
The third is a bill introduced at the beginning of September at the request of the same Committee, which seeks to regulate digital platforms and includes provisions related to the fifth right mentioned above. Its Title II, in particular, makes explicit the platforms’ obligations of neutrality and of transparency regarding content and the use of algorithms and artificial intelligence.
As has happened, and continues to happen, in other areas, the evolution and development of these kinds of norms, which to date have few precedents, could chart a path for other states in the defense of human rights, both those already established and those of newer generations.