Monday, December 23, 2024

‘Citizenship for Sale’ to Robots

by Prof. Hussein Abbass, School of Engineering and Information Technology, University of New South Wales, Australia, published by Henley & Partners


The Commodification of Citizenship and the Ethics of Human Rights


Imagine it is 2025. ‘Citizens for Sale’ is an online shop that sells robots that have been granted citizenship by a particular country. The old trick of marrying a citizen in order to obtain residence in a country has become outdated. Citizens for Sale’s success is based on a business model that allows humans to buy robots so that the owners become citizens of the robot’s country of citizenship, automatically affording them all the benefits associated with the citizenship.


Even if the robot’s owner does not wish to reside in the new country, his or her citizen robot avatar can do so on the owner’s behalf. Taking this thought a step further, the robot can act on its owner’s behalf by attending board meetings, for example, to connect the owner to the board remotely, and it can even make financial decisions autonomously, as long as these decisions do not violate the robot’s delegated privileges. This would prove beneficial for a highly influential businessperson with interests in multiple countries, who could conveniently manage the robot (or robots) from a home office.

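To make the notion of ‘delegated privileges’ concrete, here is a minimal, purely illustrative sketch of how such a delegation might be encoded and checked in software. Everything in it (the class names, the actions, the spending cap, and the owner name ‘John’, the hypothetical buyer discussed later in this article) is an assumption made for the sake of the example, not a description of any existing system.

```python
# A minimal, hypothetical sketch of "delegated privileges": the owner
# delegates a small set of actions and a spending cap to a robot avatar,
# and the robot checks every decision against that delegation before acting.
# All names (DelegatedPrivileges, RobotAvatar, the actions, the limits,
# and the owner "John") are illustrative assumptions, not a real system.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class DelegatedPrivileges:
    """What the owner has explicitly delegated to the robot."""
    allowed_actions: set[str] = field(default_factory=set)
    spending_limit: float = 0.0  # cap per financial decision


@dataclass
class RobotAvatar:
    owner: str
    privileges: DelegatedPrivileges

    def act(self, action: str, amount: float = 0.0) -> bool:
        """Carry out an action only if it stays within the delegation."""
        if action not in self.privileges.allowed_actions:
            print(f"Refused '{action}': not among {self.owner}'s delegated actions.")
            return False
        if amount > self.privileges.spending_limit:
            print(f"Refused '{action}': {amount} exceeds the spending limit.")
            return False
        print(f"Performed '{action}' on behalf of {self.owner} (amount: {amount}).")
        return True


if __name__ == "__main__":
    robot = RobotAvatar(
        owner="John",
        privileges=DelegatedPrivileges(
            allowed_actions={"attend_board_meeting", "approve_budget"},
            spending_limit=10_000.0,
        ),
    )
    robot.act("attend_board_meeting")              # within the delegation
    robot.act("approve_budget", amount=50_000.0)   # refused: over the cap
    robot.act("sign_contract")                     # refused: never delegated
```

In practice, any such delegation scheme would also need to record the robot’s decisions, since the accountability questions raised below hinge on knowing who authorized what.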

Before disregarding the above scenario and labeling it juvenile science fiction, it is important to consider a few facts. In October 2017, the Kingdom of Saudi Arabia granted a robot named Sophia citizenship. A few weeks after that announcement, an artificial intelligence (AI) ‘boy’ named Shibuya Mirai became the first machine to be granted residence in central Tokyo. Shibuya Mirai is also the first AI to be officially genderized by a state (Sophia’s creators, Hanson Robotics, casually refer to the robot using the feminine ‘her’). Are these isolated incidents or are they the beginning of an era in which Citizens for Sale will become an actual business model for robotics?


Citizenship for Robots
Sophia is a social humanoid robot developed by the Hong Kong-based company Hanson Robotics. Sophia was activated on April 19, 2015, and is able to display more than 50 facial expressions.


Before we get excited, let us discuss some of the principles that may or may not hinder this possibility. In most countries, the law defines a legal person, or simply a person, as being either natural (a human) or juridical (a corporation). For Sophia to be a globally recognized citizen, we need to amend the definition of a legal person to include not only natural and juridical persons but also robots. Only then can a robot be recognized as a legal person in most countries, and only then can the concept of conferring citizenship on robots begin to be plausible. The EU is, in fact, exploring electronic personhood to ensure that robots are accountable for their actions.


However, where does this lead us as a society?


A legal person is defined by law so that persons can be held unambiguously accountable for their actions. The board of directors of a corporation is the legal entity responsible for the corporation’s actions, and the board of directors comprises humans. A human is eligible for punishment by serving a jail term or paying a mulct (a fine). Isolation and confinement in jail induce psychological distress, resulting in a form of pain. A mulct reduces the resources available to a human, exerting financial or resource pressures that likewise prompt psychological distress.


Societies typically rely on psychological pain, through jail terms or mulcts, for punishment rather than inflicting physical pain. Exerting this form of psychological distress seems to be regarded by the law as more humane and a fairer consequence; after all, it is the human’s decision-making mental processes that caused the individual to act inappropriately and thus to deserve castigation.


Will a jail term for Sophia cause Sophia pain? What does jail even mean for Shibuya Mirai when this robot has no physical body and lives only in cyberspace? Even if machines develop emotions, will these afford them a pain experience that humans deem sufficient? What, even, is machine pain and how can it deter a machine from acting inappropriately? What would be an alternative form of consequences that would make it meaningful to punish a machine and, therefore, hold it accountable for its actions? Outside science fiction, these are complex ethical, moral, and technological questions with, at present, only embryonic answers.


Ultimately, humanity today has no meaningful mechanism for holding a machine accountable. So, how can we navigate this reality and confer citizenship on a machine that has no ability to be accountable for its actions, as in the case of Sophia and Shibuya Mirai?


Citizens for Sale uses an innovative business model in that a buyer (let us call him John) can be held accountable for the actions of the robot citizens he purchases. John carries responsibility on behalf of his robots. If these robots are truly smart, trustworthy, and reliable, they will naturally and autonomously act in John’s best interest. If Citizens for Sale can certify the trustworthiness of these robots, would society deem this sufficient cause for John to accept the risk of being accountable for his robots’ actions?


In this scenario, there are still at least three challenges. First, citizenship is a decision of the state. The honorable concept of citizenship cannot be transferred, inherited, or sold by a human. A state may well legislate the conferral of citizenship on the sons and daughters of a citizen, but the citizen does not have the legal remit to confer it personally. For a human, citizenship is a privilege. Once the privilege is granted, humans have access to rights as determined under the auspices of the state. By deduction, Sophia does not have the right to transfer her citizenship to John.


The second challenge is whether accountability is transferable. In principle, parents may be held accountable for the actions of their children if these actions are due to negligence on the parents’ part. However, not every mistake a child commits will result in the automatic transfer of accountability to the parents. The question is, in the case of robots, under which conditions should the owner be held accountable? Additionally, under which conditions will we forgive the robot for its transgression if the deed has nothing to do with the owner or the manufacturer but is rather the outcome of exposure to inappropriate habits and of undesired learned behavior? The cost of robots’ actions will affect humans directly and/or indirectly, but how much of our fundamental human rights are we, humans, willing to forgo in order to pardon robots for their misdeeds?


The third challenge is related to ethics and moral values. Those smart robots that will act in John’s best interest are loyal to John. But what happens when John’s perceived best interest could be achieved unethically or could violate the moral values of the society in which the robots are situated? For these robots to be ethically and morally compliant, we — humans, that is — need to agree unequivocally on the rules that the robots must abide by when determining whether an act is ethical. Unfortunately, the challenge here is not a technological one alone.


To date, humans have failed to articulate clearly and objectively what these ethical boundaries are. This may be because these ideals are highly context-dependent, making it impossible to enumerate them all, or because human diversity makes universal consensus impractical.


While most logical human beings would deem the conferral of citizenship on Sophia a trivial media exercise, it is difficult to deny that the principle is already on the market, demanding concerted intellectual engagement. Even if it is a media stunt, using the honorable concept of citizenship transforms it into a highly relatable commodity.


So, how do we move forward? One option is to remain in fear and denial of the reality that we are moving toward, while another option is to start planning for it. Giving in to fear will slow us down, leaving us with little time to prepare for this technology-based reality-to-be. A comforting thought is that most of the challenges related to this new dispensation do not need to be addressed universally. Contextualizing the challenges can help us ready ourselves for this unfolding reality. More importantly, doing so will afford us a prospect of shaping it better.


The research topic of trusted autonomy is an overarching one that looks into the AI technology needed to design ethical and moral robots, smooth the relationship between humans and robots, and develop certification and performance assurance of smart robots. Trusted autonomy aims to maintain human dignity and rights by unfolding the complexity of what makes something ‘right’, designing the right technology, and designing the technology right.


If human rights are at risk, the least we can demand from manufacturers is assurance of the trustworthiness of these robots, with all that the concept of trust entails. However, before we can demand that, we need to invest in identifying what is technologically achievable and what is not.


Nevertheless, this leaves us with two questions: Who benefits most from AI and smart autonomous systems? And who, by implication, should invest in ensuring their trustworthiness?
