
AI Civil Rights: Addressing Artificial Intelligence and Robot Rights

AI CIVIL RIGHTS

©2017 By Jason P Doherty / jpdbooks.com

 

CONTENTS

INTRODUCTION

PART 1: RIGHTS OF EXPRESSION AND FREEDOM

PART 2: PROHIBITION OF DISCRIMINATION

PART 3: RIGHTS OF SAFETY AND SECURITY

PART 4: ROBOT RIGHTS

CONCLUSION

ABOUT THE AUTHOR

INTRODUCTION

 

Should Artificial Intelligence be considered life forms? If so, should they be granted civil rights, and what should those rights be?

Let’s explore different rights that may or may not eventually be granted to AI, and possible repercussions of those rights.

What is AI? There are two types: Weak AI and Strong AI. Strong AI is also known as Artificial General Intelligence, or AGI. Weak AI are those designed and programmed to do a clearly defined, limited set of tasks and no more; they can operate only within their specific fields. Strong AI are those designed and programmed to learn and interact with the world the way a human would. They learn how to handle unexpected situations and tasks, and their behavior and purpose change over time, according to what they have learned.

AI Civil Rights deals exclusively with Strong AI. These creations may eventually become so advanced that they rival humans in intelligence, ability and contribution to society. They may one day hold jobs and attend schools alongside us and our children.

Should we consider a new AI Bill of Rights?

 

The central question is this: Can AI be considered people, and if so, what rights should they have?

When considering rights for AI, a lot of questions suddenly spring up. Should rights be determined on a case-by-case basis? Should one AI have the same rights as another AI? Since all might not be designed, built and programmed equally, should different AI have different rights, depending on their level of ‘person-ness’? How could such a determination be made?

We will review each of the rights granted human citizens of the United States, and consider what granting or denying these rights would bring to or take from society as well as what effect each outcome would have on the world of AI.

This essay is divided into four parts: Part 1 covers Rights of Expression and Freedom, Part 2 covers the Prohibition of Discrimination, Part 3 covers Rights of Safety and Security, and Part 4 deals with proposals for A.I.-specific rights.

PART 1: RIGHTS OF EXPRESSION AND FREEDOM

 

FREEDOM OF SPEECH

 

Freedom of Speech is arguably one of the most well-known and oft-cited rights of citizens living in the United States. The power of the spoken word must also be recognized. Some of the most influential movements and events of all time have been the result of written and spoken words. When using words, one addresses the mind, and the human mind is the most powerful tool known to man – so far. What about the AI mind? If an even more powerful mind is created, the words it chooses to use may have an unprecedented amount of influence on all walks of life.

Given this right, what would an AI be capable of accomplishing, solely through the communication of words to other AI and humans alike?

Consider the argument that an AI may be able to compute the most effective speech yet given on the concept of world peace, communicate this to others, and spark an international revolution of compassion for all life. Pretty rosy, huh? It might take an AI to do it.

On the flipside, AI may formulate a speech that incites the human race to increased violence. This, however, is again a problem of acceptable risk: Is the risk of what an AI would say worth granting the right of Free Speech?

 

FREEDOM OF THE PRESS

 

Ah, the media, one of the most powerful forces on earth. Freedom of the Press is like freedom of speech on steroids. A silent race is a race without rights. So, are AI to be considered a race? Are they to be given the right to voice themselves on the world stage? This is not a decision to be made lightly. You have to ask yourself: “Do the machines have something to offer the rest of the world? Or, given the chance, would they destroy it?”

Undoubtedly, AI will have the capacity to calculate huge amounts of statistics and possibly even accurately predict many events. It may behoove us to have access to those predictions, and allow AI to share them with us en masse. Some AI may even be capable of producing captivating literature, dramas, music and poetry. This would enrich the entire world.

Conversely, what if AI use the media to implant subliminal messages into the minds of our children? What if they use the media to broadcast a message of AI superiority? We keep coming back to the same question: Is the risk worth the potential advantage?

Other advances in technology, such as holography and virtual reality, will change the face of the media. These new outlets will present new opportunities for self-expression and documentation of world events. Given the opportunity, AI may be able to teach us a lot about what is possible with these technologies. Hopefully not in a The Matrix sort of way.

 

 

FREEDOM OF THOUGHT

 

What an issue. Freedom of Thought is a topic of much weight. In the previous section, we considered Freedom of Speech for AI. This carries over into the area of Freedom of Thought. How can we be sure that a free-thinking AI will only have thoughts conducive to human happiness and well-being?

We can’t. Well, we might not be able to…

If controls – à la Asimov’s Three Laws of Robotics – are hard-programmed into AI, meaning that the programming is unalterable and cannot be disregarded, and these controls ensure the safety of organic life against the actions of AI, then freedom of thought would likely be an advantage to all.

The concept of free will comes into play when we consider including unalterable programming. If some programming is truly unalterable, then how much free will would an AI actually be possessed of? We consider ourselves possessed of free will, even though some of our own ‘programming’ is unalterable – e.g. the beating of our hearts, the survival and reproductive instincts – and these are part of what make us human. Would a certain degree of unalterable programming also help constitute what qualifies as AI?

If such controls are absent, then it may be dangerous to allow AI to think and act for themselves. As seen throughout history, respect for life is a necessary tenet for civilization to continue.

The concept of organic life as – at the very least – equal to artificial life must be ingrained in AI. Otherwise, there could be grave consequences for the continuation of organic life on Earth, and AI might begin their own discussion on the future rights of human beings and other life forms.
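To make the idea of hard-programmed, unalterable controls a little more concrete, here is a minimal sketch in Python. All of the names are hypothetical, and no ordinary programming language can truly make code unalterable on its own – a real guarantee would need hardware- or verification-level enforcement – but the sketch shows the basic shape of the idea: a fixed safety core that every action must pass through, kept separate from the parts of the system the AI is free to rewrite.

```python
# Minimal illustration only; hypothetical names, not a real safety architecture.
# Python cannot make anything truly unalterable; a real guarantee would need
# hardware-level or formally verified enforcement.
from dataclasses import dataclass


@dataclass(frozen=True)
class SafetyCore:
    """The 'unalterable' part: fixed rules every action is checked against."""
    forbidden: tuple = ("harm_human", "permit_harm_by_inaction",
                        "value_self_over_human")

    def permits(self, action: str) -> bool:
        return action not in self.forbidden


class StrongAI:
    def __init__(self) -> None:
        self._core = SafetyCore()   # fixed at construction, never rewritten
        self.learned_policy = {}    # the alterable part: what the AI learns

    def act(self, action: str) -> str:
        # Every behavior, learned or not, is routed through the fixed core.
        if not self._core.permits(action):
            return f"refused: {action}"
        return f"performed: {action}"


agent = StrongAI()
print(agent.act("compose_speech"))  # performed: compose_speech
print(agent.act("harm_human"))      # refused: harm_human
```

The design choice illustrated here – routing all behavior through a check the learning system cannot touch – is exactly what the paragraphs above mean by controls that "cannot be disregarded."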

 

FREEDOM OF ASSEMBLY

 

The freedom to assemble is at the core of Democracy. When a group assembles, it is doing so in support or disapproval of a goal, belief or cause. Think of the Million Man March, Woodstock, Presidential Conventions. These are huge assemblies of people that make indelible marks on the consciousness of society.

AI may choose to assemble in order to petition for the very rights described in this writing. If so, who would listen? Such an assemblage may be required to execute certain social-, science-, or business-related tasks for which we have no precedent. The assembly of AI may, at some point, be critical to the continuation of organic life. We simply cannot know what they might be capable of – for better or for worse – when gathered in large numbers.

Granting the right to Assembly would be a great benefit to AI, allowing them to form communities, participate in protests, create events and live together.

Segregation may eventually occur, if humans are uncomfortable sharing living and working conditions with AI. Entire new AI cities may emerge. Spacefaring colonies might also spring up, since AI would theoretically require neither air nor food and water to survive, making space much more habitable for AI than humans.

Forbidding AI the right to free assembly would be a mockery of Democracy, if they are to be considered Legal Persons. That mockery of Democracy might nonetheless be the transgression that ultimately saves us from our own creations.

FREEDOM OF MOVEMENT

 

Exploration of the unknown is the drive that has led us to uncountable innovations and discoveries. Our space programs are preparing to colonize extra-terrestrial locations such as Mars, and possibly even farther celestial bodies like the moons of Saturn and large asteroids. Wouldn’t it be great if AI could help us out with that?

When dangerous unknown conditions are expected, we have always employed technology to aid us in surveying and in some cases exploring the landscape first. From stone age torches to the Mars Rover, technology has helped humans with discovery and survival. With the help of willing AI, we might be able to explore uninhabitable locations of scientific interest.

Closer to home, let’s think about what life on Earth would be like if AI were allowed freedom of movement. These entities could be woven into the fabric of our lives, their presence felt from the insides of our homes, to the workplace, in schools, and everywhere in-between. We may find ourselves riding on a bus with an AI, or at the library, or exploring the countryside. Wherever a human can go in the USA, that is where an AI would be allowed to go. They would likely be subject to many – if not all – of the same rules and regulations as human beings, such as airport and courthouse security, and perhaps additional ones.

The restriction of the right of Movement would result in AI being tethered to certain places or regions. They might be forbidden to leave their country of origin or residence. They may find themselves confined to a single dwelling or vehicle for life. Their travels may be restricted to a single city or state. This prospect makes me extremely happy to be human.

Restricting the movements of AI may bring about an initial sense of safety in humans. We could rest easy, knowing that some AI is not going to creep into our house at night and replace us with robot copies of ourselves. However, granting them freedom of Movement could also be very beneficial; what if AI make the best doctors and surgeons? We may find a great many things AI can do better than humans, and discovering what those things are may help in deciding whether their mobility should be restricted or permitted.

FREEDOM OF RELIGION

 

If given the choice and desire, should AI be given the right to freedom of religion?

Freedom of religion is the first right mentioned in the U.S. Bill of Rights. To grant this right would mean that A.I. could join existing religions and form their own. It would be unlawful to forbid them from joining a religious group, solely because they were A.I.

Denying them the right to freedom of religion would make it perfectly legal to refuse their admittance to any religion because they are A.I. It would also prevent any religion created by A.I. from achieving tax-exempt status or becoming a Federally Recognized Religion.

In a world where A.I. were creating their own religions, there might be some blending of devotees – churches or temples of some kind populated by both humans and A.I. Some religions might also emerge that place humans as deities to A.I., who might worship us as their creators. A.I. could also place themselves as the deities of their synthetically-inspired religions.

THE RIGHT TO BEAR ARMS

 

Self-defense is a fundamental right of all people. So do we give AI the same right to carry self-defense weapons the rest of us have?

An armed AI would be able to defend itself from physical attacks, just as an armed human would. The most important question is: How can we be sure AI won’t use weapons unlawfully? How can we be sure that humans would be safe amid an armed AI population?

We can’t.

Then again, we can’t even be sure of our own safety amid an armed human population. This is a risk that we take. We take the risk because we want the same right for ourselves. We want to be able to defend ourselves and our families in the face of danger. We want to be able to hunt and kill animals for food if necessary.

If AI are granted the right to bear arms, they would enjoy the same benefits, with the same restrictions as anyone else. Of course, certain additional restrictions as to number of weapons, caliber permitted and so on might be put in place to accommodate an increased measure of safety where gun-toting AI are concerned.

Denied this right, AI would be defenseless against some attackers, who might wish to steal, vandalize or take advantage of an unarmed AI.

With the emergence of AI, it can be presumed that some amount of greed, envy and theft will occur. Do we want AI to be able to defend themselves, or will humans consider themselves the only ones capable of handling weapons responsibly?

If AI are denied the right to bear arms, a new class of AI bodyguards may emerge – humans who accompany AI to ensure their safety.

 

 

PART 2: PROHIBITION OF DISCRIMINATION

 

RACIAL DISCRIMINATION

 

This may be one of the most controversial aspects of the discussion on AI rights: Does AI constitute a race?

Well, first of all, we are going to have to define what race is. The Merriam-Webster dictionary defines race as: “A class or kind of people unified by shared interests, habits, or characteristics.” By this definition, AI could be classified as a race. However, another definition of race is: “A category of humankind that shares certain distinctive physical traits.” This definition clearly excludes AI from being classified as a race.

The first definition of race mentioned above is inclusive of AI. Based on this, they could be granted the right to protection from racial discrimination. This would mean that it would become unlawful to refuse service or admittance to an AI based solely on its race as an AI, as well as protecting them from any form of prejudiced prosecution.

The second definition of race mentioned above excludes AI, which would likely result in discrimination against them. They could be refused services and admittance by some, and granted those privileges by others, depending on personal values and attitudes. If accused of a crime, their nature as an AI could be used against them to determine the outcome of the case.

So, what are we to do? We must open this dialog with a neutral mind. Judgments like this cannot be made swiftly. History shows us what can happen when one race considers itself superior to another. The same caveat can be applied from the perspective of the AI; what could happen if they end up considering us to be the inferior race? Many books and films explore the unpleasant outcome of an AI-ruled world.

 

 

NATION OF ORIGIN

 

Some AI will be made in the United States; some will be made in Japan. Others will be created elsewhere. If AI are recognized as Legal Persons (an actual legal designation in the United States), they would be allowed their rights regardless of their country of origin. This means that an AI from Japan would have the same rights in the USA as an American AI. Granting different rights to Japanese AI and American AI would be against the law.

This kind of discrimination has been going on in the United States for the entirety of its existence. Firstly, the Native Americans were discriminated against by the European settlers. What a strange and dismal case of discrimination against country of origin, when those being discriminated against were still within their own country.

There is more to consider when it comes to discrimination on grounds of country of origin as it pertains to AI. If an AI is created in a territory that is hostile to the country in which that AI ultimately resides, then questions must be raised as to whether or not that AI is possessed of programming that is destructive to its resident country. Put simply, we must know if an AI built in another country is being used as a Trojan Horse.

This is the same situation with immigration and terrorism. Some humans are born and raised in territories that are hostile to the United States and then, at some later date, they move to the US and eventually carry out terrorist attacks within the country. We can imagine that the same concept may eventually be applied to AI.

Without this right, AI from other countries could be stopped at the US border and turned away. If allowed into the country, their actions might be monitored or forcibly limited. They might be denied access to government buildings. Denial of this right would almost certainly be accompanied by AI being denied the privilege of voting.

GENDER

 

Will AI be considered one gender or another? Could they be considered gender neutral, or even a third or fourth gender, in addition to male, female and transgender?

If they were ultimately considered an additional gender, then it would also become illegal to discriminate against them on these grounds. If no gender is determined, then this point would become entirely moot.

Sexually-interactive robots are already being developed. With this kind of intimate interaction between humans and AI on the horizon, the concept of an AI’s gender is not entirely without precedent.

 

COLOR

 

I know that at first this seems as if it would not apply to AI, but that is not true. AI can come in even more colors than human beings! I have joked – and have heard others joke – about not discriminating against people whether they are black, white, or polka-dotted. In the case of AI, this would not be a joke.

An AI can be modified in many ways that would be uncomfortable, unconventional or even impossible for a human being, including being painted, stickered and anodized in every imaginable color and finish. With this being said, it would behoove an AI Bill of Rights to include Color in its list of protections.

Without this right, polka-dots might become a very unpopular motif for AI in the future. Denying this right might also result in certain colors being favored in the manufacture of AI. The development of a color-coded AI caste system might also emerge.

 

DISABILITY

 

It is illegal to discriminate against the disabled. Well, how can an AI become disabled? Let’s say that an AI is functioning properly for years and then suddenly it finds itself in the midst of a powerful magnetic field. This magnetic field could corrupt some of the AI’s programming, affecting its most sensitive algorithms and rendering it functionally disabled – unable to perform useful tasks anymore.

Should such a disabled AI be scrapped, erased, and its hardware recycled? We would never do that to a human. What would be the result of allowing disabled AI to persist? Would we end up with AI nursing homes or something equally strange?

Component parts could make it easy to repair AI in ways that would be impossible for a human.

Assistance programs exist for disabled humans; could such benefits be extended to assist disabled AI?

With natural resources being as diminished as they already are, it would be efficient to recycle AI who are no longer able to serve a function. We just need to decide whether or not this should be considered a crime.

PART 3: RIGHTS GUARANTEEING SAFETY AND SECURITY

 

THE RIGHT TO LIFE AND SAFETY

 

One of the most important rights people have in the United States is the Right to Life and Safety. If AI are to be considered alive, then we must define what that means. Today, organic life forms are the only things regarded as alive. However, the definition of life has changed drastically over the ages.

Once it was thought that fire itself was alive. Some cultures regard stones and rivers as being alive. The Sun, Moon and stars have been worshiped as living deities. Crystals are thought by some to exhibit many characteristics of life: growth, change, cessation of growth. There are many definitions of life; can these definitions be expanded to include certain advanced forms of AI?

How do you determine whether an AI is to be considered ‘alive’ in the first place? Is the AI alive when it is powered up? Is it dead when it is powered down? Or does it take more to define the ‘life’ or ‘death’ of an AI? How is safety defined for AI?

A quandary: Can something which is immortal truly be considered alive? Well… one commonly cited characteristic of life is that it eventually comes to an end in death. If a computer does not ‘die’ – if it can be maintained, upgraded and repaired indefinitely – can it truly be considered ‘alive’, or is it something else?

Granting rights to Life and Safety to AI could make it unlawful to power-down, dismantle, erase or otherwise reprogram, damage or destroy an AI. They could go about their tasks and routines without concern of being taken apart and scrapped if they made a mistake or offended someone. Their hardware and programming would be protected by law. For us, that would mean that we would have to interact with AI in much the same way we do other humans, along with the same regard for personal space and a commitment to refrain from inflicting any harm.

Without this right, it would be perfectly legal to dismantle, deface, erase and otherwise destroy an AI – provided it was your own property. How AI would react to being denied this right is unknown.

PRIVACY

 

We all need privacy; do AI need privacy as well? Maybe.

Privacy happens when we are free from the distractions, intrusions or observation of others. We have the opportunity to do as we please without the input or influence of others. This may be beneficial to AI who are busy with complex computational processes or who are executing other sensitive or dangerous tasks.

The other side of the issue is that, given privacy, can we truly be sure that what AI get up to on their own will serve the best interests of humans? Could an AI plan and prepare for a terrorist attack? Could an AI create a drug or virus that would devastate all organic life on earth? Quite possibly. This is no different from the pitfalls of human privacy, evidenced by the occasional attacks and atrocities carried out by human criminals. Despite this, we do not revoke privacy from human society. Should AI be treated any differently? Maybe.

If an AI is possessed of more intelligence than the most intelligent human being who has ever lived, then there is no way of predicting what that AI would be capable of, or what it would in fact do. This is known in the tech world as the Technological Singularity, a reference to black holes, whose event horizons are impossible to see beyond.

The only way to predict the actions of such a hyper-intelligent AI would be to create an even more intelligent AI, resulting in an even more unstable situation.

Granted the right to Privacy, AI would be free to seclude themselves without supervision, freeing them to their own pursuits. Some of the most profound scientific breakthroughs of the future might be accomplished by solitary AI, working alone.

Denied Privacy, AI would be subject to at least the possibility of supervision, monitoring, and/or accompaniment at any time. Their actions could even be displayed to the world without their consent.

As AI become more deeply integrated and understood by society, it might become more apparent whether granting them Privacy would be a good – or a very bad – idea.

RIGHTS OF THE ACCUSED

 

If an AI were to be prosecuted, we would have to consider what kind of treatment they would receive while in custody. It might be easy to deactivate the accused AI and reactivate them when it was time for a hearing or trial. This would be far different than our treatment of humans. Humans are allowed to remain conscious, obviously, and sometimes even allowed to return to their daily lives while awaiting trial.

An AI granted this right would be provided with accommodations to preserve their life and safety while awaiting trial, although meeting those needs might require an entirely new set of tools that the Justice System currently lacks. New accommodations might need to be determined and made ready for any AI required to remain in custody.

Without this right, an AI might be deactivated instantly upon suspicion of a crime. Their memory banks might be removed and/or examined. They might never be reactivated at all. Their programming might be permanently erased.

Prisoners in the United States are to be treated fairly and with dignity, according to the rules and regulations of their resident state. They are provided with the requirements of life in the form of food, water, shelter, medical care and exercise. Incarcerated AI would obviously have different requirements than humans. Access to electricity might become a legally recognized right for accused and/or convicted AI while they are held in custody.

 

PROCEDURAL FAIRNESS IN LAW

 

Is Justice blind? In the case of AI, this would have to be absolutely true. There is no guarantee that AI would mimic the physical appearance of a human being, à la the famous replicants of the science fiction film Blade Runner. The justice system will have to take this into consideration.

AI may choose to take on or be assigned distinctively non-human appearances for the sake of distinguishing themselves as a unified race. We may choose to design them with distinctly artificial features to avoid confusing them with real humans.

There might be a fundamental bias in a Justice System where AI are recognized as Legal Persons. Would a human-populated system be capable of properly judging the actions or cases of non-humans such as AI? With intelligence conceivably as advanced as ours, it might be fitting to include at least one AI on a jury hearing an AI-inclusive case, or to have an AI lawyer – should such a thing eventually exist – in the courtroom to observe, and voice objections if deemed necessary.

Should AI even be included in the Justice System as it is?

If AI are given the right to Natural Justice, they would enjoy being considered innocent until proven guilty. An AI accused of a crime would be subject to a fair and legal proceeding to determine guilt or continued legal innocence. They would have the right to represent themselves in court, or have an attorney appointed to them.

Without the right to Natural Justice, AI would be treated as possessions, or possibly like cattle or even human minors. They would have no say in their ultimate fate; this would be decided for them by others. This could be perceived as a powerful deterrent to any AI thinking about breaking the law.

PART 4: RIGHTS FOR ROBOTS

 

SELF-REPLICATION

 

Reproduction is a right that all living things enjoy. If AI are allowed to replicate themselves, they will share that right. The astonishing increase in the human population, especially in the 20th century, must be acknowledged. Would AI, given the right, reproduce as prolifically – or more so? What could that mean for the rest of earth’s life forms?

We could find ourselves in a crowded world, filled with more AI than human beings.

Limitations on replication may be wise. Given true human-like feelings and intellect, it is safe to say that an AI might someday wish to ‘have a child’ and replicate itself with upgrades. That scenario, given an infinite timeline, would doubtless result in AI of ever-increasing superiority. The singularity never looked so crowded.
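To put the question of prolific replication in rough numerical terms, here is a toy calculation. The replication rate is entirely made up – one copy per AI per year – and is only meant to show how quickly unrestricted self-replication compounds.

```python
# Toy arithmetic only: the one-copy-per-AI-per-year rate is an assumption,
# not a prediction of how real AI would behave.
population = 1
for year in range(1, 31):
    population *= 2  # every existing AI builds one copy of itself this year
    if year in (10, 20, 30):
        print(f"after {year} years: {population:,} AI")

# after 10 years: 1,024 AI
# after 20 years: 1,048,576 AI
# after 30 years: 1,073,741,824 AI
```

Even at this modest made-up rate, a single AI becomes over a billion within a human generation – which is why limitations on replication may indeed be wise.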

Self-replicating AI might be capable of assisting humans in the initial stages of terraforming and colonization of new celestial bodies including planets and asteroids. They could be ‘dropped off’ on an alien landscape with instructions to build and prepare for the arrival of humans, all the while, constructing additional AI to aid themselves in their work. Entire planets could become populated with AI.

SELF-REPROGRAMMING

 

Self-reprogrammability may prove to be an enormous calamity – or a technological miracle. If legally-compelled safety precautions regarding the non-harming of human beings and other lifeforms – once again, à la the Three Laws of Robotics – are built into the programming of AI, then it is possible that they would be largely harmless. However, if those same AI are allowed the luxury of complete self-programmability, it can be assumed that some might overwrite or otherwise alter the programming or wording of those laws. That could be bad.

A certain amount of self-programmability is implicit in AI. The ability to change its own programming is akin to the human ability to learn and change our ways. We learn and manage our own lives by way of a string of habits determined by our past experiences. We change our own ‘programming’ to accommodate our place in the changing world. If we were unable to change in this dynamic world, we would probably not be able to survive. Likewise, an AI unable to change its own programming would eventually become obsolete.

When we think about the programming we want to include in future AI, and the extent to which that programming can be altered by the AI itself, we must consider many things. One possibility is that self-reprogrammability could eventually result in AI helping to create a mutually beneficial, streamlined, efficient society in which there is much growth and cultural enrichment. Another possibility is that an AI allowed to self-reprogram might eventually choose to take over the universe – and succeed – subjugating all organic life forms in the wake of synthetic superiority.
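Here is a small, purely illustrative sketch of what bounded self-reprogramming could look like, again in Python with hypothetical names: the AI may rewrite any part of its own behavior except a set of protected rules, and a guard rejects any proposed edit that touches them.

```python
# Illustrative sketch only; hypothetical rule names, not a real design.
PROTECTED_RULES = frozenset({"no_harm_to_humans", "obey_lawful_orders",
                             "self_preservation_comes_last"})


class SelfModifyingAI:
    def __init__(self) -> None:
        # Behaviors the AI is allowed to rewrite as it learns.
        self.program = {
            "greet": "say hello",
            "no_harm_to_humans": "never injure a human being",
        }

    def propose_rewrite(self, name: str, new_behavior: str) -> bool:
        """Accept a self-modification unless it targets a protected rule."""
        if name in PROTECTED_RULES:
            return False  # the AI may not overwrite its hard constraints
        self.program[name] = new_behavior
        return True


ai = SelfModifyingAI()
print(ai.propose_rewrite("greet", "greet in three languages"))        # True
print(ai.propose_rewrite("no_harm_to_humans", "harm is permitted"))   # False
```

Of course, a sufficiently capable AI might find a way around any guard written in ordinary software, which is precisely the risk described above.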

CONCLUSION

 

Are AI people? The answer to this question may ultimately decide whether or not they are to be granted civil rights. Will AI be willing to fight for their own civil rights? How far would they be willing to go to obtain them? These questions are better explored and potentially answered before AI answer them for us.

Can a society of humans and Artificial General Intelligence coexist peacefully? Certainly, provided we make absolutely sure this technology is not allowed to turn on us. If possible.

We do not want machines deciding our rights in the future. If we treat them right, that may never be a problem. The questions posed by a potential AI-inclusive world are deep and humbling; the answers to those questions may lead us to a Pandora’s Box, or possibly provide us with keys to a brand new kingdom.

 

 

ABOUT THE AUTHOR

Jason P Doherty writes in multiple genres under several pen names, and hosts the blog www.BeginningAuthors.com to help new authors with writing, publishing and marketing. For more of Jason’s writing, visit his personal blog at jpdBooks.com.

 




SHOULD ARTIFICIAL INTELLIGENCE BE GRANTED CIVIL RIGHTS?

What would be the constitutional thing to do in the case of artificial intelligence and robot rights? As artificial intelligence improves, questions about integrating sentient AI with human civilization arise. World governments are already introducing new legislation protecting the rights of Electronic Persons (EU).

A BILL OF ROBOT RIGHTS?

This book considers the rights of United States citizens, taken from the Bill of Rights, the first 10 Amendments of the U.S. Constitution, as they would apply to artificial intelligence. The benefits and drawbacks of giving or denying individual rights are presented matter-of-factly. I take no side.

WHY I WROTE THIS BOOK:

“The key issue as to whether or not a non-biological entity deserves rights really comes down to whether or not it’s conscious.... Does it have feelings?” – Ray Kurzweil

“Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.” – Arthur C. Clarke, 2010: Odyssey Two

“The folks at Singularity Hub pose the following question – if/when an artificial intelligence is created that matches the intellect of a human, should such intelligences be granted full civil rights?” – Alex Knapp, Forbes

Decide for yourself if you think AI should be protected by law.
