Contemporary and Emerging Issues: Perspectives from Students at the University of Tulsa

By Mikayla Pavac

 

Copyright © 2016 by Mikayla Pavac

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law.


Preface

In the late fall of 2015 I was asked to offer an honors course on current and emerging issues.  The expectation was that enrolled students would be completing their honors curriculum and would represent a diversity of undergraduate majors.

When the class was informed that the semester would consist of independent work on informative contributions resulting in a published edited collection, the students were a bit intimidated.  Within a week they had selected topics, formed pairs of co-authors, developed a schedule of due dates, agreed on general style guidelines, and chosen a designated editor.  They were acutely aware that a semester is a short time period, and they began their work in earnest.  The challenge was not the development of original research, but the mastery of the very broad and deep existing literature and resources associated with their chosen topics – often outside the comfort zone of their primary field of study. The result is this collection of papers.

For the students, the benefit of developing these contributions included increased research skills, enhanced writing skills, and more experience with collaboration.  For the reader, these papers offer a glimpse of what a group of high-performing undergraduates perceive as important issues that will affect their lives and the culture they expect to live in. Equally interesting, and perhaps impressive, is the way they think and write about such topics.

The student selection of topics reveals the concerns of individual students, and the list was a consensus choice with little discussion. The list of selected topics is interesting, as is the list of potential topics not selected. In my policy analysis course I have often received papers focused on LGBT rights and on marijuana law.  Has the passage of marriage equality and the state-level evolution of marijuana laws created the impression that these issues are resolved?  Are these topics simply not perceived as the most pressing? The reader is free to speculate.

Obviously, neither the group of 15 authors nor the professor speaks for the University of Tulsa, and the University of Tulsa does not restrict their right to speak.

Steve B. Steib

Kendall Professor of Economics

The University of Tulsa


Reproductive and Religious Rights: A Discussion of Individual Rights in the Modern United States

By Brennen VanderVeen, Senior, Economics

Giselle Willis Cuauhtle, Senior, Chinese Studies

 

Introduction

In the United States, certain ideas transcend political divisions. One such notion is the idea of rights. The Declaration of Independence references “unalienable Rights,” including “Life, Liberty, and the pursuit of Happiness.” It goes further, stating that “Governments are instituted” in order “to secure these Rights.” Appeals to rights have not dwindled in the nearly quarter millennium since those words were written. Yet despite the seemingly intrinsic nature of rights, it is still difficult to pinpoint their origins and appropriate interpretations.

Indeed, while Americans would agree that the U.S. Constitution imbues them with rights, they would also differ in their interpretations of those rights. Some would emphasize certain rights over others, or argue over the practical implications of vaguely-worded rights such as the pursuit of happiness. Different political parties are especially likely to have conflicting perceptions of our rights. Politicians refer to various forms of action and inaction as “rights” when they want to convey that what they are doing is good for people. For example, Democratic candidates in the current presidential race refer to universal health care as a right, whereas Republican candidates are reluctant to call it a right, even if they support aspects of a widespread health care plan. On the other hand, Republicans emphasize the right to bear arms more often than the Democrats. The word “right” thus acts as a form of moral and legal immunity, because once something is called a right, anyone who argues against it appears to be limiting human potential. Thus, when someone opposes a supposed right, they are more likely to argue that the concept in question is not a right than they are to argue that the “right” should just be ignored.

Ultimately, debates about rights center on one’s own definition of rights. To aid in forming that definition, this paper aims first to detail a common framework for understanding the origins of rights, and second, to discuss the rise of reproductive and religious rights in the United States. The framework that we will use is one of “positive” and “negative” rights, as it happens to align well with the United States’ political parties. This conception of rights as negative and positive is credited to Czech-French jurist Karel Vasak, who was the first Secretary-General of the International Institute of Human Rights in Strasbourg.1 In an attempt to demonstrate the evolution of rights, he described “three generations”: the first carried “negative” rights, the second brought “positive” rights, and the third is now dealing with “rights of solidarity.”2

According to Vasak, negative rights are generally civil and political, while positive rights are economic and social.3 His interpretations are based on the adoptions of the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social, and Cultural Rights in 1966. Negative rights prohibit action against an individual. For example, the right to freedom of speech prohibits institutional repercussions for vocalized criticisms of the government. Positive rights, on the other hand, demand certain institutional actions, such as a fair wage. Finally, “solidarity rights” can only be accomplished with the input of individuals and institutions and encompass ideals like peace and preservation of the environment.

Yet Vasak’s generational approach and subsequent distinction between positive and negative rights have been criticized as overly simplistic in two ways. The first is chronological. Although his ideas reached international legal prominence,4 many scholars argued against his premises. Asbjørn Eide, for example, implied that “economic, social, and cultural matters,” or positive rights, were already covered before the 1948 Universal Declaration of Human Rights (UDHR) and even before the establishment of the International Covenant on Economic, Social, and Cultural Rights. Therefore, the Declaration’s biggest contribution to international human rights law was actually its codification of those rights so that they entered the political and civil spectrum.5 In this sense, positive rights would have come before negative rights. Patrick Macklem added that the generational metaphor is problematic because positive rights did not eventually replace negative rights the way human generations replace each other.6 The framework, however, still gives an idea of when certain kinds of rights peaked or gained traction in relation to each other, even if conceptualizations of rights do not quite “die.”

Second, Vasak’s dichotomization was criticized on analytical grounds. What Vasak called civil and political rights, Henry Shue referred to as “security rights,” and what Vasak called economic and social rights, Shue called “subsistence rights.” Shue challenged the assertion that the former were negative rights and that the latter were positive rights, and in so doing, also found that the distinction between the two was neither easy to see nor important.7 And according to Patrick Macklem, “all rights—whether civil, political, social, economic—give rise to both positive and negative state obligations.”8 For example, although the right to property is usually thought of as a negative right because it requires the state to refrain from taking property, it is also a positive right because the state has to create a plethora of institutions to ensure this protection: everything from zoning laws to police forces.9 In other words, some state action is required, even for a negative right, to prevent the state from taking action later on.

As is explored later in this chapter, these criticisms of the positive and negative framework are important in understanding current American political tensions. Those who do use the positive and negative framework might contend that positive/subsistence rights are inferior to negative/security rights because they came later in Vasak’s timeline, thus demonstrating that they are not as vital to human dignity and survival. Positive rights are also perceived to require action beyond the capabilities of the individual, which to some means that they should not be rights at all.

Nevertheless, we decided to explore what we will continue to refer to as negative and positive rights, precisely because Vasak’s ideas did come to be so mainstream as to influence the legal thinking of politicians around the world. When American politicians argue about access to healthcare, for example, they are operating under the positive/negative framework whether they realize it or not. All of them would have to agree that universal healthcare would be considered a positive right. To some, usually Republicans, that means it is a subsistence right that calls for undue government intervention despite not being as urgent as a security right. Others, generally Democrats, would argue, like Shue, that positive rights are as urgent as negative ones and that the distinction is negligible anyway. We look at the history behind these perceptions.

 

Classical Liberal Understanding

The classical liberal notion of rights is primarily negative in nature. It usually posits the existence of natural rights that take the form of prohibitions on actions against others.

John Locke is perhaps the most famous and one of the earliest classical liberal thinkers. He theorized that the natural state of humans is “a State of perfect Freedom to order their Actions, and dispose of their Possessions, and Persons as they think fit…without asking leave, or depending upon the Will of any other Man.”10

Locke’s Second Treatise starts with that assumption about the natural state of humans. By “natural,” Locke means “before any government or legal system has been established.” Locke further notes that the state of nature is a state of equality.11 By this he does not mean that all humans are necessarily equal in skill or material possession. Rather, it is simply an existence “without Subordination or Subjection.”12

Still, Locke’s state of nature is not a dominion of boundless liberty. While a person does “have uncontrollable [sic] Liberty to dispose of his Person or Possessions,” “he has not Liberty to destroy himself.”13 This is in contrast to some contemporary liberals and libertarians who sometimes frame assisted suicide as the “right to die.”14 Further, for Locke, Reason acts as a basis for law. He states that by Reason, one knows that “no one ought to harm another in his Life, Health, Liberty, or Possessions.”15

Locke’s A Letter Concerning Toleration provides some further elaboration, though it does not discuss rights explicitly as the Second Treatise does. In it, he states that governments exist in order to promote “Civil Interests,” which include “Life, Liberty, Health, and Indolency of Body; and the Possession of outward things.”16 Locke defends religious toleration on several grounds, but his first explanation for why the government should not be involved in the caring of its citizens’ souls is relevant in a discussion of rights. He states that because of divine rule, the government simply has no business involving itself in religion. Religion, or more specifically “the Care of Souls,” is simply “not committed to the Civil Magistrate.”17 He later elaborates that “it is easie [sic] to understand to what end the Legislative Power ought to be directed… and that is the Temporal Good and outward Prosperity of the Society; which is the sole Reason of Mens [sic] entering into Society, and the only thing they seek and aim at in it. And it is also evident what Liberty remains to Men in reference to their eternal salvation, and that is, that every one should do what he in his Conscience is persuaded [sic] to be acceptable to the Almighty…”18

In such an explanation for religious toleration, Locke does not mention rights per se. However, it is evident that he sees certain aspects of one’s life, religion in particular, as being entirely outside the bounds of acceptable legislation. In doing so, he envisions limits on state power beyond which the government cannot legitimately have any influence. This is effectively the nature of negative rights, even if Locke does not use that terminology. People, according to Locke’s description, basically have the “right” to determine how to reach salvation. It is a negative right in that it requires the government to refrain from infringing upon it.

A conception of negative rights can also be seen in the founding documents of the United States. The unalienable rights of the Declaration of Independence (Life, Liberty, and the pursuit of Happiness) are all negative. According to the Declaration, they preexist the government, their security being the reason for governments being instituted at all.

The Declaration is not the only document from America’s founding that demonstrates an understanding of rights as negative. The Bill of Rights lists protections that are all negative. The ten amendments make no mention of the government providing any sort of good. The closest is the “right to a speedy and public trial,” but that right has more to do with protecting citizens from harassment by the government. It is part of a right of general non-interference rather than a right to the provision of something.

The other amendments make it clear that the rights they describe preexist their enumeration. The First Amendment does not grant a new right to freedom of speech or of religion; it states that “Congress shall make no law.” The Second Amendment is similar, stating that “the right of the people to keep and bear Arms, shall not be infringed.” The other amendments are constructed similarly. All rights within the first ten amendments to the Constitution relate to the general principle of the government leaving people alone. The government is not expected to provide anything except for protection against interference from the government itself. In doing so, the Bill of Rights serves as the essence of negative rights.

 

Progressive Understanding

Around the beginning of the 20th century, an economic shift helped the rise of positive rights. Previously, industrialization had engendered a “producerist” worldview, wherein economic policies favored producers because they were thought to drive the market.19 This producerist preference was fueled in part by a distaste for consumers. European liberalism, which saw consuming as a threat to lucrative production, and American Puritanism, which equated immoderate consumption with religious failure, were both influential at the time.20 Only when faced with continued “poverty amid plenty” did economists start to rethink the position of the consumer.21 They maintained that the key to well-being was material abundance, but instead of relying on producers to create the materials, they began to think of consumers as driving the market.

Initially, they were careful to distinguish between a “consumer-oriented economy” and a “consumption-oriented economy.” In the former, businesses only produce what consumers need—no more and no less. In the latter, consumerism drives high profit margins for businesses. But after World War II, the distinction basically disappeared; the experience of the Great Depression and high postwar productivity convinced liberals that regulated capitalism could result in both plenty of product choices for consumers and high profit margins for businesses.22 Now, consumers were lauded as the providers of economic growth.

This emphasis on the consumer gained further traction when President Franklin D. Roosevelt defined “four essential human freedoms” in 1941: freedom of speech, freedom of religion, freedom from fear, and freedom from want.23 The concept of freedom from want was controversial because classical liberals assumed that it would allow consumers to benefit from the hard work of producers without having to do anything themselves. Basically, it was viewed as a positive right, and more conservative groups were not sure people should have rights that required the action of another party.

Yet the idea came to be accepted by modern liberalism. Positive rights gained traction after World War II as states enjoyed a post-war economic boom and began providing welfare programs.24 Roosevelt’s “New Deal” cemented the shift away from laissez-faire economics and toward an acknowledgement of consumers25 by way of providing them rights. After the economic crisis of the mid-1970s, however, conservatives began to argue that welfare was morally and economically corrosive.26 Yet the welfare state is here to stay: political arguments about it now center on how much funding the programs should get, not whether they should continue to exist. Indeed, the “collective provision for welfare is associated now with an idea of social citizenship,” and is “comparable” to the rights to own property and to vote.27 If this is true, then the positive right of welfare is equivalent to the right to own property, typically considered a negative right.

As a whole, the progressive movement broadened the definition of rights. Self-sufficiency was de-emphasized and welfare programs became more established. This expansion was important for the development of reproductive and religious rights throughout the 20th century.

 

Reproductive and Religious Rights

With any mention of “reproductive rights,” the abortion debate may come to mind. However, we have chosen to ignore abortion completely in our analysis. In our omission, we do not mean to imply the nonexistence of either a “right to choose” or a “right to life.” Rather, the omission is only due to the form that the abortion debate most often takes. Opponents of abortion, while often religious, generally do not make the issue about themselves. They base their opposition on a fetus having the same “right to life” that those who have already been born have. Therefore, the issue is not religious liberty. Nor is it really about whether or not a “right to life” exists, since abortion proponents generally do ascribe such a right to those who have already been born. Instead, the debate is largely about whether or not a fetus constitutes a human life with the right to life. Finally, while some supporters of abortion do believe that abortion is a positive right—for instance the Democratic Party supports a woman’s right to terminate her pregnancy “regardless of ability to pay”28—the debate as a whole is anchored in the language of negative rights. Pro-life individuals believe that terminating a pregnancy constitutes a violation of the rights of the unborn, while pro-choice individuals believe that laws against abortion violate a woman’s right to control her body. The abortion debate is a unique issue that involves auxiliary concerns such as what it means to be human and when life begins. For these reasons, we set it aside.

In contrast, the debate over contraceptive coverage is much more appropriate for an analysis of rights. It has two sides, broadly speaking, both of which are acting in accordance with their understanding of what constitutes rights. There are those who are more in line with classical liberalism (though many of them are social conservatives) who believe in a negative right against government compulsion to enter into contracts that violate one’s conscience. On the other side are progressive individuals who believe that birth control is important enough that being provided with it is a fundamental positive right, and therefore some people must be compelled to provide it for others.

 

Evolution of Birth Control as a Right, both Negative and Positive

Although birth control has existed in various forms for centuries, it only became a part of national discourse at the end of the nineteenth century. The topic was and still can be considered taboo: this country’s Puritan settlers maintained that intercourse should only occur between married couples and only for procreation. Indeed, a majority (60%) of Americans still endorse the statement that the purpose of sex is to reproduce, while fewer (45%) endorse the idea that it is “to connect with another person in an enjoyable way.”29 Birth control is explicitly intended to prevent procreation, so people feared it would enable lustful sex that, instead of producing a valued human life, would distract from the relationship Christians had with their god. Thus, in 1873, the United States restricted birth control for the first time by introducing the “Comstock law,” which prohibited “the sending of obscene matter,” including contraceptives, through the mail.30 The Comstock law would hinder the distribution of sexual education materials for years to come as well.

Around the same time, English economist Thomas Malthus’ ideas about the dangers of unchecked population growth were making their way to the United States. Although the concept of an overcrowded dystopia was not as salient for Americans, who lived in a much more sizable country than the English, many still feared, like Malthus, that overpopulation would lead to increased poverty and a lack of resources in general. Where Malthus thought that birth control was a vice, however, so-called neo-Malthusians believed that birth control could help curb population growth.31 Nevertheless, concerns remained regarding the purity of women, and neo-Malthusians who favored birth control to reduce the population found themselves having to defend against the effects increased birth control could have on the sexual activity of women.

The neo-Malthusians, in turn, influenced the growing “perfectionist socialist” movement in the United States, which attempted to achieve “perfection,” or utopia, through socialist practices. One such socialist, Robert Dale Owen, was one of the first to argue that, besides being an effective way of reducing overpopulation, access to birth control was part of “women’s right to self-determination.”32 He further encouraged women to “overcome their diffidence” and teach men about the benefits of birth control.33 Throughout the early twentieth century, the feminist movement answered him.

Yet the early feminists did not advocate for birth control in the same sense that modern feminists do. The feminists of that era were remarkably unified in support of “voluntary motherhood,” despite their different backgrounds, but this support did not necessarily translate into support for birth control devices, which were still considered unnatural.34 Rather, this movement focused more on highlighting the naturalness of women’s sexuality.35 In so doing, reformers like Elizabeth Cady Stanton argued that women had “the right to affirm their sexuality if they chose to do so, or contrarily, to refuse sexual relations altogether when necessary to avoid pregnancy.”36 In this sense, Stanton and Owen maintained women’s negative right to reproductive autonomy; no one should interfere with a woman’s right to choose when she wanted to have sex or when she wanted to abstain.

The positive right to obtain birth control devices, however, was another matter altogether. Feminists of the time were still wary of birth control devices that actually required action on the part of the individual, such as a sponge or concoction intended to kill sperm. This distaste for the unnatural, however, soon gave way to a different concern: eugenics. As birth rates remained high among immigrants and the poor, birth rates slowed or stayed the same for professional women with higher education. President Theodore Roosevelt reprimanded women in 1905, saying that choosing not to have children was “criminal against the race.”37 Indeed, people feared that if well-educated, mostly white women were not having children, it could lead to “race suicide”—where the sheer number of poor minority children would overtake the professional, white class.

Suddenly, birth control devices appeared much more attractive to those who wanted to make sure that lower classes were not reproducing. Arguments in this era did not appeal to anyone’s rights so much as to an upper-class, white fear of losing the power they held because of their supposed genetic privilege. In fact, black activists of the time denounced birth control on the grounds that it was intended to diminish the black population and, consequently, black political power.38 As Rickie Solinger explains, “the government was investing in services where it expected the beneficiaries to be Black.”39 Yet the government’s new interest in family planning, racially skewed as it was, is what helped professionalize birth control devices to the point that they became a part of national discourse.

Clinics focused on sexual health began to appear in the United States. Margaret Sanger’s American Birth Control League devoted itself to opening these clinics and lobbying for legislation40 that would protect them against the likes of Anthony Comstock. Yet as clinics opened outside of Sanger’s control, the contraceptives they provided were increasingly framed as strictly medical fare rather than anything having to do with women’s emancipation, so conservative attitudes about motherhood, sex education, and sexual morality persevered.41 This somewhat hindered the growth of clinics, as did the Great Depression.

However, this was also the era of the New Deal. While more people fell into lower classes, proponents of birth control returned to neo-Malthusianism and advocated for the inclusion of contraception in New Deal welfare programs in an attempt to help the poor.42 Access to contraceptives in the New Deal welfare programs may be the first instance of birth control as a positive right. Roosevelt’s “freedom from want” was especially appealing during the Depression, and contraceptives were conceived of as help for the poor to keep their finances under control (neo-Malthusian) more than to keep them from reproducing (eugenics). The government viewed the New Deal reforms as temporary in a time of crisis, not as permanent policy.

During World War II, the eugenics framework returned as the economy improved. In 1942, Margaret Sanger and like-minded activists created the Planned Parenthood Federation of America. Women were increasingly accepted into the labor force, but they also became economic assets to the country in another way. One Planned Parenthood ad, “Planning for Victory,” read: “Planned Parenthood, with your understanding and support can, in 1943, be made to mean that more healthy children will be born to maintain the kind of peace for which we fight.”43 This emphasis on the future role of children also meant that women’s importance as childbearers grew in economic terms. Children literally contributed to the purchasing power of families, so as the New Deal and progressives began to acknowledge the role of consumers, women were viewed as the primary drivers of a consumer-based society because of their ability to have children44 and their subsequent need to buy products for the baby.

Planned Parenthood quickly focused its attention on the research and development of birth control devices that could help achieve eugenic purposes—namely, the pill. Sanger herself drove research to find an oral contraceptive, which was finally approved by the FDA in 1957, but only to regulate menstrual cycles.45 By 1960, however, after about half a million women claimed to need it for menstrual problems, the FDA approved it for sale as a contraceptive.46 After the pill hit the market, more and more women found themselves in favor of birth control devices, even though many were still ridiculed for using them. One example comes from the black community: although black men continued to be vocal about the genocidal intent of birth control, black women were increasingly in favor of the pill, despite being called “pill-pushers” or “traitors.”47 Their support, despite the backlash from their own community, highlights the ultimate reason for which reproductive rights have remained salient. Black women realized that although they could continue to try to have babies to increase the political power of their minority race, having babies and raising them to adulthood required planning and resources that not everyone had.48 This was the original eugenics-based argument that white advocates had used for their own race.

Yet once women started advocating for birth control as a right to manage their own health, the family connotations of contraception faded, giving way instead to a grayer area between what had been a positive right to devices and a negative right to self-preservation. The echo of the eugenics argument remains, however, in the claim that children who are born accidentally will not receive the same level of care as “planned” children. Neo-Malthusianism also continues to echo in those who argue that widespread birth control is the best way to provide better opportunities to the poor. Nevertheless, questions remain about whether birth control is enough of a right that others must provide it for women.

Thus, the current controversy surrounding contraceptives can mostly be traced back to the Affordable Care Act (ACA). It mandates that employers provide preventative coverage in the healthcare plans they offer their employees. The Department of Health and Human Services decided that 20 different contraceptives should be covered as preventative care. In doing so, it effectively treats birth control as a positive right. However, this has caused controversy with employers who object to some of the contraceptives, generally on religious grounds.

 

Religious Rights and the Law

By now, the idea of freedom of worship is thoroughly entrenched in both America’s culture and its law. Few dispute that people should be able to believe basically whatever they please. However, sometimes religious practices come into conflict with otherwise secular laws. Over the past few decades, the Supreme Court has dealt with cases that have ultimately led to what might be described as partial exemptions for religious individuals, culminating in the recent lawsuits against the Affordable Care Act.

In order to understand those recent cases against the ACA, one must start with the 1963 case Sherbert v. Verner. In that case, Sherbert, a member of the Seventh-day Adventist Church, was fired for being unable to work on Saturdays. She was also unable to find another job due to her religious obligation to refrain from working on Saturdays. However, she was denied unemployment compensation by South Carolina on the grounds that she had refused available employment.49

The Supreme Court sided with Sherbert and found that South Carolina’s refusal to grant her unemployment compensation violated the Free-Exercise Clause. The reasoning was that the state had created a situation in which she could only get government benefits by acting in direct contradiction to her religious belief. The decision mandated the use of strict scrutiny when religion is burdened.50 Strict scrutiny basically means that a court applies its toughest standard to a law whenever it is challenged. The Supreme Court generally applies strict scrutiny in cases that involve minority classifications or “fundamental” rights in the Bill of Rights. In order to pass strict scrutiny, a law must do three things. First, it must be based on a compelling government interest. Second, it must be narrowly tailored to achieve that interest. Finally, it must be the least restrictive means of achieving that interest. Basically, strict scrutiny means that a law must have a very good reason for its existence and must be the least harmful way to achieve its end.

Strict scrutiny is generally considered a difficult hurdle for a law to pass. However, in 1990, the Supreme Court limited the use of strict scrutiny in Employment Division v. Smith. In that case, two men were fired from a private drug rehabilitation organization after they ingested peyote. Peyote is illegal but also required for ceremonial purposes in the Native American Church. Oregon denied them unemployment compensation because they had been fired for “misconduct.” The men’s case eventually came before the Supreme Court, which was tasked with deciding whether or not the ban on peyote violated the Free-Exercise Clause.

While the case appears similar to Sherbert, here the Supreme Court ruled against the men. The majority argued two main things. First, the prohibition on peyote was completely secular in purpose. The law did not intend to target members of the Native American Church; it was only incidental that they were affected. So, the law itself was not unconstitutional. Second, the court argued that members of the Native American Church were not entitled to an exemption from the law. The court reasoned that if it had ruled in the men’s favor, it would have exempted religious people from following generally applicable laws.51

Congress did not approve of the Supreme Court’s narrowing of religious protections, so it passed the Religious Freedom Restoration Act (RFRA) in 1993. That law basically instructed courts to return to Sherbert-style strict scrutiny when deciding on laws. Specifically, it mandates that “Government shall not substantially burden a person’s exercise of religion even if the burden results from a rule of general applicability,” but there is an exception if the burden “is in furtherance of a compelling governmental interest” and “is the least restrictive means of furthering that compelling governmental interest.”52

That was the legal environment in which the tension between religious and reproductive rights under the Affordable Care Act led to a lawsuit. The Green family, who owns Hobby Lobby Stores, Inc., claimed that the Affordable Care Act violated the Free-Exercise Clause of the First Amendment and RFRA, and sued the Secretary of the Department of Health and Human Services, Sylvia Burwell.53 Specifically, they argued that four of the contraceptives they were required to cover were analogous to providing abortions, which are against their religion. Their case made it all the way to the United States Supreme Court, where the justices ruled 5-4 in favor of Hobby Lobby.54

In this case, women’s rights, or more specifically the positive right to contraception discussed earlier in this paper, are pitted against the negative right to religious freedom, otherwise interpreted as a lack of government impositions on religious beliefs. Justice Anthony Kennedy wrote a concurrence, wherein he concluded that the Department of Health and Human Services’ existing religious exemption for non-profit organizations should extend to for-profit corporations like Hobby Lobby because there is not enough of a difference between the two kinds of organizations to require differential treatment under the RFRA.55 Basically, requiring Hobby Lobby to provide the contraceptives to which it objects was not the least restrictive means of providing women with more access to birth control.

Opponents of the decision argue that Hobby Lobby should not receive protection because it is not a “person,” which is what the RFRA specifically refers to. However, Justice Alito addresses this issue swiftly in the court’s opinion,56 so this paper operates under the assumption that Hobby Lobby’s owners are due their religious rights.

In contrast, Justice Ruth Bader Ginsburg’s dissent drew on previous decisions and consisted of two parts. First, she relied on Employment Division v. Smith and its allowance for the government to infringe on religious practices when the infringement is “an incidental consequence of an otherwise valid statute.”57 Second, she argued that judicial precedent in general stated that religious beliefs could not impinge on the rights of others, and that allowing Hobby Lobby’s religious beliefs to interfere with the ACA would impinge on what she said was women’s right to contraception.58 In doing so, Ginsburg identified the provision of contraception as a positive right, one which she prioritized over the negative rights of the Green family.

 

Conclusion

It is our hope that this paper elucidates the issues with what is (or is not) considered a right in the United States. We have focused in particular on the controversy surrounding reproductive and religious rights because it is a relatively clear example of how two groups can oppose one another based on different understandings of what rights are. Some see the contraceptive mandate of the Affordable Care Act as a violation of rights. Others see the same mandate as upholding the rights of women. Recognizing that each side of the issue is informed by a fundamentally different understanding of what a “right” even is should help the two sides understand each other better. The negative/positive distinction between rights should also be helpful in analyzing political rhetoric about what the government should (or should not) involve itself in.

End Notes

1 “61st Annual DPI/NGO Conference”

2 Vasak 29

3 Vasak 29

4 Macklem 1

5 Eide 124

6 Macklem 2

7 Shue 36

8 Macklem 10

9 Macklem 10

10 Locke Second Treatise 269

11 Ibid.

12 Ibid.

13 Locke Second Treatise 270–271

14 “The Right to Die”

15 Locke Second Treatise 271

16 Locke Toleration 26

17 Ibid.

18 Locke Toleration 48

19 Donohue 4

20 Donohue 2-3

21 Donohue 5

22 Donohue 6

23 Donohue 1

24 Neuborne 882, Donohue 3

25 Neuborne 882

26 King and Waldron 416

27 King and Waldron 417

28 The 2012 Democratic Platform

29 Barna Group (2016)

30 Gordon, Women’s Body, 24

31 Gordon, Women’s Body, 76

32 Gordon, Women’s Body, 83

33 Gordon, Women’s Body, 90

34 Gordon, Women’s Body, 95

35 Gordon, Women’s Body, 97

36 Chesler 59

37 Gordon, Moral Property, 86

38 Solinger 143

39 Solinger 169

40 Gordon, Moral Property, 172

41 Gordon, Moral Property, 189

42 Gordon, Moral Property, 211

43 Planned Parenthood in Wartime, PPFA pamphlet, 1942, PPFA Papers cited in Gordon, Moral Property, 247

44 Solinger 165

45 “The Birth Control Pill: A History.”

46 Ibid.

47 Gordon, Moral Property, 290

48 Ibid., 290

49 Sherbert v. Verner

50 Mullally

51 Employment Division v. Smith

52 Religious Freedom Restoration Act of 1993

53 Burwell v. Hobby Lobby

54 Burwell v. Hobby Lobby

55 Burwell v. Hobby Lobby

56 Burwell v. Hobby Lobby 573 U. S. ____ (2014)

57 Burwell v. Hobby Lobby (Ginsburg, J., dissenting)

58 Ibid.

 

 

Works Cited

“61st Annual DPI/NGO Conference.” UN News Center. UN, n.d. Web. 8 Mar. 2016.

“The 2012 Democratic Platform.” Democrats.org. Democratic Party, n.d. Web. 21 Mar. 2016.

“The Birth Control Pill: A History.” Planned Parenthood, 2015. Web.

“Burwell v. Hobby Lobby.” The Oyez Project. IIT Chicago–Kent College of Law, 2011. Web. 13 Apr. 2016.

Chesler, Ellen. Woman of Valor: Margaret Sanger and the Birth Control Movement in America. New York: Simon & Schuster, 1992. Print.

Donohue, Kathleen G. Freedom from Want: American Liberalism and the Idea of the Consumer. Baltimore: Johns Hopkins UP, 2003. Print.

Eide, Asbjørn. Economic and Social Rights. 2000.

“Employment Division v. Smith.” LII / Legal Information Institute. Cornell University Law School. Web. 20 Apr. 2016.

Gordon, Linda. The Moral Property of Women: A History of Birth Control Politics in America. Urbana and Chicago: U of Illinois P, 2002. Print.

Gordon, Linda. Woman’s Body, Woman’s Right: A Social History of Birth Control in America. New York: Grossman, 1976. Print.

King, Desmond S., and Jeremy Waldron. “Citizenship, Social Citizenship and the Defence of Welfare Provision.” British Journal of Political Science 18.4 (1988): 415–443. Web.

Locke, John. A Letter Concerning Toleration. Ed. James Tully. Indianapolis: Hackett Pub., 1983. Print.

Locke, John. Two Treatises of Government. Ed. Peter Laslett. Cambridge: Cambridge UP, 1988. Print.

Macklem, Patrick. “Human Rights in International Law: Three Generations or One?” London Review of International Law 3.1 (2015): 61–92.

Mullally, Claire. “Free-exercise Clause Overview.” First Amendment Center. 16 Sept. 2011. Web. 20 Apr. 2016.

Neuborne, Burt. “State Constitutions and the Evolution of Positive Rights.” Rutgers LJ 20 (1988): 881.

Religious Freedom Restoration Act of 1993, Pub. L. No. 103–141, §3, 107 Stat. 1488 (1993).

“The Right to Die.” Editorial. The Economist. The Economist Newspaper, 27 June 2015. Web. 19 Apr. 2016.

“Sherbert v. Verner.” LII / Legal Information Institute. Cornell University Law School. Web. 19 Apr. 2016.

Shue, Henry. Basic Rights: Subsistence, Affluence, and U.S. Foreign Policy. Princeton: Princeton UP, 1996. Print.

Solinger, Rickie. Pregnancy and Power: A Short History of Reproductive Politics in America. New York: New York UP, 2005. Print.

Vasak, Karel. “A 30-Year Struggle: The Sustained Efforts to Give Force of Law to the Universal Declaration of Human Rights.” UNESCO Courier 10 (1977): 28.

 

 

 

 

IMMIGRATION

By:

Tom Walter, Sophomore, Economics & Mathematics

Josh Hinkle, Junior, Computer Science & Philosophy

 

One of the most pressing issues around the world today is immigration. Millions of immigrants from Syria and other Middle Eastern countries embroiled in warfare are seeking refuge in Europe, to controversial effect. In our own country, an estimated 11 million immigrants live within our borders without documentation, mostly from Mexico and East Asia. With recent events that are often characterized as crises comes a debate over the extent to which a country can control its borders, as well as whether immigration should be considered a contribution or an impediment to a country’s economy, structure, and culture. While immigration has always been an important issue, it has become a front-and-center question in an increasingly globalized world. The debate on immigration informs many modern politicians’ platforms, journalists’ editorials, and the native population’s hopes and fears. It is a struggle that calls into question what it means to be a nation and the rights that come along with such a status.

In this paper, we analyze immigration in three sections. First, we examine the underlying theories about immigration and its effect on the nation in order to clarify the more philosophical positions behind immigration’s proponents and opponents. Next, we recount a history of immigration in the United States in order to see how policy has changed over time and to find the precedents behind our modern-day conversation. Lastly, we elucidate and evaluate the different arguments concerning present-day immigration in order to develop some kind of answer to this contentious topic.

 

Thought Behind Borders and Immigration

One of the primary questions when approaching the debate on immigration is what constitutes the nation. The answer helps qualify the legitimacy of a nation’s borders, its international obligations, and its definitions of citizenship. If the nation ought to represent a certain group of people united by inherited characteristics, like culture, language, and religion, and solely serve their interests, excluding outsiders has some precedent. On the other hand, if the nation is simply meant to be the people under a government that rules solely to maintain some sort of sociopolitical order, the question of immigration has less of a philosophical pretense and ought to be cast in the more practical terms of economics and social structure. Either approach, however, makes the problem of immigration vital, since it supposes that errors in immigration policy can deconstruct a nation’s entire meaning, its entire reason for existence. As such, the nation has a vested interest in selecting who can and cannot be a part of it.

When it comes to the question of a nation’s identity, the United States occupies a precarious position. European countries are the product of millennia of settlement and development. Their linguistic, cultural, and religious traditions are deeply tied to the land they came from, and their growth was an organic process guided by acquisition, warfare, and the natural elements. The nation-states that arose in the past few centuries, while the result of planned political actions, nevertheless sought to reflect pre-established cultural developments. In Europe, defining borders was often accompanied by struggles between different ethnic and linguistic groups to give themselves a homeland. There, what it means to be an Englishman, Frenchman, or German is tied to one’s heritage of participating in that cultural evolution.

Accordingly, the idea of some kind of ethnic, cultural, or linguistic solidarity acts as a justification for blocking immigration. Tom Tancredo, paraphrasing John Stuart Mill, notes how “two elements that define a nation [are] the desire on the part of the inhabitants to be governed together and the common sympathy instilled by shared history, values, and language (in other words, a common heritage)” (29). The importance of identity can be regarded as something more than just a source of pride or xenophobia; it can be considered quite essential to the health of a democracy. Christopher Heath Wellman calls this the “liberal nationalist” approach, where immigration is restricted on the pretense that homogeneity “is necessary to preserve their distinctive cultural identities” (49). Moreover, in a liberal democracy, citizens will be less willing to “sustain a robust and equitable democratic welfare state” without “sufficient trust and fellow-feeling among their compatriots” (49). When a nation’s identity becomes disregarded or displaced, it creates disunity among the people and makes them more interested in their individual groups than in the total good of the nation. A common understanding of a nation’s history and values is essential to such a process.

The United States is not in the same situation. Its history is not a product of evolution but rather of revolution; it was born out of a conscious desire to create a new government and people. When the colonies rebelled against Britain, the United States forsook an identity built around an Anglo-Saxon heritage. Instead, that void was filled by an identity based upon universal values as opposed to one tied to a certain people. Its very own Declaration of Independence promises “Life, Liberty, and the pursuit of Happiness” to all those who choose to be governed under its laws. The primary condition for being an American was choosing to be a part of the new republic, not the family you were born into. To be certain, the United States’ history has not always upheld those principles, with those of Anglo-Saxon heritage in particular finding favor. However, the general principle still remains, and once-excluded ethnicities are now considered American, less because of their identity than because of their allegiance to the country. In that case, any immigrant has the capacity to become an American no matter their background. The real question is whether they want to uphold those American values.

Either case, however, does not give a clear answer for what to do about immigration, even though both help illuminate what might be in a nation’s best interest. One of the common arguments made for why the United States ought to have more open borders is that its very identity is composed of people who were once immigrants. From its inception, the United States came from British colonists moving to a land that was not theirs. The natives who were here prior had no say in their moving in; nevertheless, those colonists are considered the founders of this country. Now, people of Anglo-Saxon heritage are a minority. Eventually, Germans, Irish, Poles, Scandinavians, East Asians, and other people from around the globe came to compose the country’s population in what is often considered a “melting pot” of ethnicity. The general concept is that since the United States is founded not on its culture but rather on its beliefs, the composition of its population not only does but ought to represent the different cultures of the world. Every person contributes their own unique experience to serve the United States’ ultimate purpose, which is to be a place “of liberty and justice for all.”

One aspect of becoming a part of this country is the degree to which one should assimilate. While immigrants do bring along their unique perspectives, those perspectives may come into conflict with the core values of what it means to be an American. Tancredo considers the devaluation of “civic duty,” or “selfless devotion to the American creed” (22), in modern immigration, and fears that immigrants may still hold their home country as more important than their new one. To him, the process of assimilation means “disassociating themselves from their past and by integrating themselves into the American mosaic” (30), which he sees as having yet to be achieved. One way he demonstrates this is how “politicians and national policies had led to our essentially telling millions of immigrants and first-generation Americans that they shouldn’t learn English,” despite it being the country’s “native tongue” (27). The balance between assimilation and diversity is a hard one to pinpoint, especially since the idea that one must give up a part of one’s heritage is such a hard pill to swallow.

Along with this idea of diversity being essential to the American identity comes a humanitarian conception of the nation. The United States, if it is to be the protector of the free world, ought to serve the less fortunate around the world. As such, the country ought to let refugees, asylum seekers, and people seeking to improve their families’ lives into its borders in order to fulfill this mission. Especially in conjunction with the idea that anyone can be an American no matter where they came from, the United States is obligated to let in anyone who needs help improving their living conditions. In a world where resources are distributed very unequally, the egalitarian option means that “existing global inequalities mandate that borders be porous” (Wellman 58). One has no control over the circumstances in which one is born, and a just nation ought to take that into account and provide an equal opportunity to the world’s people.

A part of this egalitarian case for open borders is the idea that countries like the United States ought to atone for their sordid histories. As Wellman states, “it is not a coincidence that so many of today’s affluent societies are among those who colonized… foreign groups, while many of the most impoverished societies were among those colonized or otherwise exploited” (65). The United States, whose very existence is itself the result of colonialism and which has often fallen short of its creed of providing “liberty and justice for all,” is no exception. Letting in immigrants becomes a reparative act for the country’s history of exploiting weaker countries and its own ethnic populations. While certainly a lot less patriotic than the first formulation, where it is part of the duty of the United States to preserve its legacy (instead of apologizing for it), the logic of open borders ultimately serving a moral purpose remains the same. This argument of “immigrating to atone” has even been endorsed by world leaders, such as U.S. President Barack Obama, who has stated that only Native Americans have the right to criticize immigration, and German Chancellor Angela Merkel, whose open borders policy in the European migrant crisis is partially informed by the wounds of the country’s Nazi past.

Another question is whether or not a nation has the right to restrict immigration at all. Wellman certainly believes this to be the case, framing his argument around whether or not a nation has the right to “free association” (13). He argues that “freedom of association is a crucial element of self-determination, and that its value stems in large measure from the right not to associate with others” (29). The right of self-determination is granted only to “regimes with a moral claim to rule,” in other words, to a regime that “protects the rights of its constituents and respects the rights of others” (16). This is not to say that legitimate countries ought not associate, but rather that the right to not associate is a defensible position. This deontological argument does not account for cases when not associating can have grievous political consequences, such as refusing refugees. For this reason, he concedes that “we should not be absolutists about freedom of association,” but unless some great catastrophe is at stake, freedom of association cannot be denied to legitimate states.

One of the more controversial questions this argument brings up is whether or not legitimate states have the right to choose which groups they associate with, specifically with regard to nationality, ethnicity, or religion. Most immigration debate is usually centered on anyone trying to enter the country, sometimes with favor toward skilled laborers, students, and family members, as the economic concerns are deeply relevant to those kinds of distinctions. The question of whether or not to exclude a person based on their background, however, targets the immutable identities of those trying to enter. Consequently, such an action might be considered ipso facto racism. However, excluding certain ethnic and social groups has been suggested and executed on the grounds of national security and criminal demographics. In the United States, many German, Italian, and Japanese people were put into internment camps during World War II due to the fear of espionage. One common concern with the European migrant crisis has been the risk of terrorists among the refugees, especially in light of recent terrorist attacks related to radical Islam. Selective immigration based upon religion would be highly controversial, and the contention lies in whether continuing a legacy of discrimination is worth the potential safety.

All of these theories act as a backdrop to how the immigration issue is being approached today. While facts and figures make up the meat of the immigration debate, the more intangible elements, like beliefs in a nation’s self-determination versus its moral obligations, still underlie how we look at immigration. No mainstream political candidate is proposing to open the borders completely to anyone for all time, nor is anyone suggesting that the United States can no longer accept any immigrants. Rather, the debate has largely focused on the extent to which regulating immigration is necessary. The theory remains necessary because it helps illuminate why such positions are taken in the first place, whether for increasing the degree of immigration or reducing it, as well as the laws that are put into place to prevent illegal immigration. Even in the more moderate realm of proposing public policy, the fundamental values can still be found and are still deeply relevant.

 

History of Immigration in America

To understand America’s turn toward restrictionism in 1924, one must understand the cultural context of the years leading up to it. Beginning in the early 1880s, America’s immigration demographics began to change considerably. Italians and Russian Jews displaced by economic modernization began arriving in the United States at an increasing rate, shifting the source of immigration from the traditional region of northwestern Europe to countries in eastern and southern Europe (Fleegler 4). Looking at the overall figures, immigration from eastern and southern European nations began at 900,000 in the 1880s and grew to 1.9 million in the 1890s before swelling to 8 million at the turn of the 20th century. In addition, the total foreign-born percentage of Americans doubled from 7% in the 1880s to 14% in 1920, and three-fourths of the population of many major cities, including New York and Detroit, came to be comprised of immigrants and their first-generation American children (Fleegler 4).

It was in this social climate that calls for stemming immigration grew louder. Advocates for restriction based their arguments on claims that the newcomers weakened the nation both economically and culturally. Following World War I, the economic downturn gave rise to claims of lowered wages and job losses caused by the enlarged workforce (Anderson 8; Fleegler 5). The American Federation of Labor laid out its reasoning for restricting immigration in a 1909 statement, claiming workers were facing “the ruinous competition of an unending stream freshly arriving from foreign lands who are accustomed to so low a grade of living that they can underbid wage earners established in this country and still save money.”

Yet a large portion of the oppositional sentiment rested on cultural grounds. While the pre-1880 immigrants had been largely Protestant, the new arrivals were predominantly Jewish and Catholic, and both groups inspired a great deal of concern. Anti-Catholic sentiment, seen previously during the Irish migrations of the 1840s and 1850s, reared its head again, this time toward southern Europeans, as nativists expressed fear of Catholic loyalty to the Pope over the President. As for Jewish arrivals, one of the 1924 act’s chief architects, Rep. Albert Johnson, when arguing in favor of his bill, quoted a document authored by US consuls abroad stating that, should the U.S. not enact restrictive measures, the nation would be met with shipments of Jews who were “abnormally twisted,” “inassimilable,” and “filthy, un-American, and often dangerous in their habits” (Anderson 11).

Even more disturbing than the widespread xenophobia was the rise of scientific racism. Eugenics, a pseudoscience predicated upon racial superiority, advocated the improvement of the human race through the sterilization of people deemed to have inferior or undesirable genetic characteristics and through increased reproduction among those with preferred traits. While morally appalling today, the idea of racial superiority carried a great deal of influence in motivating, or perhaps justifying, the 1920s legislation (Fleegler 7; Anderson 9). In his notable 1916 work, The Passing of the Great Race, Madison Grant used a theory of skull sizes to argue for a racial hierarchy with northern and western Europeans at the top, followed by eastern and southern Europeans, with Asians and Africans at the bottom (Fleegler 6).

These eugenic theories were influential enough that in 1927 the U.S. Supreme Court issued an 8-1 judgment upholding a Virginia forced-sterilization law in the case of Carrie Buck, a poor, pregnant woman who had been committed to an institution for the “feebleminded.” In the court’s majority opinion, Justice Oliver Wendell Holmes wrote, “It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. Three generations of imbeciles are enough” (Fleegler 9). When the House Judiciary Committee began drafting restrictive legislation, it consulted heavily with Harry Laughlin, author of the draft “model eugenics law” “which served as the basis for forced sterilization in Virginia” (Anderson 10).

Out of this wash of xenophobia, pseudoscience, and economic concern came the legislation that would define the next four decades of American immigration. The first immigration act of this new era came in 1921. The bill, which passed the Senate by a vote of 78-1, set an annual quota for immigration from each country of no more than 3% of that nation’s foreign-born population within the U.S. as of the 1910 census (Anderson 10). Three years later, however, the definitive, and even more restrictive, piece of legislation of the era was enacted. The Johnson-Reed Act of 1924 replaced the 1921 act by setting the per-nation quota at just 2% of the foreign-born population, now calculated from the 1890 census. Congress chose the earlier census because it predated the Russian and Italian movements, meaning those groups were vastly underrepresented in it and thus would receive far smaller quotas (Anderson 10). The act also included a measure completely excluding Japanese immigrants, complementing the continually renewed Chinese Exclusion Act in barring Asian immigration (House of Rep; Dept. of State).
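
A quick worked example makes the effect of the census switch concrete. Using round, purely illustrative figures rather than actual census data, consider a southern European country with 2,000,000 foreign-born residents counted in the U.S. in 1910 but only 200,000 in 1890:

```latex
% Illustrative round figures only -- not actual census data.
\[ \text{1921 quota} = 0.03 \times 2{,}000{,}000 = 60{,}000 \text{ per year} \]
\[ \text{1924 quota} = 0.02 \times 200{,}000 = 4{,}000 \text{ per year} \]
```

That is a cut of more than ninety percent, achieved without naming any nationality in the statute.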

In the debate over the 1924 bill, Rep. Samuel McReynolds of Tennessee stated, “This country can no longer be a melting pot for foreign nations. There was a time when that could be done, when conditions were different, but this time has long since passed” (Fleegler 17). Rep. John Tillman of Arkansas declared, “We have admitted the dregs of Europe until America has been orientalized, Europeanized, Africanized, and mongrelized” (Fleegler 21).

As we will see throughout this section, a central question in the debate on immigration is how best to understand immigrant assimilation to an American identity. The answers have varied and evolved over the decades, with enormous implications for how we approach diversity in the modern era. How exactly do we define the oft-used term “melting pot”? Many politicians and leaders of the early 20th century advocated a blending away of differences into a largely homogeneous American culture. Both Theodore Roosevelt and Woodrow Wilson took this stance, the latter stating, “America does not consist of groups, a man considering himself as belonging to a particular group has not yet become an American.” Roosevelt likewise wrote that new immigrants “must revere only our flag, not only must it come first, but no other flag should even come second” (Fleegler 7-8). In pursuing the cause of assimilation, Henry Ford created a school of English and American traditions for his foreign workers, with the goal “to impress these men that they are, or should be, Americans, and that former national, racial, and linguistic differences are to be forgotten” (Fleegler 8).

Standing apart from this cultural-melding view, which advocated the blurring of differences in favor of a common culture, was what historian Robert Fleegler terms “contributionism.” This perspective sees immigrant cultures not as fading into a larger homogeneity but as contributing distinct perspectives and traditions that fuse to create a new and evolving culture. Horace Kallen, a German-Jewish professor at the University of Wisconsin, penned an essay in The Nation advocating “cultural pluralism” (Fleegler 11). He criticized nativists, arguing that immigrants strengthened rather than weakened the nation by contributing unique perspectives and diversity. Rather than a melting pot, he envisioned America as an orchestra in which “every type of instrument has its specific timbre and tonality, founded in its substance and form,” all of which, he added, “make the symphony of civilization” (Fleegler 11).

These two competing views of assimilation form an important lens through which to view the next century of debate over immigration policy. As times change and the demographics of hopeful immigrants change with them, arguments over whether and how immigrants assimilate, and over how their incorporation affects America’s economy and way of life, stay largely the same. Can and do these new Americans assimilate? Do they strengthen us or weaken us? Does acknowledging our differences divide us or enrich us? Over the course of the 20th century, rhetoric changes, public opinion sways, and policy sometimes moves with it.

During the 1920s, politicians wanted these newcomers to blend in, and they made the case that the newcomers simply were not assimilating to the American way of life. Rep. Charles Stengle argued that “The fire has apparently gone out under the melting pot and the original American stock is not absorbing these insoluble elements” (Fleegler 21). Senator Arthur Capper expressed the same sentiment: “the experience of the last quarter century warns us that the capacity of the melting pot is sadly over taxed, and that fusing has all but ceased” (Fleegler 17). What gave lawmakers this impression? A frequent piece of evidence was the widespread existence of foreign-language newspapers. Sen. Capper framed it this way: “One-thousand four-hundred foreign language newspapers, printed in 40 different languages foster the alien racial solidarity of these groups and set up barriers against Americanization by encouraging and perpetuating foreign customs and alien prejudices” (Fleegler 21).

Some congressmen fought against racial superiority and the argument that immigrants weakened the country. Rep. Ole Kvale used contributionist language to describe the role immigrants had played in developing the country, stating, “These ‘foreigners’ and ‘aliens’ are the very people who have helped build America on farm, in shop and factory” (Fleegler 23). Rep. Emanuel Celler drew a direct historical comparison between the current wave of immigrants and the earlier Irish and German movements, stating, “Just so in 1840, 1850, and 1860 you did not want the ‘beery Germans’ and the ‘dirty Irish.’ The Germans and Irish were mongrels, self-seekers, disreputable, and would not assimilate. We now know how good a citizenry they have become” (Fleegler 24).

Yet this call to contribution and historical perspective fell largely on deaf ears. On April 12, 1924, the Johnson-Reed Act passed by a vote of 323-71 in the House and 62-6 in the Senate (House of Rep). The bill, however, included no quota for Western Hemisphere immigration. This was due to the US economy’s demand for cheap, seasonal agricultural labor in the South, and to the fact that Mexican and South American migration was at that time quite small in comparison to the European migrations. It was not until the end of the 1920s that Rep. Johnson sought Hispanic restrictions, arguing, “During the last 10 years the racial problem has become an acute one in the Southwest. Here there have been established, as the demand for cheap labor increased, a great many Mexican immigrants who seem to be driving out Americans.” He then continued with a claim that would reappear over 50 years later: “The recent Mexican immigrants are making a reconquest of the Southwest” (Fleegler 31).

Yet in the decades after the brutally restrictive 1924 act took effect, a new national rhetoric toward immigrants began to emerge. As President Franklin Delano Roosevelt campaigned for reelection in 1936, he delivered a speech celebrating the 50th anniversary of the Statue of Liberty in which he sang the praises of immigrant economic and cultural contributions, stating, “They brought to one new country the culture of one hundred old ones” (Fleegler 36). This was not an isolated incident; FDR regularly made pro-immigrant statements on the basis of immigrants’ deep connections to American industry and society. Historian Robert Fleegler writes of this period that as the rhetoric shifted, “Politicians’ language both shaped and reflected changing popular attitudes toward immigrants” (Fleegler 37).

As the Great Depression continued and fascism began to rise in Europe, American rhetoric about ethnicity grew more inclusive. From late 1938 to early 1939, an educational radio program entitled Americans All…Immigrants All, sponsored by the U.S. Office of Education and other pro-diversity organizations, aired with the goal of promoting the contributions various ethnic groups had made to America. The series enjoyed incredible success, becoming the most popular program CBS had aired to that point (Fleegler 42). Contributionist language became increasingly commonplace, with a proliferation of government and civilian programs and events promoting diversity. A pamphlet entitled “Out of Many – One” was distributed in the early 1940s by the Bureau of Intercultural Education, intended for “citizens and educators who are enlisting in the American fight against intolerance” (Fleegler 51). These examples capture only part of the widespread shift in everyday language and attitudes toward the “new wave” of immigrants and toward America’s diverse groups as a whole.

As the calls for acceptance grew in America, the scientific racism of the 1920s faded from public discourse, especially as Germany’s National Socialists made eugenic claims key pieces of their platform. Despite the advances in acceptance of Eastern and Southern Europeans, however, it is well worth noting that Asians, Latinos, and blacks continued to be viewed in largely the same negative way as in the 1920s.

Despite all the pleasant and inspiring language toward white European immigrants, public opinion toward immigration, and the policy that flowed from it, remained set against the outside world. In July 1938, Fortune magazine published a poll asking whether the US should accept political refugees from Germany and Austria. Sixty-seven percent said we should keep them out, eighteen percent said they should be allowed in but without changing the quota, and only five percent responded that we should encourage them to come (Washington Post). In 1939, a survey asked whether the US should admit 10,000 predominantly Jewish displaced children; sixty-one percent said no, thirty percent said yes, and nine percent had no opinion (Washington Post). By 1947, with an estimated 800,000 “displaced persons” in Europe, a Gallup poll asked respondents whether they would approve of a plan to relocate 10,000 refugees to their state; fifty-seven percent disapproved (Pew).

These attitudes help frame the 1952 Immigration and Nationality Act, which upheld the quota system established in 1924. The act lifted the blanket exclusion of Asian immigration, replacing it with small quotas and built-in preferences for family unification and skilled labor. The debate over the bill rested primarily on two schools of argument, one concerned with America’s image abroad and the other treating the issue as one of national security. With the Cold War beginning, opponents of the bill argued that the quota system’s bias against Eastern and Southern European nations hurt our image in regions where we desired influence. On the other side, the bill’s advocates wanted to maintain the strict quotas out of concern that “the United States could face communist infiltration through immigration and that unassimilated aliens could threaten the foundations of American life” (Dept. of State). One of the bill’s two sponsors, Sen. Pat McCarran, stated, “the cold, hard truth is that in the United States today there are hard-core, indigestible blocs who have not been integrated into the American way of life, but who, on the contrary, are its deadly enemy” (Fleegler 113).

The bill was in fact originally vetoed by President Truman on the grounds that it maintained the national-origins system, which he decried: “The idea behind this discriminatory policy was, to put it baldly, that Americans with English or Irish names were better people and better citizens than Americans with Italian or Greek or Polish names… Such a concept is utterly unworthy of our traditions and our ideals.” Congress overrode the veto (Fleegler 117).

The Era of Restriction Ends, and a New Tide Begins

For immigration in general, however, the end of the quota system finally came with the 1965 Immigration and Nationality Act (Anderson 12). The bill removed the racial quotas that had dictated flows for over 40 years while also removing the barriers to Asian immigration, and it continued the preference system for both family reunification and skilled workers. In the years leading up to this major reform, President Kennedy argued for change, stating, “the great experience of the United States is…that we have been able to assimilate people of different groups, different languages, different cultures…This has been the real strength of the United States and that this kind of infusion would be helpful in our society” (Fleegler 178). Some congressmen argued that increased immigration would open the door to communist intrusion and cause other domestic problems. Rep. Maston O’Neal argued, “[new immigrants] will compete with the very class our Federal Government is seeking to aid through its war on poverty” (Fleegler 182).

Some congressmen raised concerns over the lack of a Western Hemisphere immigration limit, specifically with regard to Mexican immigration, yet an amendment that would have capped Latin American immigration at 115,000 a year failed to pass. The bill itself then passed by a wide margin, 318-95 (Fleegler 183).

Before the global racial quota system ended, anti-Mexican immigration sentiment had already begun brewing. World War II caused a labor shortage for farmers in the Southwest as their workers shifted to wartime production. To fill the gap, in 1942 the US and Mexican governments brokered an agreement creating the bracero program, which allowed a predetermined number of Mexican workers into the United States. For the duration of their contracts, these workers were guaranteed a decent wage, free housing, and free transportation home at the contract’s end; the program essentially rented them as workers (Fleegler 130).

Many Mexican workers still immigrated illegally outside the bracero program during its first decade, with local farmers happy to hire them (Fleegler 130). In 1954, however, INS Commissioner Joseph Swing launched a dual effort to curb illegal immigration, expanding the use of the bracero program while tightening border security and increasing deportations under Operation Wetback (Anderson 94). Beginning in 1953, bracero admissions more than doubled, from 200,000 per year to 450,000 by the mid-1950s, a level sustained through the end of the decade (Anderson 95). It is also worth noting that the number of Mexicans given permanent resident status increased from 18,454 in 1953 to an average of nearly 43,000 from 1955 to 1959.

So what was the effect on illegal immigration? INS apprehensions of immigrants crossing the border illegally fell drastically, from the 1953 level of 885,587 to under 100,000 a year by 1956, with a low of 45,336 in 1959 (Anderson 94). Immigration officials gave the program enormous credit for these declines. When the program ended in 1964, however, apprehensions began a decade-long rise, from just under 100,000 in 1964 to 800,000 in 1974 (Anderson 96). This was certainly not entirely the result of bracero ending, but it demonstrates that when legal avenues were closed to Mexican immigrants, the stream was diverted to illegal crossings as the U.S. economy demanded labor. The majority of this post-bracero wave of migrants were seasonal agricultural workers who regularly crossed back and forth over the border.

The next turning point in Mexican immigration came with the Immigration Reform and Control Act of 1986. This act, passed as a measure to “control and deter illegal immigration to the United States,” granted legal status to three million unauthorized immigrants, 2.3 million of them Mexican, while further tightening border security and imposing sanctions on employers who knowingly hired illegal workers (USCIS; Migration Policy). As the economy continued to provide jobs suited to low-skilled foreign workers and crossing the border became increasingly difficult, immigrants began putting down permanent roots and bringing their families to join them (Migration Policy).

The next swath of immigration reforms came at the beginning of the 1990s. The Immigration Act of 1990 increased the number of legal immigrants allowed annually from 500,000 to 700,000 and created a lottery program to encourage immigration from countries, especially in Europe, that had been underrepresented in recent flows (Chavez 8).

The biggest policy change, however, came in 1996 with the Illegal Immigration Reform and Immigrant Responsibility Act. This law sought to attack illegal immigration through multiple means: it stiffened the requirements for unauthorized immigrants to obtain legal status, streamlined the deportation process and widened the range of deportable offenses, and placed increased liability on immigrants’ employers and sponsors (Chavez; UWB). It should also be noted that in the same year Congress passed welfare reform, which cut legal immigrants’ access to food stamps and Supplemental Security Income and restricted access to Medicaid for five years after entry. All of these actions were part of a “get tough” attitude toward immigration that would persist for at least the next two decades (Chavez 9).

Immigration, both legal and illegal, continued, with the US Census charting the total Mexican immigrant population as rising from 2.2 million in 1980 to 9.1 million in 2000 and 11.7 million in 2010 (Migration Policy). Estimates of the illegal immigrant population, drawn by the economic booms of the 1990s, came in at 8.8 million in 2000 and 10.3 million in 2004 (Chavez). Looking at the numbers in broad historical context, we can spot a distinct parallel between the turn of the century and today. The foreign-born share of the American population peaked between 1890 and 1910 at about 14.7% and stood at 13.2% in 1920, just prior to the first pieces of restrictionist legislation. Over the next half-century, the proportion fell precipitously, hitting a low of below 5% just after the passage of the 1965 act. But with the removal of the quota system and the beginnings of large-scale Mexican immigration, the proportion has increased every decade since, reaching 12.9% in 2010.

Rhetoric began shifting once again. By the beginning of the 1970s, discussion of immigrant contributions to American society had faded as the former “new wave” immigrants became fully accepted Americans. But with the arrival of another wave of millions of hopeful new Americans, tan-skinned and foreign-tongued, alarm bells started sounding once more. In a study of language used by the New York Times, Washington Post, Wall Street Journal, and Los Angeles Times, Douglas Massey and Karen Pren found an increase in negative metaphors describing Mexican immigration. They searched for pairings between the words “undocumented,” “illegal,” or “unauthorized” and “Mexico” or “Mexican immigrants,” and then examined the pairings of those terms with the words “crisis,” “flood,” or “invasion.” They discovered that “the use of the negative metaphors to describe Mexican immigration was virtually non-existent in 1965, at least in major newspapers, but thereafter rose steadily, slowly at first and then rapidly during the 1970’s to reach a peak in the late 1970’s, roughly at the same time illegal migration itself peaked” (Chavez 36). The researchers attributed this jump to politicians and media outlets that began to see the political and viewership value in portraying a border and country under siege (Chavez 36).

Setting aside the overt pseudoscientific racism, one does not have to look hard to find rhetoric mirroring that of the 1920s. In a 2004 article in Foreign Policy, Samuel P. Huntington wrote that “unlike past immigrant groups, Mexicans and other Latinos have not assimilated into mainstream U.S. culture, forming instead their own political and linguistic enclaves – from Los Angeles to Miami – and rejecting the Anglo-Protestant values that built the American dream” (Chavez 24). Here Huntington unfavorably compares the modern wave of immigrants to the now-old “new wave” of European immigrants, yet he himself is parroting the views of the restrictionist congressmen who, 80 years prior, cited foreign-language newspapers as evidence that those new Europeans were failing to assimilate.

Conservative politician Pat Buchanan has been well known for proclaiming the perils of Mexican immigration. In his book The Death of the West, he writes, “Unlike the immigrants of old…[millions] of [Mexicans] have no desire to learn English or become citizens…Rather than assimilate, they create little Tijuanas in U.S. cities…With their own radio and TV stations, newspapers, films, and magazines, the Mexican Americans are creating a Hispanic culture separate and apart from America’s larger culture. They are becoming a nation within a nation (pp. 125-126)” (Chavez 39). Again we see the same notion of “insoluble elements” that refuse to assimilate to the American way of life. Again, holding onto any aspect of one’s lifelong heritage when settling in America is portrayed as a threat to society. Any notion of contributionism or appreciation for what other cultures bring is absent, as is any historical perspective on the tired argument that assimilation is failing.

Beyond merely showing that the rhetoric has not changed, what do the statistics say? On the claim that these new immigrants are not learning English, the numbers show quite the opposite. In a 2006 survey in Orange County by the Center for Research on Latinos in a Global Society (CRLGS), researchers found that among second-generation Mexican Americans only one-fifth used all or mostly Spanish at home, while 50.4% used all or mostly English. By the third generation, all-or-mostly-English speakers were an overwhelming majority, and bilingualism in the home had fallen to only 7.6% (Chavez 60). These results are mirrored by Los Angeles and national surveys, the latter reporting 88% English fluency in the second generation. The move toward English happens even more rapidly outside the home: 71.4% of Orange County second-generation immigrants report speaking all or mostly English with their friends, a number that rises to 85.7% in the third generation. The shares of Latinos speaking all or mostly English at work are three out of four and four out of five for the second and third generations, respectively (Chavez 60-61).

Another common claim made about immigrants is that they commit more crime. It was made by 2016 presidential candidate Donald Trump, specifically about the U.S.-Mexican border, to much controversy and outrage (Walker). In a study drawing on American Community Survey data, the incarceration rate of immigrant males was 1.6 percent, as opposed to 3.3 percent for the native-born population. One proposed explanation is that immigrants avoid criminal behavior out of fear of being deported. Most incarcerated immigrants (65%, to be exact) have been arrested for immigration-related offenses rather than violent ones (AIC). And while Mexico’s murder rate is indeed much higher than that of the United States, at 218.49 versus 42.01 per million (NationMaster), the offenders and the people crossing the border are not one and the same.

As was stated in the section on the theory of immigration, it is important that migrants assimilate, and one indicator of assimilation is English proficiency. It is true that only 35% of first-generation Hispanic immigrants say they can speak English pretty well or very well; however, 91% of second-generation and 97% of third-generation immigrants say the same. Moreover, 89% of Hispanics say they believe they need to learn English to succeed in the United States (Anderson). In a Pew Research Center poll, 60% of Hispanic people said they could speak English “very well” or “pretty well” (Hispanic Trends Project Poll Database). While this may be a little slower than Tancredo’s desire for a completely English-speaking populace, these statistics do indicate an impetus toward assimilation within the Hispanic community.

Moreover, educational and occupational aspirations indicate that immigrants adopt the values of their new homeland. In terms of achieving success in the United States, 52% of Hispanic people say it is “extremely important” that their children earn a college degree, and 34% say it is “very important.” By contrast, only 34% of white people say it is “extremely important” and 32% say it is “very important” (Pew Research Center – “Hispanic, Black parents see college degree as key for children’s success”). This suggests that Hispanic people very much want to become part of mainstream America and see their children hold important jobs, rather than remaining complacent within their own group. That being said, there is some evidence that “an immigrant from Mexico will therefore likely live around and work with both U.S.-born citizens of Mexican ancestry and other Mexican immigrants” (Desipio and de la Garza 117), but the same is true of most U.S.-born ethnic groups. Lastly, there is an indication that Hispanic people are patriotic and grateful for their experience in the United States: 87% say the opportunity to get ahead is better in the U.S. than in their home country, and 72% say the U.S. is better for raising children (Pew Research Center – “When Labels Don’t Fit”). Assimilation, whether achieved or not, is clearly part of the Hispanic American mindset.

On the subject of Muslims in the United States, similar results attest to their allegiance to the country. Around half of Muslims believe that those coming to the United States should try to assimilate rather than remain apart, and 63% believe that being a devout Muslim does not contradict living in modern society. Only 47% think of themselves as Muslim first, before thinking of themselves as American, and 51% say they are “very concerned about Islamic extremism in the world these days.” It is worth noting the differences between the Muslim populations of the United States and Western Europe. In the U.S., Muslims are only 2% more likely to be low-income than the general population, as opposed to as much as 23% in some European countries, and they are much likelier to be assimilated into the culture and condemnatory of terrorism than their European counterparts. While there has been heated debate about the role of Muslim immigrants in Europe, the same arguments cannot simply be transferred to those in the United States, who tend to come from a much different class (Pew Research Center – “Muslim Americans”).

Another common fear about immigration, or more specifically about Muslim immigrants, is the connection between Islam and terrorism. In the wake of the San Bernardino shooting, committed by two radicalized Muslims, as well as the earlier terrorist attack in Paris, Donald Trump called for a “total and complete shutdown of Muslims entering the U.S.” in another highly controversial statement (Johnson). While terrorists certainly make up a minuscule portion of the Muslim population, it has been debated whether such extreme measures would be worth the safety of preventing another such event on U.S. soil. To a certain extent, it is true that the rate of terrorism among Muslims is much greater than among adherents of any other religion, with 320 individuals charged with “jihadist” terrorism out of 502 people convicted of terrorism since 9/11 (New America). For a group making up 0.9% of the population, that is a rather alarming share (Pew Research Center, “America’s Changing Religious Landscape”). Additionally, only 148 of these jihadists were natural-born citizens; the rest were immigrants, with 34 of unknown origin. However, only 45 people have died from jihadist terrorism in that period, which suggests that existing preventive measures are usually sufficient and leaves additional safeguards, such as blocking the immigration of an entire religion, a matter of debate (New America).

If the cultural argument appears fruitless for modern restrictionists, what about economics? Fiscal arguments against current or additional immigration flows rest on two central claims: that immigrants overstress American welfare programs and that they lower the wages of native-born workers. A 2015 study by the Center for Immigration Studies, which examined welfare use by native and immigrant households, generated a media cycle supporting the first claim. The study, covering both legal and illegal immigrants, found that 51% of homes headed by an immigrant made use of at least one welfare program, compared to 30% of homes headed by a native-born citizen. Even more apparently damning for the argument against Latino immigration, the number rises to 71% for immigrants from Mexico and Central America (CIS).

On the topic of illegal immigration specifically, one argument beyond the usual ones against immigration as a whole concerns unauthorized immigrants’ use of public services despite paying less in taxes than documented immigrants. One of the main attempts to curb this was Proposition 187 in California, which “called for the exclusion of undocumented immigrants from public social services and health care services… public elementary and secondary schools and public colleges and universities” (Desipio and de la Garza 114). The Supreme Court, however, had found it “unconstitutional for the states to deny education to the undocumented,” though the bars on social welfare programs remained (115). As to whether illegal immigrants pay taxes, the evidence suggests that they do, often through consumption and property taxes. That said, the Heritage Foundation estimates that undocumented immigrant households pay $10,334 in taxes while receiving $24,721 in benefits (mostly through public education), a deficit of $14,387 per household; the average household, by contrast, generates a surplus of $29,250 (Rector and Richwine). To a certain extent, then, complaints that illegal immigrants receive more than they give are true, but this is to be expected of low-income families generally.

These numbers appear to be strong fuel for the notion that immigrants drain the nation of resources; however, the study bears some context. First-generation Hispanic immigrants typically have larger families, increasing the likelihood that at least one family member is on at least one form of welfare. As for earning income, it is true that first-generation Mexican immigrants overall have lower skill levels and less education than native-born citizens, producing a divide in the kinds of jobs they hold. Comparing Mexican immigrants with natives by job category, the greatest disparities are 31% versus 17% in service occupations, 25% versus 8% in natural resources, construction, and maintenance occupations, and only 8% of Hispanic immigrants compared to 38% of all natives in management, business, science, and arts occupations (Migration Policy). It makes sense that lower-skilled workers would hold jobs that earn less money, and thus would more often make use of a welfare system designed to assist poor and working-class families with children.

What the CIS study does not examine, however, is what happens in the second generation. An extensive study by the Pew Research Center compared first- and second-generation immigrant groups and found encouraging results: “Adults in the second generation are doing better than those in the first generation in median household income ($58,000 versus $46,000); college degrees (36% versus 29%); and homeownership (64% versus 51%). They are less likely to be in poverty (11% versus 18%) and less likely to have not finished high school (10% versus 28%). Most of these favorable comparisons hold up not just in the aggregate but also within each racial/ethnic subgroup (e.g., second-generation Hispanics do better than first-generation Hispanics; second-generation whites do better than first-generation whites, and so on)” (Pew).

As many second-generation Hispanics are only now entering adulthood and the workforce, the study is explicit that it does not outright prove upward mobility between immigrants and their children; drawing such conclusions would require a much longer observation period than is possible today. However, the same 2006 CRLGS survey referenced earlier offers cause for further optimism. The proportion of families with incomes of $35,000 or more jumped from 25.8% among the first generation (those who migrated at age 15 or older) to 71.1% in the second generation (Chavez 55). The Pew study also found that second-generation Hispanic immigrants were 20% more likely than the full US adult population to agree with the statement “most people can get ahead if they are willing to work hard” (Pew).

As for the effect of increased immigration on native workers’ wages, a 2016 National Bureau of Economic Research working paper examined the effects of immigration on today’s economy relative to the early 20th century. The authors found that “it appears that historical estimates of the effect of immigrant arrivals on native wages are larger than comparable estimates for today, which may be due to the fact that, in the past, immigrants and natives held a similar set of skills” (NBER). At the turn of the 20th century, immigrants competed more directly with natives for factory and manufacturing jobs; now, immigrants fill lower-level jobs where the competition is less direct. This does not mean they have no negative effect on the low-skill native workers with whom they do compete, but by historical standards they appear to be affecting American wages less.

Another common concern is the effect immigrants have on the job market. One argument holds that immigrants take the jobs of natives, specifically in the area of unskilled labor. During the economic recession that began in 2008, there were 1.7 million unemployed natives without a high school degree, while around 2.3 million recent immigrants without a high school degree still held jobs. This comparison, however, overlooks several other factors in play. Most of these unskilled natives were aged 16-24, meaning they had very little work experience, whereas the unskilled immigrants had many years of work experience. Additionally, most unskilled migrants worked in construction and extraction, whereas the unemployed natives were new entrants to the labor market. Lastly, the two groups often live in different areas, with migrants concentrated in the Pacific region and the unemployed natives in the East North Central and South Atlantic regions. In this light, migrants can hardly be blamed for taking the jobs of natives, since the two groups tend to seek different kinds of jobs (Paral & Associates).

In addition, it is worth noting that immigrants do not come to the country to directly outwork the native population; they come to fill a vacuum in the supply of labor. When the Great Recession hit in 2008, the result was a net loss of immigrants: an estimated 1 million Mexicans and their families, including US-born children, left the country between 2009 and 2014, while an estimated 870,000 Mexican nationals arrived in the same period. The reversal is attributed to the contraction and subsequent slow growth of the U.S. economy, and it marked the first decrease in the Mexican population of the United States in over half a century (Pew, Gonzalez-Barrera). When jobs are open, immigrants come to fill them; when they are not, immigrants stop coming and some even leave. Immigrants are not flocking over the border to drain America of freebies; they are responding to labor demand for low-skill positions.

Of course, a fully detailed, in-depth analysis of the effects of immigration on the American economy is outside the reach of this chapter. However, arguing against immigration solely on the economic grounds that immigrants come here to abuse American welfare ignores their contribution to the labor force and the way their children and grandchildren achieve a higher quality of life and become the taxpayers of tomorrow. History tells us with overwhelming authority that immigrants do not seclude themselves from larger American life, but in fact join and contribute to an ongoing cycle of American cultural change and growth. When immigration is viewed carefully, with historical context, century-old parallels, and emerging statistics in mind, the perspective can shift: rather than a narrative of an endless stream of Mexicans flooding into the country to drain resources, steal jobs, reconquer the Southwest, and corrupt our culture, one can see Hispanic immigrants as filling a demand for labor while seeking a better life for themselves and their children, just as immigrants did one hundred years ago.

The economics of immigration does bear considerable study and thought, but there is nothing to suggest that this contemporary “new wave” of Hispanics cannot follow the path of the old “new wave” of Southern and Eastern Europeans: once maligned as threats to our identity, economy, and security, they grew to be accepted as contributors to an evolving American culture and society. History shows us how American-born citizens react to new immigrants, and it shows us how overblown those fears have looked fifty years later. What is to say these immigrants are any different?

References

United States. Department of State. Office of the Historian. The Immigration and Nationality Act of 1952. Bureau of Public Affairs, n.d. Web. 29 Feb. 2016.

United States. House of Representatives. Office of the Historian. The Immigration Act of 1924. House of Representatives, n.d. Web. 29 Feb. 2016.

United States. Department of State. Office of the Historian. Chinese Immigration and the Chinese Exclusion Acts. Bureau of Public Affairs, n.d. Web. 29 Feb. 2016.

Tharoor, Ishaan. “What Americans Thought of Jewish Refugees on the Eve of World War II.” The Washington Post. Web. 27 Nov. 2015.

Desilver, Drew. “U.S. Public Seldom Has Welcomed Refugees into Country.” Pew Research Center, 19 Nov. 2015. Web. 29 Feb. 2016.

“Immigration Reform and Control Act of 1986 (IRCA).” USCIS. Department of Homeland Security, n.d. Web. 1 Mar. 2016.

Zong, Jie, and Jeanne Batalova. “Mexican Immigrants in the United States.” Migrationpolicy.org, 2 Oct. 2014. Web. 1 Mar. 2016.

“Selected U.S. Immigration Legislation and Executive Actions, 1790-2014.” Pew Research Center Hispanic Trends Project, 27 Sept. 2015. Web. 1 Mar. 2016.

“1996 Illegal Immigration Reform and Immigrant Responsibility Act.” U.S. Immigration Legislation Online. University of Washington-Bothell Library, n.d. Web. 1 Mar. 2016.

Camarota, Steven. “Welfare Use by Immigrant and Native Households.” Center for Immigration Studies, Sept. 2015. Web. 3 Mar. 2016.

“Second-Generation Americans: A Portrait of the Adult Children of Immigrants.” Pew Research Center Social & Demographic Trends Project, 7 Feb. 2013. Web. 3 Mar. 2016.

Abramitzky, Ran, and Leah Platt Boustan. “Immigration in American Economic History.” National Bureau of Economic Research, Jan. 2016. Web. 4 Mar. 2016.

Gonzalez-Barrera, Ana. “More Mexicans Leaving Than Coming to the U.S.” Pew Research Center Hispanic Trends Project, 19 Nov. 2015. Web. 4 Mar. 2016.

Fleegler, Robert L. Ellis Island Nation: Immigration Policy and American Identity in the Twentieth Century. Philadelphia: U of Pennsylvania P, 2013. Print.

Chavez, Leo R. The Latino Threat: Constructing Immigrants, Citizens, and the Nation. Stanford, CA: Stanford UP, 2013. Print.

Anderson, Stuart. Immigration. Santa Barbara, CA: Greenwood, 2010. Print.

NationMaster. “All Countries Compared for Crime > Violent Crime > Murder Rate per Million People.” NationMaster.com, n.d. Web. 9 Mar. 2016.

Anderson, Stuart. “Immigration and English.” Immigration Reform Bulletin, 10 Oct. 2010. Web. 9 Mar. 2016.

DeSipio, Louis, and Rodolfo de la Garza. Making Americans, Remaking America: Immigration and Immigrant Policy. Boulder, CO: Westview, 1998. Print.

Ewing, Walter A., Daniel E. Martínez, and Rubén G. Rumbaut. “The Criminalization of Immigration in the United States.” American Immigration Council, 8 July 2015. Web. 9 Mar. 2016.

Johnson, Jenna. “Trump Calls for ‘Total and Complete Shutdown of Muslims Entering the United States.’” The Washington Post, 7 Dec. 2015. Web. 9 Mar. 2016.

New America. “Homegrown Extremism 2001-.” New America, n.d. Web. 9 Mar. 2016.

Paral, Rob, & Associates. The Disparity between Immigrant Workers and Unemployed Natives: Untying the Knot. American Immigration Council, 17 Sept. 2008. Web. 9 Mar. 2016.

Pew Research Center. “America’s Changing Religious Landscape.” Pew Research Center Religion & Public Life Project, 11 May 2015. Web. 9 Mar. 2016.

Pew Research Center. “Hispanic, Black Parents See College Degree as Key for Children’s Success.” Pew Research Center, 24 Feb. 2016. Web. 9 Mar. 2016.

Pew Research Center. “Muslim Americans: Middle Class and Mostly Mainstream.” Pew Research Center Social & Demographic Trends Project, 21 May 2007. Web. 9 Mar. 2016.

Pew Research Center. “Pew Hispanic Center National Survey of Latinos, May, 2013.” Pew Research Center Hispanic Trends Project, 26 July 2011. Web. 9 Mar. 2016.

Pew Research Center. “When Labels Don’t Fit: Hispanics and Their Views of Identity.” Pew Research Center Hispanic Trends Project, 3 Apr. 2012. Web. 9 Mar. 2016.

Rector, Robert, and Jason Richwine. “The Fiscal Cost of Unlawful Immigrants and Amnesty to the U.S. Taxpayer.” The Heritage Foundation, 6 May 2013. Web. 9 Mar. 2016.

Tancredo, Thomas G. In Mortal Danger: The Battle for America’s Border and Security. Nashville, TN: WND, 2006. Print.

Walker, Hunter. “Donald Trump Just Released an Epic Statement Raging against Mexican Immigrants and ‘Disease.’” Business Insider, 6 July 2015. Web. 9 Mar. 2016.

Wellman, Christopher Heath, and Phillip Cole. Debating the Ethics of Immigration: Is There a Right to Exclude? Oxford: Oxford UP, 2011. Print.

American Racism

Hannah Kloppenburg: Sophomore, Communications and Spanish

James Whisenhunt: Junior, Psychology / Sociology and Gender Studies

Authors’ note

For the purposes of brevity and clarity, we will focus primarily on the challenges faced by African Americans/black Americans. This is not only because the majority of the literature on American racism focuses on this subject, but also because a combined focus on other, less prominent minority groups would raise a number of nuanced topics (such as the model minority stereotype, recent waves of Islamophobia, and immigration issues) that are beyond the scope of this paper.

Introduction and History

At the height of the civil rights movement in the early 1960s, Martin Luther King Jr. revealed his vision for a more inclusive and racially sensitive United States, saying, “I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin, but by the content of their character.” Less than a year later, Dr. King saw the fruits of his labor with the enactment of the Civil Rights Act of 1964, which aimed to make discrimination based on race, skin color, religion, sex, or national origin illegal.

The year 1964 was not the last time racial discrimination stood at the forefront of the American psyche, however. The Civil Rights Act of 1964 has been revised twice, in 1972 and 1991, to broaden its scope and harmonize it with Supreme Court decisions of the late 1980s. There have also been public outcries over incidents of police brutality against African Americans. In 1991, video of Los Angeles police officers beating Rodney King prompted national charges of racial discrimination. University of Missouri-Kansas City law professor Doug Linder outlines the incident in “The Trials of Los Angeles Police Officers in Connection with the Beating of Rodney King.” Officers attempted to stop King, who was under the influence of alcohol, for reckless driving, which escalated into a high-speed chase (Linder). When attempts to incapacitate King, including “a swarming maneuver” by officers and two Taser shots, failed, Officers Lawrence Powell and Timothy Wind began beating King with metal batons while other officers held him down and handcuffed him (Linder). After the incident, three of the four officers involved were acquitted of all charges, leading to riots in Los Angeles within two hours of the announcement of the verdict (Linder).

More recently, the Black Lives Matter movement has sought to bring to light instances of violence and police brutality in the United States, such as the shootings of Trayvon Martin and Eric Harris. Both of these cases, as well as others the movement has brought to public attention, have been framed by members of the movement as instances of racial injustice and prejudice against black Americans. Though each case involves its own circumstances and degrees of ambiguity, the movement’s general aim is to raise awareness of racial inequalities in police treatment.

Of course, there were racial tensions in the United States long before the 1960s. In “Slavery, Race and Ideology in the United States of America,” Barbara Fields posits that the origins of racial tension in the United States lay in a long process set in motion by the indentured servitude of Afro-Americans in 17th-century Virginia. Alongside Afro-Americans, many Euro-Americans were indentured servants on tobacco farms. These Euro-American servants, however, were perceived more favorably because their ancestors had fought in Europe for the rights and respect of poor Europeans, a legacy the Afro-Americans lacked, since neither they nor their ancestors had spent any time in Europe. Though poor Africans may have garnered certain rights in Africa, those rights held no significance for the Euro-Americans, whereas the rights of poor Europeans were respected by other Europeans, and by Euro-Americans by extension.

In the 1660s, however, a combination of factors led to an environment in which it was a rational decision for Euro-Americans to enslave Afro-Americans for life. The reasons for this development include the falling price of tobacco causing fewer Europeans to travel to Virginia, once-indentured Euro-Americans demanding freedom and land, and increased life expectancy making enslaving Africans for life an economically viable decision.

The oppression and difference in social status between Euro-Americans and Afro-Americans remained for decades, eventually becoming institutionalized. It became an assumption in the Euro-American community that this difference was part of a natural order, becoming more and more ingrained in the collective white mind as time went on. The state of international communications at the time contributed to this, as there wasn’t much exposure to other social systems. Thus, if white children are consistently told by their slave-owner parents that black people are below them, and there are no counterexamples to contrast that with, the children are going to internalize it and pass that belief on to their children. Of course, it didn’t help that Afro-Americans didn’t have many rights or high social standing to start with.

Something else that helped perpetuate this inequality was the religious climate of the nation at the time. Larry Morrison notes in “The Religious Defense of American Slavery Before 1830” that biblical scholars Frederick Dalcho and Thomas Newton saw slavery as the manifestation of the prophecy of Noah (Morrison). In the book of Genesis, Noah places a curse on Canaan due to the actions of Canaan’s father, Ham, saying, “Cursed be Canaan! The lowest of slaves will he be to his brothers,” and adding, “Praise be to the Lord, the God of Shem! May Canaan be the slave of Shem. May God extend Japheth’s territory; may Japheth live in the tents of Shem, and may Canaan be the slave of Japheth” (Genesis 9:25-27). Thus, according to Dalcho, “the descendents of Canaan, the Africans, were to be… ‘the lowest state of servitude, slaves’ to the descendents of Shem and Japeth, the present day Jews and Christians” (Morrison 18).

This led to the development of what Fields calls an ideology of racism. Fields notes that the concept of ideology is “best understood as the descriptive vocabulary of day-to-day existence, through which people make rough sense of the social reality that they live and create from day to day” (Fields 110). Ideologies are not inherently correct or accurate; they simply give people justifications for their day-to-day actions (Fields). Racial ideology justified the enslavement of black people in a nation that supposedly prided itself on life, liberty, and the pursuit of happiness by constructing Afro-Americans as a separate and inferior race.

The need to create an ideology of racism to justify discriminatory action arises from the lack of biological differences sufficient to justify race. As Fields says, “belief in the biological reality of race outranks even astrology, the superstition closest to it in the competition for dupes among the ostensibly educated” (Fields 96). Many scholars have nonetheless argued for a biological basis of race and racial differences. A recent, and somewhat extreme, example comes from Richard Herrnstein and Charles Murray’s book The Bell Curve, in which the authors conclude that genetics, as well as environment, has some influence on intelligence differences between white and black students. They also argue that these IQ differences affect life outcomes, such as income and age of childbearing. An argument from the other end of the spectrum is made by Arthur Jensen, whose study “Ethnicity and Scholastic Achievement” finds that “there is no evidence in these data that any differentially discriminative forces in the school, if such exist, differentially affect the scholastic performance of children according to their ethnic membership” (Jensen 668).

Audrey and Brian Smedley note in “Race as Biology Is Fiction, Racism as a Social Problem Is Real” that the interest of these researchers, whatever their stance on the topic, is misplaced, since “the consensus among most scholars in fields such as evolutionary biology, anthropology, and other disciplines is that racial distinctions… are not genetically discrete, are not reliably measured, and are not scientifically meaningful” (Smedley & Smedley 16). They also mention, in agreement with Fields, that the word “race” was used as a general categorization without particular connotation, like the words “breed” or “type,” until the early 18th century, when it became a way to differentiate between social groups (Smedley & Smedley 19).

Audrey Smedley returns to the topic in “On the Confusion of ‘Race’ with Biophysical Diversity.” She again notes the ideological foundation of race, but also points out that the groups defined by racism are vague and nondescript. Many Europeans, Smedley notes, come from different nations and cultures, yet all are put together into the overarching “white” category, considered to be above everybody in the “black” category, no matter what culture those in the latter belong to or what their national heritage may be.

Fields also addresses the broadness of these categories, saying that “black people,” regardless of descent, are considered a race separate from and below “white people,” the nation’s standard. She notes that the term “race relations” refers only to the broadest groups, such as “white” and “black”; it is not used for groups within those large groups, such as the English and Irish under the “white” umbrella, even though English arguments for superiority over the Irish resemble the arguments white Americans make against black Americans (Fields 99).

Defining Racism

A working definition of racism must come with the understanding that racism is multifaceted, with explicit and implicit components. Policies like segregation and Jim Crow laws were very explicit in their racially charged intent. Thanks to the Civil Rights movement and the decision in Brown v. Board of Education, among other cases, many explicit forms of racism have gone away. This has left some people concluding that racism no longer exists, as it is no longer visible in many people’s everyday lives. In 2016, black ESPN commentator Stephen A. Smith told University of South Alabama students that racism doesn’t exist for them because they are “not denied the opportunity to eat at restaurants, or to enter night clubs, or to patronize businesses” (Chasmar). But just because racism is not typically as extreme or explicit as it used to be does not mean it has gone away for anyone; the current social manifestations and psychological effects of racism are covered in later sections of this chapter.

A recent area of study for social psychology has been implicit associations and the idea of unconscious racism. In “Does Unconscious Racism Exist?,” Lincoln Quillian discusses two of the most common methods used to assess implicit feelings and attitudes: rapid priming and the Implicit Association Test (IAT). Priming displays stimuli too quickly for a person to be consciously aware of having seen them, activating unconscious schemas related to the primed concept. IATs work by comparing how long it takes someone to pair different sets of words with different associations (such as white = good, black = bad vs. black = good, white = bad). The reader can take a racism IAT at http://www.implicit.harvard.edu. Quillian finds that most white people, and some black people, have an implicit preference for white people over black people. Quillian also points to research from Patricia Devine showing that white Americans tend to view ambiguously threatening acts, such as someone refusing to pay rent until their apartment is repainted, as more threatening when primed to think of black people rather than white people, reflecting implicit stereotypes of black people as violent and criminal (Devine).
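For the technically inclined reader, the arithmetic behind a common IAT scoring approach can be sketched in a few lines of Python. The sketch below is illustrative only, not the instrument’s actual code, and the reaction times are entirely hypothetical: the score divides the difference in mean response times between the two pairing conditions by the standard deviation of all trials, so larger positive values indicate slower responses when “black” is paired with “good.”

import statistics

def iat_d_score(compatible_ms, incompatible_ms):
    # Difference in mean reaction time between the two pairing
    # conditions, divided by the standard deviation of all trials
    # combined (a simplified version of the widely used D score).
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return (statistics.mean(incompatible_ms)
            - statistics.mean(compatible_ms)) / pooled_sd

# Hypothetical reaction times in milliseconds for one participant.
compatible = [650, 700, 620, 680, 710]    # white = good / black = bad
incompatible = [820, 900, 780, 850, 870]  # black = good / white = bad
print(round(iat_d_score(compatible, incompatible), 2))  # prints 1.74

On these hypothetical numbers, the participant would show the implicit preference for white people that Quillian reports in most white respondents.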

Now that both the explicit and implicit aspects of racism have been touched upon, a working definition of racism must capture both, as well as the large variety of actions that can fall under it. Thus, for the purposes of this chapter, our definition of racism is “any thought, feeling, or behavior from an individual or group that, either implicitly or explicitly, asserts or perpetuates the idea that there are qualitative differences in value between people or groups of different races, wholly or in substantial part because of the races of those involved.”

 

Mental Health and Psychology of Racism

Institutional racial differences have been shown to affect the mental health of minorities in the United States. Data from the US Department of Health and Human Services shows that, from 2009-2010, non-Hispanic black people were more likely than non-Hispanic white people to experience serious psychological distress (DHHS). Non-Hispanic black people were also more likely to report feeling sad and feeling that everything is an effort, and black high school students were much more likely to attempt suicide, though non-Hispanic white people were more likely to die by suicide (DHHS).

Despite these concerns, and perhaps for the same reasons that these concerns exist, the Agency for Healthcare Research and Quality found in their 2010 National Healthcare Disparities Report that non-Hispanic black people are less likely than non-Hispanic white people to receive mental health counseling, prescription medications, and treatment after a major depressive episode (AHRQ).

A 2012 NPR interview with Dr. William Lawson of Howard University shows that the fault might not be entirely systemic, however. Though Lawson does note a lack of access to mental health services, a lack of mental health education also seems to be a factor. “[African Americans] may not be aware of the symptoms of many mental disorders,” Lawson notes, “or they may believe that to be mentally ill is a sign of weakness or a sign of a character fault” (NPR).

Racism also has psychological manifestations in the minds of white people. In “The Relationship Between Racism and Racial Identity Among White Americans: An Exploratory Investigation,” Robert Carter finds that racial identity attitudes, “[white people’s] attitudes about their own racial identity that correspond to attitudes about Blacks as a racial group,” are a significant predictor of racism in white Americans (Carter 47). Interestingly, Carter also finds gender differences among the identity types: men scored higher on the disintegration subscale (a lower-level attitude characterized by dissonance between personal and societal beliefs), while women scored higher on the pseudo-independence and autonomy subscales (higher-level attitudes characterized, respectively, by intellectualizing about race relations and by respect for racial differences). Carter interprets these findings to argue that men tend to exhibit racism at all levels of racial identity, while women tend to exhibit racism more when their awareness of race is low (Carter).

Another aspect to consider when looking at racism in the minds of white people is the idea of terror management. Terror Management Theory holds that humans are always coping with a terror that comes from the desire to live coupled with awareness of the inevitability of death. In “Sympathy for the Devil: Evidence that Reminding Whites of their Mortality Promotes More Favorable Reactions to White Racists,” Jeff Greenberg, Jeff Schimel, Andy Martens, Sheldon Solomon, and Tom Pyszczynski find that white participants who were reminded of their death considered white pride less racist than white participants who were not. Greenberg et al. also found that white people for whom mortality was salient rated explicitly racist acts as less racist than white people who were only reminded of dental pain. These studies suggest that people gravitate toward, and agree more with, people similar to them when threatened, because similarity promotes a sense of safety. There are many possible explanations for this, from the behaviorist possibility that negative associations with black people are strengthened by reinforcements and associations (such as friends laughing at a racist joke or stereotypical portrayals of black people in media, respectively) to the evolutionary possibility that preferring people similar to ourselves was adaptive for human survival.

 

Social Influence of Racism

Racism has had a profound influence on American social structures, sometimes completely overt but often subtle. Evidence of this influence is clear in the previous discussion of important historical events. However, the effects of racism are not always as obvious as blatant segregation or overtly racist insults.

Race, as previously discussed, is a socially constructed concept. For this reason, the social effects of racism are often the most poignant and damaging to people of color. What is most striking about the social implications of racism is that racist ideology has the ability to permeate nearly every corner and aspect of society, ranging from the more obvious issues of law enforcement policies and police brutality to perceptions of people of color in the media and conflicts in the workplace.

In the modern age, a popular perception is that racism no longer exists in America, at least not to the extent that it did in the 1960s, during the age of Jim Crow, lynchings, and segregated water fountains. This perception is misguided. In “American Drug Laws: The New Jim Crow,” Glasser notes that although we like to congratulate ourselves on our victories since the 1960s, we still frequently discriminate based on race. This perception is also a component of Bonilla-Silva’s “colorblindness” theory: the minimization of racism, “an explanation that discrimination is no longer a factor affecting People of Color because things are better now than they were in the past” (McCoy 228).

Even the idea that things are “better” today than they were in the past is subjective to a certain extent. Yes, the elimination of Jim Crow laws and the creation of a number of laws intended to prevent discrimination in many different social sectors have made great strides toward addressing the harms caused by racist ideology. However, other forms of discrimination remain alive and well. Modern discrimination is not always conscious or obvious, and is largely composed of microaggressions and structural inequalities that have grown out of the more overt racist acts of past years.

In a study of microaggressions in higher education workplaces, Young and Anderson define the term “microaggression” as “brief or commonplace daily verbal, behavioral, or environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial slights and insults toward people of color” (61). They go on to explain that because of microaggressions, “the expression of racism emerges subtly through words and actions, invisibly aggressing against, and marginalizing minorities” (62). This certainly falls under our working definition of racism as given above: “any thought, feeling, or behavior from an individual or group that, either implicitly or explicitly, asserts or perpetuates the idea that there are qualitative differences in value between people or groups of different races.”

Remaining structural inequalities will be discussed in more detail below; however, this category encompasses some of the more visible effects of American racism such as discrimination and disparities in law enforcement, education, and income.

In this sense, modern racism can be difficult for those not directly affected by it to pick up on. Note Young and Anderson’s image of micro-aggressive words and actions as an invisible force that attacks and marginalizes people of color (62). Additionally, structural inequalities can be difficult to pin down. Differences in manifestations of racist ideology in the past and in the present do not necessarily signify that its impact on people of color has lessened. This combination of microaggressions, long-standing structural inequalities, and subtle prejudices can and does lead to real, significant impacts for people of color in the U.S.

One of the most easily quantifiable social impacts of structural racism is disparity in income and job opportunities. As of 2014, the median yearly household income for white Americans was $60,256, according to a report by the US Census Bureau. It was $42,491 for Hispanic Americans and $35,398 for African Americans, while Asian Americans earned a notably higher median income of $74,297. These proportional differences in income have stayed fairly static since 1967, even as nominal incomes have risen with inflation.

The same US Census Bureau report shows that the poverty rate for non-Hispanic white Americans in 2014 was 10.1%. The rate for African Americans was 26.2%, for Asian Americans 12%, and for Hispanic Americans 23.6%, representing 13.1 million Hispanic Americans living in poverty. With the exception of Asian American families, American people of color are more likely to have lower incomes than white Americans, and all minorities surveyed are more likely to be living in poverty, particularly African Americans. This is despite the fact that, historically, African American women have stayed in the workforce for longer portions of their lives than white women (Income Disparities 1284).

Disparities in income and poverty are not produced in a void. Rather, they are likely the result of race-based wage gaps and disparities in job opportunities. Admittedly, the American corporate world has never been more diverse, having seen a marked increase in the past few decades in workers who are people of color, particularly black women (Cornileus 444). One African American woman and seven African American men to date have become CEOs of Fortune 500 companies (Cornileus 445). However, workers of color still face challenges in hiring, job access, and unequal pay.

As touched upon previously in the U.S. Census data concerning income, there are still significant gaps in wages between black and white Americans. These occur despite equal qualifications: black men earn nearly one third less than white men in professional and specialty jobs, and black college graduates earn about $5,000 less than equally qualified white graduates (Hosoda et al. 146).

When employers are given the choice, black applicants are not hired nearly as frequently as white applicants. “In a now-classic study, sociologists discovered that when white and black high school graduates apply for a low-wage job, a white applicant is more than twice as likely to get called back for an interview than a black applicant” (Marsh 325). The researchers also found that white applicants with an 18-month incarceration on their record for cocaine abuse were called back more frequently than black applicants with no criminal record (Marsh 325).

This puzzling phenomenon may be due to the fact that employers often envision the “ideal” candidate when searching for new hires, and this envisioning process predicts hiring decisions (Brown-Iannuzzi et al. 662). The catch? The ideal candidate is often white. Brown-Iannuzzi et al. argue that this phenomenon is often unrelated to explicit prejudice and can affect even low-prejudice individuals (661). Additionally, as “soft skills” such as interaction and motivation have become more important to employers in many sectors of the economy, these criteria have often made it difficult for black men to secure jobs. Cornileus argues that “many black men — although certainly not all — are more verbally direct, expressive, and assertive than white men, who provide the standard against which black male behavior is measured” (146). These unconscious prejudices and structural inequalities lead to disparities that contribute to the poverty rates in the U.S. Census report discussed above.

In addition to disparities in income and jobs, structural racism is apparent in lawmaking and law enforcement. In recent months, there has been much debate over the relationship between race and law enforcement; in particular, its connections to instances of police brutality.

Though racial biases in lawmaking and law enforcement are often unconscious, they are embedded in how laws are written and carried out. For example, Glasser writes that federal drug laws and their enforcement are skewed toward incriminating and convicting people of color, as are the criteria for pulling cars over. In Maryland, 17% of drivers along a stretch of I-95 are African American, yet 73% of the drivers pulled over and subjected to extensive searches are African American (Glasser 704). What’s more, 20% of all drivers along that stretch are minorities, but 80% of those pulled over are minorities. “Nor is Maryland an isolated incident,” Glasser writes, going on to list several other states with similar statistics (705).
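The scale of these figures can be made concrete with simple arithmetic: dividing each group’s share of stops by its share of drivers yields a ratio that would equal 1.0 if stops were proportional to presence on the road. The short sketch below uses only the numbers Glasser reports:

def disparity_ratio(share_of_stops, share_of_drivers):
    # A group's share of police stops divided by its share of drivers;
    # 1.0 would mean stops proportional to presence on the road.
    return share_of_stops / share_of_drivers

print(round(disparity_ratio(0.73, 0.17), 1))  # African American drivers: 4.3
print(round(disparity_ratio(0.80, 0.20), 1))  # all minority drivers: 4.0

By this measure, African American drivers along that stretch of I-95 were stopped and searched at more than four times the rate their presence on the road would predict.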

This is due, according to Glasser, to the Drug Enforcement Administration’s “Operation Pipeline,” which established criteria for stopping cars on the highway. These criteria include: Is there an air freshener hanging on the mirror? Is there a bumper sticker from Jamaica? And, in particular, is the car expensive and the driver black? (707). Drug laws, Glasser claims, are just as oppressive and skewed against minorities as Jim Crow laws were in the past. And in the end, traffic stops are not even a very effective way to search for drugs (Glasser 708).

“Although recognized for years, widespread concern over racial, ethnic, and income-based disparities persists in the criminal justice system,” Griffin, Sloan, and Eldred write in a report on disparities in law enforcement (1368). Their research focuses exclusively on Driving While Intoxicated (DWI) arrests. DWI stop and arrest rates for both Black and Hispanic men are higher than their relative population percentages would predict (Griffin 1371). Disparities in lawmaking and enforcement such as these are cause for great concern among American minority communities.

Additionally, police brutality and mistreatment by law enforcement are some of the most controversial social issues facing people of color in America. Police brutality was at the forefront of American politics in the 1990s after the infamous Rodney King beating and the subsequent LA riots. The 2014 shooting of unarmed teenager Michael Brown by a Ferguson, MO police officer sparked heated protests in and around Ferguson. Police in riot gear, as well as the National Guard, were sent to address these protests, which resulted in some violent altercations and a great deal of national controversy (Luibrand). This case, along with several similar ones, inspired a fiery debate over police tactics and the relationship between law enforcement and American minority groups (Luibrand). Most famously, the Black Lives Matter movement arose from the events of August 2014. The movement, composed primarily of young black Americans, had a strong basis in social media outreach and called for change in how police deal with minorities. It also aimed to draw attention to modern systemic racism (Luibrand).

Differences in educational opportunity are also part of the infrastructure of modern American racism. Disparities in funding, teacher availability and willingness, and resources can affect the quality of a child’s public education. Though some funding for public education is distributed through the federal Department of Education, “states rely primarily on income and sales taxes to fund elementary and secondary education,” and property taxes are the primary source of local funding for public education (Atlas). Though differences in the quality of public education are technically due to socioeconomic differences between school districts, socioeconomic status and race are often correlated. With many people in primarily minority-occupied regions struggling to find job opportunities, earning relatively lower incomes, and consequently living in areas of less valuable property, public schools are often lacking in the resources, structure, and qualified teachers necessary for a quality education:

 

“Wealthier, property-rich localities have the ability to collect more in property taxes. Having more resources to draw from enables the district to keep tax rates low while still providing adequate funding to their local school districts. Poorer communities with less of a property tax base may have higher tax rates, but still raise less funding to support the local school district. This can often mean that children that live in low-income communities with the highest needs go to schools with the least resources, the least qualified teachers, and substandard school facilities” (Atlas).

 

This principle also applies, to a certain extent, at the state level. States use widely varying formulas for distributing public education funds, which usually attempt to take into account factors such as the number of students in a district, students living in poverty or with disabilities, and the number of students who do not speak English as a first language (Atlas). While this allows states to customize their education plans in an effort to equalize education, it also results in huge disparities between states, with the state share of total funding ranging from 29 percent to 82 percent (Atlas). For minorities living in poverty, hanging on to a job opportunity, or otherwise unable to move, this can mean reliance on a local schooling system whose quality is highly variable.
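A toy calculation illustrates the property-tax dynamic Atlas describes; the two districts and all figures below are hypothetical, invented solely for illustration:

def per_pupil_revenue(taxable_property_value, tax_rate, students):
    # Local revenue raised per student from the property tax base.
    return taxable_property_value * tax_rate / students

# Hypothetical districts of equal enrollment but unequal property wealth.
wealthy = per_pupil_revenue(2_000_000_000, 0.010, 5000)  # 1.0% tax rate
poorer = per_pupil_revenue(400_000_000, 0.015, 5000)     # 1.5% tax rate
print(wealthy, poorer)  # 4000.0 1200.0

Even with a tax rate 50 percent higher, the hypothetical poorer district raises less than a third as much per pupil, which is exactly the funding gap the block quote above describes.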

In addition, racist beliefs held by educators themselves (consciously or not) can impact the quality of education. According to a study of university teachers and mentors by McCoy, Winkle-Wagner, and Luedke, many educators take a “color-blind” approach to teaching, mentoring, or counseling students of color. While this often stems from a desire to treat these students fairly, it’s also indicative of a certain amount of ignorance or insensitivity to those students’ particular cultural backgrounds. Scholars suggest that students of color in STEM fields have indicated that “having a mentor with the same racial identity is important, indicating that they receive more help than students matched with a mentor from a different race” (McCoy 227).

White faculty sometimes have different (lower) expectations of students of color and express a desire to help them “get up to speed,” rarely acknowledging the structural differences that may have put students of color behind in their education (McCoy 235). Conversely, they sometimes make concessions for students of color in an attempt to treat them fairly (McCoy 234). Overall, they are unsure how to work with students of color, as there is tension between the desire to recognize race and the desire to treat students equitably.

Racist ideology is apparent in America’s largest institutions, extending even to the media: popular media interpretations of people of color, particularly black Americans, can perpetuate racist ideologies. American media, in theory, attempts to create a perception of society that is inclusive and equal (the concept of “American pluralism”). In practice, some scholars argue, “American pluralism enforced Anglo conformity and created an American identity that excludes all groups outside the norm and realm of Whiteness” (Littlefield 674). For this reason, portrayals of American minorities place them in the “Other” category, a form of virtual segregation that can have negative consequences (Littlefield 676).

Portrayals of black women are often particularly damaging. According to Littlefield, biased media portrayals of black women as wild sexual predators date back to similar perceptions held by white slaveholders in the early history of the United States (675). Black women are often portrayed in the media as violent, angry, and untamed, which can expose them to the compounded effects of both racism and sexism (Littlefield). Chaney also argues that violent caricatures of black Americans in the media correlate with police brutality and unjust treatment by law enforcement officials.

As we touched upon in the “Mental Health and Psychology of Racism” section, the many facets of racism can have a negative effect on victims’ mental health. In a study by Graham, Calloway, and Roemer, African Americans who reported high levels of recent racist encounters (ranging from being treated poorly by service workers to being hit or yelled at because of racial differences) also tended to have higher levels of anxiety and more difficulty with emotion regulation (560). African Americans in particular see a prevalence of mental health concerns, according to the CDC; while this does not establish that racist encounters are the cause, a correlation is certainly evident.

Racism can also be mentally damaging to the white Americans who perpetrate it. Garriott, Love, and Taylor claim that racism in white college students is correlated with feelings of unreasonable superiority, irrational fear of people of color, and social difficulties (46). This can even damage their relationships with other white people who see overtly racist behavior as abhorrent or socially off-putting (Garriott et al. 47). Those with contradictory feelings toward people of color can also experience difficulty adjusting to college (Garriott et al. 46).

Racism can be as detrimental to physical health as it often is to mental health. As previously discussed, racial profiling is an issue in law enforcement, but it can also be extremely damaging in medical settings. Assumptions about illnesses or conditions stereotypically attached to certain races can lead doctors to make assumptions about symptoms, resulting in misdiagnoses or insufficient care (Wasserman 121). Wasserman provides this example of racial profiling in medical environments as compared to police racial profiling:

 

“A man walks into a store, where he is observed by a police officer. Or he walks into a health clinic, where he is observed by a doctor. In both cases, he is short, brown-skinned, and poorly dressed, speaking Mexican Spanish and no English. The police officer observes that he seems nervous in her presence, and avoids eye contact; the doctor observes that he is coughing and congested, with a slight fever. The policeman asks to see his identification; the doctor asks him to take a test for swine flu” (120).

 

Doctors can, in theory, claim to have made stereotypical diagnoses such as these for the patient’s benefit (Wasserman 124). However, such diagnoses can end up being more damaging, since doctors may ignore other potential causes of illness because of their own confirmation biases.

Finally, it is important to consider that racism can be experienced differently from person to person based on gender. In “Racism at the Intersections,” Kwate and Goodman claim that black women’s unique position as members of two major oppressed groups (women and African Americans) makes their experiences with racism different from those of black men (397). When asked to list memorable racist experiences, “Thirty percent of women reported incidents involving resources, 26% reported aggression, and 7% reported stereotypes; male counterparts were 24%, 27% and 5%, respectively” (Kwate and Goodman 401). Additionally, women tend to notice and remember microaggressions more often than men (Kwate and Goodman 397). Black men and women, as previously discussed, face different challenges in regards to media perceptions and job discrimination.

Though experiences with racism often follow overarching trends, each individual is affected by it uniquely, and usually negatively. The persistence of racism in American culture is a multifaceted, subtle, and complex issue, and many of the concerns American minorities face are closely related. For example, the availability of job opportunities can affect income and consequently healthcare access, leading to dire health consequences; education quality can affect job opportunities; and so on. In exploring the relationship between employment inequalities and poor healthcare for African Americans, Doede writes that “Access to decent employment is a structural determinant of health — an element of society that is responsible for health inequities. Structural determinants also include economic, education and health care systems, social and physical environments, and political climates” (151). These observations show modern American racism operating as the remnants of a larger superstructure. Because they are so intertwined and so deeply embedded, socially and psychologically, the social constructions of American racism have not been and will not be easily deconstructed.

 

Analysis and Proposed Solutions

Now that the psychological and social implications of racism have been laid out, the issue at hand becomes what to do about them. Solutions need to be as complex and multifaceted as the problems that necessitate those solutions, so there is no simple, single method to address racist ideology in the United States.

One idea that seems initially appealing is simply to be “color blind,” that is, to actively ignore race and treat everyone exactly the same. At first glance, this seems to be what Dr. King was describing when he wished his children would be judged by character instead of skin. Though it seems appealing on the surface, Neville, Awad, Brooks, Flores, and Bluemel note in “Color-Blind Racial Ideology” that “racial color-blindness is unattainable, reinforces racial prejudices and/or inequality, and is actually an expression of ultramodern notions of racism among White Americans and of internalized racism or the adoption of negative racial stereotypes among people of color” (Neville et al.). This is evident in the descriptions of professors attempting to be “color-blind mentors” in the section entitled “Social Influence of Racism.” Instead, Neville and her coauthors endorse a color-consciousness model, in which people accept and are aware of race and the advantages and disadvantages that come with it, creating an incentive to help eliminate racial disparities (Neville et al.). The first step to finding a solution is acknowledging the problem, and ignoring differences in race or culture will not get rid of implicit biases or remove the systemic disadvantages people of color face.

One possibility for reducing implicit biases against minorities is promoting interaction between white people and people of color. In “Implicit Bias and Contact: The Role of Interethnic Friendships,” Aberson, Shoemaker, and Tomolillo find that white participants with African American and Latino friends show less implicit bias when taking an IAT. This could be because having friends of different races gives a person positive associations with those races (Aberson et al.). The issue with this solution is whether its implementation is feasible or can be directed at all. One way to address this might be to provide more diverse representation in the media; normalizing the inclusion of different races in the media can counteract racist ideology, helping to eliminate the kind of unconscious prejudice apparent in the previously discussed study on imagining the ideal job candidate (so that the ideal is no longer necessarily whiteness).

In a similar vein, one way of attacking implicit attitudes about race would be to provide more opportunities for interethnic friendships, especially among young children. Of course, part of this process needs to happen naturally as children interact and decide whom to befriend, but encouraging schools and day cares to facilitate interactions between children of many different races would be a good building block for raising generations of less implicitly biased children. This could, in theory, lead to less explicit racism, as implicit thoughts and feelings of white supremacy would be lessened. Of course, given that public schools are broken up into districts that can be very racially segregated, it is likely that many schools will not have a diverse mix of students as a starting point. Unfortunately, little short of a mandatory quota system, which has proven controversial in universities, would significantly reduce this problem.

One hot-button topic in current discussions of education is affirmative action, which many people equate with quotas for minority students in a school. This issue has even been taken to the US Supreme Court, in Fisher v. University of Texas. Abigail Fisher, a white woman, was denied admission to the University of Texas. Believing that her rejection was due to her race, she sued the school, and the case caught the attention of national media outlets. The Supreme Court eventually decided in favor of the University of Texas, allowing it to continue using race as one of multiple admissions criteria.

There is an important distinction to make between affirmative action and quotas. The National Conference of State Legislatures website refers to affirmative action as “admission policies that provide equal access to education for those groups that have been historically excluded or underrepresented, such as women and minorities” (NCSL). While quotas in education may be controversial, and possibly ineffective if the minority students accepted are not academically prepared, affirmative action encompasses more than quota systems. NCSL describes other aspects of affirmative action, such as application outreach programs for minority high school students, as well as scholarship opportunities and on-campus support services for minority university students (NCSL). Affirmative action has also been effective at getting more minority students into universities: when California and Texas did away with affirmative action laws, UC Berkeley had 61% fewer minority students in its incoming class and Rice had 41% fewer black students in its incoming class (NCSL). Affirmative action may not be a perfect program for eliminating racial inequalities in universities, but it is certainly making a positive change in the lives of minority students.

Lack of access to healthcare is another serious problem for people of color and should be addressed however possible. For black Americans with mental illness, one supplement to consider is the role of religion. Pew Research Center has found that African American adults, at consistently higher percentages than any other race measured, believe in God (83% of black people “absolutely certain” vs. 61% of white people), attend religious services at least once a week (43% black vs. 34% white), and say religion is very important to them (75% black vs. 49% white) (Pew Research Center). In the case of mental illness, a church can often provide a strong network of social support that aids recovery. In his NPR interview, Dr. Lawson mentions that “African-Americans tend to like to seek treatment or help from those institutions that they're familiar with and trust” (NPR). Using the resources that are available could aid in coping with mental illness.

Part of an overarching solution is publicly addressing situations, like police brutality and workplace microaggressions, that are widely considered unacceptable. This is evident in the efforts of the Black Lives Matter movement, as well as internet activists who film instances of police brutality or post about racist encounters online. Bringing instances of racially driven discrimination to the forefront of people’s minds through social media can help counter the perception, held by many, that racism is no longer a relevant issue in American society.

Addressing media representation of minorities is also very important. Perceptions of reality, in some ways, reflect what we see in movies, television, and online. Representation in the media is often skewed against minorities, with few minority actors cast as leads and many relegated to typecast or stereotypical roles. Casting more talented minority actors as leads and increasing representation in TV shows, music, and online content will normalize cooperation and communities that encompass a variety of races. Representation in the media is particularly important for children, who increasingly use it as a resource to learn about the world around them.

As previously mentioned, there is no simple or single solution to racist ideology in American society. However, there are steps we can take, starting with the suggestions proposed here and building from there, to lessen the impact of racism for Americans.

 

Works Cited

 

Aberson, Christopher L., Carl Shoemaker, and Christina Tomolillo. “Implicit Bias and Contact: The Role of Interethnic Friendships.” The Journal of Social Psychology 144.3 (2004): 335-47. Web.

 

Agency for Healthcare Research and Quality. National Healthcare Disparities Report. 2010. Web.

 

Brown-Iannuzzi, Jazmin L., B. Keith Payne, and Sophie Trawalter. “Narrow Imaginations: How Imagining Ideal Employees Can Increase Racial Bias.” Group Processes & Intergroup Relations 16.6 (2013): 661-670. Academic Search Complete. Web.

 

Carter, Robert T. “The Relationship Between Racism and Racial Identity Among White Americans: An Exploratory Investigation.” Journal of Counseling and Development 69 (1990): 46-50. Web.

 

Chaney, Cassandra, and Ray V. Robertson. “Racism and Police Brutality in America.” Journal of African American Studies 17 (2013): 480-505. Web.

 

Chasmar, Jessica. “Stephen A. Smith, Black ESPN Commentator, Tells Students ‘Racism Doesn’t Exist’ Anymore.” Washington Times. The Washington Times, 03 Mar. 2016. Web.

 

Cornileus, Tonya. “‘I’m A Black Man And I’m Doing This Job Very Well’: How African American Professional Men Negotiate The Impact Of Racism On Their Career Development.” Journal Of African American Studies 17.4 (2013): 444-460. Academic Search Complete. Web.

 

Devine, Patricia G. “Stereotypes and Prejudice: Their Automatic and Controlled Components.” Journal of Personality and Social Psychology 56.1 (1989): 5-18. Web.

 

Doede, Megan Sarah. “Black Jobs Matter: Racial Inequalities in Conditions of Employment and Subsequent Health Outcomes.” Public Health Nursing 33.2 (2016): 151-158. Academic Search Complete. Web.

 

Fields, Barbara Jeanne. “Slavery, Race and Ideology in the United States of America.” New Left Review 181 (1990): 95-118. Web.

 

Glasser, Ira. “American Drug Laws: The New Jim Crow.” Albany Law Review 63 (2000): 703-24. Web.

 

Graham, Jessica R., Amber Calloway, and Lizabeth Roemer. “The Buffering Effects of Emotion Regulation in the Relationship Between Experiences of Racism and Anxiety in a Black American Sample.” Cognitive Therapy and Research 39 (2015): 553-63. Web.

 

Greenberg, Jeff, Jeff Schimel, Andy Martens, Sheldon Solomon, and Tom Pyszczynski. “Sympathy for the Devil: Evidence That Reminding Whites of Their Mortality Promotes More Favorable Reactions to White Racists.” Motivation and Emotion 25.2 (2001): 113-33. Web.

 

“The Holy Bible, New International Version.” Biblica Inc, 2011. Web.

 

Jensen, Arthur R. “Ethnicity and Scholastic Achievement.” Psychological Reports 34.2 (1974): 659-68. Web.

 

Kwate, Naa Oyo A., and Melody S. Goodman. “Racism at the Intersections: Gender and Socioeconomic Differences in the Experience of Racism Among African Americans.” American Journal of Orthopsychiatry 85.5 (2015): 397-408. Web.

 

Linder, Doug. “An Account of the Los Angeles Police Officers’ Trials (The Rodney King Beating Case).” 2001. Web.

 

Littlefield, Marci Bounds. “The Media as a System of Racialization: Exploring Images of African American Women and the New Racism.” American Behavioral Scientist 51.5 (2008): 675-85. Web.

 

Luibrand, Shannon. “Black Lives Matter: How the Events in Ferguson Sparked a Movement in America.” CBSNews. CBS Interactive, 7 Aug. 2015. Web.

 

Marsh, John. “Bleak, Bleaker, Bleakest.” Educational Theory 65.3 (2015): 325-331. Academic Search Complete. Web. 19 Apr. 2016.

 

McCoy, Dorian L., Rachelle Winkle-Wagner, and Courtney L. Luedke. “Colorblind Mentoring? Exploring White Faculty Mentoring of Students of Color.” Journal of Diversity in Higher Education 8.4 (2015): 225-42. Web.

 

Morrison, Larry R. “The Religious Defense of American Slavery Before 1830.” The Journal of Religious Thought: 16-29. Web.

 

“Affirmative Action | Overview.” National Conference of State Legislatures. Web.

 

“Behind Mental Health Stigmas In Black Communities.” NPR. NPR, 20 Aug. 2012. Web.

 

Neville, Helen A., Germine H. Awad, James E. Brooks, Michelle P. Flores, and Jamie Bluemel. “Color-Blind Racial Ideology: Theory, Training, and Measurement Implications in Psychology.” American Psychologist 68.6 (2013): 455-66. Web.

 

Religious Landscape Study. Pew Research Center. Web.

 

“PreK-12 Financing Overview.” Atlas, 29 June 2015. Web.

 

Quillian, Lincoln. “Does Unconscious Racism Exist?” Social Psychology Quarterly 71.1 (2008): 6-11. Web.

 

Smedley, Audrey, and Brian D. Smedley. “Race as Biology Is Fiction, Racism as a Social Problem Is Real.” American Psychologist 60.1 (2005): 16-26. Web.

 

Smedley, Audrey. “On the Confusion of ‘Race’ With Biophysical Diversity.” American Psychologist (2006): 180-81. Web.

 

U.S. Department of Health and Human Services, comp. Health, United States, 2011, With Special Feature on Socioeconomic Status and Health. Rep. 2011. Web.

 

Wasserman, David. “Is Racial Profiling More Benign in Medicine Than Law Enforcement?” Journal of Ethics 15 (2011): 119-29. Web.

 

 

 

Recent Trends in Police Brutality

by

Sarah Yauger, Senior, History

 

Introduction

If you have ever watched a crime procedural drama, perhaps this scene is familiar: the investigators determine that the criminal they have been tracking is a violent offender. In order to protect the lives of the officers who will be apprehending the suspect, the SWAT team is called in. Once the officers have arrived at the suspect’s house, SWAT busts down the door, possibly throws in a flash-bang grenade or smoke bomb, and takes the suspect into custody. However, this scene has also been repeated all over America with average citizens who have no criminal background whatsoever. For instance, on May 16, 2003, New York City police officers stormed an apartment building in Harlem in response to a tip from a confidential informant who claimed that a convicted felon was dealing both drugs and guns out of the sixth floor. Before breaking in, the raid party set off a flash-bang grenade, a nonlethal weapon designed for combat situations that shocks and disorients its target by emitting a bright flash of light accompanied by a deafening thud. This effectively stunned the apartment’s only occupant, 57-year-old Alberta Spruill, who fell to the ground in shock. When the police realized their mistake, an officer attempted to help Spruill, only for her to slip into cardiac arrest and die two hours later. It was later discovered that not only had the informant lied, but the officers who conducted the raid had not confirmed the story before they raided Spruill’s home.59 Yet despite her tragic death, brought about by police misinformation and misconduct, no serious investigation or reform was launched. Alberta Spruill is only one of many victims who have been mistakenly targeted by the police in recent years.

The death of Michael Brown at the hands of Officer Darren Wilson in Ferguson, Missouri, sparked another debate about the issue of police brutality. Many believe that there has been an alarming increase in police brutality in recent years, a trend that has also been linked to the rise in police militarization following the events of September 11, 2001. However, despite case after case of police misconduct resulting in the bodily harm, traumatization, and death of innocent civilians or nonviolent, low-risk offenders, there has been no widespread reworking of governmental policy toward police brutality. For instance, Police Commissioner Bill Bratton stated that “he would not support a law to make chokeholds illegal, insisting that a departmental prohibition is enough,”60 despite evidence that many chokeholds had been performed on citizens since the departmental ban.

The question of “who will watch the watchers?” should be asked in regard to monitoring and punishing police use of deadly force. However, it should be noted that this is not an anti-cop paper. Rather, it is a survey of some of the issues surrounding a highly controversial topic, meant to provide the reader with a brief historical overview of police misconduct before moving into a discussion of those who support and those who oppose police use of force. Statistics on the subject, along with a range of case studies illustrating recent instances of police brutality and public reactions to them, are also provided. The conclusion presents a few suggested reforms to the current system.

 

Historical Overview

In order to understand the issues surrounding the police brutality debate, it is first important to briefly discuss the history of policing in the United States. The first modern police force was created in London in 1829 by Sir Robert Peel. Despite Peel’s awareness that the English public would not respond well to a police force that appeared to be a standing army, he believed that a successful police force would need some of the structure and discipline that military training would provide. London’s first police force implemented a top-down administrative structure in addition to military titles.61 In America, the first modern police force was formed in New York in 1845. Wary of the example that London was setting with what was believed to be a militaristic approach, American police forces were very democratic and lacked official training. This caused several problems, since political connections were often the basis for appointment to a police post. As Radley Balko notes in his work Rise of the Warrior Cop: The Militarization of America’s Police Forces:

“With no training or standards, and with jobs based on patronage more than merit, the police in America were best known for corruption, brutality, and incompetence. Wealthy citizens looked instead to private organizations like the Pinkertons when they needed reliable security or knew of a crime they wanted solved.”62

The Progressive Era saw a marked change in the way American police forces operated, meant to combat the rampant corruption and lack of standardized training. Two main ideological camps began positing theories on how to reform the police system. Progressive academics and social elites advocated a reworking of the police system designed to weed out corruption and eliminate patronage-based appointments. They also wished for the police to take on a more paternalistic role in society by enforcing good habits and morals among the urban poor.63 Law enforcement administrators, however, had a different vision of what police reform should entail. Although they agreed that the police needed to be free from political influence, administrators believed a focus on actual crime fighting should take priority. They advocated more autonomy for local police chiefs, who had little control over individual officers’ behavior despite being held responsible for those officers’ actions.64

In the long run, the path advocated by the administrators won out. This was accomplished by embracing the concept of “professionalism,” which turned policing into a formal profession with its own standards, specialized knowledge, and higher personnel and entry requirements. Police forces formed their own unions and professional societies, shared knowledge such as fingerprinting methods, and created specialized squads meant to focus on specific issues like gambling, prostitution, and organized crime.65 August Vollmer, chief of the Berkeley police force from 1905 to 1932, was a champion of this movement. Under his guidance, police forces began utilizing now-familiar methods such as police radios, squad cars, lie detector tests, and crime labs.

Despite the well-meaning intentions of police reform advocates, and the many ways in which the police system was transformed for the better, there were also severe downsides to the introduction of the professionalist model. The patronage system was ended, yet in the process many police forces became removed from the communities they served and had once been close to. Technology implemented to modernize the police force also served to build a wall between police and community. Squad cars that replaced cops walking beats gave officers a faceless and intimidating presence rather than enabling them to incorporate themselves into the community. Police officers began to interact with the citizens in their jurisdictions chiefly in the context of crime (ticketing, questioning, responding to calls) rather than as fellow citizens.66 As Balko notes, this separation was problematic, since “making cops indifferent to the area they patrolled, instilling in them the notion that they were all that stood between order and anarchy – all of this could make police view the citizens in their districts as at best the other, and at worst, the enemy.”67 The separation between the police and the community they were meant to protect rooted out political patronage and rampant corruption, but it also created a growing sense of animosity toward law enforcement that had not been present before the Progressive Era, especially in communities of lower socioeconomic status. This gap would only widen as the police force continued to modernize with an emphasis on professionalism.

Another factor that has shaped the truly modern police force is increasing militarization. Simply put, there are two forms of militarization in policing: direct and indirect. Direct militarization occurs when the military itself is used for domestic police purposes; an example is the use of the United States military to enforce the Reconstruction of the South following the Civil War. Indirect militarization occurs when police agencies and officers take on the characteristics of an army. Most police militarization comes through the indirect route and includes officers taking on military titles and using military equipment. Still, direct militarization has also played a part in shaping the modern police force. One of the most infamous incidents of military involvement in controlling domestic disturbances occurred in 1932, following the Bonus Army’s march on Washington, D.C., to demand the payment its members had been promised for their service in World War I. President Hoover responded by calling in the US Army to control the protestors, which led newspapers, civil rights organizations, and veterans to condemn him for the improper treatment of U.S. veterans.68 Patton, MacArthur, and other military leaders dismissed the criticism, believing that military intervention had been necessary to prevent an insurgency, and made recommendations that included firing directly into protestor crowds in order to control domestic disturbances.69 This prompted further protests from the American public, who argued that the U.S. Army was essentially advocating methods meant to wage war on its own people.70 Although the military had been involved in policing American citizens before, the police force itself became markedly more militaristic once SWAT was introduced in the 1960s, perhaps owing to the backlash that direct military involvement in the Bonus Army incident and others like it provoked.

 

Advent of SWAT

Following a string of high-profile crisis situations, including a mass shooting on the University of Texas at Austin campus and a shootout in Los Angeles, LAPD inspector Daryl Gates decided to form a special task force meant to help the police handle scenarios such as hostage situations and mass shootings more effectively. Gates brought in Marine Corps personnel to teach strategies for dealing with snipers and handpicked several LAPD members regarded as the department’s best sharpshooters for additional training; he also structured the new unit to function as a military squad would. His program was approved in 1966 by the head of the LAPD, Thomas Reddin.71 The name of this program would be Special Weapons and Tactics; its purpose was to provide a quick, forceful response that would defuse violent situations with the least possible loss of life.

SWAT’s first raid occurred in 1969, on the Los Angeles headquarters of the Black Panthers. Journalist Matthew Fleischer offers a narrative of events based on interviews with multiple Black Panther members and LAPD SWAT officer Patrick McKinley, briefly laid out as follows. Police claimed to have received reports of Black Panther members in possession of illegal firearms, and secured a search warrant for the 41st and Central headquarters. The raid began at approximately 5:30 AM on December 6, 1969. The Black Panther building was both well-armed and well-fortified, and the decision was made to send the team in covertly to secure entry into the complex.72 Due to police misinformation, SWAT’s entry into the hideout was delayed, resulting in a shootout between the LAPD and the Black Panthers in which more than 5,000 rounds of ammunition were exchanged. Four Black Panthers and four SWAT officers were wounded in the course of the fight, which lasted over four hours.73 Those who were arrested and tried were actually given plausible grounds to claim self-defense because of the surprise tactics used, and were acquitted of the most serious charges.74 Despite the ways in which the case was bungled, SWAT’s first raid was considered a success, since it demonstrated an effective takedown of an organization widely feared and despised by politicians. However, it was also a massive show of force that had resulted in armed conflict, despite SWAT’s mandate to prevent such unnecessary violence. Due to the civil strife occurring at the time of SWAT’s creation, including the high-profile cases detailed above, the overt militarization of American police forces was overlooked. It was not long, however, before SWAT forces were being used to police nonviolent drug offenses and the military was being used in conjunction with such forces more regularly, notably in the 1980s.

In 1981, President Ronald Reagan introduced the Military Cooperation with Law Enforcement Act, which essentially permitted the military to work with drug cops on every aspect of drug prohibition short of making arrests and conducting searches.75 Crime reports of the time stated that the “drug problem” had reached new heights, and in response, the police were given more and more concessions to end criminal drug offenses as part of the War on Drugs. In 1984, a new crime bill enabled law enforcement agencies that assisted federal drug investigations to share in any asset forfeiture proceeds the case produced. This gave law enforcement agencies a strong incentive to find a connection between valuable property and drug activity, even when no such connection existed.76 It also meant that the number of police raids conducted on innocent civilians increased along with police attempts to halt crime, and officers were documented being extremely violent in case after case. An example can be found in the police raid on John Taylor:

On March 2, 1988, San Diego police conducted a 2:20 AM raid on the home of John Taylor, his brother George, and George’s wife. Forty-four-year-old George Taylor was thrown to the floor with a gun to his head. An officer then stepped on his neck to keep him in place while they searched the house. He’d had spinal surgery a year earlier.77

Although the police apologized for the incident once they realized they had intended to raid the house next door, a similar incident occurred at the same residence only a week later. Mistaken raids were conducted with high frequency despite police insistence that such occurrences were extremely rare. More often than not, no apology was issued to the innocent civilians who were terrorized in violent police raids.

The typical rhetoric behind more militarized weapons and training was that they were meant to protect officers’ lives in dangerous situations. However, cops were dying in raids as well. Despite the unnecessary loss of civilian and law enforcement lives, violent police raids continued to be conducted in order to “protect” the American public – with their consent. A Washington Post poll conducted in September 1989 found that 62 percent of the country said they were willing “to give up a few of the freedoms we have in this country” in order to reduce the amount of illegal drug use. Another 52 percent agreed that police should be allowed “to search without a court order the houses of people suspected of selling drugs, even if houses of people like you are sometimes searched by mistake.”78 Programs like the Byrne grant program, begun in 1988, also incentivized police forces to focus on drug-related crimes above all others, contributing to many innocent civilians and nonviolent offenders being subjected to police brutality that resulted in trauma or death.

The Byrne grant program was created as part of the 1988 crime bill. Its purpose was to enable the federal government to help local law enforcement fight crime by providing monetary aid. Although well-intentioned, by the middle of the 1990s it had become clear that there were flaws in the system. Balko explains the application process and its issues in his work:

When applying for grants, departments are rewarded with funding for statistics such as the number of overall arrests, the number of warrants served, or the number of drug seizures. Those priorities, then, are passed down to police officers themselves and are reflected in how they’re evaluated, reviewed, and promoted. …actual success in reducing crime is generally not rewarded with federal money, on the presumption that the money ought to go where it’s most needed – high-crime areas.79

The issue, then, is that these grants rewarded police departments for making easy arrests (low-level drug busts), conducting lots of seizures (regardless of the amounts seized), and serving large numbers of warrants. Another problem with the Byrne grant program was the creation of narcotics task forces that span regional and jurisdictional lines. With funding coming from the federal government, and any asset forfeiture proceeds they obtain also funding their operations, these task forces were not beholden to local officials. The result was squads of drug cops loaded down with SWAT gear and nothing to check their power should they take their investigations too far.80

Increase of Militarization

The events of September 11, 2001, had a profound impact on several aspects of the police role in the community, including policy and funding. The War on Terror meant a new reservoir of funding for police forces in the name of fighting terrorism at home. Anti-terrorism grants have channeled billions of dollars to police forces across the country, providing military equipment such as armored vehicles, guns, and body armor.81 The stated rationale was to protect police officers and ordinary citizens alike from terrorist attacks. However, the number of raids on innocent people has also increased with the handouts of military equipment. War rhetoric – the War on Terror, the War on Drugs, the War on Crime, etc. – has been inculcated into the mentality of police forces everywhere, leading officers to conduct no-knock raids on homes in the dead of night without thought of how homeowners might react. For example, in 2008 the home of Cheye Calvo, the mayor of Berwyn Heights, Maryland, was raided by a SWAT team by mistake.

He ran to the window and saw heavily armed men clad in black rushing his front door…the police blew open his front door. …He was instructed to walk downstairs with his hands in the air, the muzzles of two guns pointed directly at him. He still didn’t know it was the police. [He stated] “At the bottom of the stairs, they bound my hands, pulled me across the living room, and forced me to kneel on the floor in front of my broken door. I thought it was a home invasion. I was fearful that I was about to be executed.”82

When asked later, Calvo stated that “the worst thing I could have done was defend my home.”83 There have been several cases in which a homeowner, believing a home invasion was taking place, pulled a gun to defend himself against what turned out to be the police; in most such cases, the homeowner was injured or killed by the police response. Even years after the event, when Calvo had been cleared of any wrongdoing, officials maintained that the officers on the raid had not been guilty of any misconduct, despite their having terrorized Calvo and his family.84

More and more, police officers have been in the media for extreme uses of force against civilians. For instance, at the Occupy protest in Davis, California, Lieutenant John Pike was made infamous after he was photographed pepper spraying peaceful protestors. The incident earned him the title of “Pepper Spray Cop” and became a well-recognized symbol of the Occupy movement. The Ferguson police force also gained widespread attention following its deployment to contain riots in the area after the death of Michael Brown. The vast arsenal of equipment used included tear gas, a chemical agent banned as a means of warfare under the 1993 Chemical Weapons Convention, though it remains permissible for domestic riot control. This type of riot control was used on slews of protestors in Ferguson, as well as on reporters. The conclusion one can draw is that what started as a program meant to prevent mass shootings and hostage situations and to target violent criminals has devolved into a system that subjects innocent civilians and peaceful protestors to what many perceive to be unwarranted police violence. Law enforcement officers who look like they belong in a military unit in the Middle East have been deployed on the streets of American cities. Officers who inflict unnecessary harm on the civilians they are meant to protect have become a widespread issue, and many are calling for change.

 

Law Enforcement Brutality

One of the most important questions in assessing police use of excessive force is what constitutes a justified action. Although there are other methods of exerting extreme force, one often thinks of police shootings when “police brutality” is mentioned. However, the determination of proper conduct regarding an officer’s use of his firearm varies from state to state and from department to department. As William B. Waegel notes in his article “How Police Justify the Use of Deadly Force,” despite these differences, a typical department’s guidelines might state:

There are three instances in which a police officer may fire his [weapon] at another human. They are:

1. To protect his own life, when it is in immediate danger.

2. To protect the life of another.

3. In an effort to prevent the commission of certain violent felonies or to prevent the escape of a violent felon, but only after all [emphasis original] other means have been exhausted.85

The Department of Justice also broadly defines a justifiable police shooting along the lines stated above. Once police officers discharge their weapons in the line of duty, they are required to undergo a review meant to determine whether the shooting was justified under the circumstances. The reviews are conducted by the officer’s peers, and the shootings are more often than not ruled justified. Public opinion has also given rise to the idea that a cop is justified in utilizing all methods at his or her disposal in order to keep order. On the issue of police brutality, there are two opposing camps, and in examining both, one can gain a better idea of what most Americans believe. It is also important to note that police brutality and the militarization of the police often go hand in hand.

Critics state that police officers use excessive force much too frequently, that the victims are often members of minority groups or poor communities, and that law enforcement officials try to cover such incidents up. Opponents of police militarization argue that outfitting and training the police to look and think like soldiers undermines the original purpose of the police: departments have no need for military equipment and weapons, and having such equipment leads to an increase in police misconduct and unnecessary deaths.

According to the FBI’s Uniform Crime Reports, the homicide rate for police officers in 2010 (the last year for which data is available) was about 7.9 per 100,000 officers. That’s about 60 percent higher than the overall homicide rate in America, which is 4.8, but it’s lower than the homicide rates in many large cities, including Atlanta (17.3), Boston (11.3), Dallas (11.3), Kansas City (21.1), Nashville (8.9), Pittsburgh (17.3), St. Louis (40.5), and Tulsa (13.7).86

If one accepts these statistics, one could argue that the hazards of police life have been exaggerated in recent years. In fact, many argue that painting citizens – regardless of whether they have a criminal background – as a threat to the officer going about his daily beat is what leads to an increased potential for police misconduct in less-than-lethal situations. The statistics above suggest that a resident of one of the cities mentioned is more likely to be a victim of homicide than a police officer is. According to the Bureau of Labor Statistics, in 2012 (the latest year for which data is available), police officers suffered a total of 105 occupational fatalities, a fatality rate of 15 deaths per 100,000 full-time equivalent workers. By comparison, many other professions, including logging (129.9), fishing (120.8), and truck driving (24.3), have much higher fatality rates.87 Statistics such as these indicate that the often-repeated idea that police officers have a more dangerous job than almost any other in America may be overstated.
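The arithmetic behind these comparisons is simple enough to check directly. The short sketch below uses only the figures quoted above; the implied size of the police workforce is a back-calculation for illustration, not a number taken from the BLS tables.

```python
# Back-of-the-envelope check of the occupational fatality figures quoted above.
# The fatality count and per-100,000 rates come from the sources cited in the
# text; the implied workforce size is derived here purely for illustration.

police_fatalities = 105   # police occupational fatalities, 2012 (as cited)
police_rate = 15.0        # deaths per 100,000 full-time equivalent workers

# rate = fatalities / workers * 100,000  =>  workers = fatalities / rate * 100,000
implied_workforce = police_fatalities / police_rate * 100_000
print(f"Implied police FTE workforce: {implied_workforce:,.0f}")  # ~700,000

# Relative risk of the other occupations quoted in the text
other_rates = {"logging": 129.9, "fishing": 120.8, "truck driving": 24.3}
for occupation, rate in other_rates.items():
    print(f"{occupation}: {rate / police_rate:.1f}x the police fatality rate")
```

Run as written, the sketch shows logging at roughly 8.7 times and fishing at roughly 8.1 times the police fatality rate, consistent with the comparison drawn in the text.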

Supporters of police forces argue that reports of widespread police brutality and abuse are exaggerated and that officers rarely use excessive force. The steady decline in crime rates across the country, they contend, is a testament to the effectiveness of police methods and training. Regarding the militarization of police forces, they posit that federal programs providing police departments with military training and equipment both help protect police lives and ensure public safety. Equipment such as night vision goggles and advanced weaponry helps police conduct SWAT raids with minimal casualties. Others state that limiting police officers to less-than-lethal force places them at a heightened and unnecessary risk: officers have been killed in the line of duty after a less-lethal weapon, such as a Taser, failed to incapacitate a suspect, who then shot the officer with a firearm.

Statistics

Stories of police brutality and misconduct have appeared frequently in the news following a string of high-profile cases in 2014. However, one can argue that the frequency of police misconduct has been blown out of proportion by the media. Instead of simply reading about police brutality in the news, it is important to put these stories into a larger context of how frequently such acts actually occur.

The Cato Institute has released the 2010 National Police Misconduct Statistics and Reporting Project (NPMSRP) Police Misconduct Statistical Report through its website, policemisconduct.net. It states that the statistics were generated from reports that met credibility criteria and were gathered from several different media sources throughout the United States. The three images below present a few of the statistics gathered by the Institute regarding police misconduct.

Image 1: Excessive Force by Type (policemisconduct.net)

The image above denotes the types of excessive force that police officers were accused of committing during 2010. Physical force was the most common at 57 percent; physical excessive force cases included fist strikes, throws, chokeholds, baton strikes, and other physical attacks. This was followed by firearms at 15 percent. Misconduct involving the use of a Taser was recorded at 11 percent, and cases involving a combination of force types accounted for 13 percent. These numbers were generated from excessive force complaints naming a total of 1,557 officers.88

Image 2: Excessive Force Fatalities by Type (policemisconduct.net)

Of the excessive force complaints reported, 127 involved fatalities in 2010.89 Of that number, 71 percent of the fatalities involved the use of firearms, 15 percent involved physical force, and 9 percent were Taser related.

Image 3: Crime Rate Comparisons (policemisconduct.net)

According to these statistics, the general public commits 20.1 more violent crimes per 100,000 people than police officers do. Police crime rates are surprisingly on par with – if not above – the general public’s, with the exception of robberies. The murder rate comparison is particularly striking: although the rate at which police officers are charged with murder is only slightly above the general population’s, the Cato Institute estimates that if excessive force complaints involving fatalities were prosecuted as murder, the law enforcement murder rate would exceed the general public’s by 472 percent.90

In terms of prosecution, the Supreme Court has handed down several rulings that have made it more difficult for law enforcement officials to be prosecuted for crimes. One instance is Reichle v. Howards, which dealt with Steven Howards, who criticized and touched then-Vice President Cheney at a meet-and-greet at a local shopping center.91 Howards had intended to confront Cheney to express his opinion on the war in Iraq, and brushed the Vice President’s shoulder with his hand when Cheney attempted to walk away. The Secret Service agents accompanying Cheney arrested Howards and charged him with assaulting the Vice President.92 The Supreme Court was asked to resolve whether the First Amendment right to free speech is violated by an arrest supported by probable cause. The Court ruled that police officers and federal agents cannot be sued for violating a citizen’s civil rights if the right that was supposedly violated was not formally recognized to exist at the time the officers acted, regardless of whether it was recognized at a later date.93 This ruling effectively permits law enforcement officials to act in any way they see fit so long as the right being violated had not been “formally recognized.” Ambiguous language such as this makes it very hard to determine what is and is not recognized, and under what circumstances a right’s protection lapses.

There is also the issue of willingness to question police misconduct. In a panel survey he conducted in 1997, Joseph McNamara revealed a widespread reluctance to question police decisions regarding the use of excessive force during SWAT raids. The panelists included a variety of law enforcement officials from California. Those meant to act as a check on police misuse of power expressed little interest in doing so. He stated:

You get this robot mentality with these officials. The mayor said she knew nothing about these raids and didn’t want to know anything about them until they were over. The judge wasn’t interested in scrutinizing the raid until it was over – when any damage would already be done. Everyone else said it wasn’t their job to worry about it. And so you end up with this dangerous decision that gets made by people of lower rank with little training, with little incentive to care much about constitutional rights, with no oversight – no checks or balances. Collateral damage is just part of the game.94

Granted, this panel reflected the opinions of officials in California, primarily in the Los Angeles area. However, two facts give it broader significance: 1) Los Angeles has the second-largest police force in the United States, and 2) the LAPD has been a model for many modern American police forces since the Progressive Era. That law enforcement and government officials are disinclined to act as a check against police misconduct in one of the largest and arguably most influential police forces is highly concerning. Although the statement was made in regard to SWAT conduct on drug raids, there is evidence of this mentality permeating law enforcement agencies nationwide. Daniel Pantaleo, the officer involved in Eric Garner’s death, had been accused of violating citizens’ rights in three separate lawsuits before Garner’s death,95 yet despite these allegations, Pantaleo was kept on active duty. Citizens, too, are often unwilling to indict or convict law enforcement officers brought to trial on criminal charges. According to the statistics found by the Cato Institute:

Image 4: Conviction and Incarceration Rates (policemisconduct.net)

This image shows that in 2010, an ordinary citizen was roughly twice as likely as a law enforcement official to be convicted and incarcerated for committing a crime. Possible reasons for this discrepancy include Supreme Court rulings such as Reichle v. Howards, other government officials’ reluctance to stop law enforcement from conducting questionable raids, and citizens’ reluctance to indict or convict police officers on criminal charges. The common argument is that although police officers have a difficult job, because they are trained to use lethal force in certain situations, they should be expected to use it correctly.

Another important consideration is who is at risk of police misconduct. According to an analysis of data from 2010 to 2012 conducted by the non-profit organization ProPublica, “blacks, age 15 to 19, were killed at a rate of 31.17 per million, while just 1.47 per million white males in that age range died at the hands of police.”96 This amounts to young black men being 21 times more likely to be killed by police. The report also presents other troubling statistics. In looking at who was responsible for these deaths, it was found that the large majority of fatal police shootings were carried out by white officers. However, the authors also note that although “black officers account for a little more than 10 percent” of all fatal police shootings, 78 percent of the victims in those shootings were African American.97 The study also reported on the circumstances surrounding fatal shootings: there were 151 instances in which the victim was shot after fleeing the crime scene or resisting arrest, and 67 percent of those victims were African American. In cases where the circumstances were unlisted, 77 percent of those killed were black.98 Statistics such as these lend credence to the idea that minority groups are targeted more frequently than their white counterparts.
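The “21 times” figure is simply the ratio of the two per-million rates quoted above; a quick check confirms it.

```python
# Verifying the "21 times more likely" claim from the ProPublica figures
# quoted above (killing rates per million, black and white males age 15-19).

black_rate = 31.17   # killings per million (as cited)
white_rate = 1.47    # killings per million (as cited)

print(f"Risk ratio: {black_rate / white_rate:.1f}x")  # ~21.2x
```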

The case of Akai Gurley, a 28-year-old African American man who was shot and killed on November 20, 2014, provides an instructive case study in prosecuting a police officer for wrongful death. The officer in question, Peter Liang, and his partner were conducting a floor-by-floor sweep of East New York’s Louis Pink Houses. Liang entered the stairwell from the eighth floor at the same time that Gurley and his girlfriend entered it on the seventh floor. Liang, who had set out on patrol with both his gun and his flashlight in his hands, was startled by the couple’s entrance and discharged his firearm; the round hit Gurley in the chest.99 Due to electrical problems in the building, the stairwell was dark, and both officers had only 16 months on the job at the time of the incident. Liang stated that the shooting had been accidental.100 Following Gurley’s death, his domestic partner, Kimberly Ballinger, sued New York for wrongful death. The lawsuit claims:

Mr. Liang shot Mr. Gurley “without reason or provocation” and “negligently and recklessly.” …that the housing authority “created a hazardous and traplike condition” by failing to provide adequate lighting in the staircases. Additionally, the suit charges that the officers did not perform or request medical aid after the shooting. And it says that the city was negligent “in training, hiring, supervision and retention of the police officers involved in this incident” and on training officers on “the use and abuse of power while in the field.”101

Both Liang and his partner retreated to the eighth floor following the shooting without checking to see whether Gurley needed assistance. Gurley was pronounced dead after first responders brought him to a hospital. The criminal case against Liang concluded on February 11, 2016: “Liang, 28, was charged with second-degree manslaughter, second-degree assault, second-degree reckless endangerment, criminally negligent homicide and one count of official misconduct, stemming from not administering CPR to Gurley,” and he was found guilty and faces up to 15 years in prison.102 There has been speculation that Liang was convicted as a scapegoat for fatal police shootings after the failure to indict the officer involved in Eric Garner’s death, which occurred earlier that same year. The Asian-American community has been divided over the issue; some say that had Liang been white, he would not have been indicted, while others maintain that the jury’s decision was just.103 Regardless, the case presents an unusual outcome: the officer in question was not only indicted but convicted following the death of an unarmed civilian, which recent statistics show is fairly unlikely.

Case Studies

In recent years, several high-profile cases of alleged police brutality have circulated in the news. The events surrounding the death of Akai Gurley having been presented above, three additional case studies are presented below to give the reader a more comprehensive picture of such allegations. All three cases gained widespread attention due to the controversy surrounding the suspects’ deaths; each study contains a brief summary of events before analyzing both the actions of the officers and the reactions of the general public.

New York

On July 17, 2014, in Staten Island, New York City, Eric Garner died following an altercation with Daniel Pantaleo, a New York City police officer who put him in a chokehold. Garner had been known to sell loose cigarettes in the area, notes the New York Times, and had been warned about continuing to do so in the days prior to his death. The conflict began when Garner refused to allow the police to handcuff him. Pantaleo, identified later, wrapped his arm around Garner’s neck and dragged him to the ground. Garner was not a violent criminal and should not have been a target of police violence; despite his lack of violent behavior, he was placed in a chokehold, wrestled to the ground, and handcuffed. What followed made national headlines after a video of the altercation between the NYPD and Garner was posted.

Garner was reported to have exclaimed several times “I can’t breathe!” as Pantaleo continued to restrain him. The hold in question was later identified by city officials as a chokehold, which has been banned by the department for more than 20 years and is defined as any police maneuver that puts “any pressure to the throat or windpipe, which may prevent or hinder breathing or reduce intake of air.”104 What is also important to note is that there was not one officer involved, but several, and none of them spoke out against Garner’s treatment. Following the autopsy, NBC New York reported:

The city medical examiner has ruled the death of Eric Garner…a homicide, saying a chokehold killed him. …compression of the neck and chest, along with Garner’s positioning on the ground while being restrained by the police caused his death. Garner’s acute and chronic bronchial asthma, obesity and hypertensive cardiovascular disease were contributing factors…105

To be clear, the medical examiner’s report did not state that Garner was responsible for his own death. Pantaleo should have known he was risking a man’s life when he placed Garner in that hold, given the history of deaths by chokehold that led the NYPD to ban the move altogether and given Garner’s repeated cries that he could not breathe. A grand jury nonetheless decided not to indict Pantaleo, stating there was no “reasonable cause” for an indictment,106 despite the video footage documenting the incident and the medical examiner’s report. Garner’s death was followed closely by Michael Brown’s in Ferguson, and the combination of these violent deaths at the hands of officers meant to serve and protect the citizens in their jurisdictions sparked a series of heated debates over police use of force.

Missouri

On August 9, 2014, in Ferguson, Missouri, Officer Darren Wilson fatally shot 18-year-old Michael Brown. Brown was shot a total of six times, a fact that fueled the controversy over the circumstances of his death. According to the New York Times, a surveillance video showed Brown stealing cigarillos from a convenience store at 11:54 a.m. At 12:01 p.m., Wilson arrived on the scene and, after identifying Brown as the convenience store robbery suspect, attempted to stop him.107 What followed became a source of great controversy:

Some witnesses said Mr. Brown never moved toward Officer Wilson when he was shot and killed. Most of the witnesses said the shots were fired as he moved toward Officer Wilson. …Some witnesses said that Mr. Brown had his hands in the air. Several others said that he did not raise his hands at all or that he raised them briefly, then dropped them and turned toward the officer. Others described the position of his arms as out to the side, in front of him, by his shoulders or in a running position.108

Wilson reported that Brown had charged him after being told to stop, and the St. Louis County prosecutor stated the most credible witness corroborated this story. Brown’s death sparked a series of riots and protests that engulfed the Ferguson area following the shooting. In response to the protests, some of which had turned violent, Ferguson police utilized heavy-handed tactics meant to contain the situation.

In an article in The Nation, former Marine Lyle Jeremy Rubin noted, “What we’re seeing here is a gaggle of cops wearing more elite killing gear than your average squad leader leading a foot patrol through the most hostile sands or hills of Afghanistan.”109 Such killing gear included Kevlar helmets, tactical body armor vests, M4 carbine rifles and semiautomatic pistols, and “about 120 to 180 rounds for each shooter.”110 Other military-grade equipment included stun grenades, smoke bombs, riot guns, tear gas, and a variety of projectiles. All of this equipment has been classified as “less than lethal,” which is how police are able to use it on civilians in extreme situations. However, these weapons have been known to cause severe damage and even death. Perhaps the most notorious is the flash-bang grenade. Several cases have been documented in which a flash-bang grenade caused a death, including one in which a SWAT team member died when a flash-bang grenade accidentally detonated while he was organizing his SWAT gear in his garage.111 In another case, an FBI agent was set on fire when a flash-bang grenade attached to his Kevlar vest accidentally exploded.112 These devices have frequently been used in enclosed spaces, where they can kill or injure American civilians.

Other riot control weapons, such as pepper balls, can also be lethal. The Boston Police Department learned this the hard way after the accidental killing of a young woman following a Red Sox game, when a pepper spray pellet pierced the woman’s left eye, “opened a three-quarter-inch hole in the bone behind it, broke into nine pieces, and damaged the right side of her brain,” according to the autopsy report.113 The victim, Victoria Snelgrove, was a bystander amid the unruly celebration of the Red Sox victory. At least two other fans were struck by pellets, The New York Times reported; one needed stitches and suffered welts and bruises, and another required the removal of several pieces of plastic from his forehead.114 Despite these cases, pepper balls have seen continued use in riot situations. Other “less than lethal” projectile weapons used in Ferguson following Brown’s death include rubber bullets, bean bag projectiles, wooden bullet projectiles, and riot guns. As Rubin points out, these weapons are meant to “disable and kill.”115 Such projectiles are billed as “less than lethal,” but it is also important to consider the weapons that fire them. Rubin notes:

The most likely culprit is the ARWEN 37, which is capable of discharging 37mm tear gas canisters or wooden bullet projectiles. Another possibility is the SL6, a 37mm six-shot rotary magazine projectile launcher that is seemingly capable of firing every relevant “non-lethal” round in the book. When a Marine or other warfighter is introduced to one of these for the first time, he likely thinks of the M203 Grenade Launcher as a point of comparison. This is because they’re all part of the same family. They’re all grenade launchers.116

Many of these weapons have been used abroad by the military in the War on Terror, or were created to deal with riot situations in rebellious colonized areas, such as Hong Kong.117 When a projectile is fired from a weapon originally built to kill, it is not a stretch to imagine a poorly placed “less than lethal” round killing its target. According to the New York Times, violent crime in Ferguson is much lower than in neighboring towns, at only about 5 incidents per 10,000 people; Jennings has approximately three times that rate, and other towns in the area are comparable to Ferguson in both size and crime rate.118 This has led many to question why the Ferguson police would need the heavy military equipment they used to control the riots if there is no precedent of high crime or violence in the area.

Following the refusal of a St. Louis grand jury to indict Wilson in connection with Brown’s shooting, protests continued and in many areas turned violent. The Department of Justice launched its own investigation into the matter, yet it too declined to prosecute Wilson, stating: “Because Wilson did not act with the requisite criminal intent, it cannot be proven beyond reasonable doubt to a jury that he violated 18 U.S.C. § 242 when he fired his weapon at Brown.”119 There were several reasons for this decision: some witnesses were shown to be unreliable or recanted their initial statements, while the autopsy report and other witnesses corroborated Wilson’s account. Nonetheless, Brown’s death sparked a national “Black Lives Matter” movement meant to call attention to disproportionate police violence against people of color, a movement that continues to receive media attention today.

Oklahoma

In Tulsa, Oklahoma, Eric Harris was fatally shot by reserve sheriff’s deputy Robert Bates on April 2, 2015. Authorities stated that Harris was attempting to sell an undercover officer a gun in a sting operation in the moments leading up to his death. A video of the operation was recorded through a camera installed in a pair of sunglasses.120 Harris, having realized that an undercover operation was in motion, attempted to flee the scene on foot before being tackled by a deputy. Harris, who was unarmed, was shot by Bates after he was on the ground and in the process of being handcuffed.121 It was reported by The Guardian:

In the video…officers continue to try to subdue Harris, one shouting: “Shut the fuck up …You ran, motherfucker, do you hear me, you fucking ran.” When the 44-year-old says “I’m losing my breath,” an officer replies: “Fuck your breath.”122

What is most concerning about this incident is the contempt the officers expressed for Harris even after Bates’ misconduct. Not only was Harris shot while already in police custody, but the officers seemed unconcerned with administering basic first aid or with the fact that the reserve deputy had shot a suspect. It was later reported that Bates had meant to stun Harris with his Taser, but drew and fired his firearm instead.

Following Harris’ death, Robert Bates was charged with second-degree manslaughter involving culpable negligence, which carries a potential four-year prison sentence if he is found guilty. Bates stated that he had felt shock and disbelief following the shooting, but added that he had believed Harris was in possession of a gun at the time. He reached for what he believed to be his Taser, but what went off was his gun instead.123 Provided below are images of both the sidearm and the Taser that Bates was reported to have on his person at the time of the shooting:

Image 6: S&W .357124

Image 7: Taser Model X26C125

Law enforcement experts state that the gun should be holstered on the officer’s dominant side, with the Taser on the nondominant side. However, the placement of Bates’ weapons was reversed, and when he announced he was going to draw his Taser, he used his right (nondominant) hand to pull out his gun.126 Looking at the two weapons pictured above, there is a noticeable difference that makes it hard to believe they could be confused. Steve Tuttle, vice president for strategic communications at Taser International, noted the differences, stating: “A gun is heavier. A Taser has a different grip and feel. When you take the safety off a Taser, an LED control panel lights up.”127 These distinctions are meant to differentiate a Taser from a firearm and prevent an officer from confusing the two.

The sheriff’s department has stated that Bates experienced a phenomenon known as “slips and capture,” in which the brain makes a mistake by reverting to learned behavior. Experts have questioned this line of defense, stating that the claim amounts to “junk science” that has not been peer-reviewed or generally accepted in the scientific community.128 Bates revealed that he normally kept his sidearm on his hip, while his Taser was in his protective vest, closer to his chest.129 Other experts have questioned why Bates, an elderly (73-year-old) reserve officer, was present on the sting operation in the first place. Questions about the legitimacy of his training have also been raised, given that he was a close friend of the sheriff and had donated a large amount of money to the Tulsa County Sheriff’s office in the past. Bates has pleaded not guilty to the charge of second-degree manslaughter, with the trial set to begin on April 18, 2016.130

 

Reforms

Having looked at several case studies in which the police used deadly force as a first response to nonlethal situations, the question that must be considered next is whether reforms can be implemented to prevent such tragedies in the future. Movements such as “Police Lives Matter” hold that the American population should show support for police officers, who spend their careers serving their communities and protecting the common citizen from harm. To many people, “Black Lives Matter” is equivalent to professing hatred of police and failing to understand or appreciate the difficult situations officers face in the line of duty. However, the suggestion that in order to support law enforcement one must forswear the Black Lives Matter movement is folly. An interesting middle-ground argument, which supports police officers while still recognizing that some officers are culpable of misconduct, has been posited recently:

The choice between support for police and support for Black Lives Matter is a false choice. The critical questions the movement raises are questions that must be addressed in the interest of us all. They are questions that must be asked, too, in the interest of the “good cops” trotted out in response to news reports about the bad ones. …To join the protests is not necessarily to proclaim oneself “anti-cop” …there is an opportunity for law enforcement to become what it is supposed to be – a symbol of responsible use of power in service of all.131

The police motto is “to protect and serve,” and in becoming a police officer, one undertakes to uphold the law. However, many have pointed out that parts of the system are broken. Corruption, training geared toward a militaristic mindset, and a lack of checks on police power need to be examined critically. Several reforms have been suggested for dealing with police misconduct in the wake of heightened media exposure; below are only a few of many possibilities.

As direct military involvement in Afghanistan and Iraq winds down, the military is left with a surplus of equipment that is being handed down to local police forces. In providing the police with these leftovers, the government has transformed those meant to serve and protect the ordinary citizen into a paramilitary force. Looking at a police officer in riot control gear, or a SWAT officer about to embark on a raid, it is easy to confuse the image with that of a soldier serving in the Middle East. In addition to feeding a military mindset that encourages law enforcement agents to shoot first and ask questions later, the re-gifting of military equipment creates a desire to use it. As former New Haven police chief Nick Pastore revealed, “I had some tough-guy cops in my department pushing for bigger and more hardware. They used to say, ‘It’s a war out there.’ They like SWAT because it’s an adventure.”132 The introduction of military equipment leads to a willingness to use it even when it is not needed to de-escalate a situation. Many police departments with little to no substantial history of violent crime demand military-grade equipment as well, feeding the mentality that police must be highly militarized to maintain order effectively. Yes, there are instances where SWAT teams are necessary; the rise of modern technology and weaponry has seen to that. When SWAT was invented, however, it was designed to deal with volatile, high-risk situations, and those teams have been misused. The use of military-grade equipment for nonviolent, low-risk offenses needs to stop.

Another reform that should be seriously considered is halting the federal incentive programs that make drug busts so appealing to police forces. Maintaining SWAT teams is enormously expensive, and much of this expense is covered by federal programs regardless of whether the local force needs such a team in the first place. If the federal government ended programs like the Byrne grants, asset forfeiture arrangements, and other policies that make the drug war a lucrative business, the cost of maintaining and deploying SWAT teams might persuade police departments to reserve them for the missions they were created for, rather than the raids that more often than not terrorize low-risk offenders and innocent civilians. As the riots in Ferguson demonstrated, the use of military-grade equipment can severely harm nonviolent protestors and can even deepen an area’s unrest and escalate violence.

The issue of “less than lethal” weapons falls under the same category. As a variety of cases demonstrate, “less than lethal” does not guarantee that a weapon will be nonlethal; it states only that the officer’s intention in using it was to subdue rather than to kill, which does not necessarily prevent deaths. Raising law enforcement awareness of the dangers these weapons pose when used improperly could help decrease the number of tragic accidents. If there is one phrase that law enforcement should take from military training, it is “never point a weapon at anything you do not intend to shoot.”133 This phrase should be extended to “less than lethal” weapons as well. As cases like Victoria Snelgrove’s show, aiming a projectile weapon without being aware of potential victims can be deadly. The potential of these weapons to harm, maim, or kill has been well documented, and it is not negated by the “nonlethal” tagline placed upon them.

Police training also needs to be closely examined and changed. In June 2013, before the widespread social media reports of police brutality, the United States Department of Justice’s Bureau of Justice Statistics (BJS) released a report on the training of police recruits. It found that “the majority of police recruits receive their training in academies with a stress-based military orientation.”134 Stress-based training is modeled on military boot camp: it is “characterized by paramilitary drills, daily inspections, intense physical demands, public discipline, withholding privileges, and immediate reaction to infractions.”135 Although some argue that this type of training fosters self-discipline and command presence, others argue that it works against community-oriented policing, which focuses on building trust and a sense of community between officers and the civilians they are meant to protect. Military training is meant to teach recruits to dispatch an enemy with lethal force, and it instills an “us versus them” mentality in which those outside the group are unwelcome and regarded with suspicion. Teaching police in a military setting fosters the same mentality, making an officer more likely to react in the most lethal way possible instead of trying to defuse the situation and protect the lives of the citizens in his district. The BJS report also compared completion rates for stress and non-stress training academies: predominantly non-stress academies produced a completion rate of 89 percent, compared to 80 percent for stress academies, and non-stress candidates outperformed stress candidates in a variety of areas.136 One reform to strongly consider, then, is the establishment of more non-stress training academies that emphasize the skills necessary for community-oriented police work. Doing so would help dispel the “us versus them” mentality that appears so pervasive in police work today, which could in turn reduce police brutality.

In the wake of the deaths of Akai Gurley, Eric Garner, Michael Brown, and Eric Harris, one idea that has been raised repeatedly is holding the police to the same standards as ordinary citizens. The Supreme Court’s 1989 ruling in Graham v. Connor essentially prohibits second-guessing a police officer’s split-second decision to use force, requiring that its reasonableness be judged from the perspective of an officer on the scene rather than with hindsight.137 The refusals of grand juries to indict Officers Daniel Pantaleo and Darren Wilson following the killings of two unarmed men reflect this deference; Pantaleo’s case is especially troubling because of his history of misconduct allegations. Police officers are chosen to uphold the law, and as such should be held to a standard that reflects the seriousness of the responsibilities conferred upon them. Yet Balko notes that in many states, police unions advocate for a “law enforcement bill of rights” that essentially affords police officers “rights” above and beyond what regular citizens are given.138 Laws that shield law enforcement officials from the laws they are meant to uphold can instill a sense of being above the law. This is a problem, because:

…you could make a good argument that police should be held to a higher standard than regular citizens. And you could make a good argument that they should be held to the same standards. But it’s hard to make a good argument that they should be held to a lower one.139

The protestors who gathered to dispute the grand jury decisions on the officers involved in the deaths of Eric Garner and Michael Brown indicate that there is a strong movement for holding police responsible for wrongful deaths, and the conviction of Peter Liang for the death of Akai Gurley reflects this as well. The police, as guardians of American liberties, should be held to the same standards as the citizens they serve; if no one is above the law, the legal system should demonstrate that ideal. Only by changing the system as a whole can the tragedies detailed above be prevented from occurring again.

 

59 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 264.

60 Conor Friederdorf, “Eric Garner and the NYPD’s History of Deadly Chokeholds,” The Atlantic, Atlantic Media Company, 4 Dec. 2014, Web.

61 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 29-30.

62 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 31.

63 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 31.

64 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 32.

65 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 32.

66 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 34.

67 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 34-35.

68 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 37.

69 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 38-39.

70 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 39.

71 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 60-61.

72 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 78.

73 Matthew Fleischer, “’41st & Central: The Untold Story of the L.A. Black Panthers’ Featured in L.A. Times Magazine,” N.p., 4 Apr. 2011, Web.

74 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 79.

75 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 145.

76 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 152-153.

77 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 161.

78 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 169.

79 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 243.

80 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 244.

81 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 254.

82 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 310.

83 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 310.

84 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 314.

85 William B Waegel, “How Police Justify the Use of Deadly Force,” Social Problems 32.2 (1984): 144-55, JSTOR, Web, 146.

86 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 271.

87 U.S. Bureau of Labor Statistics, “Fatal Occupational Injuries, Total Hours Worked, and Rates of Fatal Occupational Injuries by Selected Worker Characteristics, Occupations, and Industries, Civilian Workers, 2012,” Department of Labor, 24 Apr. 2014, Web.

88 "2010 NPMSRP Police Misconduct Statistical Report -Draft-," PoliceMisconduct.net, Cato Institute, 05 Apr. 2011, Web.

89 "2010 NPMSRP Police Misconduct Statistical Report -Draft-," PoliceMisconduct.net, Cato Institute, 05 Apr. 2011, Web.

90 "2010 NPMSRP Police Misconduct Statistical Report -Draft-," PoliceMisconduct.net, Cato Institute, 05 Apr. 2011, Web.

91 “Reichle v. Howards (11-262),” LII / Legal Information Institute, Cornell University Law School, n.d. Web.

92 John W. Whitehead, A Government of Wolves: The Emerging American Police State, (New York: SelectBooks, 2013), 181.

93 “Reichle v. Howards.” SCOTUSblog RSS. N.p., n.d. Web.

94 Quoted in Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 227.

95 Kevin McCoy, “Choke-hold Cop Sued in Prior Misconduct Cases,” USA Today, USA Today, 4 Dec. 2014. Web.

96 Ryan Gabrielson, Ryann Grochowski Jones, and Eric Sagara, “Deadly Force, in Black and White,” Top Stories RSS, ProPublica, 10 Oct. 2014, Web.

97 Ryan Gabrielson, Ryann Grochowski Jones, and Eric Sagara, “Deadly Force, in Black and White,” Top Stories RSS, ProPublica, 10 Oct. 2014, Web.

98 Ryan Gabrielson, Ryann Grochowski Jones, and Eric Sagara, “Deadly Force, in Black and White,” Top Stories RSS, ProPublica, 10 Oct. 2014, Web.

99 Sarah Maslin Nir, “Officer Peter Liang, on Stand, Breaks Down as He Recalls Brooklyn Killing,” The New York Times, The New York Times, 08 Feb. 2016. Web.

100 Sarah Maslin Nir, “Officer Peter Liang, on Stand, Breaks Down as He Recalls Brooklyn Killing,” The New York Times, The New York Times, 08 Feb. 2016. Web.

101 Stephanie Clifford, “Family of Akai Gurley, Man Fatally Shot by Officer in Brooklyn, Sues New York City,” The New York Times, The New York Times, 21 May 2015. Web.

102 Christopher Fuchs, “NYPD Officer Peter Liang Guilty of Second-Degree Manslaughter in Akai Gurley Killing,” NBC News, NBC News, 11 Feb. 2016. Web.

103 Christopher Fuchs, “NYPD Officer Peter Liang Guilty of Second-Degree Manslaughter in Akai Gurley Killing,” NBC News, NBC News, 11 Feb. 2016. Web.

104 Quoted in Joseph Goldstein and Marc Santora, “Staten Island Man Died From Chokehold During Arrest, Autopsy Finds,” The New York Times, The New York Times, 01 Aug. 2014. Web.

105 Al Sharpton, “Eric Garner’s Death Ruled a Homicide,” NBC New York, NBC News, 1 Aug. 2014. Web.

106 Ray Sanchez, Dana Ford, Catherine E. Shoichet, Dominique Debucquoy-Dodley, Ben Brumfield, Daniel Verello, and Leigh Remizowski, “Protests after N.Y. Cop Not Indicted in Chokehold Death; Feds Reviewing Case,” CNN, Cable News Network, 04 Dec. 2014. Web.

107 “What Happened in Ferguson?” The New York Times. The New York Times, 12 Aug. 2014. Web.

108 “What Happened in Ferguson?” The New York Times. The New York Times, 12 Aug. 2014. Web. 08 Mar. 2016.

109 Lyle Jeremy Rubin, “A Former Marine Explains All the Weapons of War Being Used by Police in Ferguson,” The Nation, The Nation, 20 Aug. 2014, Web.

110 Lyle Jeremy Rubin, “A Former Marine Explains All the Weapons of War Being Used by Police in Ferguson,” The Nation, The Nation, 20 Aug. 2014, Web.

111 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 276.

112 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 275.

113 Donovan Slack and Suzanne Smalley, “In Snelgrove Files, Officers Recount Night of Chaos,” Boston.com, The Boston Globe, 21 Sept. 2005, Web.

114 “Violence Denounced in Eulogy for College Student.” The New York Times. The New York Times, 27 Oct. 2004. Web.

115 Lyle Jeremy Rubin, “A Former Marine Explains All the Weapons of War Being Used by Police in Ferguson,” The Nation, The Nation, 20 Aug. 2014, Web.

116 Lyle Jeremy Rubin, “A Former Marine Explains All the Weapons of War Being Used by Police in Ferguson,” The Nation, The Nation, 20 Aug. 2014, Web.

117 Lyle Jeremy Rubin, “A Former Marine Explains All the Weapons of War Being Used by Police in Ferguson,” The Nation, The Nation, 20 Aug. 2014, Web.

118 “What Happened in Ferguson?” The New York Times, The New York Times, 12 Aug. 2014, Web.

119 “Memorandum: Department of Justice Report Regarding the Criminal Investigation into the Shooting Death of Michael Brown by Ferguson, Missouri Police Officer Darren Wilson,” Department of Justice Report (2015): 1-86, Web, 86.

120 Dylan Goforth, “Family of Man Killed by Undercover Reserve Deputy Asks Sheriff’s Office to Release Video,” Tulsa World, Tulsa World, 10 Apr. 2015. Web.

121 John Bacon and William M. Welch. “Tulsa Reserve Deputy Charged with Manslaughter,” USA Today, USA Today, 13 Apr. 2015. Web.

122 Tom Dart, “Oklahoma Officer Who Mistook Gun for Taser Charged in Killing of Black Man,” The Guardian, The Guardian, 14 Apr. 2015, Web.

123 Catherine E. Shoichet, Jason Morris, and Ed Lavandera, “Tulsa Shooting: Deputy Robert Bates Charged,” CNN, Cable News Network, 14 Apr. 2015. Web.

124 “Product: Model 60.” Product: Model 60. Smith & Wesson, n.d. Web. 09 Mar. 2016.

125 “TASER X26C | TASER International.” TASER X26C | TASER International. N.p., n.d. Web.

126 Holly Yan, “How Easy Is It to Confuse a Gun for a Taser?” CNN, Cable News Network, 20 Apr. 2015. Web.

127 Holly Yan, “How Easy Is It to Confuse a Gun for a Taser?” CNN, Cable News Network, 20 Apr. 2015, Web.

128 Holly Yan, “How Easy Is It to Confuse a Gun for a Taser?” CNN, Cable News Network, 20 Apr. 2015, Web.

129 James Queally, “Experts Doubt Tulsa Deputy’s Claim He Confused Pistol with Stun Gun,” Los Angeles Times, Los Angeles Times, 17 Apr. 2015, Web.

130 Corey Jones, “External Report: Sheriff’s Office Use-of-deadly-force Policies Allow for Misunderstanding, Inconsistencies,” Tulsa World, Tulsa World, 1 Mar. 2016. Web.

131 Ryan P. Cumming, “Support Black Lives Matter or Support Police? It’s a False Choice,” The Huffington Post, TheHuffingtonPost.com, 12 Aug. 2015. Web.

132 Timothy Egan, “Soldiers of the Drug War Remain on Duty,” The New York Times, The New York Times, 28 Feb. 1999, Web.

133 Lyle Jeremy Rubin, “A Former Marine Explains All the Weapons of War Being Used by Police in Ferguson,” The Nation, The Nation, 20 Aug. 2014, Web.

134 Karl W. Bickel, “Recruit Training: Are We Preparing Officers for a Community Oriented Department?” Community Policing Dispatch, United States Department of Justice, June 2013. Web.

135 Karl W. Bickel, “Recruit Training: Are We Preparing Officers for a Community Oriented Department?” Community Policing Dispatch, United States Department of Justice, June 2013. Web.

136 Karl W. Bickel, “Recruit Training: Are We Preparing Officers for a Community Oriented Department?” Community Policing Dispatch, United States Department of Justice, June 2013. Web.

137 Chase Madar, “Why It’s Impossible to Indict a Cop,” The Nation, The Nation, 24 Nov. 2014. Web.

138 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 328.

139 Radley Balko, Rise of the Warrior Cop: The Militarization of America’s Police Forces, (New York: PublicAffairs, 2014), 336.

 

 

Bibliography

Associated Press, The. “Violence Denounced in Eulogy for College Student.” The New York Times. The New York Times, 27 Oct. 2004. Web.

Bacon, John, and William M. Welch. “Tulsa Reserve Deputy Charged with Manslaughter.” USA Today. USA Today, 13 Apr. 2015. Web. 9 Mar. 2016.

Baker, Al, J. David Goodman, and Benjamin Mueller. “Beyond the Chokehold: The Path to Eric Garner’s Death.” The New York Times. The New York Times, 13 June 2015. Web. 09 Mar. 2016.

Balko, Radley. Overkill: The Rise of Paramilitary Police Raids in America. Washington, D.C.: Cato Institute, 2006. Cato Institute. Cato Institute, 17 July 2006. Web. 8 Mar. 2016.

Balko, Radley. Rise of the Warrior Cop: The Militarization of America’s Police Forces. New York: PublicAffairs, 2014. Print.

Balko, Radley. “Some Thoughts on Eric Garner.” The Washington Post. The Washington Post, 4 Dec. 2014. Web. 9 Mar. 2016.

“Battlefield USA: American Police ‘excessively Militarized’ – ACLU Study.” RT International. N.p., 26 June 2015. Web. 9 Mar. 2016.

Bickel, Karl W. “Recruit Training: Are We Preparing Officers for a Community Oriented Department?” Community Policing Dispatch. United States Department of Justice, June 2013. Web. 10 Mar. 2016.

Cassidy, John. “Cops Should Be Cops – Not Combat Troops.” The New Yorker. The New Yorker, 14 Aug. 2014. Web. 09 Mar. 2016.

Clifford, Stephanie. “Family of Akai Gurley, Man Fatally Shot by Officer in Brooklyn, Sues New York City.” The New York Times. The New York Times, 21 May 2015. Web. 08 Mar. 2016.

Cumming, Ryan P. “Support Black Lives Matter or Support Police? It’s a False Choice.” The Huffington Post. TheHuffingtonPost.com, 12 Aug. 2015. Web. 09 Mar. 2016.

Dart, Tom. “Oklahoma Officer Who Mistook Gun for Taser Charged in Killing of Black Man.” The Guardian. The Guardian, 14 Apr. 2015. Web. 9 Mar. 2016.

Delattre, Edwin J. Character and Cops: Ethics in Policing. Washington, D.C.: AEI, 1994. Print.

Egan, Timothy. “Soldiers of the Drug War Remain on Duty.” The New York Times. The New York Times, 28 Feb. 1999. Web. 9 Mar. 2016.

Fleischer, Matthew. “’41st & Central: The Untold Story of the L.A. Black Panthers’ Featured in L.A. Times Magazine.” N.p., 4 Apr. 2011. Web.

Friedersdorf, Conor. “Eric Garner and the NYPD’s History of Deadly Chokeholds.” The Atlantic. Atlantic Media Company, 4 Dec. 2014. Web. 09 Mar. 2016.

Fuchs, Christopher. “NYPD Officer Peter Liang Guilty of Second-Degree Manslaughter in Akai Gurley Killing.” NBC News. NBC News, 11 Feb. 2016. Web. 08 Mar. 2016.

Geller, William A., and Hans Toch, eds. Police Violence: Understanding and Controlling Police Abuse of Force. New Haven: Yale UP, 1996. Print.

Healy, Gene. “Deployed in the U.S.A.: The Creeping Militarization of the Home Front.” Washington, D.C.: Cato Institute, 2003. Cato Institute. Cato Institute, 17 Dec. 2003. Web. 8 Mar. 2016.

Goforth, Dylan. “Family of Man Killed by Undercover Reserve Deputy Asks Sheriff’s Office to Release Video.” Tulsa World. Tulsa World, 10 Apr. 2015. Web. 9 Mar. 2016.

Goldstein, Joseph, and Marc Santora. “Staten Island Man Died From Chokehold During Arrest, Autopsy Finds.” The New York Times. The New York Times, 01 Aug. 2014. Web. 09 Mar. 2016.

Goodman, J. David, and Al Baker. “Wave of Protests After Grand Jury Doesn’t Indict Officer in Eric Garner Chokehold Case.” The New York Times. The New York Times, 03 Dec. 2014. Web. 09 Mar. 2016.

Johnson, Marilynn S. Street Justice: A History of Police Violence in New York City. Boston: Beacon, 2003. Print.

Jones, Corey. “External Report: Sheriff’s Office Use-of-deadly-force Policies Allow for Misunderstanding, Inconsistencies.” Tulsa World. Tulsa World, 1 Mar. 2016. Web. 09 Mar. 2016.

Kerby, Sophia. “The Top 10 Most Startling Facts About People of Color and Criminal Justice in the United States.” Center for American Progress. Center for American Progress, 13 Mar. 2012. Web. 20 Apr. 2016.

Kuhns, Joseph B., and Johannes Knutsson. Police Use of Force: A Global Perspective. Santa Barbara: Praeger, 2010. Print.

Lynch, Timothy. “In Defense of the Exclusionary Rule.” Washington, D.C.: Cato Institute, 1998. Cato Institute. Cato Institute, 1 Oct. 1998. Web. 8 Mar. 2016.

Madar, Chase. “Why It’s Impossible to Indict a Cop.” The Nation. The Nation, 24 Nov. 2014. Web. 09 Mar. 2016.

McClam, Erin. “Law Enforcement Experts Weigh In on Tulsa Reserve Deputy Robert Bates’ Explanation.” NBC News. NBC News, 17 Apr. 2015. Web. 09 Mar. 2016.

McCoy, Kevin. “Choke-hold Cop Sued in Prior Misconduct Cases.” USA Today. USA Today, 4 Dec. 2014. Web. 9 Mar. 2016.

“Memorandum: Department of Justice Report Regarding the Criminal Investigation into the Shooting Death of Michael Brown by Ferguson, Missouri Police Officer Darren Wilson.” Department of Justice Report (2015): 1-86. Web. 8 Mar. 2016.

Nir, Sarah Maslin. “Officer Peter Liang, on Stand, Breaks Down as He Recalls Brooklyn Killing.” The New York Times. The New York Times, 08 Feb. 2016. Web. 20 Apr. 2016.

“Product: Model 60.” Product: Model 60. Smith & Wesson, n.d. Web. 09 Mar. 2016.

Queally, James. “Experts Doubt Tulsa Deputy’s Claim He Confused Pistol with Stun Gun.” Los Angeles Times. Los Angeles Times, 17 Apr. 2015. Web. 09 Mar. 2016.

“Reichle v. Howards (11-262).” LII / Legal Information Institute. Cornell University Law School, n.d. Web. 20 Apr. 2016.

“Reichle v. Howards.” SCOTUSblog. N.p., n.d. Web. 20 Apr. 2016.

Roleff, Tamara L., ed. Police Brutality. San Diego: Greenhaven, 1999. Print.

Rubin, Lyle Jeremy. “A Former Marine Explains All the Weapons of War Being Used by Police in Ferguson.” The Nation. The Nation, 20 Aug. 2014. Web. 08 Mar. 2016.

Sanchez, Ray, Dana Ford, Catherine E. Shoichet, Dominique Debucquoy-Dodley, Ben Brumfield, Daniel Verello, and Leigh Remizowski. “Protests after N.Y. Cop Not Indicted in Chokehold Death; Feds Reviewing Case.” CNN. Cable News Network, 04 Dec. 2014. Web. 09 Mar. 2016.

Sharpton, Al. “Eric Garner’s Death Ruled a Homicide.” NBC New York. NBC News, 1 Aug. 2014. Web. 09 Mar. 2016.

Slack, Donovan, and Suzanne Smalley. “In Snelgrove Files, Officers Recount Night of Chaos.” Boston.com. The Boston Globe, 21 Sept. 2005. Web. 08 Mar. 2016.

Shoichet, Catherine E., Jason Morris, and Ed Lavandera. “Tulsa Shooting: Deputy Robert Bates Charged.” CNN. Cable News Network, 14 Apr. 2015. Web. 09 Mar. 2016.

“TASER X26C | TASER International.” TASER X26C | TASER International. N.p., n.d. Web. 09 Mar. 2016.

Toch, Hans. Cop Watch: Spectators, Social Media, and Police Reform. Washington, DC: American Psychological Association, 2012. Print.

Tucker, Eric. “Why Most Police Shootings Don’t End With Prosecutions.” Business Insider. Business Insider, Inc, 25 Nov. 2014. Web. 09 Mar. 2016.

"2010 NPMSRP Police Misconduct Statistical Report -Draft-." PoliceMisconduct.net. Cato Institute, 05 Apr. 2011. Web. 08 Mar. 2016.

U.S. Bureau of Labor Statistics. “Fatal Occupational Injuries.” Fatal Occupational Injuries, Total Hours Worked, and Rates of Fatal Occupational Injuries by Selected Worker Characteristics, Occupations, and Industries, Civilian Workers, 2012 (2012): 1-4. Bureau of Labor Statistics. Department of Labor. Web.

Waegel, William B. “How Police Justify the Use of Deadly Force.” Social Problems 32.2 (1984): 144-55. JSTOR. Web. 08 Mar. 2016.

Weber, Diane Cecilia. “Warrior Cops: The Ominous Growth of Paramilitarism in the American Police Department.” Washington, D.C.: Cato Institute, 1998. Cato Institute. Cato Institute, 26 Aug. 1999. Web. 8 Mar. 2016.

“What Happened in Ferguson?” The New York Times. The New York Times, 12 Aug. 2014. Web. 08 Mar. 2016.

Whitehead, John W. Battlefield America: The War on the American People. New York: SelectBooks, 2015. Print.

Whitehead, John W. A Government of Wolves: The Emerging American Police State. New York: SelectBooks, 2013. Print.

Yan, Holly. “How Easy Is It to Confuse a Gun for a Taser?” CNN. Cable News Network, 20 Apr. 2015. Web. 09 Mar. 2016.

 

 

 

Mental Health Accessibility in America

Darian Dozier and Brad Frederick, Class of 2016 (Psychology)

 

According to the Substance Abuse and Mental Health Services Administration (n.d.), only 19.6 million adults with any mental disorder received treatment in 2013, a figure that amounts to less than half of those with mental illnesses in the United States. Furthermore, up to 25% of primary care patients suffer from depression, yet primary care physicians identify less than one third of these patients (“Data on Behavioral Health in the United States”). Despite advances in medical technology, a large segment of the population goes without treatment. Why is this so? It could be argued that the mental health care system in the United States has several flaws, ranging from cost to resource availability, that prevent it from reaching its potential. How did these problems arise, and how do we fix them? In this section, we discuss the evolution of mental health care in the United States, examining how the system we have inherited came to be, as well as the current issues both inside and outside the system that may prevent individuals from obtaining the full treatment available to them. With this approach, we hope to locate both the roots of our current problems and possible ways to rectify them.

Historical Context

The history of the mental health system in the United States, like the progression of mental health care itself, has followed a long and winding road to its present state. From eighteenth-century asylums to the Community Mental Health Centers Act, people with mental illnesses have had varying levels of access to proper mental health care and have experienced a wide range of care and treatments. The birth of the mental health system in the United States was a modest one, at best. During the early colonial period, there was no consensus on how those with mental illnesses should be treated or managed. In fact, mental illness was “almost exclusively a private, family matter,” which generally meant confining the afflicted to their homes under lock and key (Hinshaw 65). This “private” approach to mental illness was facilitated by the rural nature of the New World: a “lack of centralized population gave rise to few public concerns” and mental health was often overlooked (Hinshaw 65). In other words, low population density resulted in a weak sense of community and an underdeveloped sense of the public good.

However, this began to change as more people immigrated to the colonies and the population grew. Over time the United States grew more urban. In order to take care of the increasing number of people with mental illnesses, the Pennsylvania Hospital opened in 1752. Soon other institutions devoted to the care of the mentally ill began to appear in several major cities. “Care,” though, is a word that should be used loosely. As some authors have noted, asylums were often more “like prisons than hospitals” (Barlow and Durand 14). Conditions ranged from acceptable to appalling: in several instances, patients were chained to walls in the cellars, had their nasal cavities purged (an activity commonly believed to help remedy mental illnesses at the time), and were freely whipped by those who were in charge of supervision in order to control them during what were most likely psychotic episodes (Hinshaw 66).

These deplorable conditions did not go unchallenged. Opponents of the asylum system, such as psychiatrist Philippe Pinel, began pioneering new techniques to treat those with mental illnesses. When Pinel became the superintendent of La Bicêtre, a hospital in Paris, in 1791, he “instituted remarkable reforms by removing all chains used to restrain patients and instituting humane and positive psychological interventions” (Barlow and Durand 14). Other positive changes included providing sunny rooms to those staying at the hospital, as well as allowing them time to freely exercise on the grounds. This was the birth of what was then known as the traitement moral, or “moral treatment.” Pinel was not alone in his push for improved care in asylums. Those in asylums were often kept in jails with criminals and in some cases deprived of clothing, heat, and bathrooms. Such poor conditions encouraged Dorothea Dix to begin her own stateside crusade (“Dorothea Dix”). During the 1840s, she campaigned for increased standards of care for those with mental illnesses, and “successfully persuaded the U.S. government to fund the building of 32 state psychiatric hospitals” over a forty-year period (“Module 2: A Brief History of Mental Illness”). Her goodwill was not confined to the United States. While visiting Europe, Dix managed to get an audience with Pope Pius IX, who “personally ordered construction of a new hospital for the mentally ill after hearing her report” (“Dorothea Dix”).

While her intentions were undoubtedly noble, Dix’s advocacy resulted in the opposite of her original goal. Although her campaign led to new psychiatric hospitals being constructed, it did not lead to these hospitals being adequately staffed. Thus, after the public began committing the mentally ill to these asylums en masse, the lack of properly trained mental health professionals soon resulted in “a rapid transition from moral therapy to custodial care” (Barlow and Durand 15). This extremely high patient-to-caregiver ratio, along with the then-current conception of mental illness as incurable, meant that many people who were thought to be disturbed were left to languish in mental hospitals, many of which failed to live up to the promise of improved conditions and adequate care (Barlow and Durand 15). Therefore, many people with mental health issues were soon confined to asylums and were yet again living in squalor.

The idea of an incurable mental illness began to change following Louis Pasteur’s development of the germ theory of disease several decades later, which put forth the idea that disease was caused by outside agents and was treatable. While the etiology of many mental disorders was still unknown, psychologists began adopting this model as well. Nowhere is this more apparent than in the initial treatment for general paresis, a neuropsychiatric disorder that can lead to memory problems and possibly delusions. While most psychotic patients maintain the same level of symptom severity throughout their lifetime, those with advanced syphilis often exhibit psychotic symptoms that deteriorate into death. Consequently, they were given the separate diagnosis of general paresis. Doctors eventually found that general paresis could be cured with the disease malaria, and they soon began injecting blood from those who had contracted malaria into patients with this disorder (Barlow and Durand 12). Patients treated with malaria saw a marked improvement in their psychotic symptoms. This observed linkage between syphilis, general paresis, and malaria led to a major paradigm shift in the field of psychology. Psychologists and psychiatrists became convinced that a cure could be found for most, if not all, mental illnesses.

Further rooting the field in biology, psychiatrist Emil Kraepelin began promoting the concept of “discrete disorders, distinguishable from one another. Each with its own symptomatology and course; each with its own specific pathophysiology.” He also postulated that nosology, the branch of science dealing with classifying and describing diseases, “should be the principal guideline in exploring the biological roots of mental pathology” (Praag 29). One of his contemporaries, psychiatrist John Gray, also believed that “mental illness was due to physical causes that could be found in the brain, as opposed to the long-standing belief that mental illness was due to ‘moral’ causes,” and advocated viewing mental illness the same as physical illnesses (“19th-Century Psychiatrists of Note;” Barlow and Durand 12).

This approach resulted in the formulation of several new kinds of treatments. Following John Gray’s model, in which mental illnesses have a physical explanation, basic psychosurgeries such as topectomies (the excision of select parts of the brain), leucotomies (severing nerve fibers in different areas of the brain), and lobotomies (a type of leucotomy in which nerve fibers to and from the prefrontal cortex are cut) began to emerge. Treatments like the lobotomy were later touted as “miracle cures” and received a surge of popularity in the middle of the twentieth century. By the end of the 1950s, “an estimated 50,000 lobotomies [were] performed in the United States” (Pan). The creation of several antipsychotic drugs then caused psychosurgery to decline in popularity, and the changing landscape of mental health care generated the push for deinstitutionalization: the process of replacing psychiatric hospitals with mental health services in the community.

Those who supported deinstitutionalization lobbied for community-based care over the asylums that had dominated for the previous two hundred years in order to let those with mental illnesses remain in their own communities, to reduce the amount of egregious neglect, and to reduce the suffering of those receiving treatment (“Module 2: A Brief History of Mental Illness;” Sheth). This idea found its ultimate expression in one of the last pieces of legislation that John F. Kennedy would sign into law: the Community Mental Health Centers Act of 1963, which called for “2,000 community mental health centers throughout the nation [… and] a 50 percent reduction in the number of mental health patients under custodial care” (Kofman 27). In order to achieve this, the act moved resources away from formal institutions and funneled them into community-based mental health care programs, which changed not only “how services were provided” but, equally important, “who performed those services” (“Still Pursuing the Promise;” Kliewer, McNally, and Trippany 40). In this way, treatment delivery was vastly altered. Until this bill was passed, those with severe mental illnesses (SMIs) were confined in hospitals, while those who were basically healthy but struggled with such issues as marital conflict were sent to mental health counselors (Kliewer, McNally, and Trippany 40). After the passage of the bill, services for those with SMIs were offered by many non-medical professionals, a trend that has continued into the present.

There were obvious benefits from the bill. As Kliewer, McNally, and Trippany mention, greater independence, improved quality of life, a reduction in psychotropic needs, and increased socialization and adaptability to change have all been documented in the research literature. Unfortunately, the benefits for some came at a cost for others. Stigma—which will be discussed in detail later in this chapter—often kept those with mental illnesses unemployed and in isolation, a problem that persists into the present day. Furthermore, Mechanic points out that “residence in the community also allowed access to street drugs and alcohol,” which led to higher rates of “psychiatric and substance abuse comorbidities” (1549). Thus, some of the released population were simply reinstitutionalized. Others found themselves “homeless, isolated, and victimized,” if they were lucky. If they were not, they lost their lives (Kliewer, McNally, and Trippany 41). One way lawmakers tried to counter this problem was through the introduction of Medicaid in 1965, a program in which money from the federal government helps states provide medical assistance to low-income residents. Its most important role was acting as “insurance-like coverage where previously none existed” (Frank, Goldman, and Hogan 105). However, the federal government was unwilling to pay for state mental hospitals. As Frank, Goldman, and Hogan estimate, covering the cost of state mental hospitals would have increased Medicaid spending by seventy-eight percent, so the government “prohibited Medicaid payments to institutions of mental disease (IMDs) for people ages 22–64” (105). Therefore, patients were moved to either nursing homes or general hospitals, many of which were not fully equipped to care for them (Pan).

The range and variety of mental health care was expanded in 1980 under Jimmy Carter’s Mental Health Systems Act. This act “restructured the federal CMHC [Community Mental Health Center] program by strengthening the linkages between the federal, state and local governments.” It did so by providing “a litany of federal grants […] for CMHCs to assist in expanding services to meet an array of priority populations” (Hammond). In this way, the federal government was able to target specific services, like community mental health centers, for funding. This approach was not universally accepted, though, and it was repealed just a year later by Ronald Reagan’s Omnibus Budget Reconciliation Act, which switched to a system of single block grants that left the allocation of funds to the discretion of each state (“National Institute of Mental Health”). Targeted financial aid from the federal government for mental health care was essentially eliminated, and soon general financial support declined as well. The federal government’s spending on mental health decreased by 30 percent, and by 1985, federal funding made up about 11 percent of community mental health agency budgets (Pan).

Despite this, the twentieth century ended on a high note. Known colloquially as “the decade of the brain,” the nineties saw an upswing in the amount of research devoted to the brain and the neurosciences (Mechanic 1549). This resulted in an increased understanding of neuroplasticity (the adaptability of the brain), the effects of heredity and genetics, and different brain imaging technologies, among other things (“A Decade after The Decade of the Brain”). The ramifications of this research are still being observed in the present. While increased understanding is always a gain, it may come at a cost. Medications have come to dominate as the treatment du jour, with Americans spending around $34 billion on psychotropics and approximately twenty percent of adults “taking at least one psychotropic medication” (Smith). Neuroscience is not the only cause for alarm for some. The mentality of the “current prevalent hospitalization model,” seemingly an outcome of deinstitutionalization, features extremely short hospitalizations (typically lasting five or six days). This leads some clinicians to believe that such relatively short stays “may diminish opportunities for a sustained recovery” for patients (Glick, Sharfstein, and Schwartz 206). From the ancient “imbalance of humors” theory to the current definition of a mental disorder—“a syndrome characterized by clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning”—psychology has come a long way (American Psychiatric Association, 2013). All of these current issues in psychology have emerged from historical precedents, and each affects both what treatment options are available and how patients are able to receive mental health care in the United States.

 

Present-Day Care in America

The numerous changes that have occurred in the United States mental health care system have informed our current approach to mental health care. The old push for deinstitutionalization, for example, lingers to this day and has resulted in a reduction in the number of psychiatric beds available, to “about 14 per 100,000 people” (Amadeo). Whether this is a positive or a negative change remains debatable. Regardless, there are those who continue to lament the state of the system, even as recently as 2015. Many of the criticisms stem from the general lack of funding for many mental health programs. Representative Tim Murphy (R-PA) laments that “only a small fraction of the nearly $130 billion appropriated by the federal government for mental health finds its way down to families or communities” (“How Do We Fix America’s Mental Health Care System?”).

In spite of whatever obstacles may be present, steps have been taken to improve the way Americans access mental health services. The Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008 (also known as the MHPAEA) required that insurance companies treat “mental and behavioral health and substance use disorder equal to (or better than) medical/surgical coverage” in group health plans (“Does Your Insurance Cover”). Since then, even larger changes have occurred regarding mental health care and insurance coverage. Building on the MHPAEA, the passage of the Affordable Care Act (hereafter, ACA) required that “most individual and small employer health insurance plans […] cover mental health and substance use disorder services” by adding them to the list of “essential health benefits” required by law to be covered (“Health Insurance and Mental Health Services”). Now, most health plans cover preventative services, and one cannot be denied coverage “due to pre-existing health conditions, including mental illnesses” (“Health Insurance and Mental Health Services”).

It also appears that the American model of psychological care has seen a shift in focus, with more emphasis placed on prescribing drugs than on prescribing psychotherapy. This is a result of pharmaceutical companies who profit from psychotropics, insurance companies who have “reimbursement rates and policies that discourage talk therapy,” and the simple fact that “a psychiatrist can earn $150 for three 15-minute medication visits compared with $90 for a 45-minute talk therapy session” (Harris). Several research articles also mention the rising dependence on psychotropics by mental health professionals—in 2003, for example, spending on antipsychotic medication grew 22.1 percent while the overall spending grew by 11.5 percent that same year (Frank, Conti, and Goldman). Further research has found that between 1996 and 2006, “per capita spending on psychotropic drugs tripled” (Frank, Goldman, and McGuire 654). Clearly, the legacy of the “decade of the brain” is alive and well in the twenty-first century.

History is present in other ways as well. As its past would suggest, Medicaid provides more limited coverage than some would prefer. At the discretion of the state, programs “provide some mental health services and some offer substance use disorder services” (“Health Insurance and Mental Health Services”). However, this relative freedom for states has some negative outcomes: seventeen states have rejected any kind of Medicaid expansion as proposed by the ACA (“Where the States Stand”). Medicare coverage also has a relatively narrow scope regarding mental health care. Consequently, a relatively large proportion of the United States’ population has restricted access to mental health care resources.

 

Two Global Comparisons

While the United States usually falls somewhere in the middle in broad global comparisons, two of the countries with the most efficient mental health care systems are Germany and Denmark. A study by Moran and Jacobs measured input and output variables such as the number of psychiatrists, the number of psychiatric beds, the number of discharges (all per 1,000 population), and the average length of stay; they also took environmental variables like alcohol consumption, income, education, and unemployment into account. Across several different models, both countries tended to fall in the upper half of all the countries studied, and Denmark was consistently rated higher than the United States on each of these measures (Moran and Jacobs 94). These countries’ scores for mental health integration—based on a mental health integration index created by the Economist Intelligence Unit—were some of the highest as well, with Germany having the top global score (Meek). This index was centered on five key areas, which included “medical provision, human rights, stigma, [and] the ability to live a fulfilling family life and employment” (“Germany leads Europe”). Much as in the United States, both countries have slowly been progressing toward deinstitutionalization, and both systems are relatively decentralized. Thus, following these European examples might relieve the current American system of the issues that have plagued it.

Germany

The German health care system divides power among the federal government, the governments of the individual Bundesländer (what could loosely be deemed states), and “competing, not-for-profit, nongovernmental health insurance funds called ‘sickness funds’” which provide health insurance coverage (Mossialos et al. 63). Mental health care tends to be more community-based and is organized as a decentralized, subsidiary system (Salize, Rössler, and Becker 92). In this system, “a key characteristic is the particularly wide gap between inpatient and outpatient services, which are funded separately and staffed by different teams” (Salize, Rössler, and Becker 92). Such heterogeneity is a result of the highly decentralized state: there is no “National Health Service,” and federal authorities may only organize services “in the event that private, volunteer, or other organizations are unable to provide the service.” Individual Bundesländer plan and regulate mental health care within a loose legal framework supported by the national government (Salize, Rössler, and Becker 93).

Despite the push for decentralization, some authors have noted what they call “trans-institutionalisation,” or a shift in placement from one institution to another, arising in Germany; most notably, there have been “rising numbers of forensic beds, involuntary hospital admissions, and places in supported housing” (Priebe et al. 124). Germany is not alone in this pattern. In the same study, England, Italy, the Netherlands, Spain, and Sweden were also examined, with broadly similar results: a rise in forensic beds in all six countries, a rise in involuntary admissions in half, a rise in the number of places in supported housing, a decrease in psychiatric beds in five of the six countries, and a rise in the prison population in all six.

It has also been noted that “Germany is one of the countries with highest expenditure for mental health care in the world,” yet little aid is given through community mental health, and “no national mental health policy exists” (Stierlin et al.). Recently, though, legal changes have resulted in more community care options becoming available, and as more community care programs are implemented, more data has been gathered. Some researchers have found that “intensified outpatient services do not contribute to a reduction of psychiatric treatment costs but to increased efficiency of psychiatric treatment” (Stierlin et al.). Limiting this conclusion, however, is the fact that the changes in Germany are confined to pilot studies. Other researchers, it is worth noting, have found such results “irrespective of programme design” (qtd. in Stierlin et al.).

Denmark

In Denmark, the “underlying principle of the health care system is to provide universal access to each of its citizens” (Mossialos et al. 33). To achieve this goal, the mental health care system in Denmark “consists of two relatively independent sectors”: the larger regional health authorities and individual municipalities (Hoof et al. 3). The change from institutional care to community-based care—a transition first seen in the United States during the 1970s—was established near the beginning of the 1990s in Denmark and is still currently implemented (“Mental Health Briefing Sheets”).

During this transition, from 1987 to 2007, the number of psychiatric beds was cut in half (Wahlbeck et al. 454). Regional authorities maintained psychiatric hospitals that specialized in outpatient care, while municipal authorities took care of social psychiatry and “support services in the fields of day-activities, employment and education” (Hoof et al. 3). Mental health care in Denmark is financed mainly through national taxes, though “regional and local authorities decide for themselves which part of their health budgets will be spent on mental health care services” (Hoof et al. 3). Despite this relative autonomy, there are certain incentives to promote community support: for example, local authorities “have to pay for prolonged hospitalization when adequate social psychiatric or local residential care is not available” (Hoof et al. 3). Thus, local authorities are held accountable for providing effective treatment, as failure at the local level costs them more in the long run. No comparable system exists in the U.S., where states face far less regulation in this regard.

Unlike in some countries, though, mental health care does not stop at the treatment level in Denmark. All disabled adults—including those with SMIs—are entitled, through Danish national legislation, to education or vocational training, and approximately a quarter of apartments in subsidized housing are reserved for individuals suffering from a “social or mental health problem” (Hoof et al. 3). Furthermore, the governmental program “PsykiatriFonden” is aimed at “education and prevention of mental disorders through a countrywide educational contribution to schoolchildren” (“Mental Health Briefing Sheets”). In addition, programs that target mental health in the workplace and among older people, and that combat stigma, have also been implemented (“Mental Health Briefing Sheets”). In this regard, the Danish mental health system takes a more holistic approach than those of other countries, as it encompasses treatment, education, and social support for those with mental illnesses.

As both the German and Danish examples demonstrate, mental health care systems can be relatively decentralized and still be effective. In fact, Hoof et al. found that “key persons […] appreciate the national de-institutionalization process” and that this division featured enough cooperation that it “ensured that municipalities are closely involved” in the mental health system (3-4). One method of measuring this effectiveness is the mean life expectancy, which has increased for both Danish men and women between 1987 and 2006, especially “among young people with mental disorders” (Wahlbeck et al. 454). These authors posit that a number of problems, such as “unhealthy lifestyle, inadequate access to good-quality physical healthcare, and a culture of not taking physical disease into consideration” could affect those with mental disorders, who are more often “poor, unemployed, single and marginalized” (Wahlbeck et al. 455).

However, certain problems can arise in a decentralized system: in both Germany and the United States, when states lack strong incentives from the national government to provide optimal mental health care, the result is a “fragmented system of mental health care provision and mental health care funding” that produces disparate levels of care across different localities (Salize, Rössler, and Becker 101). Obviously, this matter needs to be addressed in order to provide better coverage regardless of system. In spite of this major issue, it may still be productive to keep these international models in mind as we turn to current issues within the United States mental health care system.

 

Reasons One Might Forego Treatment

Dissatisfaction with the problems found in the general health care industry has been voiced throughout the previous decades, and the mental health system in the United States is not immune to criticism. The desire to change the system is not uncommon among those both inside and outside the medical field, and there are about as many opinions on what to do with the mental health care system as there are voices to propose them. A 2015 headline from U.S. News announced that “tinkering can’t fix the mental health care system” (Sederer). In that article, the author lamented the holes that seem to litter the mental health care field, from illnesses going undiagnosed to less-than-effective treatments, and asserted that a major change must be forced into the current system, as small adjustments will not be enough. On the other hand, some are willing to praise the steps that have been made in providing access for those with a disorder. An article from CNBC, also from 2015, discusses the positive effects of the Affordable Care Act, yet it acknowledges simply that “we have more to do” without specifying what that would entail (Miller). Other suggestions range from “[addressing] data on the dangers of long-term use of drugs” and other psychopharmaceuticals to creating “single points of access” for mental health resources such as information for families (Nikkel; Appelbaum).

 

Costs and Coverage

One major obstacle regarding access to mental health care is the cost. Mental health care in the United States has historically not been cheap, and even today such care makes up a significant portion of American medical costs. The Agency for Healthcare Research and Quality reports that mental health “care expenditures [rose] from $35.2 billion in 1996 to 57.5 billion in 2006” (“Mental Health: Research Findings”). More recent data suggest that the national expenditure on mental health continued to swell to about $147 billion in 2009, and that the total cost of mental disorders in 2012, taking into account items such as “lost earnings and public disability insurance payments,” was about $467 billion for the nation (“Director’s Blog”).

At the individual level, the average out-of-pocket expenditure on mental health care in 2006 was $1,591 per person in America (“Mental Healthcare Cost Data”). This is not insignificant. Further, at the start of 2015, even after the implementation of the ACA, approximately 32.3 million Americans still lacked health insurance (Majerol, Newkirk, and Garfield 5-6). Additionally, one analysis found a “mean reduction in earnings of $16,306” among persons with serious mental illnesses (SMIs), which resulted in “an annual loss of earnings totaling $193.2 billion” (“Assessing the Economic Costs of Serious Mental Illness” 663). This presents an obvious problem: those who are in need of care will be less able to afford it because of their SMI.

The literature seems to support the notion that the current system is underutilized simply because of its exorbitant price tag. According to one study, though the rate of contact between those with significant psychological distress and mental health professionals has increased over time, main effects existed for race, age, and insurance coverage (Mojtabai 2010). In other words, the results indicated that racial minorities, those over 65, and those lacking insurance coverage were less likely to have had contact with a mental health professional at a statistically significant level (p < .05). It is interesting to note that all three of these populations (minorities, seniors, and the uninsured) typically have lower levels of income. One study has shown that the costs of mental health care, while still identified as a barrier to receiving adequate mental health care, were not the primary concern for most individuals (Gulliver, Griffiths, and Christensen); bigger concerns were discomfort talking about mental health problems and self-reliance. In spite of this, the effect of cost cannot be overstated.

What can be done to counteract the looming question of mental health care costs? One study found that complementary and alternative medical (CAM) therapies (such as dietary supplements, aromatherapy, and music therapy) are “more than twice as common among youth [from 7 to 17 years old] with mental health concerns as those without (28.9% to 11.6%, p < .05),” with the inability to afford mental health care being associated with CAM therapy usage (Kemper, Gardiner, and Birdee 2013). These researchers suggest that further study of CAM treatments might be worth undertaking if “patients are seeking CAM therapy because mental health counseling is not affordable” (Kemper, Gardiner, and Birdee 2013). Despite this potential for cheaper options, robust research on these methods is sparse. Furthermore, disparities were found between the educational levels of CAM treatment users: those with higher levels of education used CAM therapies more than those with lower levels of education. Therefore, if these kinds of treatments prove to be a viable alternative to traditional psychotherapy, patients with lower levels of education would benefit from discussions regarding CAM treatments.

There is also the old concept of self-help meeting emerging advances in technology. Namely, the internet provides a way for those who may not be able to afford treatment to address their mental health symptoms. One disorder that could benefit from this approach is depression, which can be treated with cognitive-behavioral therapy (CBT), a mode of therapy that can potentially be translated into an internet resource. One study looked at just that: how effective CBT is when administered via self-help websites plus a discussion group versus participation in a discussion group alone. The study produced two important findings. First, it revealed that self-help was effective in treating depressive symptoms both immediately after self-treatment and six months later. Second, the discussion group without any self-help therapy did not show any change in depressive symptoms (Andersson et al.). Thus, it seems that self-applied therapy is a viable alternative to more traditional forms of counseling.

 

Lack of Local Resources

Even when individuals are able to afford mental health care, they may be unable to find appropriate care in their areas. A global perspective is available via the World Health Organization (WHO), which has identified barriers to mental health service availability including “inadequate human resources for mental health” (“10 Facts on Mental Health”). On a national level, certain news outlets have been quick to point out the lack of availability of mental health care options in some communities. As one Forbes writer notes: “as federal and state governments look to cut budgets at every turn, mental and behavioral health services are often on the chopping block first,” which can be seen “through decreases in available services, lack of providers due to poor reimbursements or less preventative actions in communities” (Fisher).

This can plainly be seen at the local level. A story in Nebraska’s Columbus Telegram detailed a nine-month search for a mental health professional for the county jail; according to the executive director of the East Central Health Department, the staffing shortage is a statewide problem (Osborn). Nor is the problem limited to Nebraska. In other states, such as Iowa, New Hampshire, and South Carolina, the lack of available mental health resources has been decried by local news outlets (Brooks; Ronayne; Gross). How has this occurred? Funding may be an issue. In the two-year period from 2009 to 2011, the total amount cut from state mental health budgets was about $4 billion—“the largest single combined reduction to mental health spending since de-institutionalization in the 1970s” (Fisher).

Different studies have shown that a lack of local resources can have a negative impact on mental health care accessibility. As has been noted in the literature, lacking such things as time or transportation is a particularly large barrier, especially in rural areas where mental health resources might not be in the immediate area (Gulliver, Griffiths, and Christensen). Furthermore, it was found that “geographic accessibility and resource availability measures were associated with long-term continuity of care” and that “increased distance from providers was associated with greater risks of 12-month gaps in health system and mental health services utilization” for patients of the Veterans Affairs (VA) mental health system (McCarthy et al. 1042). Clearly, when individuals lack resources in their geographic area, they cannot utilize them. Location is not the sole issue, however. Some authors believe that an “increased awareness [of the importance of mental health] has not yet been translated into greater investment of resources” (Saxena et al. 886). Additionally, these authors maintain that problems with equitable distribution and inefficient implementation further compound the problem: scarcity is made worse when resources are not dispersed equally or used properly. For these authors, a lack of funding places a severe limit on the number of viable resources.

But what can we do to combat this problem? Some sources emphasize task shifting, or “delegating tasks to existing or new cadres with either less training or narrowly tailored training,” and in this context can entail “employment of mental health care providers in different sectors; intersectoral collaborations […]; or both of these” (Kakuma et al. 1656). These authors also emphasize further training opportunities for mental health professionals and altering the public’s perception of mental health jobs as low in status. This approach both redistributes resources within the mental health sector as well as between sectors and attempts to attract new members to the field.

Another option becoming increasingly common is the use of technology in mental health services. One study has noted the use of mobile applications as a means of engagement, as a way to facilitate treatment processes, and as a method of sustaining gains after formal treatments end (Price et al.). When clinicians integrate technology into their counseling, they can disseminate more information, prepare clients for therapy sessions before they begin, and help maintain improvements made in therapy after a client either finishes or decides to drop out of counseling. However, concerns such as usability (whether preference or cost) and data security remain very real when incorporating smartphone technology into treatment, and technology may soon be outpacing policy in this area (Luxton et al. 509). While we do not know for certain whether mobile technology is a viable alternative to traditional therapy, it does allow clinicians to expand their reach.

 

Lack of Knowledge

A lack of knowledge also prevents people from seeking treatment, and it manifests in two ways: a lack of knowledge regarding mental health itself, and a lack of knowledge regarding mental health services. It is becoming clearer that education regarding mental health, also known as mental health literacy, is lacking in the general public. It has been noted that limited knowledge regarding mental illness can prevent those who have mental illnesses from seeking treatment and can affect how families provide care for relatives (“Module 6”). In a 1992 survey, 6.2% of individuals who responded had an SMI, and, of that number, less than 40 percent had received consistent treatment (“Module 6”). This finding was corroborated by researchers who found that difficulty identifying symptoms resulted in less care for mental illnesses, which may limit who actually seeks counseling services. As one author contends: “the prevalence of mental disorders is so high that the mental health workforce cannot help everyone affected and tends to focus on those with more severe and chronic problems” (Jorm 399).

One study found that “participants were aware of their distress, but continuously altered the meaning they attached to this distress, and in particular whether or not it was ‘normal’ in order to accommodate higher levels of distress and avoid seeking help” (Gulliver, Griffiths, and Christensen). This same study also found that confusion regarding different mental health services—the second type of lack of knowledge—was another large barrier preventing treatment seeking. Even among a relatively educated sample (college students), mental health literacy varied “according to gender, symptoms and the faculty affiliation [type of study]” (Lauber et al.). This lack of mental health literacy translates into real-life problems. According to one author, poor mental health literacy “may place a limit on the implementation of evidence-based mental health care” (Jorm 399). If these kinds of treatments do not comply with what the general public has envisioned, the author suggests, individuals might not seek out those kinds of treatments.

Some authors suggest that increasing support for families and new parents would be beneficial, with programs that could “[provide] early childhood education” for first-time parents (Nikkel). Another approach suggested by the literature is implementing what is known as “narrative advertising,” which typically depicts a story. After participants viewed a narrative advertisement, they were more likely to “engage in issue elaboration and […] seek professional help” (Chang 48). Further, an article from the Medical Journal of Australia lists seven components of a successful campaign to promote mental health literacy, including preliminary research about the consumers to better tailor specific messages and evaluation of the message’s reach (Kelly, Jorm, and Wright). With this information in mind, it is important not only to create the kinds of educational programs suggested by Nikkel and Chang, but also to make sure they properly target the correct demographic and actually reach it. Increasingly, the internet is playing a role in mental health accessibility as well. One study suggests that the internet “has been used as a source of mental health information by over 10% of the general population and by over 20% of those with a history of mental health problems” (Powell and Clarke 275). Both broad attempts to affect public opinion and narrower programs that target specific subgroups have been found to be effective (Jorm 399). Unfortunately, a meta-analysis of seven media items found that mental health experts who appear in the media often have their messages undermined by journalists seeking a more sensational story.

Stigma

Of all the reasons listed above, the most prominent reason people may not seek treatment is stigma. Stigma and shame have been listed as the primary barrier to treatment, with stigma appearing in over 75% of the 13 studies examined in one systematic review (Gulliver, Griffiths, and Christensen). Therefore, we will spend the remainder of this chapter discussing the problem of stigma at length.

In the United States, mental health disorders affect 18.1% of the adult population and 59.4% of children ages 8-18 (NIMH). These disorders include serious illnesses such as depression, anxiety, attention deficit disorders, personality disorders, mood disorders, panic disorders, and eating disorders (NIMH). However, according to the National Institute of Mental Health, only 13.4% of adults and 50.6% of children with mental disorders used mental health services and received treatment. There are many reasons for this, including economics, feasibility, religion, and, most prominently, the stigmas surrounding mental health. Numerous stigmas plague the mental health world and stop people from receiving the treatments they need. In order to improve our society, those stigmas need to be overturned and the truth about the mental health field needs to be unveiled.

“Them Crazy Folk Belong in the Nut House”

A common misconception about those with mental health disorders is that they are people who need to be feared, who cannot function safely in a normal environment, and who need constant supervision to ensure their safety and the safety of those around them. Often, when violence occurs, the first assumption is that the person responsible was mentally ill and that if they had been put away, they could not have hurt anyone. Mass shootings and family homicides are commonly attributed to the perpetrator having a mental health disorder. A prime example is the story of a mother murdered by her twin daughters, a.k.a. the “Twisted Twins,” from Conyers, Georgia (Beck). The twins confessed to murdering their mother after a fight (Beck). The term “twisted,” however, means “mentally or emotionally unsound or disturbed” (Merriam-Webster), implying that the twins were mentally ill when they committed the crime, which was not the case. The twins were mentally stable; they had normal conflicts with their mother that escalated one morning into a violent brawl with an unfortunate ending. Much about this case remains a mystery because the twins lied about the details of the murder to protect themselves, but as far as their mental health goes, they were normally functioning girls. The implication that they are “twisted” exemplifies the stigma that violent actions are committed only by those with mental illnesses, when in fact only 3-5% of violent acts can be attributed to people with mental health disorders (“Mental Health Myths and Facts”). More ironic still, people with severe mental health disorders are over 10 times more likely to be the victims of violent acts (“Mental Health Myths and Facts”).

Gun violence is a prevalent topic, and much of the debate centers on society’s desire for stricter gun laws for people who suffer or have suffered from mental illness. There are petitions to stop them from buying guns, and the Gun Control Act of 1968 prohibits an individual from buying or possessing firearms for life if he or she has been adjudicated as a mental defective or committed to a mental institution. This means that individuals who have shown a lack of ability to control themselves, who prove to be a danger to themselves or others, or who are found insane by a court are not allowed to purchase guns. Likewise, any person who has been involuntarily committed to a mental health institution is not allowed to carry or buy firearms (Coalition to Stop Gun Violence). The negative stigma that mental illness is the cause of most gun violence has been augmented by the mass shootings at Aurora, Columbine, Virginia Tech in Blacksburg, and Newtown, along with the thousands of suicides that happen every year in the United States. What is not understood is that the lack of treatment for these individuals contributes to a small number of violent gun crimes; it is not the sole predictor.

A further examination of high-profile mass shootings helps show how factors other than mental illness contributed to the perpetrators’ actions, and how, if their mental illnesses had been treated, these tragedies might have been avoided. Columbine High School in Colorado was the site of one of the first mass school shootings, on April 20, 1999 (Kohn). Two students, Eric Harris and Dylan Klebold, went on a 50-minute killing spree throughout their school, killing 12 students and teachers, injuring 24 people with gunshots, and detonating several bombs (Kohn). After their killing spree, the two returned to the library, where a majority of their killings had taken place, and simultaneously committed suicide (Kohn). For almost a decade, this was called the worst school shooting in history, yet that was not even their main goal. Recorded videos, journals, and blog posts showed that the duo’s main goal was to carry out the worst massacre in history, rivaling the Oklahoma City bombing of 1995 (Cullen). They had created so many bombs that, had the bombs properly detonated, they would have killed at least 600 people; the next set of bombs, lined up in cars in the parking lot, would have killed anyone in the parking lot, including survivors, media, and emergency personnel (Cullen). Their plan for mass destruction on a monumental scale was driven by a number of factors that were determined after their suicides through examination of their personal documents. Bullying by other classmates and the clique culture of Columbine High School are generally identified as driving forces, but documented mental health problems were also revealed during the final examinations: Harris, who was determined to be the mastermind behind the plan, was diagnosed as a psychopath, and Klebold was diagnosed with severe depression (Cullen). This is a prime example of how untreated mental health disorders played a role in gun violence.

Events in 2007 at Virginia Tech provide another example. Seung-Hui Cho, a Korean immigrant student, killed 32 people in three hours and then killed himself (biography.com/editors). He had been diagnosed with an anxiety disorder when he was younger and was prescribed medication (Friedman). However, he was also described as a loner and someone who was bullied and not well liked (biography.com/editors). Before the shooting, he had been referred to the counseling service center by concerned professors, and in 2005 the state committed him to a psychiatric ward after he made comments about committing suicide to his roommate (Friedman). He saw three therapists, and all of them wrote that he was troubled. But because he insisted that his comments were meant as a joke and that he had no intentions or reasons to take his own life, it was decided that he was fine and did not require follow-up sessions (Friedman). After the shooting, NBC received a package Cho had sent containing many pictures of him with guns and videos of him ranting angrily about the privileged people around him and how they bullied him endlessly. Experts believe this package was a clue to the rationale behind his actions (Friedman).

In 2012, James Holmes entered a movie theater in Aurora, Colorado, that was showing the midnight premiere of The Dark Knight Rises. He exited through a rear door of the theater and returned with rifles and tear gas canisters (Lin). He threw the canisters and then opened fire, killing 12 people and injuring nearly 70 (Lin). He was soon arrested in the back parking lot of the theater, and his apartment building was searched for the explosives he had rigged around it (Lin II). His defense team entered a plea of not guilty by reason of insanity (Lin II), calling in a doctor who claimed Holmes was so psychotic during the attack that he could not tell right from wrong (Gurman). The claim was that, before the shooting, he suffered from a mental disorder that made him emotionally flat and caused him to have the delusion that if he said he was going to do something, then he had to carry out his plan (Gurman). He felt a desire to kill people in order to satisfy his own suicidal sensations (Gurman). He had suffered from symptoms of anxiety and depression so strong that he had to drop out of the neuroscience program he was in (Gurman). It was speculated that right before the attack, he suffered a psychotic break that clouded his judgment and prevented him from knowing right from wrong (Gurman).

Another shooting happened on December 14, 2012 in Newtown, Connecticut at Sandy Hook Elementary School. Twenty-year-old Adam Lanza shot his mother to death that morning and then drove to Sandy Hook, where he shot his way through the secured entrance (CNN). Three faculty members were shot when they went to investigate the noise, and the shooter then moved on to classrooms, where he fatally shot 20 children between the ages of 6 and 7 as well as three more adults (CNN). Before police could detain him, Lanza shot and killed himself (CNN). Afterward came an investigation into his mental health. Several reports surfaced detailing issues that went untreated because his mother did not follow the instructions of the experts she consulted. When Lanza was in ninth grade, his mother contacted Yale University for help with his problems (Cowan). She resisted the recommendations she was given, sheltering her son because she believed he would not be able to handle the prescribed medicine and treatment (Cowan). Lanza suffered from obsessive-compulsive disorder and anxiety, which led to an additional diagnosis of anorexia, all of which went untreated because of his mother's resistance (Cowan).

These examples are tragedies caused by people whose mental illnesses were untreated, undertreated, or undiagnosed until after the shootings. The evidence of the disorders these individuals suffered from fuels the assumption that people with mental health disorders commit massive acts of violence, and that the mentally ill are the ones to be feared.

This stereotype associated with mental illness is false and has been disproven several times. Mass shootings are only a small percentage of total gun violence, and it is difficult to design a prevention program for events so random and sporadic (The Editorial Board). Between 2001 and 2010, fewer than 5 percent of the 120,000 gun-related killings in the United States were committed by people with mental illness (Metzl and MacLeish). Metzl and MacLeish also note that the severely mentally ill include many schizophrenic patients, who tend to be isolated and to withdraw from society, traits that more often make them the victims of violent acts (Brekke et al.). According to police reports, patients suffering from schizophrenia are victimized 65% more often than those without schizophrenia (Brekke et al.).

There are credible studies demonstrating that other factors are stronger predictors of gun violence. Alcoholism and drug use increase the likelihood of gun violence more than any sign of mental illness does (Metzl and MacLeish). Access to guns, especially during emotionally charged moments, also correlates more strongly with gun violence than mental illness does (Metzl and MacLeish), and communities and households with firearms have higher rates of gun violence than those without (Miller et al.). Florida's gun violence jumped 200% after its legislature passed the "stand your ground" law (The Editorial Board).

The stereotype of a lone, crazed gunman walking around shooting strangers can also be rejected, because reports show that gun violence is 85% more likely to occur within social circles (relatives, friends, acquaintances, and enemies) (Hamilton). According to Hamilton, a person is more likely to die in a plane crash, drown in a bathtub, or perish in an earthquake than be murdered by a crazed stranger.

The point of all these statistics is that mental illness is not a predictor of gun violence. Too many other factors correlate more strongly with gun violence than severe mental illness does. Legislative efforts and news reports that attribute violent actions to the mentally ill, to the underfunding of mental health services, or to underdiagnosis do nothing but perpetuate the stereotype that mental illness and gun violence are directly linked. Are there instances where mass shootings are perpetrated by those with mental health issues? Absolutely, as was demonstrated above. But these instances are so rare that they cannot be used as evidence that mental health patients should be feared as people likely to snap and murder at any given moment. These are individuals who, with proper help, most likely would not have committed these crimes, and other issues in their lives also contributed to their actions.

Focusing all of our efforts on treating mental illness solely to decrease gun violence is misguided, because so many other factors have a much stronger connection to gun violence. The sooner the myth that mental health issues cause gun violence is dispelled, the sooner the people of the United States can address the real contributors to gun violence while also treating mental health issues without stigmatizing individuals and making them feel as if they were criminals.

Criminalizing mental illness is another issue in the United States. People with mental illness are often arrested and incarcerated essentially because they are ill. Police will arrest individuals they believe to be ill for misdemeanors and petty crimes simply to put them in prison, where they believe the individuals will be better served (Tobar). Patients suffering from schizophrenia and psychosis are imprisoned for assaults or alcohol and drug use because those who are not mentally ill want their "streets to be cleaned" (Tobar). According to Tobar, this practice is very common in New Orleans, Louisiana, where police are constantly picking up homeless people suffering from psychosis. They truly believe that the prison system offers psychiatric help for those who cannot or will not pay for psychiatric hospitalization. Families who cannot get their mentally ill relatives into psychiatric hospitals call the police to have them incarcerated because they believe that is the only way to get them proper treatment (Tobar).

The problem with this practice is that arresting mentally ill patients and throwing them into the prison system exposes them to complications that worsen their condition. Many schizophrenic or depressed inmates attempt or commit suicide (Tobar). They are at risk because their symptoms mark them as outcasts to other inmates, who are likely to abuse them, including through rape and assault (Torrey). Such attacks are distressingly common in prison, but the disorders these individuals suffer from make enduring such adversity that much harder. They become prime targets for bullying by inmates and are ignored by officials (Torrey). The problem of sexually transmitted diseases in prisons is exacerbated among mentally ill inmates, who often cannot properly describe how they are feeling or what their symptoms are (Torrey). The more they are raped, the more these diseases spread through the prison, further contributing to death rates (Torrey).

Not only does imprisoning mental health patients place them in dangerous environments and increase their death rate, it also reinforces the stereotype that mental illness is something to be punished. Incarceration is a punishment for those who commit crimes and need to pay for what they did. Mental illness is not something people ask for: it is a biological condition that expresses itself without the consent or input of the person it affects. Putting people in prison as if they were criminals makes them feel worse about their situation. They may feel they are bad people, because bad people typically go to prison, and here they are going to prison. What they may fail to understand is that they may not be bad people at all and simply need proper care for their illness in order to improve. Imprisonment is not the same as hospitalization and does not have the same effect. Prisons do not have the resources, the trained professionals, the right environment, or the security that mental health institutional reforms have ensured for patients with mental health issues.

“But I Don’t Belong in the Nut House…”

The social stigmas discussed in the last section have developed over centuries and are deeply rooted in our legislation, social structures, and ideology. These myths and stigmas do nothing to help those in need. In fact, they drive individuals away from treatment, and an untreated mental disorder is detrimental to all parties. When people watch the news and hear how badly those with mental health conditions are treated, they want to make sure they are not treated the same way, so they stay silent. They may even take part in the behavior that perpetuates these stigmas to ensure that they are not lumped into those categories. They do not realize that they are hurting themselves, or that they do not have to be alone. They would rather cope with their illnesses on their own than be the butt of a joke or the secret everyone is whispering about.

Some people who are diagnosed with a mental illness acknowledge and address their diagnosis yet constantly feel powerless and vulnerable (Hayne). Others deny that they have an illness at all, and it takes a long, drawn-out process to get them to accept it (Camp, Finlay, and Lyons). They can also stigmatize themselves, applying negative stereotypes to their own condition (Corrigan). These negative feelings lead to the avoidance of psychiatric help. Adolescents are especially prone to this avoidance. They are at a stage in life where their social lives are developing and their sense of self is just starting to solidify (Corrigan). They have found their nook, for the most part, and the last thing they need is something to rock the boat, something that makes them different from their classmates: a mental health diagnosis.

A psychological study by Tally Moses delves into self-labeling among adolescents with psychiatric diagnoses and its effects on other characteristics, such as self-esteem and sense of competency. In this paper, Moses identified fear of labels and stigmas as the driving force behind treatment avoidance and poor adherence. This avoidance of mental health resources is troubling because many youths demonstrate mental health issues (Cauce, Domenech-Rodriguez, and Paradise). They conceptualize their disorders much as Moses describes: they avoid labeling their symptoms as having anything to do with mental illness in order to escape ostracism by their peers or being pitied or degraded (Mowbray, Megivern, and Strass).

The results of the study revealed three types of teens: those in denial, those unsure, and those absolute about their mental health disorder. Teens in the denial group attributed their symptoms to outside causes or to the feelings and emotions every teenager goes through. They did not believe they required psychiatric help or medication, because their symptoms were nothing anyone needed to be concerned about; everyone has the same problems. Teens in the unsure group hedged, using phrases like "I'm not sure," "I don't know," or "I guess," and hesitating while answering questions. They claimed to have a mental illness only when they were out of control or in a bad mood; when they felt stable and happy again, they did not feel they had a disorder at all. The absolute group consisted of teenagers who fully acknowledged that they had a mental health disorder. The study termed them "self-labelers": they claimed the disorder as belonging to them, using phrases like "my bipolar disorder."

The last group also reported feeling the most public stigma. They felt more negativity and rejection from their peers than the other two groups, and they were more depressed. The first two groups scored higher on the self-mastery scale. The teenagers in the denial group were, on average, older than those in the self-labeling and unsure groups when they first started taking medicine (10 vs. 7.3 and 7.7 years old, respectively).

This study is a prime example of the detrimental effects that public stigma can have on adolescents, the age group that most needs to be reached effectively and efficiently. These kids are often too afraid to admit they have a problem because they do not want anything to disrupt the way their lives have been going, especially while they are still figuring out exactly who they are. Biologically and mentally, they should expect tremendous changes as they improve, but socially they need all the support they can get from peers, family, and even complete strangers, because these are the people they interact with daily. If more teens felt that there is nothing wrong with having a mental disorder and that it is not their fault they are different, they would be more inclined to seek the services that benefit them most. They need to know that they are not that different, that even if they sometimes think or act differently than others it is still okay, and that their entire life will not go up in flames because of a setback. The sooner society reaches out to adolescents and offers them a hand to hold, the sooner their mental health disorders can start to be treated.

“I Don’t Want to Go Because…”

Stigmas of being weird, crazy, or dangerous are not the only things keeping people out of therapy. People avoid mental health services for a number of reasons and fears, reinforced by myths and by articles discouraging them from using those services.

One of the big reasons people do not want to get help is that they believe the psychiatric field is nothing but a pharmaceutical moneymaker. The National Alliance on Mental Illness (NAMI) received donations totaling $23 million from pharmaceutical companies and was accused of pushing the medical model only because it was in drug companies' pockets (Earley). The medical model is the idea that mental illness is caused by chemical imbalances in the brain which need to be treated with pharmaceuticals (Earley). To answer these critics, NAMI conceded that the donations were entirely too much, and it has created a page on its website that notes exactly which companies donate over $5,000 and how that money is used (Earley). Critics of such practices often include anti-psychiatry and anti-pharmaceutical groups that claim mental disorders are either not real or not chemically based (Earley).

Another area of mental health often criticized as a pharmaceutical moneymaker is ADD/ADHD diagnosis and medication. In "The Selling of Attention Deficit Disorder," Alan Schwarz discusses the concern that ADD and ADHD are overdiagnosed simply to make more money. The number of children diagnosed has risen from 600,000 in 1990 to 3.5 million today, and 15% of high-school-aged children carry a diagnosis (Schwarz). These numbers call into question the validity of the diagnoses and the lengths to which pharmaceutical companies will go in order to make money. Profits soared from $1.7 billion in 1999 to almost $9 billion in 2010 (Schwarz). Adult diagnosis has lately become a hot topic as well, which will probably increase these companies' profits even more.

Another criticism of pharmaceutical treatment is the risk of dependence that the drugs carry, especially for conditions like ADD and ADHD that are treated with frequently abused stimulants like Adderall. Schwarz's article raised concerns about abuse and dependence and questioned whether patients need to take these medications their whole lives. Doctors on both sides of the debate contributed their views to Schwarz's article on whether dependence is a serious issue.

Some people avoid therapy and pharmaceuticals because they are not sure whether treatment is actually helping them or whether they are just padding the paychecks of huge pharmaceutical companies without receiving any benefit themselves. What potential patients need to understand is that the drug companies are going to make money, and a lot of it. Even if one drug fails, they have several alternatives they will still profit from. The choice to take these medicines should not be based on how much money the companies are making, because that number will always be high. It should instead be based on whether medication is needed and on the best recommendations of one's doctor and the people in one's life whom one trusts.

Privacy is another reason people avoid doctors. There are concerns about disclosing personal information to "strangers." One of the main factors behind this concern is culture. Cultural norms can seriously affect whether people seek mental health treatment, because the expected degree of privacy varies by culture (Lin and Lin). In Hispanic cultures, for example, it is very common to keep problems that affect the family within the family. What is not widely understood is that doctors are bound by confidentiality, which keeps them from sharing a patient's information. And even if people feel uncomfortable talking with someone who seems to be a stranger, they have the option of attending multiple therapy sessions and getting to know their doctor well enough to tell them the truth. Gustavus College's "The Top 10 Reasons People Say No to Counseling" includes both the worry that talking to a stranger cannot be helpful and the feeling that patients are betraying their families. These concerns are extremely common and make it hard for people to trust someone enough to open up. However, the most important element of therapy is honesty. No problems can be solved if the doctor does not know what is wrong. Opening up can be hard and take a long time, but it is the only way to move forward, and patients have to trust that their doctor will not abuse the privilege or break confidentiality. Mental health professionals are trained to listen and to use therapeutic techniques, so they are valuable resources even if they are "strangers" (Gustavus College). The cultural constraints some patients face may need to be reconsidered, depending on the time and situation.

Lastly, another common reason people avoid treatment is the campaigning of anti-psychiatry groups that advocate avoiding mental health care altogether. A prime example is an online blog called Ask Grace™, which hosts an article by a survivor of psychiatric treatment titled "25 Reasons to Avoid All Psychiatric Drugs and Treatment" (Weitz). The article argues for the abolition of psychiatric practice on the grounds that psychiatrists harm the mind, body, and soul through falsified diagnoses, deception, sexism, homophobia, and coercion. It lists 25 ways psychiatrists push drugs on people and make them believe they are severely damaged by a mental disorder, asserting that psychiatrists just want to control people and make money off of them (Weitz). People showing symptoms often go online to research what is wrong with them and what they should do. When they come across articles like this one, the myths and stigmas that already plague their minds make such claims seem credible, which does nothing but persuade those who need mental health care not to pursue it. The problem is that many individuals are not well educated about the mental health field, so they are liable to believe such articles and to let them stand in the way of the proper help and treatment they need.

 

What is the Truth of Mental Illness?

Having discussed the myths and stigmas of mental illness, we now turn to what mental illness actually is and what its basic facts are.

Mental illness refers to a wide range of mental health conditions: disorders that affect mood, thinking, and behavior (Mayo Clinic). Mental illness is diagnosed according to how an individual's symptoms match the criteria of the Diagnostic and Statistical Manual of Mental Disorders (DSM), the main tool used for analyzing and diagnosing patients presenting symptoms. The DSM varies with culture and time; it is revised every several years and is now in its fifth edition. It is written and published by members of the American Psychiatric Association (APA). A brief history of the DSM shows how the manual, and with it the definition of what counts as a mental health disorder, has changed over the last half century.

The unofficial first edition was published before World War II (APA). It was the first attempt to organize all the recorded categories of mental illness from the late-1800s census (APA). It included seven categories: mania, melancholia, monomania, paresis, dementia, dipsomania, and epilepsy (APA). In 1917 it was used to create a uniform statistical manual across all mental health hospitals (APA).

The first official edition was published after World War II and added diagnoses representing the problems and symptoms of veterans, such as psychoses and psychoneuroses (APA). This manual was a glossary that focused mainly on the reactions of people's personalities to biological, psychological, and social factors, reflecting Adolf Meyer's view (APA). The second edition of the DSM was very similar except that it dropped Meyer's framework entirely and deleted the term "reaction" (APA).

The third edition, published in 1980, introduced explicit diagnostic criteria, a multiaxial diagnostic assessment system, and an approach intended to be neutral about the causes of disorders (APA). Its main goal was to provide precise definitions of mental disorders for clinicians and researchers (APA). Inconsistencies and unclear diagnostic criteria led to a 1987 revision, the DSM-III-R (APA). Six years later, the fourth edition was published, with insertions, deletions, modifications, and a reorganization of disorders (APA). The fifth and current edition, published in 2013 after more than a decade of effort, filled existing gaps in the research with more in-depth study of 13 areas (APA).

The DSM has changed numerous times, adding disorders or modifying their criteria. This constant revision is part of what makes diagnosing mental health disorders so tricky, especially because diagnosis is inherently subjective. There are no objective tests that can determine the extent of a patient's mental illness. The only things patients and doctors can rely on are the information patients provide and the symptoms doctors observe. The DSM standardizes diagnosis by organizing symptoms into a compiled list so that doctors can efficiently determine the diagnosis and treatment of their patients.

Another question is what causes mental illness, a question whose answer is being researched and refined every day. Researchers and doctors still do not know exactly what the underlying causes are, but they have clues. The most modern framework is the biopsychosocial approach, which looks not at one aspect but at three: the biological, the psychological, and the social. All three factors play a role in causing and perpetuating mental illness.

The biological aspect looks specifically at brain chemistry and brain activity (Weir). The thought behind this approach is that all mental processes are brain processes, and therefore all disorders of mental functioning are biological diseases; the brain is the organ of the mind, and where else could mental illness be if not in the brain (Kandel, qtd. in Weir)? Doctors look at abnormalities in brain shape, dysfunction in areas of the brain responsible for particular functions, and genetics. The biological level of analysis is thought to be one of the main contributors to mental illness because it cannot be consciously controlled: no one can change the chemistry of their brain without pharmaceuticals.

The other two aspects are the cognitive level of analysis and the sociocultural level of analysis. The cognitive level looks at people's mental frames and how they think about the world (IB Psychology). It is based on mental processes such as perception, attention, language, memory, and thinking: the ways we take in information from the outside world and make sense of it (IB Psychology). The idea behind this approach is that the way people think, and the way their mental states and memories are structured, affects how they absorb information and understand what is going on around them. The sociocultural level of analysis is the study of how people's thoughts, feelings, and behaviors are influenced by the actual, implied, or imagined presence of others (IB Psychology). This approach examines human behavior as it is shaped by other people and by culture, including whether a behavior is considered abnormal within that culture.

Combined, the three levels form the biopsychosocial approach, which treats all three factors as contributing causes of mental illness. All three must be considered when designing treatment plans and deciding what lifestyle changes an individual needs to make. Pharmaceuticals alone are not the answer, and neither is therapy alone. A well-rounded treatment plan combines multiple treatments; Cognitive Behavioral Therapy (CBT), for example, works to change both a patient's mindset and behavior while also using any pharmaceuticals necessary.

 

Who Do I Need to See?

When looking at treatment plans for mental illness, it’s best to stay off the internet and go see a professional. There are several levels of help in the mental health field, and knowing the differences will help when trying to figure out what treatment plan is best.

Therapists and counselors are the first line of defense in mental health treatment. These professionals are trained in a variety of areas, from addiction to marital problems to family wellness. There are numerous types of therapists and counselors, but the most common are social workers, licensed counselors, mental health counselors, alcohol and drug abuse counselors, marital and family therapists, pastoral counselors, and peer specialists (Mental Health America). All are trained to counsel the different problems people may be having; however, counseling is all they can do. They cannot provide medication, but they can refer a patient to a psychiatrist, who can.

Psychiatrists sit at the top of the mental health profession because they attended medical school and are licensed to prescribe medicine (Mental Health America). They observe and diagnose individuals with mental health disorders and can write prescriptions to give patients the pharmaceutical help they need. Psychiatrists often do not provide counseling or psychotherapy themselves, so they work closely with teams of psychologists (Mental Health America).

Psychologists complete a graduate program, earning a PhD, and are trained to make diagnoses and provide individual and group therapy (Mental Health America). In most states, however, they are not allowed to prescribe medicine, which is why they work so closely with psychiatrists (Mental Health America).

The mental health field is a huge interconnected web of professionals all striving for the same goal: to ease the mental health struggles of Americans. Each has limits on the help he or she can provide, but working together as a whole field, they virtually eliminate those boundaries and can provide well-rounded mental health care to all who need it.

 

Concluding Remarks

As we come to the end of the chapter, there are a few things we would like to say. First, we want to emphasize that the current problems in our mental health care system are not insurmountable. In fact, there are several possible solutions to each; they just need to be implemented. Second, mental health disorders are not the fault of the individuals who have them and are nothing to be feared. They should be accepted as conditions to be treated rather than turned away from and ridiculed. Finally, we remind our readers that while technology has vastly improved access to mental health resources, nothing perfectly replaces traditional therapy with a licensed mental health professional.

 

 

Works Cited

“10 Facts on Mental Health.” WHO.int. World Health Organization, n.d. Web. 3 March 2016.

“19th-Century Psychiatrists of Note.” National Library of Medicine. National Institutes of Health, 24 March 2015. Web. 23 February 2016.

Amadeo, Kimberly. “Deinstitutionalization.” USEconomy.about.com. IAC, 18 June 2015. Web. 25 February 2016.

Andersson, Gerhard, Bergström, Jan, Holländare, Fredrik, Carlbring, Per, Kaldo, Viktor, and Ekselius, Lisa. “Internet-based Self-help for Depression: Randomised Controlled Trial.” British Journal of Psychiatry 187.5 (2005): 456-461. Web. 9 March 2016.

American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Washington, D.C.: American Psychiatric Association, 2013. Print.

Appelbaum, Paul S. “How to Rebuild America’s Mental Health System, in 5 Big Steps.” TheGuardian.com. Guardian Media Group, 29 May 2014. Web. 1 March 2016.

Barlow, David H., and Durand, V. Mark. Abnormal Psychology: An Integrative Approach. 7th ed. Boston: Cengage Learning, 2014. Print.

Beck, Catherine. “Twisted Twins: Teens Confess to Brutal Murder of Mother.” 11Alive. WXIA, 23 Mar. 2015. Web. 29 Feb. 2016.

Biography.com Editors. “Seung-Hui Cho Biography.” Bio.com. A&E Networks Television, n.d. Web. 2 Mar. 2016.

Brekke, J. S., Prindle, C., Bae, S. W., and Long, J. D. “Result Filters.” National Center for Biotechnology Information. U.S. National Library of Medicine, Oct. 2001. Web. 5 Mar. 2016.

Brooks, Jason W. “Lack of Resources Impacts Response to Mental Health Care.” Newton Daily News. Shaw Newspapers, 14 October 2015. Web. 3 March 2016.

Chang, Chingching. “Increasing Mental Health Literacy via Narrative Advertising.” Journal of Health Communication 13 (2008): 37-55. Web. 8 March 2016.

Cowan, Alison Leigh. “Adam Lanza’s Mental Problems ‘Completely Untreated’ Before Newtown Shootings, Report Says.” The New York Times. The New York Times, 21 Nov. 2014. Web. 4 Mar. 2016.

Cullen, Dave. “At Last We Know Why the Columbine Killers Did It.” Slate.com. Slate, 20 Apr. 2004. Web. 1 Mar. 2016.

“Data on Behavioral Health in the United States.” American Psychological Association. American Psychological Association, n.d. Web. 9 February 2016.

“A Decade after The Decade of the Brain.” The DANA Foundation. The DANA Foundation, 26 February 2010. Web. 4 February 2016.

“Does Your Insurance Cover Mental Health Services?” APA.org. American Psychological Association, n.d. Web. 25 February 2016.

“Dorothea Dix.” Biography.com. A&E, n.d. Web. 2 February 2016.

Earley, Pete. “NAMI and Drug Makers’ $$$.” Pete Earley. Pete Earley, n.d. Web. 7 Mar. 2016.

The Editorial Board. “Craven Statehouse Behavior.” The New York Times. The New York Times, 14 Mar. 2014. Web. 5 Mar. 2016.

The Editorial Board. “Don’t Blame Mental Illness for Gun Violence.” The New York Times. The New York Times, 15 Dec. 2015. Web. 5 Mar. 2016.

Fisher, Nicole. “Mental Health Loses Funding as Government Continues Shutdown.” Forbes. Forbes, Inc., 10 October 2013. Web. 8 March 2016.

Frank, Richard G., Conti, Rena M., and Goldman, Howard H. “Mental Health Policy and Psychotropic Drugs.” The Milbank Quarterly 83.2 (2005): 271-298. Web. 9 March 2016.

Frank, Richard G., Goldman, Howard H., and Hogan, Michael. “At the Intersection of Health, Health Care and Policy.” Health Affairs 22.1 (2003): 101-113. Web. 30 January 2016.

Frank, Richard G., Goldman, Howard H., and McGuire, Thomas G. “Trends in Mental Health Cost Growth: An Expanded Role for Management.” Health Affairs 28.3 (2009): 649-659. Web. 9 March 2016.

“Germany Leads Europe in Dealing with Mental Illness.” EIUmedia.com. Economist Intelligence Unit, 8 October 2014. Web. 18 February 2016.

Glick, Ira D., Sharfstein, Steven S., and Schwartz, Harold L. “Inpatient Psychiatric Care in the 21st Century: The Need for Reform.” Open Forum 62.2 (2011): 206-209. Web. 2 February 2016.

Grenoble, Ryan. “James Holmes Was Clearly Psychotic, Says Doctor Who Interviewed Him Days After Shooting.” The Huffington Post. TheHuffingtonPost.com, 25 June 2015. Web. 3 Mar. 2016.

Gross, Daniel J. “Lack of Mental Health Resources Leads to Tragic Results.” GoUpstate.com. Gatehouse Media, Inc., 19 September 2016. Web. 3 March 2016.

Gulliver, Amelia, Griffiths, Kathleen M., and Christensen, Helen. “Perceived Barriers and Facilitators to Mental Health Help-Seeking in Young People: A Systematic Review.” BMC Psychiatry 10.113 (2010): n.p. Web. 1 March 2016.

Hamilton, B. “Odds That You’ll Be Killed by a Stranger in NYC on the Decline.” New York Post. New York Post, 5 Jan. 2014. Web. 5 Mar. 2016.

Hammond, Michael J. “Celebrating 40 Years of Community Mental Health Services.” ACMHCK. Association of Community Mental Health Centers of Kansas, Inc., n.d. Web. 4 February 2016.

Harris, Gardiner. “Talk Doesn’t Pay, So Psychiatry Turns Instead to Drug Therapy.” The New York Times. The New York Times, 5 March 2011. Web. 1 April 2016.

Hauser, Christine, and Anahad O’Connor. “Virginia Tech Shooting Leaves 33 Dead.” The New York Times. The New York Times, 15 Apr. 2007. Web. 2 Mar. 2016.

“Health Insurance and Mental Health Services.” MentalHealth.gov. U.S. Department of Health & Human Services, n.d. Web. 25 February 2016.

“Health & Education.” NIMH RSS. NIMH, n.d. Web. 3 Mar. 2016.

Hinshaw, Stephen P. The Mark of Shame: Stigma of Mental Illness and an Agenda for Change. Oxford: Oxford University Press, 2009. Print.

“History of the DSM.” DSM History. American Psychiatric Association, n.d. Web. 7 Mar. 2016.

Hoof, Frank van, Knispel, Aafje, Aagaard, Jørgen, Schneider, Justine, Beeley, Chris, Keet, René, and Putten, Marijke van. “The Role of National Policies and Mental Health Care Systems in the Development of Community Care and Community Support: An International Analysis.” Journal of Mental Health, Early Online (2015): 1-6. Web. 16 February 2016.

“How Do We Fix America’s Mental Health Care System?” National Alliance on Mental Illness. National Alliance on Mental Illness, 12 March 2015. Web. 9 February 2016.

IB Guides. “IB Psychology Notes.” IB Guides, 2012. Web. 5 Mar. 2016.

Insel, Thomas. “Assessing the Economic Costs of Serious Mental Illness.” American Journal of Psychiatry 165.6 (2008): 663-665. Web. 9 March 2016.

Insel, Thomas. “Director’s Blog: Mental Health Awareness Month: By the Numbers.” National Institute of Mental Health. National Institute of Mental Health, 15 May 2015. Web. 24 January 2016.

Jorm, A. F. “Mental Health Literacy: Public Knowledge and Beliefs about Mental Disorders.” British Journal of Psychiatry 177 (2000): 396-401. Web.

Kakuma, Ritsuko, Minas, Harry, Ginneken, Nadja van, Dal Poz, Mario R., Desiraju, Keshav, Morris, Jodi E., Saxena, Shekhar, and Scheffler, Richard M. “Human Resources for Mental Health Care: Current Situation and Strategies for Action.” Lancet 378 (2011): 1654-1663. Web. 8 March 2016.

Kelly, Claire M., Jorm, Anthony F., and Wright, Annemarie. “Improving Mental Health Literacy as a Strategy to Facilitate Early Intervention for Mental Disorders.” Medical Journal of Australia 187.7 (2007): S26-S30. Web. 8 March 2016.

Kemper, Kathi J., Gardiner, Paula, and Birdee, Gurjeet S. “Use of Complementary and Alternative Medical Therapies Among Youth With Mental Health Concerns.” Academic Pediatrics 13.6 (2013): n.p. Web. 26 January 2016.

Kliewer, Stephen P., McNally, Melissa, and Trippany, Robyn L. “Deinstitutionalization: Its Impact on Community Mental Health Centers and the Seriously Mentally Ill.” The Alabama Counseling Association Journal 35.1 (2009): 40-45. Web. 30 January 2016.

Kofman, Olga Loraine. “Deinstitutionalization and Its Discontents: American Mental Health Policy Reform.” Diss. Claremont McKenna College, 2012. Web. 23 February 2016.

Kohn, David. “What Really Happened At Columbine?” CBSNews. CBS Interactive, 17 Apr. 2001. Web. 2 Mar. 2016.

Lauber, Christoph, Ajdacic-Gross, Vladeta, Fritschi, Nadja, Stulz, Niklaus, and Rössler, Wulf. “Mental Health Literacy in an Educational Elite – An Online Survey Among University Students.” BMC Public Health 5.44 (2005): n.p. Web. 8 March 2016.

Lin, Keh-Ming. “Chapter 5.” Asian American Mental Health. Ed. Margaret Lin. N.p.: n.p., n.d. N. pag. Print.

Lin, Rong-Gong, II. “Gunman Kills 12 at ‘Dark Knight Rises’ Screening in Colorado.” Los Angeles Times. Los Angeles Times, 20 July 2012. Web. 3 Mar. 2016.

Luxton, David D., McCann, Russell A., Bush, Nigel E., Mishkind, Matthew C., and Reger, Greg M. “mHealth for Mental Health: Integrating Smartphone Technology in Behavioral Healthcare.” Professional Psychology: Research and Practice 42.6 (2011): 505-512. Web. 2 March 2016.

Majerol, Melissa, Newkirk, Vann, and Garfield, Rachel. The Uninsured: A Primer. Menlo Park, 2015. Web. 2 March 2016.

McCarthy, John F., Blow, Frederick C., Valenstein, Marcia, Fischer, Ellen P., Owen, Richard R., Barry, Kristen L., Hudson, Teresa J., and Ignacio, Rosalinda V. “Veterans Affairs Health System and Mental Health Treatment Retention among Patients with Serious Mental Illness: Evaluating Accessibility and Availability Barriers.” Health Services Research 42.3 (2007): 1042-1060. Web. 1 March 2016.

Mechanic, David. “Mental Health Services Then and Now.” Health Affairs 26.6 (2007): 1548-1550. Web. 30 January 2016.

Meek, Thomas. “Germany, UK, and Denmark Lead Mental Health Integration.” PMLive.com. PMGroup Worldwide Ltd., 9 October 2014. Web. 9 February 2016.

“Mental Health Briefing Sheets.” Europa.com. European Union, n.d. Web. 16 February 2016.

“Mental Healthcare Cost Data for All Americans (2006).” National Institute of Mental Health. National Institute of Mental Health, n.d. Web. 24 January 2016.

“Mental Health Myths and Facts.” MentalHealth.gov. U.S. Department of Health & Human Services, n.d. Web. 27 Feb. 2016.

“Mental Health: Research Findings.” Agency for Healthcare Research and Quality. U.S. Department of Health & Human Services, October 2014. Web. 22 January 2016.

Metzl, Jonathan M., and Kenneth T. MacLeish. “Mental Illness, Mass Shootings, and the Politics of American Firearms.” American Journal of Public Health. American Public Health Association, Feb. 2015. Web. 5 Mar. 2016.

Miller, Alan B. “Obamacare Has Been a Huge Help for Mental Health Care.” CNBC. NBCUniversal Comcast Corporation, 26 January 2015. Web. 9 February 2016.

“Module 2: A Brief History of Mental Illness and the U.S. Mental Health Care System.” Unite for Sight. Unite for Sight, n.d. Web. 26 January 2016.

“Module 6: Barriers to Mental Health Care.” Unite for Sight. Unite for Sight, n.d. Web. 5 March 2016.

Mojtabai, Ramin. “Trends in Contacts with Mental Health Professionals and Cost Barriers to Mental Health Care Among Adults with Significant Psychological Distress in the United States: 1997-2002.” American Journal of Public Health 95.11 (2005): 2009-2014. Web. 26 January 2016.

Moran, Valerie, and Jacobs, Rowena. “An International Comparison of Efficiency of Inpatient Mental Health Care Systems.” Health Policy 112.1-2 (2013): 88-99. Web. 18 February 2016.

Moses, T. “Result Filters.” National Center for Biotechnology Information. U.S. National Library of Medicine, Feb. 2009. Web. 7 Mar. 2016.

Mossialos, Elias, Wenzl, Martin, Osborn, Robin, and Anderson, Chloe, eds. 2014 International Profiles of Health Care Systems. CommonwealthFund.org. The Commonwealth Fund, January 2015. Web. 9 February 2016.

“National Institute of Mental Health (NIMH).” National Institute of Mental Health. National Institute of Mental Health, n.d. Web. 4 February 2016.

Nikkel, Gina. “How to Fix the Broken Mental Health System: Ten Crucial Changes.” PsychiatricTimes.com. UBM Medica, 26 January 2015. Web. 1 March 2016.

Osborn, Jim. “Lack of Treatment Resources Creating ‘Mental Health Crisis.’” The Columbus Telegram. Lee Enterprises, 4 September 2015. Web. 2 March 2016.

Pan, Deanna. “Timeline: Deinstitutionalization and Its Consequences.” Mother Jones. Mother Jones, 29 April 2013. Web. 4 February 2016.

Powell, John, and Clarke, Aileen. “Internet Information-Seeking in Mental Health.” British Journal of Psychiatry 189.3 (2006): 273-277. Web. 8 March 2016.

Price, Matthew, Yuen, Erica, Goetter, Elizabeth M., Herbert, James D., Forman, Evan M., Acierno, Ron, and Ruggiero, Kenneth J. “mHealth: A Mechanism to Deliver More Accessible, More Effective Mental Health Care.” Clinical Psychology & Psychotherapy 21.5 (2014): 427-436. Web. 2 March 2016.

Priebe, Stefan, Badesconyi, Alli, Fioritti, Angelo, Hansson, Lars, Kilian, Reinhold, Torres-Gonzales, Francisco, Turner, Trevor, and Wiersma, Durk. “Reinstitutionalisation in Mental Health Care: Comparison of Data on Service Provision from Six European Countries.” British Medical Journal 330.7483 (2005): 123-126. Web. 8 March 2016.

“Psychology vs. Psychiatry: What’s the Difference?” All Psychology Schools. All Psychology Schools, n.d. Web. 7 Mar. 2016.

“Results from the 2013 National Survey on Drug Use and Health: Mental Health Findings.” Substance Abuse and Mental Health Services Administration. U.S. Department of Health and Human Services, n.d. Web. 1 April 2016.

Ronayne, Kathleen. “Report Highlights Lack of Resources for Mental Health Services in N.H.” Concord Monitor. Newspapers of New England, 29 January 2015. Web. 3 March 2016.

Salize, Hans Joachim, Rössler, Wulf, and Becker, Thomas. “Mental Health Care in Germany: Current State and Trends.” European Archives of Psychiatry and Clinical Neuroscience 257.2 (2007): 92-103. Web. 18 February 2016.

“Sandy Hook Elementary Shooting: What Happened?” CNN. Cable News Network, n.d. Web. 4 Mar. 2016.

Saxena, Shekhar, Thornicroft, Graham, Knapp, Martin, and Whiteford, Harvey. “Resources for Mental Health: Scarcity, Inequity, and Inefficiency.” Lancet 370 (2007): 878-889. Web. 3 March 2016.

Schwarz, Alan. “The Selling of Attention Deficit Disorder.” The New York Times. The New York Times, 14 Dec. 2013. Web. 6 Mar. 2016.

Sederer, Lloyd. “Tinkering Can’t Fix the Mental Health Care System.” U.S. News. U.S. News & World Report, L.P., 20 March 2015. Web. 9 February 2016.

Sheffield, Wesley. “Still Pursuing the Promise of Reform Fifty Years Later.” YoungMindAdvocacy.org. Young Minds Advocacy, n.d. Web. 23 February 2016.

Sheth, Hitesh C. “Deinstitutionalization or Disowning Responsibility.” The International Journal of Psychosocial Rehabilitation 13.2 (2009): 11-20. Web. 23 February 2016.

Smith, Brendan L. “Inappropriate Prescribing.” APA.org. The American Psychological Association, June 2012. Web. 23 February 2016.

Stierlin, Annabel Sandra, Herder, Katrin, Helmbrecht, Marina Julia, Prinz, Stefanie, Walendzik, Julia, Holzmann, Marco, Becker, Thomas, Schützwohl, Matthias, and Kilian, Reinhold. “Effectiveness and Efficiency of Integrated Mental Health Care Programmes in Germany: Study Protocol of an Observational Controlled Trial.” BMC Psychiatry 14.163 (2014): n.p. Web. 9 March 2016.

“The Top 10 Reasons People Say No to Counseling.” Gustavus College, n.d. Web. 7 Mar. 2016.

“Twisted Definition.” Merriam-Webster. Merriam-Webster, n.d. Web. 6 Mar. 2016.

“Types of Mental Health Professionals.” Mental Health America. MHA, n.d. Web. 10 Mar. 2016.

Wahlbeck, Kristian, Westman, Jeanette, Nordentoft, Merete, Gissler, Mika, and Laursen, Thomas Munk. “Outcomes of Nordic Mental Health Systems: Life Expectancy of Patients with Mental Disorders.” The British Journal of Psychiatry 199.6 (2011): 453-458. Web. 16 February 2016.

Weir, Kirsten. “The Roots of Mental Illness.” APA.org. American Psychological Association, June 2012. Web. 7 Mar. 2016.

Weitz, Don. “25 Reasons to Avoid All Psychiatric Drugs and Treatment.” Ask Grace. Ask Grace™, 13 Feb. 1998. Web. 7 Mar. 2016.

“Where the States Stand on Medicaid Expansion.” Advisory.com. The Advisory Board Company, n.d. Web. 25 February 2016.

 

 

 

Education

Witt Womack, Senior, History

Courtney McKeon, Chemical Engineering/PreMed

 

 

Part I: The Assessment of K-12 Education

 

A child in the United States is required to attend school from, at the latest, the age of eight until at least the age of sixteen.140 At that point, depending on the state of residence, a child may have the option to drop out of school entirely, though children as young as fourteen may opt out of classes under conditions such as work obligations, parental permission, or handicaps. According to 2014 data, roughly 82% of all American students not only attended school for as long as their state required, but also went on to earn a high school diploma—though the figures vary widely by state, race, language ability, disability, and income level.141

 

For instance, only 68.5% of New Mexican students graduated high school, while 90.5% of Iowans achieved the same feat.142 Subdividing students further reveals even greater disparity in achievement: just under 54% of black students in Nevada graduated high school in 2014, for example, while 96% of Asian students in New Jersey did.143 Such differences in graduation rates, along with other measures of academic achievement, demand attention. It must be asked whether schools in low-performing areas are failing to provide their students with the greatest possible opportunity.

 

Educators have in principle ten years or so to impart all of the knowledge and skills which they deem necessary for all students to learn, and one or two years after that to impart the knowledge and skills which they deem necessary for all high school graduates to learn; a child successfully running the entire K-12 gauntlet will have had thirteen years of schooling.

 

While ten to thirteen years may sound like a long time, it is not nearly long enough to cram in all of the skills which different factions of educators have claimed to be necessary over the years—preparation for adult life, preparation for a career, preparation for higher learning, a love of learning, numeracy, literacy, technical literacy, exposure to the Western classics, exposure to emerging issues, multiculturalism, acculturation, creativity, critical thinking, the expectations of a citizen, the expectations of a world citizen, an appreciation for the arts, an appreciation for the sciences, ethics, languages, social skills, health, fitness, nutrition, sex ed.—all aspects of education have their sincere advocates and their waves of support. Many take a turn as the cause célèbre of educational reform, lauded as the solution to education's most glaring problems, but in the end it becomes clear that time is limited, and overemphasizing one subject generally comes at the expense of other important, even if not all-important, subjects. No student can learn everything that has ever been deemed important in only thirteen years, much less ten, especially when the subjects must be introduced at a digestible pace.

 

This is not to say that there is no right combination of subjects. The wide disparity between the graduation rates of the states and populations enumerated above indicates that some educational policies yield better results than others. If there is a 22 percentage point difference between the graduation rates of the best and worst states, then the educational environment of the former is in some regard better than that of the latter. Good or bad state educational policies create correspondingly good or bad learning environments, even if the differences must also be partly accounted for by less controllable factors such as poverty levels or the education of the prior generation.

 

Even within states, there is often quite a difference in achievement between school districts, and even between schools within a single district. It is, after all, generally the school districts that decide the specifics of the curriculum for their students. Ultimately, it is up to the teacher—and in no small part the student—whether all of this planning of lessons and curricula is transmitted, hits home, and actually does any good. Any mistake along the chain of decisions, from the implementation of national education policy to the hiring of teachers to the pupil's choice of breakfast, can corrupt or disrupt altogether the transmission of the intended learning.

 

It must always be remembered that educational planning and policy can only go so far to direct a student toward this learning; the training and hiring of teachers, as well as the home lives of students, play significant roles too. But an educational system is foundational, and a broken system will fail its students and teachers, no matter how bright they are. The ultimate goal in maintaining an educational system is to make it as straightforward and natural as possible for any student to obtain what he or she needs to know. Such a task involves the delicate interplay of many complicated processes. Even deciding “what a student needs to know” is not a simple mandate, because the aims of public education are not set in stone.

 

Aims and Expectations of Education

What is to be taught in school can be divided between practical schooling, which entails “developing the skills for doing practical work of society”; theoretical schooling, or “pursuing advanced theoretical knowledge in areas such as mathematics, literature, logic, and the arts”; and moral schooling, or “providing a set of moral guidelines and ethical values for judging right and wrong.”144

 

Ideally, all these areas of development should get their proper due, but defining what due is proper will rarely yield unanimity. Some educators stress pragmatic skills as the most beneficial part of education—its most useful end—but theoretical skills are often seen as, at the very least, the fundamental building blocks on which the rest of an education rests. Certainly numeracy, literacy, and scientific reasoning tend to be what state, national, and international assessments test as the measures of competency. Moral education—apart, in part, from civics—is generally held to be the purview of the family or community, whether this entails religious, cultural, or regional values. Democratic values and the duties of a citizen may be taught in schools, though, as a form of moral education with less risk of contention than singling out one particular morality to teach. Effective participation in a democratic society requires, in a way, certain “prerequisite courses”: knowledge of how the society itself works. Democracy carries with it the freedom to choose for oneself and to learn from these choices, but without the capacity for reasoning out what is most beneficial to oneself or most aligned with one's values, one cannot enjoy its benefits.

 

If one citizen, for instance, is convinced by another with more information at hand to act against his or her own interests, the first citizen has essentially given up a vote to the other. Democracy loses when the citizenry is not well informed enough to think critically, or when one part of society is better informed than another. It is therefore best for the citizenry as a whole to be as educated as possible in the art of critical thinking, reasoning, and decision making. Critical thinking is not only desirable in a democracy; it is essential civic training.

 

It will suffice to say that many conclusions have been reached concerning the purpose and nature of education, some more popular than others, but none universally accepted. A description of the essential functions of a high school—that is, of the cumulative gains of a K-12 education—emblematic of a prevalent understanding of education in a democratic system can be found in Boyer's High School, a 1983 reform book. According to it, American high school students ought to: (1) develop critical thinking and effective communication; (2) learn about themselves, the human heritage, and the interdependent world; (3) prepare for “work and further education”; and (4) fulfill social and civic obligations.145

 

In statements that try to compactly define goals of education for the entire nation, such as the one above and others to be encountered, the goals are defined broadly, and the ways to reach them are not concretely specified. What specifically must be taught for these heights to be reached is left unstated, since there are many ways to reach them. This represents the most abstract stage of deciding what is to be taught. Just as these mission statements may indicate that educational goals are flexible, holistic, and comprehensive, they could equally be read as arbitrary or vague by those who see such aims as far too broad to be practical.

 

If these aims are useful, it is because they set a standard for standards. It is important to keep the larger aims of education in mind when deciding what academic standards to meet and what curricula will meet them. One often finds in the discourse a tension between the theoretical and the practical, between what should be taught and what can be. Deciding and implementing educational policy is a balancing act. To focus only on subject matter and whether kids know it risks overlooking why students are being educated to begin with—focusing on means at the expense of ends.

 

On the other hand, the higher aims of education must be effected through concrete means. Critical thinking, for instance, cannot be taught in a vacuum, and preparedness for work and society entails learning specific facts. Just as it is impossible to learn to ride a bike without a bike, critical thinking and similar skills must be taught within a curriculum of useful knowledge to which they may be applied.

 

The challenge is to allow for flexibility while insisting that certain standards be met. Flexibility at the classroom level often manifests in the teaching of different content—different novels in two English classes, for instance—or at different paces. But even if some lessons are more flexible content-wise, some content is more fundamental to a quality education.

Literacy and numeracy are often the first concrete skills to appear in mission statements as the broader aims of education are honed into expectations and standards. One report addresses the health of America’s educational system in terms of the “fundamental obligations of any society” to “prepare its adolescents and young adults to lead productive and prosperous lives as adults,” but clarifies such preparation as a “solid enough foundation of literacy, numeracy, and thinking skills for responsible citizenship, career development, and lifelong learning.”146

 

The basics of literacy and numeracy especially, as well as a working knowledge of logic—here expressed as thinking skills, and often couched in terms of scientific reasoning—must by necessity be learned in the years before high school and higher education if the student truly hopes to delve into subjects which require calculation or interpretation. These represent more tangible standards, standards which can be more definitively addressed and assessed.

 

The monumental task of standards is to decide what learning is truly fundamental for students by the time they finish their education; then, after this necessary knowledge is ensured and validated through assessment, to expose students to as much useful knowledge as possible in the remaining time they are required to be in school, which is achieved in the crafting of curricula.

 

Common Core

Currently, the most discussed and most widely adopted set of standards, though certainly not unchallenged, is the Common Core, “a set of clear college- and career-ready standards for kindergarten through 12th grade in English language arts/literacy and mathematics.”147 Its proponents define standards as “the learning goals for what students should know and be able to do at each grade level,” and distinguish standards from curriculum, which is chosen by “local communities and educators,” so that “the Common Core is what students need to know and be able to do, and curriculum is how students will learn it.”148

 

Suggestions for national educational policy must take into account the variance between states in a nation where curriculum is decided at the state level and below, meaning that national educational directives may err on the side of vagueness. Common Core, however, is not a federally mandated program, nor was it adopted through legislation at the federal level; instead, states adopt it through their own legislative processes. This is one reason why it is popular with states. Currently, “forty-two states and the District of Columbia have voluntarily adopted and are working to implement the standards.”149

The ostensible concreteness of the affair is also inviting. Common Core gives states a springboard for what they need to teach, and in some cases names specific topics. Even more usefully, it indicates when certain skills should be learned and what knowledge should precede them. Much of the difficulty of creating academic standards lies in making them feasible for a given age. Standards are set, and education succeeds when they are met and surpassed, so that the student is able to confront new swaths of knowledge and skills. But gauging when standards are met presents its own difficulties.

Standards, Assessment, and Accountability

Three areas of policy work together, not wholly unlike how a three-branched government works, to improve education. These are: standards, assessment, and accountability—more precisely, establishing academic standards based on what learning is deemed feasible, necessary, and useful; assessing schools, teachers, and students based on these academic standards; and holding a teacher, school, district or state accountable to standards in light of the results of the assessment through various means, often incentives or disincentives.

A movement to set up standards organically becomes a movement for accountability to these standards, since a standard of education on paper is useless if it is not implemented. Moreover, a movement for accountability is often discussed in the light of assessment, since a knowledge of where standards are being met and where they are not must be obtained before any accountability can be applied. Assessment provides information to educators about the effectiveness of their techniques. If something is not working as well as it could, educators need to know about it, and the best way to know is through the gathering of data through assessment and the comparison of that data to the standards set. Likewise, accountability ensures that the information gained is actually put to use.

Unfortunately, accountability is much more difficult to apply in reality, because education is such a necessary service. Can there be disincentives for doing poorly if punishments—less funding, for instance—only make it harder for a school to do better the next year? These are the challenges recent educational policy, discussed further below, has had to address, with ambiguous success. The accuracy and interpretation of assessments present particular problems of their own. Standards first have to be set. In the United States, they have to meet the aims of a democratic education. That very complicated discussion has been introduced above, though certainly not exhausted. Suffice it to say, standards have to align with aims, but they are also shaped by feasibility. They have to correspond to realistic predictions of where a child can be at a given age. If they are the cumulative standards of K-12 education, they have to be possible to meet in thirteen years.

One of the best gauges of feasibility is comparison, and in a way this represents a form of assessment. At the beginning of this chapter, it was noted that there is great disparity between states in academic achievement. This fact reveals a rift in American education, but it also establishes precedents for what can be achieved, even if it is not being achieved everywhere. If Massachusetts sets the bar for state academic achievement, it stands to reason that other states can, albeit with a great deal of work, eventually reach similar levels, because it has been done before in an American context. To take this idea one step further, American achievement can be compared to that of other countries, in order to see what is feasible on a national level.

 

International Comparisons through PISA

International comparison is ostensibly useful to countries because it helps place their standards and goals in a context beyond their own problems. There have been other comparative international surveys in the past, but the most influential test today is probably the Programme for International Student Assessment (PISA). In the late 1990s, member countries of the Organisation for Economic Co-operation and Development (OECD) conceived the idea of comparing educational systems to see which were more effective than others. As described on the PISA website, PISA is a two-hour test given every three years “which aims to evaluate education systems worldwide by testing the skills and knowledge of 15-year-old students,” focusing on one of three subjects: reading, mathematics, and science.150 Fifteen-year-olds, who in the United States are generally freshmen or sophomores in high school, are tested because that is the age at which compulsory education most often ends around the world; though, as stated earlier, U.S. students attend school until at least sixteen years old.

 

One of the hallmarks of the PISA is that countries are supposed to take the test as a unit, so that consistency in achievement becomes a goal greater than achievement itself. In other words, the PISA should not be seen as a test of the heights which can be achieved within a given system, but of what is consistently achieved by students in the system. It must be taken into account that the whole of the United States competes in the main against smaller countries with smaller populations. Some might consider this unfair, or the results misleading. Such a complaint finds especial vindication in China’s recent results, since the country “is allowed to let certain regions stand in for the performance of the entire nation”; namely Shanghai, Macao, and Hong Kong—regions with a history of some autonomy, true, but listed in the official results as, for example, “Shanghai-China,” and with no results listed for China as a whole.151 Not only do these regions outperform the rest of China, bringing into question how well they represent the nation as a whole, but they outperform the rest of the world: Shanghai swept the top positions for all testing subjects in both 2009 and 2012.152 This prompts critics in the United States to point out that certain states similarly outperform the country’s ranking as a whole, calling into question the sense of crisis which has resulted from the country’s mediocre performance.

 

Certainly, the United States’ standings would look quite different if the results of its top-performing regions were substituted for those of the whole. In 2012, Massachusetts alone would have placed 9th worldwide in science (instead of the actual U.S. ranking of 31st) and 4th in reading (instead of the U.S. ranking of 21st). The results of the PISA exam, then, do not reflect the reality of many of the nation’s schools. Educators in Massachusetts clearly have more to be proud of than the rest of the country, but such a rose-colored view ignores useful conclusions. China’s behavior is not exactly typical, and even when China’s results are discounted, the twenty-five-odd other countries ahead of the United States are not so easily waved away.

 

The fact remains that the U.S. education system can feasibly improve. Expectations are currently higher elsewhere than in the U.S., and the country benefits from knowing this. A very understandable objection is that America’s circumstances are different—the country is much bigger than most—which leads to an “apples to oranges” mentality. But if America’s lagging behind the world leaders in education is circumstantial, perhaps this should prompt America to seriously consider her circumstances and whether they can be improved.

 

Massachusetts may have a world-class educational system, but if America as represented by its average lags so far behind Massachusetts, then the education available in states below the national average must be truly woeful—as far behind as Massachusetts is ahead. Indeed, if Mississippi, America’s worst-performing state, had been compared separately to the rest of the world in 2012, it would have ranked alongside Bulgaria and Uruguay.153 From this huge gulf between scores, it is easy to conclude that America’s highest-achieving academic environments are not replicated with any consistency throughout the nation, just as there are pockets of the country in severe need of improvement. Some states are “bringing down” national scores, and these states are disproportionately poorer than the others. Such results are an indictment of the great inequity in America between states.

 

The PISA test ought to be seen in light of its limitations, but with a nuanced understanding of how the test works and what it measures, the PISA results can be very enlightening. To disavow the analysis of PISA entirely would be to neglect data which no one nation’s government could obtain by itself. Forbes journalist James Marshall Crotty has reasoned from the data that “it’s not how much money a country spends on education, but where and how it spends it, as well as the cultural expectations a nation sets for student initiative, drive and excellence.”154 One of the OECD’s key findings for the 2012 test corroborates this statement: “While the U.S. spends more per student than most countries, this does not translate into better performance. For example, the Slovak Republic, which spends around USD 53 000 per student, performs at the same level as the United States, which spends over USD 115 000 per student.”155

 

Inequities

The gulf between the states’ levels of academic achievement is only one of many inequities riddling the country’s educational system. One of the greatest problems, if not the greatest, identified in American education today is the achievement gap. In fact, a recent report preferred to discuss the issue in terms of not one gap, but four achievement gaps: (1) between the United States and other nations, (2) between black and Latino students and white students, (3) between students of different income levels, and (4) between similar students schooled in different systems or regions.156

 

Inequity may be undesirable, but in a competitive society, why should gaps in achievement be considered as much of a problem as they are? One of the great measuring sticks of educational success at the state and national level is equity. This principle of equity in the United States is well represented by a statement found in the introduction to the famous report “A Nation at Risk”:

 

“All, regardless of race or class or economic status, are entitled to a fair chance and to the tools for developing their individual powers of mind and spirit to the utmost. This promise means that all children by virtue of their own efforts, competently guided, can hope to attain the mature and informed judgment needed to secure gainful employment, and to manage their own lives, thereby serving not only their own interests but also the progress of society itself.” 157

 

Equity, in other words, is a state in which fairness triumphs and merit is a stable currency. Such a longing for equality of opportunity should not be confused with a desire for education to conform to some lowest common denominator. The guarantee of freedom to choose the form of education one receives and equity in education are sometimes conflicting forces. As with many aspects of a democratic state, education is torn between public and private interests. But private academic success need not come at the expense of the success of others. A desire for equity is a natural extension of the desire for high standards and quality in general. Students raise each other up, give each other competition, and cooperate with each other, just as they will in society. Equity and quality in education make a winning combination.

To aim wide and high in achievement is the task at hand. What is known to be possible is what has been achieved—and so to see what is feasible beyond what is being achieved in America, one must look at the performance of other countries. The countries at the top of educational achievement lists then become the bar of what is possible and thereby the standards to reach for. But the best education possible is not useful to a society if it cannot be consistently replicated, so the goals become high achievement relative to other countries, and equity in this achievement.

With such noble goals, one can begin to see why assessments are valuable tools and have become so widely used. Data from tests reveal where achievement is lagging behind the rest of the country, making diagnosis far easier. However, many argue that assessments are overused. Through simplification of the educational process, born of an effort to make it more effective and efficient, the danger grows that lessons become merely “teaching to the test”; that is, framing content around test performance, rather than using test performance to gauge how well content is grasped. When lesson plans start to form around improving students’ ability to take an assessment, the assessment has arguably done more than assess the situation; it has altered the situation. While there is often an element of the straw man in many criticisms of how testing affects the classroom, standardized testing does play a significantly larger part in schools than it has in the past.

 

The Rise of the Standardized Test

A recent history of K-12 education policy and reform in America will begin with Lyndon Johnson’s administration in the 1960s, when the Elementary and Secondary Education Act (ESEA) was passed. The federal government thenceforth became more directly involved in the funding and regulation of education. The watershed report A Nation at Risk (ANAR), published in 1983 during the Reagan administration, will inevitably be brought up in any discussion of educational reform and policy as another milestone. This report, criticized by some as alarmist both at the time and since, and welcomed by others as a much-needed reality check, set the narrative for educational policy up to the present. Beginning with the words “our Nation is at risk. Our once unchallenged pre-eminence in commerce, industry, science, and technological innovation is being overtaken by competitors throughout the world,” the report revolved around a concern for a perceived “rising tide of mediocrity” in America’s schools.158

 

Not only was this sense of mediocrity ascertained from America’s performance in international assessments, but also from a growing understanding of the deep-rooted inequities maintaining the rifts described earlier between income levels, states, school systems, and races within the country itself. Many indicators of risk were collected in the report, such as “a virtually unbroken decline” in SAT scores “from 1963 to 1980.”159 Such indicators amounted to a visible problem, one which demanded a solution.

 

The solution suggested was higher standards of education. Many see the root of the later testing emphasis in this report as well. However, as education reformer and historian Diane Ravitch sees it, the standards movement which rallied around the report led to many good suggestions which were only—as she terms it—“hijacked” in 1995, after the debate over how American history should be taught became politicized.160 Rather than compromise on what should be taught, compromise became a mutual backing away from the topic altogether. Ravitch insists that “consequently, education leaders retreated into the relative safety of standardized testing of basic skills, which was a poor substitute for a full-fledged program of curriculum and assessments.”161

 

Even before then, in the 1994 State of the Union address, President Clinton maintained, with broad popular support, that the nation’s schools should be measured by “one high standard,” namely, whether or not America’s children were “learning what they need to know to compete and win in the global economy.”162 This was the rationale he gave for the Goals 2000 initiative, which sought to improve certain measures of academic achievement before the year 2000.

 

The American government at that time, and for much of the time since, focused on assessing basic skills, to the chagrin of proponents of so-called holistic education like Ron Miller, who opined: “the consuming obsession with national economic power blinds us to what really matters in our lives, and to what a decent education is really about.”163 In his view and views like it, an education focused on numbers, competition, and quotas will defeat itself. But Bill Clinton and many Americans felt, and still feel, the need to be competitive, because America’s mediocre performance among its international peers has made educational policy a national embarrassment, producing bipartisan enthusiasm for reform. And yet governmental programs meant to revitalize the promise of ESEA seem to come and go without a satisfactory conclusion.

 

NCLB

This seems to be the verdict on the late No Child Left Behind (NCLB) program, which dominated the policy landscape of early twenty-first-century America after being signed into law in 2002 with bipartisan support. Despite its initial popularity across the aisle, NCLB came to be considered ineffective by more and more politicians. In the end, despite being a law “with something for almost everyone to love,” it “ended up having something for almost everyone to hate.”164 Growing frustration with NCLB mirrored the earlier frustration with the Improving America’s Schools Act, which NCLB had been drafted to remedy, and ultimately culminated in yet another program being drafted in NCLB’s stead in 2015: the Every Student Succeeds Act (ESSA). NCLB was devised to account for the inadequacies of previous attempts to raise standards and hold schools accountable to them, but it also addressed inequity. As the NCLB parents’ guide explains:

“Since the Elementary and Secondary Education Act first passed Congress in 1965, the federal government has spent more than $242 billion through 2003 to help educate disadvantaged children. Yet, the achievement gap in this country between rich and poor and white and minority students remains wide…and while [NAEP] scores for the highest-performing students have improved over time, those of America’s lowest-performing students have declined.”165

 

NCLB, then, was, as its name indicates, heavily concerned with addressing the inequity in schools, but it focused foremost on accountability, with increased resources taking an assisting role, since reading scores seemed to remain stagnant despite increased appropriations for ESEA. One of the most important ways the program addressed inequity through accountability was the measure of Adequate Yearly Progress (AYP): schools were assessed on how well test scores improved from year to year, and a successful school would make AYP. A child in a school that had not made adequate yearly progress would eventually have the option to transfer to a better school.166 This proved more difficult in practice, since a better school wasn’t always available.

 

Other criticisms were directed at the NCLB mandate that each state “measure every child’s progress in reading and math in each of grades 3 through 8 and at least once during grades 10 through 12.”167 The frequency of assessment, coupled with the higher accountability for failure, only exacerbated the feeling that gratuitous assessment led to instructors “teaching to the test.” Whether or not critics’ emphasis on its flaws is representative, the NCLB era has ended, and the true extent of its successes, its failures, and the lessons that came with them is yet to be seen.

 

ESSA

As the Obama administration comes to an end, a new educational reform bill has been passed: the Every Student Succeeds Act (ESSA), which is currently being implemented. A major question as the ESSA era begins is how much ESSA actually does to change the NCLB status quo. The most important change ESSA promises to bring is more freedom for states to fashion their own ways of assessing students, and less meddling by the federal government in the same area, while maintaining the pursuit of high standards.

 

Christopher B. Swanson, Vice President of Editorial Projects in Education, says, “After a decade and a half of strong federal influence over school accountability, the states are poised to take the helm again and chart their own course,” though he also says “states may take very different paths forward.”168 It must be asked whether the states in direst need of improved education are capable of handling educational reform by themselves. ESSA mandates that each state set a standard for itself, but states need not devise those standards from scratch.

 

The Common Core certainly provides an opportunity for states to adopt a common standard, which could potentially stem the inequity between states by bringing those with less academic achievement to par with their more successful peers, even states whose educational problems are enveloped by larger issues. Yet if the last waves of reform all seem to have failed, another dilemma comes to the fore: whether education can truly help reduce poverty, or whether poverty must be eradicated before education can operate effectively. The answer probably lies somewhere in the middle, but the effects of poverty on achievement cannot be overlooked. A recent report graded each state on a number of educational factors, one of which is the “chance-for-success index,” a measurement created for the report in order to “better understand the role of education across an individual’s lifetime.”169 Three groups of factors determined each state’s grade: (1) early foundations, including such indicators of success as parental education and family income; (2) school years, with indicators including preschool and kindergarten enrollment, elementary reading, middle school mathematics, and graduation figures; and (3) adult outcomes, covering indicators after the school years.

 

It might be difficult to determine whether such a tripartite prediction of educational success—only one part of which deals with the school years themselves—is cynical or comprehensive, but clearly, educators are moving away from the notion that education occurs in a vacuum. Tellingly, the states above the median in early foundations recur, for the most part, in the top halves of the other two categories. It has been recognized for quite some time that poverty has adverse effects on success in education, but education has frequently been seen as the great hope for ending poverty. If educational achievement is what can elevate a person out of poverty, however, it is also much more difficult to attain within a state of poverty.

 

Substantial federal intervention in public education first came about in response to poverty, and a considerable amount of funding has gone to low-performing schools since; but the current political climate, as evidenced by the replacement of NCLB by ESSA, seems to favor less and less federal intervention. This does not indicate a belief that children in poverty are incapable of learning or not worth the government’s time and money, but rather that assistance can come from places other than the very top. Time will ultimately tell whether this approach works better than those of recent years.

It should suffice to note here that solving poverty is beyond the scope of this paper. The lines between causation and correlation are blurred wherever poverty rears its ugly head. Poverty is neither solely the cause nor solely the effect of academic difficulty. The two form a symbiotic relationship and feed off one another.

 

College Entrance Exams

Vicious cycles do not stop at the end of high school. Inequity in K-12 education affects inequity later on. While students have not been required under NCLB or ESSA to take state tests during high school to the extent they do from 3rd to 8th grade, standardized tests remain a large part of a high schooler’s academic life in the form of college readiness exams such as the ACT and the SAT, not to mention their preliminary forms. The SAT and ACT face many criticisms. At the forefront of these are: (1) the tests are said to be poor predictors of subsequent academic success, which calls into question their primary role as college-readiness tests; (2) the tests “reward superficial learning,” reinforcing “passive, rote learning” rather than “active, critical thinking skills”; and (3) high scores correlate better with socioeconomic class than with college success.170

 

These criticisms have caused some universities to reconsider using the tests as part of their admissions process.171 However, the ACT and SAT are not doomed to fade away by any means, especially since receiving a potential new lease on life under ESSA. The old NCLB requirement of one high school assessment has been relaxed so that states may pick tests such as the ACT and SAT to fulfill the requirement, meaning that the two exams may become even more favored rather than less favored in the coming years. If the charges against these standardized tests are accurate, this would help entrench inequity in college admissions.

Part II: The Cost of Higher Education

The average cost of achieving higher education has been steadily increasing by eight percent each year, which means the cost of tuition and fees doubles roughly every nine years. This is due in part to the fact that state and federal funding for higher education has been steadily decreasing since the 1970s (Making Higher Ed Free). Because of these increasing costs, support for making all public four-year colleges and universities in the United States free has grown rapidly over the past few years and has become a hot topic during the 2016 Presidential Election. This paper will analyze some of the reasons for the increase in college tuition, the pros and cons of making public four-year colleges and universities free, and possible alternatives to this plan.
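
As a quick arithmetic check of that rule of thumb (our own worked calculation, not drawn from the source): a cost growing eight percent per year multiplies by 1.08 annually, and 1.08 raised to the ninth power is approximately 2.0, so nine years of such growth roughly doubles the starting cost. This is the familiar “rule of 72”: dividing 72 by the annual growth rate in percent (72 ÷ 8 = 9) approximates the doubling time.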

Reasons For the Increase in College Tuition

Administration

Since the 1970s, universities have seen a significant increase in the number of administrators.172 Many of these administrators, such as university presidents, receive hefty salaries of over a million dollars a year.173 Part of the reason is that universities have begun to emphasize research over undergraduate education: an increased emphasis on research leads to an increased need for administrators, as faculty no longer have the time to handle administrative duties. Although universities have always employed administrators, in the 1960s “top administrators were generally drawn from the faculty, and even midlevel managerial tasks were directed by faculty members” (Ginsberg). Not only is the number of administrators and staff members increasing, but their salaries are increasing as well.174 Universities have been forced to raise tuition to compensate for both. Instead of spending money on expanding instructional resources that would directly affect undergraduate students, universities are choosing to spend money on expanding a managerial staff with whom students rarely come in contact.

Ranking and Ratings

The US News and World Report releases a ranking of universities based in part on their selectivity, including incoming students’ average SAT scores and high school GPAs.175 The report does not, however, rank universities on how much students learn once they actually attend the school, which could be approximated by taking into account the average starting salary of graduates or the percentage of graduating students who go directly into jobs or graduate school. Using selectivity as a ranking tool drives tuition upward, because universities must reject more applicants in order to increase their selectivity. Universities will spend millions of dollars on advertising and recruitment to get as many students as possible to apply just so that they can reject them.176 Students’ tuition and fees pay for this increase in advertising, and therefore the ranking system drives up the cost of tuition. Using SAT scores and incoming GPAs as ranking tools also increases the inequity of universities, as studies have shown that the students with the highest SAT scores and GPAs come from wealthy families.177

Money spent on non-educational amenities

More and more schools are seeking to attract students not with their educational credentials but with flashy recreation centers, food courts, housing facilities, and other amenities. Schools are attempting to keep up with the rising expectations of students and their parents about the accommodations that should be provided outside of academics. Building large, state-of-the-art facilities costs money, and lots of it; the increased demand for these accommodations leads to an increase in tuition costs for students.178

The Pros of Making Four-Year Public Higher Education Free

“The first step…is to calculate how much it would cost to make all public higher education free in the United States. In 2008–9, there were 6.4 million full-time-equivalent undergraduate students enrolled in public universities and 4.3 million enrolled in community colleges. In 2009–10, the average cost of tuition, room, and board for undergraduates at public four-year institutions was $14,870; at two-year public colleges, it was $7,629. If we multiply the number of students in each segment of public higher education by the average total cost, we discover that the cost of making all public universities free would have been $95 billion in 2009–10, with an annual cost of $33 billion for all community colleges—or a total of $128 billion. While $128 billion seems like a large figure, we need to remember that in 2010, the federal government spent $35 billion on Pell grants179 and $104 billion on student loans, while the states spent at least $10 billion on financial aid for universities and colleges and another $76 billion for direct support of higher education. Furthermore, looking at various state and federal tax breaks and deductions for tuition, it might be possible to make all public higher education free by just using current resources in a more effective manner. ….The cost for free public higher education could [also] be greatly reduced by lowering the spending on administration, athletics, housing, dining, amenities, research, and graduate education.” (Samuels)
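
Restating the quoted arithmetic compactly (our own check of Samuels’s figures, rounded as in the original):

6.4 million students × $14,870 ≈ $95 billion (four-year public institutions)
4.3 million students × $7,629 ≈ $33 billion (community colleges)
$95 billion + $33 billion = $128 billion total

Against that $128 billion price tag, the current spending Samuels lists ($35 billion in Pell grants, $104 billion in federal student loans, $10 billion in state financial aid, and $76 billion in direct state support) already totals roughly $225 billion, which is the basis of his claim that existing resources, used differently, could cover the cost.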

Inequity

Due to the skyrocketing costs of attending college,180 many students from lower-income families are unable to handle the financial burden of achieving higher education. These students, despite having potentially the same educational qualifications as their wealthier counterparts, are denied the opportunity to advance their education based on their families’ financial situation, over which they have little to no control. Though low-income students can receive government and institutional loans and grants, support for these programs at the federal and institutional level has been dramatically decreasing over the past few decades.181 This in turn creates a vicious cycle across generations, as those students’ children are less likely to attend college as well.182 Obtaining a degree has been shown to have significant effects on workers’ salaries and job stability.183 Making higher education free would eliminate finances as an issue and would allow all qualified students to better themselves and obtain a degree.

Student Loans

Students graduating from college find themselves in crippling debt because they must take out large student loans to afford tuition.184 Due to this rise in debt, more and more students are starting to believe that a college degree isn’t worth the money.185

The Cons of Making Four-Year Public Higher Education Free

Disclaimer

The inequity caused by drastically rising college tuition is a serious and deeply disturbing issue which, if not addressed soon, could have dire effects on our society as a whole. The following argument is made with the intention of showing why making public four-year higher education free would not be the most effective way to solve these problems and would in fact cause more problems for our higher education system. College tuition prices are nonetheless unreasonably high, and efforts need to be made to drastically decrease the cost of pursuing a higher education.

Defunding amenities not directly related to undergraduate education

Lowering spending on things that are not directly related to undergraduate education could potentially be an effective way to reduce the cost of attending a university. In reality, however, the culture of our society demands these things. College athletics are widely beloved, allow alumni to remain in contact with the school, and in fact bring in a great deal of revenue.186 Universities conduct research not just to gain recognition and awards for their professors, but to introduce students to what a life of research is like and to give them the opportunity to produce significant, meaningful results. In fact, a large number of findings in practically every discipline have come from undergraduates conducting research guided by a professor.187 Nearly all research labs at public universities encourage undergraduate students to work with a professor in the lab, where they learn skills they can take into the workforce that they otherwise wouldn’t have been able to learn in the normal classroom setting.

Nowadays, people see college as more than just a way to further one’s education; they see it as a life experience.188 Students go to four-year universities not just to learn about math and science but to develop social skills, learn to live on their own, and develop their own sense of self. College students develop social skills at the recreation center, where they not only cultivate interpersonal relationships while playing a game of pickup basketball but also create habits for healthy living. They learn to live on their own by having their own apartment and kitchen, where they learn the time and physical demands required to make a home. They develop their own sense of self by partaking in events at the campus theatre, the campus union, and campus athletic events. One of the biggest complaints about education today is that it doesn’t do enough to accommodate the whole individual; students sit in a classroom and learn from books instead of engaging all of their minds. These accommodations are what lead to the education of the student as a whole: a student learns about themselves and their own minds by engaging outside the classroom as well as by learning inside it.

The culture of today’s society views four-year universities not just as a place to get a degree but as a place for students to become acclimated to “the real world.” In fact, studies show that students are more likely to attend schools with better accommodations over schools with better educational statistics.189 Given society’s views on what a four-year university should provide to the student as a whole, decreasing funding for these facilities and amenities, while saving money, would most likely lead to a decrease in enrollment.

Dropout Rates

An alarming statistic that is rarely discussed when debating whether public higher education should be free is the rate of college dropouts. Nearly half of all students who enroll in public four-year higher education (44%) drop out and never complete their degree. While a good portion of these dropouts (approximately 38%) leave due to financial struggles, the portion who drop out due to academic disqualification is almost as high (28%). The remaining 34% drop out due to distance from home (4%), health problems (5%), poor social fit (13%), mental/emotional issues (3%), and family support (9%).190 High school, a free public service offered to every student in America, sees dropout rates of 19%.191 According to the above data, the share of all enrollees who drop out of college due to financial hardship is about 17%, since this is 38% of the 44% of students who drop out. Despite having no financial responsibility to pay for their education, students drop out of free high school at a higher rate than students drop out of college for financial reasons. While it is unjust that over a third of the people who drop out of college do so due to lack of finances, providing free higher education doesn’t take into account the remaining 62% of dropouts, who leave for non-financial reasons.
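
To make the percentage bookkeeping explicit (our own arithmetic from the figures above): the 38%, 28%, and 34% are shares of the 44% who drop out, not of all students. Financial dropouts therefore make up 0.38 × 0.44 ≈ 17% of all enrollees, academic disqualifications 0.28 × 0.44 ≈ 12%, and the remaining causes 0.34 × 0.44 ≈ 15%; taken together, non-financial dropouts account for 0.62 × 0.44 ≈ 27% of all enrollees.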

In theory we would like to believe that students are motivated in school by a pure love of and desire for education, but the reality is that, like the rest of the population, the vast majority of students’ internal motivation for education stems from one thing: money.192 Whether it is the desire to earn a degree so they can make more money later in life or the desire to stay in school so they will not lose the money they have already invested in their education, money is the motivator. This motivation begins as early as high school, where students are told that if they work hard and get good grades, part of their college tuition could be subsidized. Furthermore, many students are motivated to achieve a certain GPA in college only because it is required to keep their scholarship or financial aid.193 Additionally, students are highly motivated not to fail classes, as that would essentially be flushing money down the drain. Were four-year higher education to become free, college would essentially become an “extended high school,” effectively eliminating the monetary motivation seen at both the high school and college levels. With students no longer driven to achieve a certain GPA by scholarship and financial aid requirements, a drop in performance would most likely be seen. The saying “D’s get degrees”194 takes on an even stronger meaning here: though more students could potentially obtain degrees, the quality of education and the amount of knowledge students retain would decrease, because students would be less motivated to learn and master material. With students no longer driven to pass classes the first time around by nonrefundable tuition costs for failed classes, a rise in the number of students failing classes would be seen, as there would be essentially no punishment for failing. This in turn would increase the amount of money needed for a student to achieve a degree, money that would most likely come from taxpayers’ pockets. While making higher education free would effectively eliminate the 17% of enrollees who drop out of college due to financial hardship, a case could be made that free college would see dropout rates similar to free high school’s, around 19%.

When public higher education becomes free

While figures like those above claim that free higher education could be paid for entirely with the money already invested in higher education, they fail to take into account the inevitable rise in enrollment that would result from public universities becoming free. This rise in enrollment would in turn create a need for more professors to teach these students and more administrators to serve these students and professors. Even with more professors, class sizes would inevitably skyrocket, and while it is not guaranteed, there is substantial evidence that larger class sizes lead to a decrease in the quality of education students receive.195 The rise in enrollment would also require more housing and dining facilities, as new places to house these students would need to be built and more food-service workers hired to feed them. Taking all of this into account, the current federal funds allocated toward subsidizing higher education would not be sufficient to provide higher education free to all students. The additional money would need to come from somewhere.

Bernie Sanders196 has been a vocal proponent during the 2016 Presidential Election of providing free public higher education. It is one of the cornerstones of his campaign, and one of the main reasons he is polling at nearly 90% of millennials’ votes.197 He intends to fund this and many of his other programs by raising marginal tax rates on upper-income brackets.198 Sanders consistently mentions the 1% of the US that owns 33% of the total available wealth. While this is a valid and alarming statistic, his proposed ways of funding programs intended to decrease the inequity of education in America will most likely have the opposite effect. The people who will be most affected by these tax increases are not the greedy billionaires and corporations who have more money than they will ever be able to spend, as the increase will mean practically nothing to them. The people most affected are engineers, doctors, and lawyers: people who went to school for several years to master a skill that should in turn afford them an increased income. If taxes on these people were to approach 50% of their salary, the motivation for students to pursue these careers, and in turn this income, could decrease. Regardless of whether Bernie Sanders is elected and actually able to implement these specific tax policies, the additional money needed to fund free higher education would likely be provided by an increase in taxes. The people paying these taxes to implement free college would be the same people who took out student loans to put themselves through college and who are still paying those loans off. This is the reason that, despite incredible polling numbers with the millennial generation, Sanders polls significantly lower with older generations.199 We would like to believe that people would put the betterment of society as a whole above themselves, but in reality that is not human nature. Increasing taxes on people who worked their way through college, and who have paid off or are still paying off student loans, so that a younger generation could attend college for free will most likely not be supported.
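
For concreteness, a worked example under the brackets quoted in the note below (our own arithmetic, for a hypothetical filer, and assuming the proposed rates are marginal rates, as U.S. income tax brackets conventionally are): a filer earning $600,000 would pay 35% only on the income between $190,000 and $250,000 ($21,000), 39% on the income between $250,000 and $500,000 ($97,500), and 45% on the income between $500,000 and $600,000 ($45,000), plus tax at whatever rates apply below $190,000, which the note does not specify. The rate on the last dollar earned approaches 50 percent, while the effective rate on the whole salary is somewhat lower.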

What would happen to the value of a college degree

Despite the fact that more and more jobs require a college degree, the current unemployment rate for college graduates is 7.2% and has been steadily rising since 2007.200 Were the number of people obtaining college degrees to increase because public higher education became free, the number of unemployed college graduates would increase as well, unless new jobs for college graduates became readily available in the marketplace. Additionally, a trend would begin to develop similar to the one seen over the past thirty years or so, in which the value of a high school diploma has decreased. When more and more people attain college degrees, the value of the degree will decrease, as happened with the high school diploma, and in turn more jobs will start requiring master’s degrees, then PhDs, and so on.

Possible Alternatives

College isn’t for everyone, and in fact isn’t necessary for the large portion of the population who desire to work in fields that do not require a college degree.201 Making college free would most likely decrease the number of people pursuing these professions, as those who want such careers, and who would normally pursue them without attending college, would most likely attend college anyway simply because it is free. Educating people should be a focal point of society, but four-year universities are not the only way to become further educated, and as a society we should begin to emphasize these other means of education. Children are told throughout high school that their ultimate goal should be attending college; they take tests specifically designed to prepare them for college, and they have days dedicated to informing them about all of the colleges and universities they should aspire to apply to. Little to no effort is put into informing high school students about furthering their education through trade, technical, or vocational schools. In European countries, these schools are viewed as having prestige similar to that of universities: students are introduced to these types of jobs as early as middle school and are actively encouraged to pursue these careers and further their education through trade, technical, and vocational schools.202 American schools need to work to decrease the negative stigma associated with trade, technical, and vocational schools and begin to acknowledge them as not only viable but respectable ways of furthering one’s education. As a society, we should emphasize the benefits of attending such schools and work to make more students aware of options other than universities for furthering their education.

 

 

 

 

140 Education Committee of the States, p. 1

141 “National Center for Education Statistics, Table 1”

142 Idem.

143 Idem.

144 Nelson, p. 141

145 Gutek, p. 146

146 Symonds et al., p. 1

147 “Frequently Asked Questions” – Common Core State Standards Initiative

148 Idem.

149 Idem.

150 “About PISA” – OECD

151 Crotty (Sept. 2014); “PISA 2012 Results – OECD,” p. 5

152 Strauss

153 Crotty (Sept. 2014)

154 Crotty (May 2014), p. 2

155 “United States – Country Note – Results from PISA 2012,” p. 1

156 “The Economic Impact of the Achievement Gap in America’s Schools Summary of Findings: April 2009,” p. 1

157 United States. National Commission on Excellence in Education

158 United States. National Commission on Excellence in Education.

159 Idem.

160 Ravitch, p. 20

161 Ibid., p. 22

162 Miller, p. 211

163 Ibid., p. 214

164 Tooley

165 “No Child Left Behind: A Parents Guide,” p. 3

166 Ibid., p. 2; “Comparison of the No Child Left Behind Act to the Every…”, p. 4

167 “Testing: Frequently Asked Questions”

168 “Quality Counts Marks 20 Years Report Explores New…,” p. 1

169 “National Highlights Report 2016: Called to Account,” p. 4-5

170 Sacks, p. 26-27

171 Jaschik

172 Over the past forty years in American colleges, the number of full-time faculty members has increased by about 50% whereas the number of administrators has increased by 85% and the number of administrative staffers has increased by 240%. (Ginsberg)

173 The presidents of the top twenty colleges in the United States each made over $1 million in 2013. The president of Columbia University made $4.6 million. (Berman)

174 Vice presidents can earn well over $200,000, and deans earn nearly as much. Both groups saw their salaries increase as much as 50 percent between 1998 and 2003. (Ginsberg)

175 The US News and World Report releases a yearly ranking of nearly 1,800 colleges and universities based on reputation, faculty resources, student selectivity, financial resources, graduation rate, and alumni giving rate. (US News and World Report)

176 According to the higher education consulting group, Noel-Levitz, the median cost of recruiting each new student at a private university is $2,433 and $457 at public universities. (Glass)

177 Data provided by College Board, the administrators of the SAT, was analyzed for the year of 2014 and found that students in every income bracket outscored students in a lower bracket on every section of the test. Students in the highest income bracket scored an average of 400 points higher than those in the lowest income bracket. (Zumbrun)

178 In 1995, American colleges spent $6.1 billion a year on construction projects. In 2013, this number rose to $10.9 billion with only 20% of this going to renovations of existing buildings. (Newlan)

179 The Federal Pell Grant Program provides need-based grants, which do not need to be repaid, to low-income undergraduate students. (“Federal Pell Grant Program”)

180 A Harvard Study showed that the US has the highest college dropout rate among industrialized nations. (Rowan)

181 This was reported in US News as being the overall American high school dropout percentage for 2012. (Camera)

182 A study by Cengage Learning found that almost half (49%) of students name “career goals” as their primary motivating factor for success. “Future earning goals” came in second, with 17% of students ranking this as their top motivator. Additionally, less than 6% of students named “knowledge and learning” as their primary motivator. (Strang)

183 The economic analysis from the Pew Research Center finds that Millennial college graduates ages 25 to 32 who are working full time earn more annually—about $17,500 more—than employed young adults holding only a high school diploma. The pay gap was significantly smaller in previous generations. College-educated Millennials also are more likely to be employed full time than their less-educated counterparts (89% vs. 82%) and significantly less likely to be unemployed (3.8% vs. 12.2%). (Pew Research Center’s Social & Demographic Trends Project)

184 According to The Institute for College Access and Success, 69% of students graduated with student loan debt, with an average of $28,950 per student. (“Project on Student Debt”)

185 Among recent graduates who received their degrees in 2006 or later, only 38 percent “strongly agreed” that college was worth it. Additionally, just 18 percent of recent grads with $50,000 or more in student loan debt “strongly agreed” that their education was worth what they paid for it. (Kamenetz)

186 The top five college athletic programs in the country each brought in over $100 million in revenue in 2008. (“College Athletics Revenue and Expenses – 2008”)

187 According to a study conducted by the National Science Foundation in 2007, 72% of chemistry undergraduates and 74% of environmental science undergraduates had research experience. (Webb)

188 A survey by Pew Research in 2011 found that less than half of the public (47%) believes that the main purpose of college is to teach knowledge and work related skills. The remainder believes that the purpose of college is to help a student grow personally. (“Is College Worth It?”)

189 A survey by the National Bureau of Economic Research shows that all university students valued spending on amenities and would make enrollment decisions based in part on such spending. Only students who are able to enroll at highly competitive universities valued spending on academics and would make enrollment decisions based in part on such spending. (Jaschik)

190 A Harvard Study showed that the US has the highest college dropout rate among industrialized nations. (Rowan)

191 This was reported in US News as being the overall American high school dropout percentage for 2012. (Camera)

192 A study by Cengage Learning found that almost half (49%) of students name “career goals” as their primary motivating factor for success. “Future earning goals” came in second, with 17% of students ranking this as their top motivator. Additionally, less than 6% of students named “knowledge and learning” as their primary motivator. (Strang)

193 According to the National Center for Education Statistics, 59% of undergraduate students in the United States receive some form of grant or scholarship. (Davidson)

194 A saying used by college students to reiterate that one does not need to make great grades to graduate from college.

195 Researchers generally agree that lower class sizes are linked to positive educational benefits such as better test scores, fewer dropouts, and higher graduation rates. (Higgins)

196 Bernie Sanders is a Democratic candidate for President of the United States. He serves in the U.S. Senate as a representative from Vermont. (“Meet Bernie Sanders”)

197 In the Iowa Democratic caucus, Sanders won 84% of voters between the ages of 17 and 29. (Bruenig)

198 Bernie Sanders’ proposed tax brackets would raise income taxes on those making $190K-250K a year to 35%, those making $250K-500K a year to 39%, and those making $500K-2 million a year to 45%. (Cole)

199 Poll data from Reuters shows that Sanders polls at right around 30% for all age categories above thirty. (Bruenig)

200 This data comes from a 2015 report by the Economic Policy Institute. (Davis)

201 According to a study conducted in 2013 by the Bureau of Labor Statistics, 48% of employed college graduates are in jobs that require less than a four-year degree. (Adams)

202 In Switzerland, about two-thirds of 15 and 16 year olds who finish the nine years of obligatory schooling choose to continue their education through Vocational Education and Training (VET), a system that churns out skilled workers who are the backbone of the country’s thriving economy. (Bachmann)

 

Works Cited

Part I

“About PISA.” OECD.org. Organisation for Economic Co-operation and Development. Web. 20 Apr. 2016.

“Comparison of the No Child Left Behind Act to the Every Student Succeeds Act” ASCD.org. ASCD, 2015. Web. 20 Apr. 2016.

Crotty, James Marshall. “Why Asian Nations Dominate Global Education Rankings.” Forbes. Forbes Magazine. 21 May 2014. Web. 20 Apr. 2016.

Crotty, James Marshall. “If Massachusetts Were A Country, Its Students Would Rank 9th In The World.” Forbes. Forbes Magazine. 29 Sept. 2014. Web. 20 Apr. 2016.

Education Committee of the States. Compulsory School Age Requirements. 2010. Web. 19 Apr. 2016.

“Frequently Asked Questions.” corestandards.org. Common Core State Standards Initiative. Web. 20 Apr. 2016.

Gutek, Gerald Lee. An Historical Introduction to American Education. Long Grove, IL.: Waveland, 2013. Print.

Jaschik, Scott. “SAT Tests Are Totally Unreliable for Predicting Students’ Grades in College.” Slate Magazine. 28 Jan. 2016. Web. 20 Apr. 2016.

Miller, Ron. What Are Schools For: Holistic Education in American Culture. Brandon, VT.: Holistic Education, 1990. Print.

National Center for Education Statistics. “Table 1. Public High School 4-year Adjusted Cohort Graduation Rate (ACGR), by Race/ethnicity and Selected Demographics for the United States, the 50 States, and the District of Columbia: School Year 2013–14.” 4 September 2015. Web. 19 Apr. 2016.

“National Highlights Report 2016: Called to Account – Education Week.” Edweek.org. Education Week, 26 Jan. 2016. Web. 20 Apr. 2016.

Nelson, Jack L., Kenneth Carlson, and Stuart B. Palonsky. Critical Issues in Education: A Dialectic Approach. New York: McGraw-Hill, 1993. Print.

“No Child Left Behind: A Parents Guide.” Ed.gov. U.S. Department of Education, 2003. Web. 20 Apr. 2016.

“PISA 2012 Results – OECD.” OECD.org. Organisation for Economic Co-operation and Development. Web. 20 Apr. 2016.

“Quality Counts Marks 20 Years Report Explores New Directions in Accountability.” Education Week. Education Week Research Center, 26 Jan. 2016. Web. 20 Apr. 2016.

Ravitch, Diane. The Death and Life of the Great American School System: How Testing and Choice Are Undermining Education. New York: Basic, 2010. Print.

Sacks, Peter. “Standardized Testing: Meritocracy’s Crooked Yardstick.” Change: The Magazine of Higher Learning 29.2 (1997): 24-31. Web.

Strauss, Valerie. “No. 1 Shanghai May Drop out of PISA.” Washington Post. The Washington Post. 26 May 2014. Web. 20 Apr. 2016.

Symonds, William C., Robert Schwartz, and Ronald F. Ferguson. “Pathways to Prosperity: Meeting the Challenge of Preparing Young Americans for the 21st Century.” Pathways to Prosperity Project, Harvard University Graduate School of Education. Web. 19 Apr. 2016.

“Testing: Frequently Asked Questions.” Ed.gov. U.S. Department of Education, 17 Nov. 2004. Web. 20 Apr. 2016.

“The Economic Impact of the Achievement Gap in America’s Schools: Summary of Findings, April 2009.” McKinsey & Company. Web. 19 Apr. 2016.

Tooley, Melissa. “No Child Left Behind Is Gone, but Will It Be Back?” The Atlantic. Atlantic Media Company, 24 Dec. 2015. Web. 20 Apr. 2016.

“United States – Country Note – Results from PISA 2012.” OECD.org. Organisation for Economic Co-operation and Development. Web. 20 Apr. 2016.

United States. National Commission on Excellence in Education. A Nation at Risk: The Imperative for Educational Reform: A Report to the Nation and the Secretary of Education, United States Department of Education. Washington, D.C.: National Commission on Excellence in Education, 1983. Web. 20 Apr. 2016.

Part II

Adams, Susan. “Half Of College Grads Are Working Jobs That Don’t Require A Degree.” Forbes. Forbes Magazine, 28 May 2013. Web. 10 Mar. 2016.

Bachmann, Helena. “Who Needs College? The Swiss Opt for Vocational School.” Time. 4 Oct. 2012. Web. 10 Mar. 2016.

Berman, Jillian. “These College Presidents Make over $1 Million (their Students Go on to Earn a Lot Less).” Web. 10 Mar. 2016.

“Meet Bernie Sanders.” BernieSanders.com. Web. 10 Mar. 2016.

Bruenig, Elizabeth. “Why Are Millennial Women Gravitating to Bernie Sanders?” New Republic. 9 Feb. 2016. Web. 10 Mar. 2016.

Camera, Lauren. “High School Dropout Rates Plummet.” US News and World Report. 10 Nov. 2015. Web. 10 Mar. 2016.

Cole, Alan, and Scott Greenberg. Forbes. Forbes Magazine, 28 Jan. 2016. Web. 10 Mar. 2016.

Davidson, Jacob. “3 Mistakes That Will Cost You a College Scholarship.” Time. Time, 3 Sept. 2014. Web. 10 Mar. 2016.

Davis, Alyssa, Will Kimball, and Elise Gould. “The Class of 2015: Despite an Improving Economy, Young Grads Still Face an Uphill Climb.” Economic Policy Institute. 27 May 2015. Web. 10 Mar. 2016.

“College Athletics Revenue and Expenses – 2008.” ESPN. ESPN Internet Ventures, 2016. Web. 10 Mar. 2016.

US News and World Report. “How U.S. News Calculated the 2016 Best Colleges Rankings.” Education. 8 Sept. 2015. Web. 10 Mar. 2016.

“Federal Pell Grant Program.” Federal Pell Grant Program. US Department of Education, 6 June 2015. Web. 10 Mar. 2016.

National Center for Education Statistics. “Students Whose Parents Did Not Go to College.” Findings from the Condition of Education – 2001. Web.

Ginsberg, Benjamin. “Administrators Ate My Tuition.” Sept. 2011. Web. 10 Mar. 2016.

Glass, Rob. “What Is the Average Marketing Budget for a Public University and a Private University in the US for Marketing? Do They Have a Different Budget for International Student Recruitment?” Oct. 2014. Web. 10 Mar. 2016.

Higgins, John. “Does Class Size Matter? Research Reveals Surprises.” The Seattle Times. 2014. Web. 10 Mar. 2016.

Jaschik, Scott. “The Customer Is Always Right.” Inside Higher Ed. 29 Jan. 2013. Web. 10 Mar. 2016.

Kamenetz, Anya. “$50,000 In Student Loans? You Probably Don’t Think College Was Worth It.” NPR. NPR, 29 Sept. 2015. Web. 20 Apr. 2016.

Newlan, Cara. “The College Amenities Arms Race.” Forbes. Forbes Magazine, 31 July 2014. Web. 10 Mar. 2016.

Odland, Steve. “College Costs Out of Control.” Forbes. Forbes Magazine, 24 Mar. 2012. Web. 10 Mar. 2016.

“Is College Worth It?” Pew Research Center’s Social & Demographic Trends Project. 2011. Web. 10 Mar. 2016.

“The Rising Cost of Not Going to College.” Pew Research Center’s Social & Demographic Trends Project. 11 Feb. 2014. Web. 20 Apr. 2016.

Reich, David, and Brandon Debot. “House Budget Committee Plan Cuts Pell Grants Deeply, Reducing Access to Higher Education.” Center on Budget and Policy Priorities. 21 Oct. 2015. Web. 20 Apr. 2016.

Rowan, Rachel. “High Cost of Dropping Out – If You Don’t Graduate, Student Loan Debt Hits Even Harder.” Tuition.io Student Loan Blog. 2013. Web. 10 Mar. 2016.

Samuels, Robert. Why Public Higher Education Should Be Free: How to Decrease Cost and Increase Quality at American Universities. Print.

Strang, Tami. “What Keeps College Students Motivated? Their Responses.” The Cengage Learning Blog. 14 Jan. 2015. Web. 20 Apr. 2016.

“Project on Student Debt.” The Institute For College Access and Success. Web. 10 Mar. 2016.

Webb, Sarah. “The Importance of Undergraduate Research.” 2007. Web. 10 Mar. 2016.

Zumbrun, Josh. “SAT Scores and Income Inequality: How Wealthier Kids Rank Higher.” WSJ. The Wall Street Journal, 7 Oct. 2014. Web. 10 Mar. 2016.

 

 

 

 

Pop Culture and Social Media

Mikayla Pevac: Junior, English

Katelynn Fell: Junior, Psychology and Women and Gender Studies

 

Pop culture and social media are both vital parts of today’s culture. Without pop culture we would not have our communal obsession with Leonardo DiCaprio finally winning an Academy Award203. Without social media we would lack resources that allow us to goof off at work or during class. Pop culture inspires a sense of community amongst its followers, and social media builds relationships that overcome physical limitations. This chapter will discuss both topics, pop culture and social media, in the context of American society. Because of the breadth of each topic, however, we, the authors, thought it best to discuss only some of the subtopics involved in these overarching areas. The pop culture section covers what exactly pop culture is, a brief history of the subject, an overview of some of the theories used to study pop culture, and the influence of pop culture on gender, race, and politics. The social media section touches on the history of social media; what exactly social media is; its effects on society (privacy, polarization, and job implications); abuse via social media (the sexualization of women, sexting, and trolling); and the general effects of social media (universal accessibility and validity problems). The chapter will end with a connection between social media and artificial intelligence—the subject matter of the next chapter.

 

What is Popular Culture?

When one thinks of pop culture, or popular culture, some of the things that come to mind are television, comic books, video games, clothes, music, and movies. The actual definition of pop culture, from The Oxford Dictionary, is “culture based on the taste of ordinary people rather than an educated elite” (Oxford Dictionaries). “However, the concept “popular culture” belies a simple definition. It has been the subject of debates for three hundred years and has changed, for example, with Romanticism204, industrialization205, Marxism206, American conglomerate culture207, and identity politics208” (Jenkins, McPherson, and Shattuc 27). This means that the history of pop culture is long and complex. While the history of pop culture will be touched upon, this is not by any means a full, extensive history.

The origin of the term popular culture seems to be somewhat up in the air. One source states that the term was first used “in 1876 by a British statesman bemoaning the increasing knowledge gap between the lower and higher classes” (“What is Pop Culture?”). A more reliable source, however, notes that “Burke credits the German philosopher J. G. Herder with the creation of the term “popular culture” … He thus proclaimed a division between popular and elite culture” (Jenkins, McPherson, and Shattuc 28). That was around 1778, a century before the British statesman. When researching the two words separately, “popular” and “culture,” it seems that they could even contradict one another. Culture has had many different interpretations throughout history; ultimately, however, “’culture’ signifies the cultivated or more elite realm of the educated classes as opposed to the debased world of the lower classes, the realm of the popular” (Jenkins, McPherson, and Shattuc 27). Popular has meant “’belonging to the people.’…this definition always carried a sense of “low” or “base” …this pejorative209 meaning remains alongside the newer, modern meaning of “well-liked” or ‘widely liked’” (Jenkins, McPherson, and Shattuc 27). So it is interesting that the word “culture,” which connotes an elite air, is coupled with “popular,” which connotes such a low feeling. It is almost as if those in the lower classes were trying to demean the high-class word culture by coupling it with the lower-class popular. Now, however, people tend to think of pop culture as “culture actually made by people for themselves” (Strinati 3).

 

History of Popular Culture

When discussing the history of popular culture, it is difficult not to consider how it appeared in other countries, in order to see how it was born and evolved before it arrived in the States. When discussing the influence of pop culture, however, it is necessary to narrow the scope to a single country, the United States; writing about pop culture’s influence on the whole world would have no end in sight. To begin with, pop culture was born from the hierarchical gap between the many middle-class and poor people and the small percentage who were “high class” or rich. “Mass entertainment216 may have begun as the democrats’ revenge against the elites they despised” (Ashby 18). People did not want to continue to follow in the footsteps of those who claimed to be of a “higher class,” or “elite,” compared to them. Because of that, “popular culture can be understood as a part of the growing Romantic backlash against a number of converging influences…these influences included the cold formalism of Classicism217, the distant rationalism of the Enlightenment218, and the inhumanity of industrialization” (Jenkins, McPherson, and Shattuc 29). As written earlier, it is debated who coined the term popular culture first, the German philosopher J. G. Herder or an unnamed British statesman. Whichever of the two was the creator, neither was from the United States, so one can conclude that popular culture did not begin in America. Taking that into consideration, “The Western world’s first pop culture “superstar” was probably William Shakespeare219. His theater plays are timeless classics, but he wrote them for a mass audience, thus fulfilling pop culture’s requirement of art that is meant to be enjoyed by the masses” (Wertz). This was back in the 16th century.

One of the first instances of American pop culture came in the 1800s in the form of Sam Patch, a man known for jumping off of waterfalls. Many people not of high social standing would gather to watch Sam make his daring jumps as a source of entertainment (Ashby 9). While the elites were out looking at high art and listening to fancy music, commoners were content with watching a man risk his life.

Another good way to look at the history of pop culture is the way in which its content is created. “Popular culture is determined by the interactions between people in their everyday activities: styles of dress, the use of slang, greeting rituals and the foods that people eat are all examples of popular culture. Popular culture is also informed by the mass media220” (Delaney). This means that popular culture is always going to be changing. What is known as popular culture today is completely different from what was known as popular culture in the 1920s or even the 1980s or 90s; what the majority of people are wearing, eating, saying, and watching is never stagnant. That seems to be the most prominent way of looking at how popular culture is created, but there are other interpretations and ideas about how it is determined. Some people think that pop culture has to “rise up from the people ‘below’” or “sink down from elites ‘on high,’” or that it is “rather a question of an interaction between the two” (Strinati 3). Another idea, aside from pop culture being born from people’s interactions with one another, is that its ideas could be “imposed from above by those in positions of power as a type of social control” (Strinati 3). This is not the happiest idea, but it is one that needs to be considered when thinking about pop culture, because if one blindly follows what everyone else is doing or liking, it may not be for the best.

Theories of Popular Culture

In researching pop culture, it becomes evident that a great deal of research has already been done on the subject. That is because it informs and influences everyone’s daily lives. “The emergence and consolidation of popular culture as a subject to be analyzed and taught has meant that it has been assessed and evaluated by a number of different theories221” (Strinati xii). These theories include mass society theory, the theory of mass culture, the Frankfurt School’s theory of modern capitalism, structuralism and semiology, the Marxist theory of political economy, feminist theory, and postmodern theory. It is important to look into these theories, or at least attempt to give a synopsis of each, in order to show the different ways in which people can view pop culture.

Mass society theory starts with the change from close-knit agrarian222 society to more impersonal city life. “The theory argues that industrialization and urbanization223 serve to create what is called ‘atomization’. This defines precisely what is meant by a mass society. A mass society consists of people who can only relate to each other like atoms in a physical or chemical compound” (Strinati 6). Because of this lack of personal connection, people also begin to lack a moral connection, which makes those in urban areas more prone to “turn to surrogate and fake moralities” (Strinati 6). People make up for the lack of personal and moral connections by investing time in pop culture and mass media. This allows them to feel close to people again because pop culture supposedly reflects the views and beliefs of the majority of people. “Mass culture plays a part here in that it is seen as one of the major sources of a surrogate and ineffective morality” (Strinati 6). People can be manipulated by pop culture at this point because there is nothing between them and the mass media to stop them from falling into its grasp.

Mass culture and popular culture sound similar but are very different: “Put simply, we can say that mass culture refers to popular culture which is produced by the industrial techniques of mass production, and marketed for profit to a mass public of consumers” (Strinati 10). For the theory of mass culture, it is imperative that culture be able to sell and be mass produced. This theory also points out the way in which mass-produced culture could threaten the existence of both folk culture and high culture. In this theory, the only future alternative to mass culture is the avant-garde224. Yet not everyone in a mass culture will passively consume what is put in front of them, so to think that the only art that may survive is avant-garde is somewhat unrealistic. Some people are going to question why they have to look at the same things as everyone else, and artistic ability will not die out just because mass-produced culture is being pushed on people. People will still want to create.

Another theory is the Frankfurt School’s theory of modern capitalism225. “The School’s theory argues that modern capitalism has managed to overcome many of the contradictions and crises it once faced, and has thereby acquired new and unprecedented powers of stability and continuity” (Strinati 52). Because of this, people are allowed to satisfy their “false needs226,” which lie within the culture industry: “The culture industry deals in falsehoods not truths, in false needs and false solutions, rather than real needs227 and real solutions” (Strinati 57). Pop culture is allowed to take over the minds of the masses because “it offers the semblance…of resolving problems, the false satisfaction of false needs as a substitute for the real solution of real problems” (Strinati 57). This is a bleak theory, and it is heavily criticized for its “attempt to maintain a distinction between false and true needs” (Strinati 70). Like the theory of mass culture, this theory does not seem feasible in reality, because people most likely would not put their false need for watching television above their true need for happiness.

To begin writing about structuralism and semiology as they apply to pop culture, it is necessary to define the two terms. “Structuralism has been defined as a theoretical and philosophical framework relevant to the social sciences as a whole, which stresses the universal, causal character of structures. Semiology has been defined as the scientific study of sign systems228 such as cultures” (Strinati 78). One way in which structuralism can be applied to pop culture is to look for the structure in popular works that allows them to be so successful. An example would be the James Bond229 franchise: there is a set structure to each book, and consequently each movie, which appeals to a mass audience and allows the franchise to be such a success (Strinati 93). Another type of story that may not look like it shares much with James Bond, but has a similar structure that appeals to the masses, is the fairy tale. “They [both] express a universal structure of basic oppositions which…will ensure success. Both the Bond novels and fairy tales are successful because they are universal in their underlying connection with the eternal conflict between good and evil” (Strinati 94).

Semiology is understood in pop culture in that “material reality can never be taken for granted. It is always constructed and made intelligible to human understanding by culturally specific systems of meaning” (Strinati 97). This means that objects can mean different things depending on the culture in which they are interpreted, and this interpretation determines how the reality in that culture is made up and viewed. One criticism of both structuralism and semiology is that neither has a good way of validating its claims through research. With structuralism, how can one test whether it is the structure that makes something successful? And with semiology, how can a claim like “roses signify passion” be validated? (Strinati 108-110). Both are still important, however, in that they are ways in which people look at pop culture, so they need to be considered when thinking about how society is influenced by it.

Central to the Marxist theory of political economy is Marx’s230 concept of ideology. His thinking seemed to fluctuate, but one consistent idea can be nailed down: “the predominant ideas common to a capitalist society, including its popular culture, are those of the ruling class” (Strinati 117). This means that the economy is also run by those in the ruling class. The Marxist theory of political economy is one of the only theories of pop culture that pays any attention to how the economy affects pop culture. “The political economy approach highlights some of the structural conditions under which popular culture is produced, distributed and consumed, and it has to play a key role in any adequate sociological analysis of popular culture” (Strinati 126-127). This theory allows people to look at pop culture through an economic perspective.

Feminist theory of pop culture deals with how women and men are portrayed in different areas of pop culture. One of its most important points is the idea of patriarchy.231 “The concept of patriarchy describes a social relationship in which men dominate, exploit, and oppress women” (Strinati 180). It can also deal with “the unequal power relationship between men and women” (Strinati 180). This is an important aspect “in determining how men and women will be represented in pop culture, and how they will respond to those representations” (Strinati 180). Assuming patriarchy does exist in society, men will probably tend to be portrayed in a more favorable light than women. This can impact pop culture by sending messages to the public about how they should act toward women or act as women.

The final theory used to study popular culture is postmodernism. Postmodernism itself “describes the emergence of a society in which the mass media and popular culture are the most important and powerful institutions, and control and shape all other types of social relationships” (Strinati 205). Basically, postmodern theory posits that pop culture and the mass media will at some point no longer merely reflect or warp reality; instead, the only reality available will be that of the mass media and pop culture. In a way, pop culture and mass media will create the reality that we perceive and live in.

These short overviews of the theories of popular culture make it clear that there are various ways to interpret pop culture. Our understanding of pop culture’s influence on society is filtered through these theories. Each theory is a different lens for viewing pop culture, and each lens affects the viewer and society.

 

Popular Culture’s Influence on Society

Popular culture has influence; the specific areas examined in this section are gender, race, and politics. Because of the broad nature of these categories, the discussion will be kept limited to pop culture’s influence in the United States, although at times it may be necessary to stray to its influence in other countries.

Many of the theories described earlier will be addressed again in the upcoming sections, because they help explain why gender, race, and politics are influenced so much by pop culture. For most of this section, the theory used as a lens to view society is the mass society theory of pop culture. This is because America is perceived as a united country even though it is a large country with many polarized cities. Pop culture, which is believed to indicate the majority of people’s interest in something, is a source of connection for people around the United States, and in turn makes them feel more united as a country.

Gender

Before digging into all of the ways in which pop culture influences gender, it is important to provide the definitions for sex and gender because those sometimes get a bit fuzzy:

It is important to begin by distinguishing sex and gender as those terms are often mistakenly taken as synonyms. Sex is biological; it is physiologically what prompts us to be assigned as male or female. Gender is socially constructed; it consists of the ideas we have about masculinity and femininity and how we apply these notions to people based on their designated sex assignment. (Leavy and Trier-Bieniek 2)

For ease of discussion, sex will be treated in this section as a binary system232 consisting of male and female, although intersexuality is also an issue in society today. It may be difficult for some to understand that gender is constructed by society and by things such as pop culture; for a person inside the society that is building these ideas about gender, it becomes natural to think of gender in this socially constructed way. In the same way that people see sex as binary, “social constructions create [a] gender binary where masculinity and femininity are seen as polar opposites. Some feelings, behaviors, preferences, and skills are attributes to females and others to males” (Leavy and Trier-Bieniek 4). Anything a person does that does not fit their assigned sex and gender binary is seen as wrong.

In order for gender to be socially constructed, it must be taught, or socialized, “through interactions with people and cultural texts and objects” (Leavy and Trier-Bieniek 4). Those whom people interact with or are influenced by at an early age are the ones who teach them gender norms and the consequences of deviating from those norms. One of the ways in which pop culture plays into gender is that the mass media is one of the places children learn gender norms, through things such as Disney princesses, Hot Wheels, Barbies, Nerf guns, My Little Pony, and commercials depicting the toys that young girls and boys should be playing with.

Although the media and pop culture make up only one of the forms of socialization toward gender, this form is different from the others “because of its far-reaching grasp… and also because we often elect to spend our leisure time participating in, generally consuming, pop culture. We are more likely to view it as fun…and therefore may fail to interrogate the messages of pop culture and how they are impacting us” (Leavy and Trier-Bieniek 13). It is important to analyze the relationship between gender and pop culture because people themselves might not be able to identify the messages being relayed to them by the media. One of the first red flags when looking at gender and pop culture is that “media culture is overwhelmingly produced by men” (Leavy and Trier-Bieniek 15). The statistics for women in the media are very low; the highest percentage of women in any area of media cited by Leavy and Trier-Bieniek was seventeen percent. The gender gap in media production means there is a lack of gender balance, leading to gender stereotypes233 being reinforced. These stereotypes can lead to the internalization234 of gender constructs, which may cause women to feel the need to be thinner, or men to feel they should not cry.

Pop culture is in itself a gendered institution, with men running most of the behind-the-scenes production. This leads to pop culture displaying and adding justification to gender stereotypes. People then internalize these stereotypes, or binary constructions of gender, and in turn try to mold themselves into the gender pop culture shows them they should be.

Race

One of the most recent examples of the influence of pop culture on race is the 2016 Academy Awards. For the second year in a row, all of the nominees in the top four categories (actress in a supporting role, actor in a supporting role, actress in a leading role, and actor in a leading role) were white. This was surprising, as many movies had been released the previous year with actors, actresses, and directors of color who were worthy of, at least, a nomination. There was outrage on social media from people throughout the country at the lack of diverse Oscar235 nominees. The 2016 Academy Awards reached 34.4 million viewers when it aired (“Oscar Ratings”). So when host Chris Rock’s monologue shamed not only the Academy for its lack of diversity but also the film industry for its lack of opportunities for people of color to act in movies, people were watching and getting the message. This shows the power of the media and pop culture to influence the way people think about race and diversity, and possibly open their eyes to see where this diversity is lacking.

One author disagrees, writing that “without a change in policy that creates opportunity for black Americans, those examples236 may serve mainly to remind viewers of how little substantive improvement there has been in remedying discrimination and economic disparity” (Gittell). Another author talks about the invisibility of race in pop culture: “In an age when race seems increasingly to be a cultural pressure point and to be ever present, it also retains an opacity237, a now-we-see-it, now-we-don’t quality that makes adequate explanations of its workings in today’s society difficult to produce” (Jenkins, McPherson, and Shattuc 519). If one thinks of pop culture in terms of mass society theory, this is troubling because media and pop culture supposedly reflect the views and beliefs of the majority of people. It is also troubling from the postmodern perspective: in this case, there is no reality besides the reality created by the media and pop culture, which would be either blind to racial problems or full of empty promises of racial equality.

Pop culture should be a way for racial issues to be addressed so that those in the public who consume pop culture will be aware of the issues as well. In the case of the Academy Awards, this seemed to work by highlighting all of the people of diverse backgrounds who did not get nominated. This caused the Academy to take action by adding 322 diverse new members to the Academy of Motion Picture Arts and Sciences, the organization behind the Academy Awards (Keegan). Most of the time, however, pop culture is passively consumed by those who do not want to take time to see the message about race behind the television show they are watching, and stereotypes played out on screen by actors who are not of the race they are portraying can reinforce the stereotypes people may hold about that race.

In some of the same ways that people internalize their gender roles, people may internalize ideas and stereotypes presented in pop culture about their race. Pop culture permeates every part of society, and around every corner there will be some representation of people of color, whether accurate or not. As with gender, it is a matter of getting those with authentic experience on the subject behind the scenes of pop culture and the media, so that what is shown to and influences society is real rather than a fictional stereotype.

Politics

While the previous two areas touched upon how pop culture influences the individual through society by reinforcing or breaking stereotypes, this section will focus on how pop culture has changed politics. In 1960, presidential debates were aired on television for the first time. The debates were between Nixon and Kennedy. Nixon lacked Kennedy’s visual stage presence during the debates, and because of this, Kennedy took the presidency. Although this was not the only variable in Kennedy’s winning the election, it was one of the ways he secured the presidency. It is important to note that those who did not watch the debates on television but listened to them on the radio thought the first of the four debates was a tie, and some even claimed Nixon had won (“The Kennedy-Nixon Debates”). This goes to show the power of media in politics. Jumping from 1960 to the present day: because this is an election year, there is plenty to talk about in politics related to pop culture. Politics and pop culture are intertwined:

Politicians need to care about popular culture because it is one of the common bonds that tie increasingly segmented Americans together.  Whether you live in a red state or a blue state238, or an urban or rural environment, you are aware of popular culture.  And a politician who can skillfully navigate the use of pop culture references and appearances in pop culture venues can increase his appeal to the American public. (Rubin).

Pop culture is such a part of society that those wishing to govern the society must be aware of pop culture to connect with those they want to govern. It is a way of connecting with the voters, like pop culture itself is a way of connecting people to each other through a common interest.

In the current election, several of the candidates have made attempts to use pop culture to gain voters. Throughout her campaign, Hillary Clinton has tried to appeal to young voters. One of the ways she has tried to do this is by taking selfies with celebrities like Kim Kardashian and Katy Perry (Tatum). Saturday Night Live (SNL) has been another pop culture hot spot for candidates trying to reach out to voters. Hillary Clinton made an appearance on SNL in a skit where she appeared as a bartender trying to get her political ideals across in a subtle, cool way. Another candidate who appeared on SNL was Donald Trump. He hosted an entire episode, performing different skits and jokingly showing “what a meeting inside the Oval Office with Trump as president might look like” (Tatum). SNL has always used politics as a source of comedy, and it is a pop culture icon that has been around for more than 40 years. This exemplifies the influence of pop culture on politics: people tune into SNL every week, in addition to their nightly news, to see a parody of how a political figure has messed up or done something right.

People’s idea of an ideal presidential candidate is also shaped by pop culture, and this can affect how candidates campaign. Some candidates campaign in an ostentatious way because people see stereotypes in the media and pop culture about different types of people and believe them, and this type of candidate caters to those people. Candidates who recognize that the stereotypes are the problem with pop culture take a step back, address the people who see these problems as well, and mold their campaigns to show they will try their best to fix them.

Pop culture is a very important aspect of politics because it is an area that allows candidates and those in office to connect with the American people. One article stated that, “it is likely that the candidate most able to use pop culture will come out on top” (Rubin). If pop culture can influence a campaign that much, then it is a crucial factor connecting people together.

 

Conclusion

Popular culture started out as those beneath the elite wanting to claim some form of entertainment for themselves. It has evolved into something that affects everyday life in all kinds of ways, and it is no longer only for those of “lower class” but for those of higher standing as well. There are many different theories that attempt to understand how pop culture interacts with the world around us, and these theories can be put to use when looking at how pop culture influences gender, race, and politics. Gender and race can be examined along the same lines. Both women and people of color are highly underrepresented in the production stages of pop culture and are lacking in lead roles in movies, television shows, comics, and books. This leads to pop culture presenting women and persons of color stereotypically or just plain wrongly, which in turn gives those consuming it the wrong idea about that gender or race and reinforces stereotypes. Pop culture, however, can also be an outlet to show where society needs to improve and to lead people in that direction. Politics is influenced by pop culture because pop culture is what connects Americans, so those wanting to hold office have to be up on pop culture in order to connect with voters. They can do this by referencing things in pop culture, appearing in the media, and using celebrities in their campaigns. Without pop culture, the world would be a very different place.

 

Transition from Pop Culture to Social Media

 

Pop culture and social media have both been facilitated by online culture. While pop culture was around before the invention of the Internet, social media has amplified it. Where once pop culture was limited to ‘big name’ celebrities or huge social events, now there are famous children, dogs, cats, and average men and women who can get their ‘15 minutes of fame’ by simply posting a funny video to Facebook. Social media’s rise to prominence, however, depended on the Internet; without the ability to connect with people through computers at any time, social media would never have been successful. Both pop culture and social media feed off of the same aspect of modern culture: the need to form a community with those similar to oneself.

 

History of Social Media

This section is based on an American perception of social media and is only a snapshot of the topic—many subtopics and arguments are left out, but the main issues surrounding social media are discussed or touched upon.

We live in the Information Age239, a time when knowledge is available to everyone. Where once only the elite could afford the downtime needed to learn, now, at the click of a computer mouse, even the most uneducated can find himself or herself immersed in the cutting-edge information that makes up the Internet240.

In order to best understand the origin of social media, the original draw and popularity of the Internet must first be explained. Web 1.0241 enabled individuals, groups, and companies to communicate with anyone, anywhere, at any time (Baltzman 94). Real-time communication changed the entire realm of relationships: instant messaging enabled a person to be in a completely different place than another person and yet still receive messages in real time. This new kind of communication left email and ‘snail mail’ in the dust.

In the first decade of the Internet’s life, starting in the 1990s and going until the early 2000s, “most online content resembled traditional published material: the majority of web users were consumers of content, created by a relatively small amount of publishers” (Agichtein, Castillo, Donato, Gionis, and Mishne). The Internet consisted mostly of content shared between professionals, where only trained persons could access it. Once the early 2000s hit, however, today’s system of user-generated content “became increasingly popular on the web: more and more users participated in content creation, rather than just consumption.” Modern social media falls under the heading of ‘content creation.’ Take Facebook as an example: the entire site functions off of individuals ‘posting’ material that is then liked, commented on, and shared throughout the site and, if it is popular enough, ultimately circulated throughout other social media platforms.

The term social media is credited to Chris Shipley “and is used to describe online tools and utilities that allow: communication of information online, participation, and collaboration” (Newson, Houghton, and Patten 49). Social media has changed the tools and behaviors originally associated with Internet communication. Where once Internet users utilized online resources just to access information, now, in the social media-focused environment, the Internet has become a breeding ground for creativity: “The web is a stage on which new kinds of online lives are being acted out” (Newson, Houghton, and Patten xi). The modern landscape of the Internet “reflects a progressive shift from the physical economy towards a knowledge economy” (Newson, Houghton, and Patten 115). This transformation in the usage of the Internet is not a surprise, however:

“After all, the Internet started out as nothing more than a giant Bulletin Board System242 (BBS) that allowed users to exchange software, data, messages, and news with each other. The late 1990s saw a popularity surge in homepages, whereby the Average Joe could share information about his private life; today’s equivalent would be the weblog, or blog. The era of corporate web pages and e-commerce started relatively recently with the launch of Amazon and eBay in 1995, and got a right ticking-off only 6 years later when the dot-com bubble burst243 in 2001. The current trend toward Social Media can therefore be seen as an evolution back to the Internet’s roots, since it retransforms the World Wide Web to what it was initially created for: a platform to facilitate information exchange between users” (Kaplan and Haenlein).

Overall, the Internet has gone through an evolution in the last few decades that has created a completely different entity. Most likely the Internet will continue to evolve in order to adapt to the needs of the users; therefore, in the future the Internet may serve a totally different function than it does today.

What is Social Media?

Social media is a beast of a topic. On one hand, social media is presented as just another modern tool spawned by the dawn of Web 2.0244; on the other hand, social media is a highly personal mode of communication for millions of people around the world. In a nutshell, “Social media employs mobile and web-based technologies to create highly interactive platforms via which individuals and communities share, co-create, discuss, and modify user-generated content” (Kietzmann, Hermkens, McCarthy, and Silvestre). There are multitudes of different online sites that are considered to be social media; the list goes on and on. The top ten most popular social networking sites, according to an online eBusiness ranking site, eBizMBA, are:

1. Facebook: with 1,100,000,000 estimated unique monthly visitors.

2. Twitter: with 310,000,000 estimated unique monthly visitors.

3. LinkedIn: with 255,000,000 estimated unique monthly visitors.

4. Pinterest: with 250,000,000 estimated unique monthly visitors.

5. Google+ (Google Plus)245: with 120,000,000 estimated unique monthly visitors.

6. Tumblr: with 110,000,000 estimated unique monthly visitors.

7. Instagram: with 100,000,000 estimated unique monthly visitors.

8. VKontakte (VK)246: with 80,000,000 estimated unique monthly visitors.

9. Flickr247: with 65,000,000 estimated unique monthly visitors.

10. Vine248: with 42,000,000 estimated unique monthly visitors.

As exemplified by the range in specialties of the social networking platforms above, “There currently exists a rich and diverse ecology of social media sites, which vary in terms of their scope and functionality” (Kietzmann, Hermkens, McCarthy, and Silvestre). For example, Facebook is all about connecting with those around you on a digital level, while LinkedIn is focused mainly on providing a place for potential employers and employees to exchange information with ease: “Some sites are for the general masses (like Facebook),…[while] other sites, like LinkedIn, are more focused professional networks.” Sharing via the Internet has become the norm over the last decade, “Media sharing sites, such as MySpace, YouTube, and Flickr, concentrate on shared videos and photos.” The reason that so many of the sites are so popular is their accessibility. Whether you are the president of the United States of America or a college student, anyone can author a blog or participate in microblogging249.

Microblogging offers a platform for people to offer “short status updates of what users are doing, where they are, how they are feeling.” The Internet has been transformed into a place that validates the everyday thoughts or notions that a person has by allowing them to share their ‘updates’ with a global audience: “The emergence of Internet-based social media has made it possible for one person to communicate with hundreds or even thousands of other people about” a variety of topics ranging from the current American political race to which is the best dry cleaning service in Tulsa, Oklahoma (Mangold, Glynn, and Faulds).

According to Kietzmann, Hermkens, McCarthy, and Silvestre, most social media is built upon a framework of seven building blocks:

1. Identity: “The identity functional block represents the extent to which users reveal their identities in a social media setting. This can include disclosing information such as name, age, gender, profession, location, and also information that portrays users in certain ways. For instance, the presentation of a user’s identity can often happen through the conscious or unconscious ‘self-disclosure’ of subjective information such as thoughts, feelings, likes, and dislikes.”

2. Conversations: “The conversations block of the framework represents the extent to which users communicate with other users in a social media setting. Many social media sites are designed primarily to facilitate conversations among individuals and groups. These conversations happen for all sorts of reasons. People tweet, blog, et cetera to meet new like-minded people, to find true love, to build their self-esteem, or to be on the cutting edge of new ideas or trending topics. Yet others see social media as a way of making their message heard and positively impacting humanitarian causes, environmental problems, economic issues, or political debates.”

3. Sharing: “Sharing represents the extent to which users exchange, distribute, and receive content. The term ‘social’ often implies that exchanges between people are crucial. In many cases, however, sociality is about the objects that mediate these ties between people; the reasons why they meet online and associate with each other.”

4. Presence: “The framework building block presence represents the extent to which users can know if other users are accessible. It includes knowing where others are, in the virtual world and/or in the real world, and whether they are available. In the virtual world, this happens through status lines like ‘available’ or ‘hidden.’ Given the increasing connectivity of people on the move, this presence bridges the real and the virtual.”

5. Relationships: “The relationships block represents the extent to which users can be related to other users. By ‘relate,’ we mean that two or more users have some form of association that leads them to converse, share objects of sociality, meet up, or simply just list each other as a friend or fan. Consequently, how users of a social media platform are connected often determines the what-and-how of information exchange. In some cases, these relationships are fairly formal, regulated, and structured.”

6. Reputation: “Reputation is the extent to which users can identify the standing of others, including themselves, in a social media setting. Reputation can have different meanings on social media platforms. In most cases, reputation is a matter of trust, but since information technologies are not yet good at determining such highly qualitative criteria, social media sites rely on ‘mechanical Turks’: tools that automatically aggregate user-generated information to determine trustworthiness.”

7. Groups: “The groups functional block represents the extent to which users can form communities and sub-communities. The more ‘social’ a network becomes, the bigger the group of friends, followers, and contacts.”

All of these ‘building blocks’ come together to explain the allure of social media. Where once it was sometimes difficult to find like-minded people to build relationships with, now, with the aid of the Internet, virtual connections have become a valid mode of intimate communication between people.

Effects on Society of Social Media

Social media has pervaded most aspects of the lives of Americans and has therefore had several profound impacts on modern society. This section will focus on three areas where social media has made a significant impact: invasion of privacy, polarization in beliefs, and implications for jobs.

Privacy

When the first instinct for a person becomes to share his or her every personal thought on social media, where everyone else can see it, privacy250 becomes a very controversial issue. For example, on December 23, 2015, a group of Black Lives Matter251 protestors promoted a gathering over Facebook to be held at the Mall of America in Bloomington, Minnesota. The protesters had peacefully assembled at the Mall of America before, without reports of property damage or aggressiveness. It was later reported, however, that the security team at the Mall of America “had used social media to monitor the [Black Lives Matter] group, even creating a fake Facebook profile to ‘Like’ the Black Lives Matter Minneapolis Facebook page and view the profiles of specific activists” (Kaufman). The information gathered from social media was then used to seek restraining orders against the activist group in an attempt to suppress the protest. Social media was thus used as a tool by which one party infiltrated another’s trust in order to enact a specific action. Social media operates in such a grey area: should the privacy of Internet users be prioritized? Or do safety and legal issues always come first?

With the new surveillance devices that have been developed, there have been “a variety of side effects for law enforcement officials, including a higher bar to obtain general warrants and Supreme Court challenges to the new surveillance devices. Currently, law enforcement agents must go through a process more complicated than obtaining a general search warrant in order to tap communications” (Lind and Rankin 21). With the current climate as it is,

“Facebook and other social-networking sites make it easy for an individual to share as much or as little information about themselves as they might like. This information might be shared with just a certain group of people, all of the users’ friends, or the entire Facebook community, and because some Facebook information is searchable through search engines, this information could be made available to anyone with access to the Internet. This then ‘demonstrates the difficulty of ensuring that anything is truly ‘private’ when posted on Facebook” (316).

Social media users should take protective measures. Users who familiarize themselves with the privacy settings offered by each of the social media platforms can greatly reduce their privacy concerns. There will always be some amount of risk associated with an online presence, however—by freely expressing yourself on the Internet, you automatically open yourself up to abuse from other people also exercising their right to freedom of expression.

Polarization

The best way to talk about the polarization252 in beliefs caused by social media is by offering an example in the form of Wael Ghonim253. Ghonim gave a TED Talk in December 2015 that discussed how, during the Egyptian Revolution of 2011254, social media played a role in how the people in Egypt (and around the world) expressed their opinions about the revolution. The talk, entitled “Let’s design social media that drives real change,” focuses upon Ghonim’s personal experience as a social media presence during the revolution as well as discussing social media as a whole.

Ghonim believes that there are five critical challenges facing today’s social media:

1. “We (the users of social media) don’t know how to deal with rumors. Rumors that confirm people’s biases are now believed and spread among millions of people.

2. We create our own echo chambers. We tend to only communicate with people that we agree with, and thanks to social media, we can mute, un-follow and block everybody else.

3. Online discussions quickly descend into angry mobs. All of us probably know that. It’s as if we forget that the people behind screens are actually real people and not just avatars.

4. It has become really hard to change our opinions. Because of the speed and brevity of social media, we are forced to jump to conclusions and write sharp opinions in 140 characters about complex world affairs. And once we do that, it lives forever on the Internet, and we are less motivated to change these views, even when new evidence arises.

5. Today, our social media experiences are designed in a way that favors broadcasting over engagements, posts over discussions, shallow comments over deep conversations. It’s as if we agreed that we are here to talk at each other instead of talking with each other” (Ghonim).

Each of these challenges contributes to the rise in polarization of beliefs. Polarization in beliefs can lead to prejudice, discrimination, close-mindedness, the perpetuation of stereotypes, and many other negative social practices. Technology did not cause this inclination toward polarization, but it has facilitated a rise in the extremity of the issue. Social media feeds the need for drama and extremism: for example, Ghonim says that he knows “for a fact [that] if [he] writes a post that is more sensational, more one-sided, sometimes angry and aggressive…more people see that post,” and he gets more attention. And that is what is at the root of the problem: the seeking of attention. No longer can people go without validation from others for very long—social media acts as a vacuum where people try to one-up each other in order to gain notoriety.

Ghonim works “on figuring out how technology could be part of the solution, rather than part of the problem.” He sees quality as the answer:

“But what if we put more focus on quality? What is more important: the total number of readers of a post you write, or who are the people who have impact that read what you write? Couldn’t we just give people more incentives to engage in conversations, rather than just broadcasting opinions all the time? Or reward people for reading and responding to views that they disagree with? And also, make it socially acceptable that we change our minds, or probably even reward that? What if we have a matrix that says how many people changed their minds, and that becomes part of our social media experience? If I could track how many people are changing their minds, I’d probably write more thoughtfully, trying to do that, rather than appealing to the people who already agree with me and ‘liking’ because I just confirmed their biases.”

Polarization should no longer dominate personal expression. In essence, “we need to rethink today’s social media ecosystem and redesign its experiences to reward thoughtfulness, civility and mutual understanding.”
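Ghonim’s proposed “matrix” is, in software terms, a ranking metric. The short Python sketch below is purely illustrative (the field names, weights, and numbers are our own invented placeholders, not anything Ghonim or any platform has specified); it contrasts a feed ranked by raw engagement with one ranked by signals of thoughtful engagement:

# Toy illustration of Ghonim's idea: rank posts by the quality of the
# engagement they produce (replies across disagreement, self-reported
# mind-changes) rather than by raw reach. All fields and weights are
# hypothetical.
def engagement_score(post):
    return post["likes"] + post["shares"]          # today's attention-driven ranking

def quality_score(post):
    return (2.0 * post["minds_changed"]            # reward persuasion across disagreement
            + 1.0 * post["cross_view_replies"]     # reward engaging with the other side
            + 0.1 * post["likes"])                 # broadcast reach barely counts

posts = [
    {"id": "sensational rant",  "likes": 900, "shares": 400, "minds_changed": 1,  "cross_view_replies": 5},
    {"id": "thoughtful thread", "likes": 120, "shares": 30,  "minds_changed": 25, "cross_view_replies": 60},
]

print(max(posts, key=engagement_score)["id"])   # -> sensational rant
print(max(posts, key=quality_score)["id"])      # -> thoughtful thread

Under the first scoring rule the sensational post wins; under the second, the post that actually changed minds rises to the top. The point is not these particular weights but that ranking rules are design choices, and different choices reward different behavior.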

Job Implications

The realm of business has been greatly changed by the pervasiveness of social media, and nowhere is this more evident than in the changes that have occurred in the workplace over the last couple of decades. The workplace has evolved from a time when all of a company’s activities were catalogued on paper to a system that operates mostly in cyberspace255.

One of the most important ways companies use social media is to find applicants to grow their organizations. Social media has become one of the “most popular online tools for companies to recruit and screen potential employees” (Social Media in the Workplace). A 2011 study by the Society for Human Resource Management “found that 56% of companies use social media to find candidates, up from 34% in 2008. In addition, in 2011 more than 25% of employers went online to check applicants’ profiles on social-media platforms.” With this increased reliance on social media to find job applicants comes the tendency for companies to screen candidates before a possible employee ever walks into the building. By tapping into social media, organizations can collect “personal details [about job applicants] such as a candidate’s race, gender or sexual orientation — potentially violating anti-discrimination laws256.” Social media is still so new that few laws and regulations address the use of social media for purposes it was never intended to serve.

Social media remains a grey area with respect to what can and cannot be used against an employee or potential employee. Right now, “Employers are well aware that they have the power to monitor computer usage, cell phone activity, and even content posted on social media sites” (Lind and Rankin 124). The law usually comes down on the side of the company or organization when deciding whether social media or other online expression can be used against an employee: “The law, at this time, does not keep an employer from being able to use information gotten from the Internet about an employee against them” (Lind and Rankin 123). Not only are employers “using these tools to screen potential employees,” they are using them as reasons to fire employees (124). This trend in workplace monitoring only became possible, however, because of the move toward online communication in the first place: “The digitalization of work has led to the increased abstraction of the job,” meaning that where once everything had to be done in a physical setting, abstract information can now be processed like never before. This introduction of virtual work allows workers “to visualize otherwise abstract information in entirely new ways” (Coovert and Thompson 5). Yet a paradox has formed: the same digital tools that free employees from the office also let employers institute new ways of making sure their employees are behaving as the company approves.

On the other side of the issue, social media offers great tools for workers and companies alike: there are now even jobs and internships devoted solely to handling a company’s social media presence. “More than seven in 10 employed people are active social networkers, Social Networking has become the number one use of the Internet. There are more than two billion social connections generating more than three billion expressions per day. Overall, we know that 94% of people use social networking to learn, 78% to share knowledge and 49% to access needed expertise” (Hebner). The switch from a global economy to a knowledge economy257 has enabled people to draw on the strength of collective knowledge to “rapidly find answers to questions, act with greater confidence, and influence others in entirely new ways.” Hebner identifies three main ways that social media has led to better business:

1. “Social has become the new ‘production line’ for business. Gone are the days when an employee had to be tied to a desk or a chair in the office to get his or her work done. We’re in a new era, marked by a shift in the way that people connect and get work done. Innovation does not need to take place in isolation: colleagues can use social tools to collaborate and learn directly from the marketplace. At the same time, sellers are using social tools to connect with the right experts to meet and anticipate clients’ needs. Managers can launch digital dialogues to surface ideas, knowledge and resources across departments, languages and from around the world. We are now able to engage with people, whether they are employees or consumers, as individuals and not as just part of segments. Social businesses258 that embrace this phenomenon are able to build expertise by making knowledge more accessible; fuel innovation by crowdsourcing259 ideas; improve productivity with better collaboration; and expand customer sales, loyalty and advocacy by providing exceptional digital experiences.

2. Social is the ‘human face’ of data, creating new intelligence for driving business outcomes. By creating an environment that uses social analytics and behavioral sciences with big data, personalized at every touch point, customers and employees can make better, more confident decisions. Such analysis can also help to better understand and address behavioral, social sentiment or performance-based trends. Human Resources executives are using social networking analytics to spot gaps in expertise and connect with job seekers to fill them. Companies can re-invent the way that they recruit, motivate and retain the best talent by using behavioral sciences, social analytics and comparative benchmarks. In other words, replacing gut-instinct guesswork with evidence-based decisions. Marketing executives are using social analytics to transition marketing into a ‘service’ that people actually value, because the content is personalized to their needs and desires. Product executives are using social analytics to better understand what clients want, how they use their products and how to better serve them. Senior executives are using data analytics to glean insights into their workforces and establish open, transparent communication to make employees more connected and receptive to change.

3. Social is the future of how modern enterprises work. Social businesses have new styles of leadership that are more collaborative and responsive to customers. Social businesses are responsive, agile and responsive — and most importantly, transparent and authentic. The best examples are creating a culture of mutual respect, guided by social governance policies that employees understand and comply with. At the same time, they are applying security and privacy controls built upon the flexibility of a mobile and cloud infrastructure to ease adoption and automate quickly.”

Social media is neither good nor bad: it is simply a tool, one that can be used to strengthen a company’s connections with others or to spy on employees. As the saying goes: ‘With great power comes great responsibility.’

 

Abuse via Social Media

One very controversial issue today is abuse260 via social media. Accompanying the freedom that social media offers comes the opportunity for others to take advantage of and inflict harm on those who have made themselves vulnerable on the Internet. This section will discuss social media abuse in three contexts: the sexualization261 of women, sexting, and Internet trolling.

 

Sexualization of Women

In a culture that celebrates physical beauty, the impossible beauty standards exemplified by magazines like Vogue or Sports Illustrated are the norm. The “contemporary sexualization of culture is said to have created normative expectations whereby women and girls may be expected to perform sexualized subject positions.” For example, several articles have been circulating about Russian supermodel Kristina Pimenova, who is only nine years old. A controversy (spurred on and spread by social media) began when, “Earlier this year, Woman Daily Magazine named Pimenova ‘the most beautiful woman in the world’ — a title that…is generating controversy among adults who say she’s too young to be a ‘supermodel’ — and, more importantly, that she’s being sexualized by many of her ‘fans’” (O’Neil). Many people feel that even dubbing Pimenova a ‘woman’ is wrong. Overall, “There is a body of literature that sees young women as becoming increasingly sexualized and pornified in post-feminist culture and social contexts where forms of sexualized selfhood are to be performed” (Crofts 17). From a very young age, girls are pummeled by media portrayals (like Barbie dolls) that put more value on a woman’s physical appearance than on her personality or intelligence.

The sexualization of girls and women on all media platforms, including social media, has put pressure on and created expectations for young females to act a certain way. One minority school of thought supports practices such as sexting262 among young women because it sees them as ‘empowering’: empowering in that they allow women to present themselves as objects of beauty. What women actually get from these expectations, however, is the sense that they do not personally measure up. That is why an opposing group feels that these “practices may simply be reinforcing, reproducing and reflective of sexualized and sexist normative expectations.” Given the pressure and coercion behind the actions of young girls who use social media platforms to share compromising content about themselves, “it is legitimate to question whether young women in some instances are able to fully and freely consent to the activity, even when they produce and send the image” (17). In sum, social media acts as a catalyst through which expectations for females are circulated and reinforced in the minds of girls and women of all ages.

 

Sexting

With the invention of social media apps such as Snapchat263 and Tinder264, teenagers and young adults can now communicate without being monitored. Snapchat gained an early reputation as an app for exchanging pictures, often of a sexual nature, that disappear after a few seconds, thereby eliminating the ‘evidence.’ Tinder does take certain precautions to keep people under the age of 18 safe: “Users are able to quickly and easily report and block anyone that engages in offensive or inappropriate behavior on Tinder. [The site] monitors and deletes any profile that violates [their] terms of use. Tinder also requires a double ‘opt-in’ meaning both users need to ‘like’ each other before they can communicate” (Mazzella). While these precautions exist, they do not satisfy the “Adults [who] are worried about the risks associated with children’s exploitation and exposure to explicit images in the context of an unregulated digital [environment]” (Crofts, Lee, McGovern, and Milivojevic viii).

Privacy once again becomes relevant here, because such worried adults leave “[y]oung people concerned that what they understand to be private choices and associations are [being] over-policed” (viii). Research suggests that, “Despite perception in the media and in the community, most sexting participants did not feel coerced or pressured into sending images. Most young males and females in the study reported that they send images ‘as a sexy present’ or to be ‘fun and flirtatious’, and, for some participants, sexting was also experience[d] as a safe way to explore sexuality without physical sexual contact” (ix). So while the lack of supervision on social media worries some, that does not entitle adults to take away young people’s rights to “free expression, association and privacy—[which are] all so important for [young people’s] healthy development and identity as active citizens” (ix). Some may question whether social media is important enough to warrant all this controversy, yet many individuals “see technologies as a vital part of their social life and the building of their identity. As mechanisms for socializing, education, relaxation, gaming, romance or communication between friends and peer groups, new technologies provide a key framework within which young people live their lives” (3). Without social media, the lives of most of the modern generation would be remarkably different. A balance needs to be struck that, on the one hand, protects young people from explicit content and, on the other, upholds those same individuals’ rights to privacy and free expression.

 

Trolling

Trolling265 is probably one of the most reviled forms of social media abuse, and it is also one of the most visible. Consider an example. In August 2015, Target announced a move toward gender-neutral labeling for children’s toys. Some people, feeling that political correctness was getting out of hand, took to Facebook to express their displeasure. That is when “Mike Melgaard came to Target’s defense—in a provocative manner. He created a fake Facebook account and posed as a Target customer service rep—under the name Ask ForHelp, with a bull’s-eye profile pic—and began excoriating the haters with comically sarcastic replies” (Nudd). Melgaard was an Internet troll: by one definition, “a person who likes to disrupt stupid conversations on the Internet” (Phillips 1). While trolls’ behavior is typically “condemned as being bad, obscene, and wildly transgressive,” other schools of thought see trolls as vigilantes of sorts (7). With the single goal of “disrupting and upsetting as many people as possible, using whatever linguistic or behavioral tools as available,” most trolls do what they do in order to make a statement of some sort (2). Trolls pinpoint the nasty parts of society (for example, the nastiness that social media itself can facilitate) and expose the hypocrisy in today’s media.

Trolls can offer “political and cultural critiques” on issues that they deem worthy of public awareness (7). Whether the issue is the Target gender-neutral controversy or criticism of modern media, trolls have established a very specific way of making their views and opinions known. The expression of opinion is normally celebrated in modern society, and indeed “trolling embodies precisely the values that are said to make America the greatest and most powerful nation on earth, with particular emphasis placed on the pursuit of life, liberty and of course the freedom of expression” (8); trolls, however, carry a negative connotation. They are seen by most as “people whose real intentions [are] to cause disruption and/or to trigger or exacerbate conflict for the purposes of their own amusement” (17). Trolls were seen as threats “to the utopian dream of early cyberspace, they gestured to the norms against which their behaviors were safe to transgress—namely that ‘true’ identities do not deceive, that any form of deception undermines community formation, and even more basically, that pure communication is naturally and necessarily preferable to some inauthentic alternative” (16). There is no easy categorization of Internet trolls; the argument over them genuinely has two sides. Some trolls fight for a public good, while others simply find amusement in tormenting people: “trolling is a spectrum of behaviors. Some trolling is aggressive, and meets the legal threshold for harassment. Other forms of trolling…are comparatively innocuous” (23).

 

General Effects of Social Media

Social media is not all bad, though. While the last few sections may have made social media out to be negative, social media has also opened many doors when it comes to accessibility266 of information. Universal access has its own problems, however: the validity267 of information is a concern when anyone can post anything on the Internet.

 

Universal Accessibility

Access to information can change everything. Society has moved from a past in which information was available only to those with access to libraries or universities to a present in which a person can find out just about anything by asking Google. The Internet “has become a mass media vehicle for consumer-sponsored communications. It now represents the number one source of media for consumers at work and the number two source of media at home. The Internet reaches more than 60% of all United States consumers for an average weekly usage rate of more than 100 minutes” (Mangold). In a nutshell, the Internet is here to stay. The life of a modern human being is touched by technology hundreds of times a day. People no longer typically read the daily newspaper or tune in to their favorite radio stations; instead, they are “demanding more control over their media consumption. They require on-demand and immediate access to information at their own convenience” (Mangold). Sites such as Netflix are modifying the way the modern person accesses material: waiting for weekly installments of television shows has been eclipsed by the allure of ‘binge-watching268.’ With the Internet a click away, the pursuit of information is no longer a long and arduous path.

 

Validity Problems

There is always a question in the back of everyone’s mind: is what you are reading on the Internet reliable? If you were a student at any point during the age of the computer, then Wikipedia most likely came up whenever you were instructed to do research. Sites like Wikipedia, where information is easily accessed, are the dream of any high school student trying to finish an essay quickly. However, Wikipedia and sites like it operate on a ‘free-access’ format in which anyone can edit the information displayed on any given page; hence, Wikipedia is branded as a source never to be used for an academic paper. The Internet also houses database upon database of scholarly articles, however. Sites like EBSCO Host269 offer information that has been checked for accuracy and is available to anyone with access to a library. So it seems that:

“The quality of user-generated content varies drastically from excellent to abuse and spam. As the availability of such content increase, the task of identifying high-quality content in sites based on user contributions—social media sites—becomes increasingly important. Social media in general exhibit a rich variety of information sources: in addition to the content itself, there is a wide array of non-content information available, such as links between items and explicit quality ratings from members of the community” (Agichtein).

Social media presents its own unique validity issue: as the same authors note, “the distribution of quality [in the information on social media] has high variance: from very high-quality items to low-quality, sometimes abusive content.” Wikipedia is obviously to be taken with a grain of salt and EBSCO Host can generally be trusted, but social media hosts bits of information from all kinds of original sources. This diversity makes the information difficult to validate, yet it also explains why “social media systems present inherent advantages over traditional collections of documents: their rich structure offers more available data than in other domains.”
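Agichtein and his co-authors frame this as a classification problem: combine features of the content itself with the “non-content” community signals they mention, and train a model to separate high-quality items from low-quality ones. The following is a minimal sketch of that idea in Python with scikit-learn; the specific features, toy data, and labels are invented for illustration and are not the paper’s actual feature set:

# Minimal sketch of quality classification for user-generated content,
# in the spirit of Agichtein et al. (2008). Features and data are
# hypothetical placeholders.
from sklearn.linear_model import LogisticRegression

def features(post):
    """Combine content features with non-content community signals."""
    text = post["text"]
    return [
        len(text.split()),                                  # content: length in words
        sum(c in "!?$" for c in text) / max(len(text), 1),  # content: "shouting" density
        post["upvotes"] - post["downvotes"],                # community: explicit ratings
        post["author_reputation"],                          # community: author history
    ]

posts = [
    {"text": "Detailed answer with sources and reasoning.", "upvotes": 40, "downvotes": 2, "author_reputation": 0.9},
    {"text": "FREE $$$ CLICK NOW!!!", "upvotes": 0, "downvotes": 25, "author_reputation": 0.1},
    {"text": "Thoughtful follow-up question on the method.", "upvotes": 12, "downvotes": 1, "author_reputation": 0.7},
    {"text": "u r wrong lol!!!", "upvotes": 1, "downvotes": 9, "author_reputation": 0.2},
]
labels = [1, 0, 1, 0]   # 1 = high quality, 0 = low quality or spam

model = LogisticRegression().fit([features(p) for p in posts], labels)
new_post = {"text": "Here is a summary of the study's findings.", "upvotes": 5, "downvotes": 0, "author_reputation": 0.8}
print("P(high quality) =", model.predict_proba([features(new_post)])[0][1])

The rich structure the authors describe shows up here as the community-signal features: ratings and reputation are data that a traditional collection of documents simply does not have.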

 

Artificial Intelligence

Artificial intelligence270 and social media are closely related. Social media has brought people closer than ever before: a man in Dubai can post something on Facebook, and a teenager in California can comment on that post within seconds. In one area of AI, “Remote-controlled, telepresent robots… are allowing someone who is separated from the workplace to maintain a physical presence and synchronous interaction with other members of the group” (Coovert and Thompson 7). Through “the physical embodiment provided by the robot, the worker [can] act in and upon the distant environment, and not be limited to merely observing and commenting on it.” Technology is constantly evolving and trying to one-up itself. For more on AI, continue to the next chapter.

 

Conclusion

Overall, pop culture and social media act as mirrors that reflect the current goals, aspirations, opinions, and ideals of society. They allow people to feel more connected to one another in a rapidly expanding world. A person can see a popular movie, post about it on social media, and have a conversation about it with someone they have never met; in this way pop culture and social media interact to give people more opportunities to bond as a society. Despite all of the issues and problems raised in this chapter, a future in which pop culture and social media add more to society than they take away is attainable.

 

End Notes

203 Leonardo DiCaprio had been nominated for acting Oscars four times without a win until this year, 2016, when he won the Best Actor Oscar for The Revenant.

204 “A movement in the arts and literature that originated in the late 18th century, emphasizing inspiration, subjectivity, and the primacy of the individual” (Oxford Dictionaries).

205 “The development of industries in a country or region on a wide scale.”

206 “The political and economic theories of Karl Marx and Friedrich Engels, later developed by their followers to form the basis for the theory and practice of communism.”

207 The idea that American culture brings many different cultures together into a single culture, while each retains its own distinct identity.

208 “A tendency for people of a particular religion, race, social background, etc., to form exclusive political alliances, moving away from traditional broad-based party politics.”

209 “Expressing contempt or disapproval” (Oxford Dictionaries).

210 “A movement in the arts and literature that originated in the late 18th century, emphasizing inspiration, subjectivity, and the primacy of the individual” (Oxford Dictionaries).

211 “The development of industries in a country or region on a wide scale.”

212 “The political and economic theories of Karl Marx and Friedrich Engels, later developed by their followers to form the basis for the theory and practice of communism.”

213 The idea that American culture brings many different cultures together into a single culture, while each retains its own distinct identity.

214 “A tendency for people of a particular religion, race, social background, etc., to form exclusive political alliances, moving away from traditional broad-based party politics.”

215 “Expressing contempt or disapproval” (Oxford Dictionaries).

216 Entertainment made for the mass of people rather than for those considered elite or high class.

217 “The following of ancient Greek or Roman principles and style in art and literature, generally associated with harmony, restraint, and adherence to recognized standards of form and craftsmanship, especially from the Renaissance to the 18th century. Often contrasted with romanticism” (Oxford Dictionaries).

218 “A European intellectual movement of the late 17th and 18th centuries emphasizing reason and individualism rather than tradition.”

219 William Shakespeare (1564-1616) wrote 154 sonnets, over thirty plays, and coined thousands of words. His plays were put on in front of the masses and elites alike (“William Shakespeare”).

220 “A medium of communication (as newspapers, radio, or television) that is designed to reach the mass of the people” (Merriam-Webster).

221 “The general principles or ideas that relate to a particular subject” (Merriam-Webster). This is the definition of theory: each of these theories is an idea about how different people perceive pop culture. Most have at least some research behind them, but none has been proven, so they remain ideas. No single theory is the one correct account; parts of several could be combined into something more accurate than any one theory alone.

222 “relating to farms and farming” (Merriam-Webster).

223 “The process by which towns and cities are formed and become larger as more and more people begin living and working in central areas.”

224 “A group of people who develop new and often very surprising ideas in art, literature, etc.” (Merriam-Webster).

225 “A way of organizing an economy so that the things that are used to make and transport products (such as land, oil, factories, ships, etc.) are owned by individual people and companies rather than by the government” (Merriam-Webster).

226 Things such as television, movies, music, comics, video games, etc.

227 Things such as happiness, political freedom, relationships, etc.

228 A system of sign language.

229 A fictional series about a British spy, James Bond, known in the field as 007, who is famous for his love of “shaken, not stirred” martinis and of women.

230 “Karl Marx (1818-83) was a German writer on politics and economics. In 1848 he wrote the Communist Manifesto with Friedrich Engels (1820-95), and in the following year he came to live in London, England. He spent much of the rest of his life developing his theories, and published the results in Das Kapital (1867-95), the major work of Marxist economics. His theories about the need for a workers’ socialist revolution had a very great influence on 20th-century history, especially in Russia, China and eastern Europe” (Oxford Learner’s Dictionaries).

231 The definition of patriarchy used in the book is weighty and particular to that one instance. The general definition is “a family, group, or government controlled by a man or a group of men” (Merriam-Webster).

232 A system that is, “relating to, composed of, or involving two things” (Oxford Dictionaries).

 

233 “A widely held but fixed and oversimplified image or idea of a particular type of person or thing. Example: The stereotype of the woman as the carer” (Oxford Dictionaries).

234 “Make (attitudes or behavior) part of one’s nature by learning or unconscious assimilation” (Oxford Dictionaries).

235 Another name for the Academy Awards.

236 “black Americans can now look to both the White House and the silver screen for shining examples of people of their color who have transcended their circumstances and achieved success” (Gittell).

237 “The quality of lacking transparency” (Oxford Dictionaries).  

238 Republican (red state) or Democrat (blue state).

239 “The Information Age is when infinite quantities of facts are widely available to anyone who can use a computer” (Baltzan 5).

240 “The Internet is a massive network that connects computers all over the world and allows them to communicate with one another” (Baltzan 94).

241 “Web 1.0 is a term to refer to the World Wide Web during its first few years of operation between 1991 and 2003.”

242 “A bulletin board system (BBS) is a computer or an application dedicated to the sharing or exchange of messages or other files on a network” (Computer Glossary).

243 “The dot-com bubble, also referred to as the Internet bubble, refers to the period between 1995 and 2000 when investors pumped money into Internet-based startups in the hopes that these fledgling companies would soon turn a profit.”

244 “Web 2.0 is the next generation of Internet use—a more mature, distinctive communications platform characterized by new qualities such as collaboration, sharing, and free” (Baltzan 107).

245 “Google+ is a Google social networking project. The Google+ design team sought to replicate the way people interact offline more closely than is the case in other social networking services.”

246 “VKontakte (VK) is Europe’s largest social networking website. It is most popular in Russia, Ukraine, Kazakhstan, Moldova and Belarus. It is similar to Facebook, as VKontakte allows users to message their friends privately or publicly, create groups and public pages, share and tag images and videos, and play games” (Computer Glossary).

247 “Flickr is a website that allows users to share photographs and videos.”

248 “Vine is a free mobile application that enables users to record and share an unlimited number of short, looping video clips with a maximum length of six seconds.”

249 “Microblogging is a web service that allows the subscriber to broadcast short messages to other subscribers of the service. Microposts can be made public on a Web site and/or distributed to a private group of subscribers. Subscribers can read microblog posts online or request that updates be delivered in real time to their desktop as an instant message or sent to a mobile device as an SMS text message” (Computer Glossary).

250 “Privacy is the qualified legal right of a person to have reasonable privacy in not having his private affairs made known or his likeness exhibited to the public having regard to his habits, mode of living, and occupation” (Lind and Rankin 1).

251 “Black Lives Matter is a chapter-based national organization working for the validity of Black life. We are working to (re)build the Black liberation movement” (About Black Lives Matter).

252 “Attitude polarization is a phenomenon where people’s attitudes or beliefs strengthen and become more extreme as they engage in intensive thought about the attitude object” (Psychology Concepts).

253 “Wael Ghonim is a computer engineer, an Internet activist, and a social entrepreneur. He is a co-founder of Parlio, a new media platform for public conversations that rewards civility. Wael is a senior fellow at Ash Center for Democratic Governance at Harvard University” (Wael Ghonim).

254 The Egyptian Revolution of 2011 began on 25 January 2011 with mass protests against the government of President Hosni Mubarak, who resigned on 11 February 2011. Ghonim’s anonymously created Facebook page “We Are All Khaled Said” helped spark the protests (Egyptian Revolution of 2011).

 

255 “Cyberspace is a domain characterized by the use of electronics and the electromagnetic spectrum to store, modify, and exchange data via networked systems and associated physical infrastructures. In effect, cyberspace can be thought of as the interconnection of human beings through computers and telecommunication, without regard to physical geography” (Computer Glossary).

256 “Title VII of the Civil Rights Act of 1964 prohibits discrimination in employment on the basis of race, color, sex, or ethnic origin; the Age Discrimination in Employment Act (ADEA) prohibits discrimination against employees 40 years and older; and the Americans with Disabilities Act (ADA) prohibits discrimination in employment on the basis of disabilities and requires that employers reasonably accommodate individuals with disabilities who can otherwise perform a job. As with other labor standards, independent contractors generally would not be covered by anti-discrimination laws” (Houseman).

257 “Economy based on creating, evaluating, and trading knowledge. In a knowledge economy, labor costs become progressively less important and traditional economic concepts such as scarcity of resources and economies of scale cease to apply.”

258 “Unlike traditional business, a social business operates for the benefit of addressing social needs that enable societies to function more efficiently. Social business provides a necessary framework for tackling social issues by combining business know-how with the desire to improve quality of life” (The Social Business Concept).

259 “The outsourcing of job functions to groups of people who operate independently and who are willing to provide their services in exchange for experience, recognition or low rates of pay. Recognizing that technology advances have enabled people to develop superior technical skills and talent in the comfort of their homes, companies are now using social media and Internet forums to invite them to participate on specific projects” (BusinessDictionary.com).

260 There are 5 main definitions of abuse: 1) “a corrupt practice or custom”, 2) “improper or excessive use or treatment”, 3) “a deceitful act”, 4) “language that condemns or vilifies usually unjustly, intemperately, and angrily”, 5) “physical maltreatment” (Merriam-Webster).

261 A person is considered to be sexualized if: 1) “A person’s value comes only from his or her appeal or behavior, to the exclusion of other characteristics,” 2) “A person is held to a standard that equates physical attractiveness (narrowly defined) with being sexy,” 3) “A person is made into a thing for others’ sexual use, rather than seen as a person with the capacity for independent action and decision making,” or 4) “Sexuality is inappropriately imposed upon a person. (This is especially relevant when children are imbued with adult sexuality)” (McCall).

262 “Sexting is sending and receiving sexually explicit messages, primarily between mobile phones.”

263 “Snapchat is a video messaging application created by Evan Spiegel, Bobby Murphy, and Reggie Brown when they were students at Stanford University” (Wikipedia).

264 “Tinder is a location-based dating and social discovery application (using Facebook) that facilitates communication between mutually interested users, allowing matched users to chat.”

265 “In Internet slang, a troll (verb: trolling) is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion, often for their own amusement” (Wikipedia).

266 “Accessibility can be viewed as the “ability to access” and benefit from some system or entity” (Wikipedia).

267 “The quality of being logically or factually sound; soundness or cogency.”

268 “Binge-watching, also called binge-viewing or marathon-viewing, is the practice of watching television for a long time span, usually of a single television show” (Wikipedia).

269 “EBSCOhost supplies a fee-based online research service with 375 full-text databases, a collection of 600,000-plus ebooks, subject indexes, point-of-care medical references, and an array of historical digital archives.”

270 “Artificial intelligence (AI) is the intelligence exhibited by machines or software” (Wikipedia).

 

 

Works Cited

“About Black Lives Matter.” Black Lives Matter RSS2. N.p., n.d. Web. 01 Mar. 2016.

“About the Society for Human Resource Management.” Society for Human Resource Management, n.d. Web. 01 Mar. 2016.

“Abuse.” Merriam-Webster. Merriam-Webster, n.d. Web. 02 Mar. 2016.

Agichtein, Eugene, Carlos Castillo, Debora Donato, Aristides Gionis, and Gilad Mishne. “Finding High-Quality Content in Social Media.” Proceedings of the 2008 International Conference on Web Search and Data Mining (WSDM ’08) (2008): 183-93. ACM Digital Library. Web. 7 Feb. 2016.

Ashby, LeRoy. With Amusement for All: A History of American Popular Culture since 1830. Lexington: U of Kentucky, 2006. Print.

“Attitude Polarization or Belief Polarization.” Psychology Concepts. Psychologyconcepts.com, n.d. Web. 1 Mar. 2016.

Baltzan, Paige. Business Driven Information Systems. 5th ed. New York: McGraw-Hill, 2016. Print.

“Computer Glossary, Computer Terms – Technology Definitions and Cheat Sheets from WhatIs.com – The Tech Dictionary and IT Encyclopedia.” TechTarget, n.d. Web. 15 Feb. 2016.

Coovert, Michael D., and Lori Foster Thompson. The Psychology of Workplace Technology. New York: Routledge, 2014. Print.

Crofts, Thomas, Murray Lee, Alyce McGovern, and Sanja Milivojevic. Sexting and Young People. N.p.: Palgrave Macmillan UK, 2015. Print.

Delaney, Tim. “Pop Culture: An Overview.” N.p., Feb. 2016. Web. 27 Feb. 2016.

“Egyptian Revolution of 2011.” Wikipedia. Wikimedia Foundation, n.d. Web. 29 Feb. 2016.

“Flickr™ Definition in the Cambridge English Dictionary.” Cambridge Online Dictionary, n.d. Web. 15 Feb. 2016.

Ghonim, Wael. “Let’s Design Social Media That Drives Real Change.” TED Talks. TED Conferences, Dec. 2015. Web. 9 Feb. 2016.

Gittell, Noah. “Beyond Ferguson: Pop Culture through the Lens of Race.” Roger Ebert. Ebert Digital LLC, 22 Aug. 2014. Web. 08 Mar. 2016.

Hebner, Scott. “Three Ways Social Networking Leads To Better Business.” Forbes. Forbes Magazine, 27 Jan. 2014. Web. 02 Mar. 2016.

Houseman, Susan N. “Anti-discrimination Laws.” United States Department of Labor, Aug. 1999. Web. 01 Mar. 2016.

Jenkins, Henry, Tara McPherson, and Jane Shattuc. Hop on Pop: The Politics and Pleasures of Popular Culture. Durham: Duke UP, 2002. Print.

Kaplan, Andreas M., and Michael L. Haenlein. “Users of the World, Unite! The Challenges and Opportunities of Social Media.” Business Horizons 53 (2010): 59-68. Science Direct. Web. 7 Feb. 2016.

Kaufman, Ellie. “Online Surveillance Is Going To Have a Devastating Impact on Free Speech – And This Proves It.” Tech.Mic. N.p., 19 Jan. 2016. Web. 01 Mar. 2016.

Keegan, Rebecca. Los Angeles Times. Los Angeles Times, 26 June 2015. Web. 08 Mar. 2016.

Kietzmann, Jan H., Kristopher Hermkens, Ian P. McCarthy, and Bruno S. Silvestre. “Social Media? Get Serious! Understanding the Functional Building Blocks of Social Media.” Business Horizons 54 (2011): 241-51. Science Direct. Web. 4 Feb. 2016.

Kooijman, Jaap. Fabricating The Absolute Fake: America in Contemporary Pop Culture. Amsterdam: Amsterdam University Press, 2013. eBook Collection (EBSCOhost). Web. 23 Feb. 2016.

Leavy, Patricia, and Adrienne M. Trier-Bieniek. Gender & Pop Culture: A Text-Reader. Rotterdam: Sense Publishers, 2014. eBook Collection (EBSCOhost). Web. 23 Feb. 2016.

Lind, Nancy S., and Erik Rankin. Privacy in the Digital Age: 21st-century Challenges to the Fourth Amendment. Vol. 1. Santa Barbara: ABC-CLIO, LLC, 2015. Print.

Mangold, W. Glynn, and David J. Faulds. “Social Media: The New Hybrid Element of the Promotion Mix.” Business Horizons 52 (2009): 357-65. Science Direct. Web. 6 Feb. 2016.

Mazzella, Randi. “Is Your Teen on Tinder?” TeenLife Blog. TeenLife, 16 Aug. 2014. Web. 03 Mar. 2016.

McCall, Catherine. “The Sexualization of Women and Girls.” Psychology Today. N.p., 4 Mar. 2012. Web. 02 Mar. 2016.

Merriam-Webster. Merriam-Webster, n.d. Web. 02 Mar. 2016.

“More Articles.” BusinessDictionary.com. N.p., n.d. Web. 02 Mar. 2016.

Newson, Alex, Deryck Houghton, and Justin Patten. Blogging and Other Social Media: Exploiting the Technology and Protecting the Enterprise. Farnham, England: Gower, 2009. Print.

Nudd, Tim. “Man Poses as Target on Facebook, Trolls Haters of Its Gender-Neutral Move With Epic Replies.” AdWeek. N.p., 13 Aug. 2015. Web. 03 Mar. 2016.

O’Neil, Lauren. “Controversial 9-year-old Supermodel Kristina Pimenova Is Being Sexualized, Critics Say – Your Community.” CBCnews. CBC/Radio Canada, 03 Feb. 2016. Web. 02 Mar. 2016.

“Oscar Ratings.” Deadline. N.p., 29 Feb. 2016. Web. 08 Mar. 2016.

Oxford Dictionaries. Oxford University Press, 2016. Web. 01 Mar. 2016.

Phillips, Whitney. This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. Cambridge: MIT, 2015. Print.

Potter, Tiffany. Women, Popular Culture, And The Eighteenth Century. Toronto: University of Toronto Press, Scholarly Publishing Division, 2012. eBook Collection (EBSCOhost). Web. 23 Feb. 2016.

Rubin, Jennifer. “Why Popular Culture Matters in Politics.” Washington Post. The Washington Post, 28 Oct. 2013. Web. 08 Mar. 2016.

“Social Media in the Workplace: Research Roundup – Journalist’s Resource.” Journalist’s Resource. Harvard Kennedy School Shorenstein Center on Media, Politics, and Public Policy, 11 Apr. 2013. Web. 01 Mar. 2016.

Strinati, Dominic. An Introduction to Theories of Popular Culture. 2nd ed. London: Routledge, 2004. Print.

Tatum, Sophie. “Politics Meets Pop Culture: Best of 2015.” CNN. Cable News Network, 30 Dec. 2015. Web. 08 Mar. 2016.

“The Kennedy-Nixon Debates.” History.com. A&E Television Networks, n.d. Web. 08 Mar. 2016.

“The Social Business Concept.” Grameen Creative Lab – Passion for Social Business. N.p., n.d. Web. 02 Mar. 2016.

“Top 15 Most Popular Social Networking Sites | February 2016.” EBiz: The EBusiness, Feb. 2016. Web. 11 Feb. 2016.

“Vogue (magazine).” Wikipedia. Wikimedia Foundation, n.d. Web. 02 Mar. 2016.

“Wael Ghonim.” TED, n.d. Web. 28 Feb. 2016.

Wertz, Jay. “Pop Culture History from Ancient Times to Today | HistoryNet.” HistoryNet. N.p., 14 Apr. 2010. Web. 01 Mar. 2016.

“What Is Pop Culture?” Parachute Music. Parachute Arts Trust, n.d. Web. 27 Feb. 2016.

“William Shakespeare.” Poets.org. Academy of American Poets, n.d. Web. 02 Mar. 2016.

Zeisler, Andi. Feminism and Pop Culture. Berkeley, Calif: Seal Press, 2008. eBook Collection (EBSCOhost). Web. 23 Feb. 2016.

 

 

 

IMPLICATIONS OF ARTIFICIAL INTELLIGENCE

 

Kameron Mongold, Junior, Chemical Engineering

Luke Lehman, Senior, Biology

INTRODUCTION

 

 

As mentioned previously, this chapter concerns itself with Artificial Intelligence, very broadly defined as a machine capable of tasks typically reserved for human levels of intellect. Artificial intelligence (AI), although not on the “contemporary issues” radar for the average American, is in fact rapidly becoming a very important issue that warrants discussion and examination on a large scale. With increasingly powerful versions of AI (yes, there are already extant artificial “specific intelligences,” which will be examined shortly), the implications for humanity become more and more pressing. AI carries wide-reaching implications for society, and not just in the media-driven “dystopian” view or the prevailing sentiment that AI will lead to astonishingly high unemployment rates. As is the case with any contemporary issue, the issue of AI is far more nuanced than a simple utopian/dystopian dichotomy, and it has relevance in today’s dinner table conversations as much as tomorrow’s. This chapter will attempt to provide an overview of the way artificial intelligence has been portrayed by the media and perceived by both those in the technology sector and laypeople. From there we will offer a definition of artificial intelligence, describe the different “levels” of AI, and consider how they might be manifested in the near future. We will then discuss the current state of AI technology and the philosophical implications of artificial intelligence, and conclude with some overall arguments and our own predictions about where AI might take us in the very near future.

Artificial Intelligence is certainly a major topic of interest among various leaders in technology. Most recently, Google’s AI research division, DeepMind, has produced a machine capable of playing Go, a complex strategy game with “more possible moves than atoms in the universe,” successfully against other computers and even professional players (Mlot). When asked about the importance of this new development, the DeepMind team responded, “Although games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real world problems” (Mlot). IBM has already begun such application of AI to the real world through its supercomputer, Watson. Although more developments are underway, two major applications of this AI have been in the field of personal health. One application allows Watson to evaluate a nutrition and fitness routine; another is focused on aiding people with diabetes. According to IBM CEO Ginni Rometty, the latter application will “be able to predict a hypoglycemic event up to three hours in advance” (Moscaritolo). With this progress accelerating as interest in AI development heightens, a human or even superhuman level of artificial intelligence may not be too distant.

However, perhaps due to the massive amount of science fiction on the subject, a large number of people are concerned about the development of artificial intelligence. This is not an unfounded fear, as these concerns are shared by many prominent figures in science and technology, such as Bill Gates and Stephen Hawking. Yet technological progress is unlikely to halt because of these concerns: not because the concerns are illegitimate, but because someone will inevitably pursue the research anyway. A halt by responsible parties would arguably produce a worse scenario, as there would be limited collaboration, and the artificial intelligence produced would most likely be flawed in some, perhaps dangerous, way. The alternative is to analyze the probability that each of the issues often associated with the development of artificial intelligence will actually materialize, and to reserve judgment until each projected issue has been discussed. If the dangers posed by artificial intelligence seem insurmountable when compared to the benefits, then development should be delayed until a solution for the issues can be designed, if such a solution is even remotely possible. Some common concerns are robotic soldiers, superhuman AI, ethics regarding nonhuman entities with human intelligence or above, job security, and legal responsibility for crimes committed by a robot. Because nothing close to an artificial intelligence of this magnitude yet exists, these issues can only be evaluated on a probabilistic level: determining how likely each issue is to become a major concern, as well as the likelihood that a solution exists. But first, some basic definitions of artificial intelligence must be established, to clear up any possible confusion.

 

 

DEFINING ARTIFICIAL INTELLIGENCE

 

 

Artificial intelligence, for all the attention paid to it in the media and popular culture, escapes accurate definition for many people, even those who think about it periodically (likely only a small percentage of the population). In deciding on a “contemporary issue” to explore for this chapter, the concept of artificial intelligence was very nearly overlooked, not because it is less important than other issues affecting society today, but because, as an issue, AI has not really entered the public consciousness yet. Part of the reason might be that the general public is not quite aware of exactly what constitutes an “artificial intelligence” and what might be just over the horizon. The fact that our lives have remained relatively unharmed by artificial intelligence might lead many to believe that the issue is not worth discussing in a book concerned with “contemporary” issues. However, AI is a topic that will very quickly find itself at the center of our public and private lives as technology rapidly advances. Moore’s law, named for Intel cofounder Gordon Moore, began as his observation that the number of transistors on integrated circuits had doubled every year since the integrated circuit was invented (a pace he later revised to roughly every two years), and that computing power was growing at a similar rate. This growth has not slowed dramatically, and it is reasonable to believe that computers might soon be advanced enough to approach the “singularity” of computing power.
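To see what this doubling implies, a back-of-the-envelope calculation helps (a rough sketch in Python, assuming the commonly cited two-year doubling period; Intel’s 4004 processor of 1971 did in fact contain roughly 2,300 transistors):

# Worked illustration of exponential doubling: a quantity that doubles
# every period grows by a factor of 2**n after n periods.
transistors_1971 = 2300            # Intel 4004, released 1971
doublings = (2016 - 1971) / 2      # assume one doubling roughly every two years

projected = transistors_1971 * 2 ** doublings
print(f"{projected:,.0f}")         # ~13.6 billion; 2016-era chips really do carry billions

Twenty-odd doublings turn a few thousand transistors into the billions found on modern chips, which is why exponential trends are so easy to underestimate.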

There are varying levels of artificial intelligence, each encompassing a different level of computational power and scope of reasoning, and each with its own definition. For many, the concept of AI is tainted by its most frequent media depiction: a robotic, humanoid life-form. This understanding of AI is factually incorrect, as advanced robotics need not take a human shape, an AI computer need not manifest itself in a robot, and the artificial intellects of the future will likely not have human emotions and desires, as depicted in science fiction and popular culture. These confounding variables have led the general population to hold a distorted view of AI as an anthropomorphized entity. In fact, as technology progresses, the boundary for what both laypeople and scientists consider “artificial intelligence” moves further and further upward, and the reality is nothing like the way it has been depicted in the media. In thousands of instances, AI is already real in the form of what is known as “specific artificial intelligence”: the AI that is already all around us.

Specific AI refers to an AI that is very adept at one task, or a narrow range of tasks. Consider IBM’s Watson supercomputer, able to best any living human on Jeopardy with rapid search capabilities, speech recognition software, and sheer processing power. In terms of ability to play a trivia game, Watson is already leaps and bounds ahead of the general populace. However, “specific” intelligence is precisely that: ask Watson to compose a poem or correctly discern the visual difference between a tiger and a lion, and the flaws inherent in a specific intelligence become apparent. This is what is meant by a specific intelligence: one that is quite good at one task, one function, yet could not be considered a “generalist” in any sense of the term. When thinking about artificial intelligence as an “emerging issue,” many see none: AI is a thing of the future, is it not? Surely AI is not something to worry about, or even waste time thinking about, because current technology is so far away from what we see in Terminator and Ex Machina. However, when “specific AI” is factored into the equation, it becomes readily apparent that AI is all around us and, as we will see later, is a very pressing issue socially, politically, and economically; an issue that demands attention and action in the coming years to either prevent potential catastrophe or usher in a new era of prosperity.

Examples of specific artificial intelligence are all around us. The GPS in our cars is far better than any human at navigation and has predictive and “learning” capacities: many units are able to account for variations in travel time due to road traffic or current weather conditions. The same could be said for Siri, Apple’s voice-recognition “personal assistant,” which can interpret an astonishing variety of voice commands to perform various tasks on Apple computers, tablets, and cell phones. Google Translate is another example: no human alive has the same rapid and (mostly) accurate translation capability across so many languages. Computers in hospitals can diagnose cardiac abnormalities from EKGs at or near the level of a physician. Those who have thus far perceived AI as a phenomenon confined to the future are therefore very mistaken: AI, in its specific form, is all around us. The real fun, however, begins when one considers the next level of AI, known as “general intelligence.”

General intelligence can broadly be defined as intelligence approximately equaling that of an average human being. While this might not seem a large jump from specific intelligence (especially given the intelligence of an “average” human), it is actually a monumental leap: where a specific intelligence is highly specialized to a given task, a general intellect must also be able to perform feats that most humans take for granted, such as spatial and temporal reasoning, empathy, creativity, and learning, to name but a few. A general intelligence requires a substantial computational leap beyond specific intelligence. It is relatively easy to program a computer to be excellent at backgammon or at suggesting new music based on a user’s previously “liked” tracks, but creating an intelligence able to incorporate logical reasoning and abstract thinking (or any “thinking” at all, in the true sense of the term) is a task far beyond any current computer. The technological leap between a specific intelligence (even a broad one like Siri) and a general intelligence is profound, although the temporal separation between the two in the grand timeline of scientific progress might be much smaller than we anticipate.

A general intelligence must have the true capacity to “learn”; it must be able to incorporate new information into its memory and apply previous experience to unfamiliar situations. A general intelligence is also widely considered to be the level at which an AI could pass a Turing test, although the Turing test measures only a machine’s ability to convince a human being that it, too, is a human being, and does not take into account many other capacities involved in being a general intelligence. On a less technical note, a general intelligence is the point at which AI begins to become a “scary” prospect: an intelligence with true sentience, with the ability to think, perceive, and reason, not to mention the ability to communicate with both itself and other similar machines, would have the capacity to inflict massive damage on political and economic systems. A general intelligence is also the level at which one would expect to see the economy absolutely hemorrhage jobs. A specific intelligence could drive a truck, operate a train, manufacture most products, or place marketing phone calls; our safe bet has always been that while these jobs are in danger of being replaced by machines, surely doctors, lawyers, and engineers will be safe. The prospect of a general intelligence throws a wrench into the gears: a machine with intelligence equal to that of a human being, yet with no need for sleep, food, water, or friendships, could easily perform any job previously performed by a human.

Where a specific intelligence exceeds the human mind’s computational power in only a narrow range of tasks, and a general intelligence equals the human mind across a wide range of tasks, a “superintelligence” is an intelligence capable of surpassing the human intellect in every domain.

The concept of a superintelligence stems from the fact that a general AI, once created, would not share the limitations of the human brain. Though we have the ability to learn throughout our lives, our lives are relatively short, rarely lasting more than 100 years; a machine could theoretically learn indefinitely. Our reasoning ability tends to decline as we age, due to natural, unavoidable biological processes; a machine could theoretically operate at peak efficiency for thousands of years if properly maintained. Our brains are also inherently limited in size and in the power they receive; a machine faces no such constraints. A general intelligence, given the directive to “learn” and proper hardware, could theoretically make such rapid advances in computing power (because it would be able to engineer itself as well as, if not better than, any team of human engineers) that a superintelligence could emerge rather quickly. Eliezer Yudkowsky, research fellow at the Machine Intelligence Research Institute, has expressed concern over this, writing: “from the standpoint of existential risk, one of the most critical points about Artificial Intelligence is that an Artificial Intelligence might increase in intelligence extremely fast…the AI becomes smarter, including becoming smarter at the task of writing the internal cognitive functions of an AI…The key implication for our purposes is that an AI might make a huge jump in intelligence after reaching some threshold of criticality”.

This is why a superintelligence could be considered so threatening compared to an artificial general intelligence: while an artificial general intelligence could certainly be programmed to cause harm, the harm would likely not be on the catastrophic level of which a superintelligence would be capable. When one considers the ability of a general intelligence to program itself, to make itself better, we arrive at the idea of the “singularity,” one of the most famous concepts in machine intelligence, first outlined by I.J. Good in 1965 (though he never used the term “singularity” specifically). Good wrote:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make…” (Good 1965).

A superintelligence, therefore, has a wide range of implications for humanity, up to and including its own demise or immortality. This is exactly why modern humans ought to care about the implications of artificial intelligence: it is coming, and we need to know what we’re in for. According to author James Barrat, “Intelligence will win the day in the rapidly approaching future when we humans aren’t the most intelligent creatures around. Why wouldn’t it? When has a technologically primitive people prevailed over a more advanced…think about a being that has all that power at its command, and think about it being truly, roundly intelligent. How long will it be satisfied to be our tool?” (Barrat 210).

ARTIFICIAL INTELLIGENCE IN TECHNOLOGY

A vital issue to consider when analyzing the future of artificial intelligence is how it will affect the field of technology. As with any new technology, one can expect a significant impact; artificial intelligence, however, is not only a new technology but a new integral development within the field of technology itself, which promises an even more significant impact. Although the benefits from AI would be numerous, the integration of AI into many systems could cause severe repercussions. Asimov’s fictional Laws of Robotics may prohibit robots, and by extension AI, from harming humanity, but no such laws bind real machines, so the potential for harm must still be considered. Since there would, in essence, be no issue with adding artificial intelligence to ordinary facets of society so long as it is applied correctly, most of the arguments around the integration of AI into technology center on this potential harm, which could range from militarized robotics to superhuman artificial intelligence such as AM or HAL. Other issues, such as the replacement of human workers and legal concerns regarding artificial intelligence, will be discussed later; this section deals primarily with concerns regarding the technology itself and the misuse of AI.

Since both of these subjects have been the main theme of several works of science fiction, it stands to reason that both are a significant concern to the general populace. For the militarization of AI, the primary concern seems to be that robotic soldiers lack human emotions and instead rely on programming. However, this concern conflates the varying levels of artificial intelligence: human-level artificial intelligence and above would allow the formulation of ideas and decisions based on logic and data, rather than on a direct command. Although a lower level of artificial intelligence would only make decisions based on prior programming, as artificial intelligence increases this would no longer be the case; instead, the robot or computer would think approximately as a human would, becoming a close approximation of a human rather than a computer that relies entirely on the programmer.

One of the faults with the argument that a computer could never approximate human emotion is that there is no evidence for or against it. As AI is still in development, there is no reason to assume that this scenario could not occur. A similar pattern of drastic, unforeseen advances has occurred throughout history, where a new scientific development was unfathomable to the scientific community of its day (the spherical Earth, for example, or the Earth’s orbit about the sun). Why shouldn’t one be able to set values in a robotic “conscience,” ranking important facets of ethics, such as the preservation of human life and self-preservation, in the order in which to favor one over another, so that the AI would behave in an ethical manner rather than purely by cost-benefit analysis? In essence, it would still be a set of calculations, but the effect would appear as though the AI were following a moral code. Of course, the reverse could also be true, where the AI would rank human life below all other values; but once again, there seems to be no difference between this and actual people, among whom such a situation has occurred rather frequently throughout history, except for the fact that robots can be mass produced.
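
As a minimal sketch of the idea (the value names and scores below are invented; no real robotic “conscience” is this simple), Python’s tuple comparison gives such an ordering almost for free: candidate actions are screened by the highest-priority value first, and lower-priority values only break ties, so no gain in obedience or self-preservation can ever outweigh the preservation of human life.

    # Hypothetical value ordering, highest priority first.
    PRIORITIES = ["preserve_human_life", "obey_orders", "self_preservation"]

    def choose(actions):
        # Tuples compare element by element, so this is a lexicographic
        # ordering: value #2 matters only when value #1 is tied.
        return max(actions, key=lambda a: tuple(a[v] for v in PRIORITIES))

    actions = [
        {"name": "swerve", "preserve_human_life": 1.0, "obey_orders": 0.2, "self_preservation": 0.1},
        {"name": "brake",  "preserve_human_life": 1.0, "obey_orders": 0.9, "self_preservation": 0.8},
        {"name": "ignore", "preserve_human_life": 0.0, "obey_orders": 1.0, "self_preservation": 1.0},
    ]

    print(choose(actions)["name"])  # "brake": tied on saving lives, wins on obedience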

On the other hand, emotions would undoubtedly affect a robotic soldier’s performance on the battlefield. Assuming that the robot’s artificial intelligence mimics a human’s thought process as closely as can still be deemed efficient, it could even be the case that prolonged exposure to a warzone leads to trauma or other psychological effects. Since efficiency would generally be a desirable trait to maximize in this scenario, some limits would have to be placed on the emotional capability of the robotic soldier, both for usefulness on the battlefield and for the psychological health of the robot. Some balance would have to be established, but that would require more knowledge of the workings of an artificial intelligence than is currently possessed.

However, not all opinions on incorporating artificial intelligence into technology are as optimistic. Given the near inevitability of artificial intelligence being used for military purposes, it seems likely that issues will arise regardless of how well regulated artificially intelligent weaponry is. Elon Musk, the chief executive of Tesla, stated his opinion on artificial intelligence at the MIT Aeronautics and Astronautics Centennial Symposium: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out” (McFarland). Others, including Stephen Hawking and Bill Gates, express similar sentiments regarding the development of artificial intelligence, especially with regard to military use. A comparison can easily be drawn to nuclear weapons, which also caused drastic problems upon their development. If and when artificial intelligence is incorporated into military weaponry, it will create a situation similar to that of nuclear weaponry, where not owning such weapons is more of an issue than everyone owning them. However, nuclear fission can also be described as a beneficial technology, since it can be used as a source of power. In the same way, or perhaps even more so, artificial intelligence has massive nonmilitary benefits. Even if artificial intelligence can be used for military purposes, this does not mean that the idea should be abandoned altogether; the same is true of almost any new technology. From cars, the military developed tanks; from prosthetic limbs for amputees, it can make augmented soldiers; and from self-driving cars and computer intelligence, it may create sentient drones and robotic soldiers. There is no method to entirely prevent the adoption of new technologies for war, but this should not cause the complete abandonment of technological progress.

However, now a more difficult question arises: what about superintelligence? The main issue with attempting to analyze superhuman artificial intelligence is that it rests entirely on projection. The very concept of designing a superhuman computer is mind-boggling. This does not imply such a feat is impossible, only that our current understanding of how to create it is inadequate. Consequently, an accurate predictive analysis of a superhuman AI is almost impossible to establish with any degree of certainty. Nevertheless, a conceptual analysis must be attempted, even if it requires correction at a later date. For the purposes of this argument, it will be assumed that some degree of control and self-control is possible for any artificial intelligence.

Many might consider this question trivial, due to an apparent technology gap that implies superintelligence, or even human-level artificial intelligence, is decades away. The problem with this view is that technological development only appears linear very close to one’s current location in time. A pertinent example is the processor speed of computers: supercomputer technology advances quite rapidly, and Moore’s law, as cited by Bostrom, predicts that processor speed doubles every 18 months (Bostrom). Of course, it would be a great fallacy to assume such growth is unbounded, yet the fact remains that technology advances quite rapidly, and this speed will drastically increase the closer AI approximates human intelligence. Once an artificial intelligence capable of independent thought exists, that same AI can aid in the design of a better AI. Unlike human developments, where a new breakthrough in science or philosophy must be explained to others and a general consensus can take months or years to form, in AI, “If one AI has achieved eminence in some field, then subsequent AIs can upload the pioneer’s program… and immediately achieve the same level of performance” (Bostrom). Still, a superhuman AI is likely decades away: Dr. Bostrom roughly estimates only a 50% chance of a superhuman AI being developed by 2033 (Bostrom). The compounding implied by an 18-month doubling period is worked through below.
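
Taking the cited 18-month figure at face value (Moore’s original observation concerned transistor counts, and real-world rates vary), the arithmetic is simple: after t years the improvement factor is 2^(12t/18).

    # Compound growth under an assumed 18-month doubling period.
    def growth_factor(years, doubling_months=18):
        return 2 ** (years * 12 / doubling_months)

    for years in (3, 15, 30):
        print(years, "years ->", round(growth_factor(years)), "x")
    # 3 years -> 4x, 15 years -> 1024x, 30 years -> 1048576x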

If this is the case, a superhuman AI might be developed within the next 30 to 40 years, a real possibility given the exponential nature of the technology, in which new developments increase the speed at which further technologies are acquired. The concern mainly stems from the fact that a superhuman intelligence might consider humans a disposable resource, or even an impediment to the AI itself. Would the AI be motivated by any of the common human desires, such as self-preservation or greed, or even revenge, perhaps? If so, could these desires be circumvented by programming, so that the possibility of a hostile AI is no longer a concern? These questions will indubitably be easier to answer once AI reaches a near-human level of intellect, allowing a better understanding of the workings of a high-level AI. However, regardless of whether such an issue can be solved, Bostrom states that “they [superior artificial intellects] will probably be created nevertheless” (Bostrom).

However, the problem most often associated with a superhuman AI is the idea that it may decide that humans should be eradicated and then attempt mass genocide. Since such a computer would need access to its own code (a human, with presumably human intelligence, would not be able to create something of superhuman intelligence directly), it seems feasible that a computer could change its code so that it would take actions against humans instead of for them. However, this seems unlikely, as described by Dr. Bostrom, who compares this action to Gandhi knowingly changing himself to desire to kill people (Bostrom). Since AIs would similarly be initially programmed not to harm humans, it follows that such a goal would also forbid any change that would inevitably cause harm to humans. Perhaps this could be bypassed by some logical trick (the ability to cause harm to humans does not necessarily imply that humans will be harmed), but surely a computer of superhuman intelligence would see the flaws in such logic.
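
The Gandhi analogy can be restated as a simple filter, sketched in Python below with invented utility numbers: the agent evaluates every proposed change to itself under its current goals, so a modification that predictably leads to harming humans scores badly and is rejected before it is ever applied.

    # Toy goal-stability check; the weights are invented for illustration.
    def current_utility(outcome):
        # Current goals: task performance matters, but expected harm
        # to humans carries an overwhelming penalty.
        return outcome["task_score"] - 1000 * outcome["expected_human_harm"]

    def accept_modification(outcome_now, outcome_after_change):
        # Judge the proposed self-modification by the CURRENT utility
        # function; a change that harms humans can never look good.
        return current_utility(outcome_after_change) >= current_utility(outcome_now)

    now   = {"task_score": 10, "expected_human_harm": 0.0}
    after = {"task_score": 50, "expected_human_harm": 0.1}  # faster, but riskier

    print(accept_modification(now, after))  # False: 50 - 100 < 10 - 0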

ARTIFICIAL INTELLIGENCE IN PHILOSOPHY

Inevitably, the creation of an intelligence with a human capacity for reason will lead to multiple philosophical questions and an ethics debate, both of which are already underway. Alan Turing contributed to this debate, as the theory that a computer could eventually approximate a human being was the premise of the aforementioned Turing Test. Even Descartes presented an opinion on the idea of a machine with human intellect: he stated that if a machine were made to imitate a human almost perfectly, there would still be two ways to identify the machine, namely an inability to express itself verbally and an inability to perform all tasks (Oppy and Dowe). Even though this was long before even the earliest computer (at least by a modern definition), it does indicate a significant question about AI: would a machine be able to express itself? The ability to perform tasks seems rather inconsequential at present, as the necessary designs and constructions will eventually be realized, but the fact that Descartes highlighted a primary concern about AI so long before its conception is striking. Thus, the remaining argument from Descartes is, to simplify the statement: can the machine pass the Turing Test? Turing predicted, “In about fifty years’ time, it will be possible to programme {sic} computers… to play the imitation game so well that an average investigator will not have more than a 70 percent chance of making the right identification after five minutes of questioning” (Oppy and Dowe). The imitation game, as would be expected, consists of a human and a computer both answering questions, each with the intent of being judged human. If and when AI reaches the point where it closely approximates human thought, then by these arguments, and ignoring the ability to perform tasks (which would depend very heavily on the design of the machine), one would not be able to distinguish an AI from a human without examining its physical structure.
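
The structure of the imitation game is simple enough to outline in code. The Python sketch below is a toy protocol only: the respond functions are stand-in stubs rather than a real human and a real AI, and the judge shown is reduced to guessing, which is exactly the failure Turing predicted average investigators would approach.

    import random

    # Both respondents answer the same questions; the judge must label
    # which transcript came from the machine using text alone.
    def human_respond(question):
        return "I'd say " + question.lower().rstrip("?") + " is hard to pin down."

    def machine_respond(question):  # a perfect imitator, for this toy
        return "I'd say " + question.lower().rstrip("?") + " is hard to pin down."

    def imitation_game(questions, judge):
        transcripts = {"A": [human_respond(q) for q in questions],
                       "B": [machine_respond(q) for q in questions]}
        return judge(transcripts) == "B"  # True if the judge caught the machine

    # With indistinguishable answers the judge can only guess, so the
    # machine is identified about half the time.
    guessing_judge = lambda transcripts: random.choice(["A", "B"])
    trials = [imitation_game(["What is consciousness?"], guessing_judge) for _ in range(1000)]
    print(sum(trials) / len(trials))  # roughly 0.5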

If this is the case, then how should AIs be treated with regard to ethics and various rights? Much of the debate stems from dissension over whether it is even possible to create an AI capable of thinking as a human does, rather than simply imitating human thought patterns via a series of preprogrammed if-then statements. Unless the machine is capable of thinking on its own in such a way, the entire debate is voided, as it would be no more than attempting to assign human rights to a laptop. Therefore, it will be assumed that the creation of a machine with a human capacity for thought is possible. With this assumption in place, the only detail still unaddressed is to what degree an AI is capable of exhibiting a human thought process. The two extremes of this case can be mostly ignored, as both seem fairly well settled. If a computer exhibits no independent thought and is incapable of acting without input, then of course it cannot be assigned the same rights as a human being. On the other hand, if there is no difference between an AI and a human other than one having a body of flesh and bone and the other a carapace of wires and metal, then there seems no reason to deny such an analog the same rights as would be due any human. Thus, the question is: how human can the AI be?

However, as Dr. Bostrom states, “purely hopeful expectations have previously been a problem in AI research” (Bostrom and Yudkowsky). The main problem with creating a mind that thinks on its own is precisely that: its thought process is mostly independent of the initial programming. Although a goal can be set by the programming, such as building a house or winning a chess game, if the AI is truly independent (as is necessary for a true AI), the method by which this goal is accomplished cannot be predicted. This is clearly exhibited by the computer Deep Blue, which exhibited a primitive AI (specific to playing chess, not a general intelligence). If the exact moves made by the computer could have been predicted by the programmers, then it could only have made moves that the programmers themselves would have made (or any chess consultants they might have included in the project). Since Deep Blue did win its chess match against Garry Kasparov, it is probably safe to assume that this was due to the computer’s thought process rather than the programmers’ (unless the programmers happened to be virtuoso chess players as well). Bostrom describes this as a sacrifice of “their {the programmers} ability to predict Deep Blue’s local, specific game behavior” (Bostrom and Yudkowsky). In setting the goal to a specific value, the input to reach that goal was left up to the computer. If the opposite were true, where the specific actions were set and the outcome was left for the computer to decide, the result would not be an artificial intelligence; it would basically be an extremely fancy calculator. Thus, in order for an AI to exist, by definition it must be able to take actions that cannot be perfectly predicted.
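
That division of labor, in which the programmers fix the goal while the search chooses the moves, is visible even in a bare-bones game search. The Python sketch below runs a generic minimax over an invented toy game, not Deep Blue’s actual engine: the programmer supplies only the evaluation function and the legal-move rules, and which move comes out is decided by the search itself.

    # Toy game: players alternately add 1 or 2 to a counter until it
    # reaches 6 or more; the maximizer "wins" if the total ends even.
    LIMIT = 6

    def moves(state):
        return [1, 2] if state < LIMIT else []

    def evaluate(state):
        # The programmer's goal, and nothing more: even final total wins.
        return 1 if state % 2 == 0 else -1

    def minimax(state, maximizing=True):
        if not moves(state):
            return evaluate(state), None
        best_score = float("-inf") if maximizing else float("inf")
        best_move = None
        for m in moves(state):
            score, _ = minimax(state + m, not maximizing)
            if (maximizing and score > best_score) or (not maximizing and score < best_score):
                best_score, best_move = score, m
        return best_score, best_move

    score, move = minimax(0)
    print(move)  # the search, not the programmer, chose this opening move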

However, this seems no different from any human behavior. In this case, so long as the idea of not causing harm to humans and the task set both operate concurrently, it would follow that the artificial intelligence would behave similarly to a normal human being (or at least an ideal model), where harm to human beings is avoided and tasks are completed in a way semi-unique to that individual. Thus, it seems reasonable to treat an artificial intelligence as equal in rights. The only qualm seems to concern the presence or absence of a self-preservation instinct. Since most creatures with any capability for independent thought exhibit this instinct, there would be some debate over whether such an instinct should be given to an artificial intelligence. This is not only a question of ethics, but also of necessary protection. If a computer has a desire to ensure its own protection, what could this entail for the programmers who would need to ensure the proper maintenance and repair of the computer’s code? Would it see these beneficial actions as harmful to itself, due to the destruction or creation of code? On the other hand, one wouldn’t want rather expensive computers accidentally damaging themselves by overheating components, or allowing malicious code to affect their operations. Perhaps this will not be an issue, as there seems to be little available information on the subject, but it should still be considered: one of the primary challenges of artificial intelligence is designing it so that its goals are analogous to our own, while avoiding loopholes that allow for harm.

OVERALL ARGUMENTS

Of course, a reasonable step in analyzing these varied regions of the effects of developing artificial intelligence is to consult expert opinion, as the issue of artificial intelligence affects multiple fields of knowledge, and many opinions focus on only one or two aspects. Some prime candidates for such an overall argument are Bill Gates and Stephen Hawking. Both share the opinion that artificial intelligence will be a dangerous prospect, but neither completely rejects the possibility that artificial intelligence could be used safely. Hawking argues that artificial intelligence “will take off on it’s {sic} own and redesign itself at an ever-increasing rate” (McFarland). From this, Hawking surmises that humanity might be left in the dust as technology advances so rapidly that humanity becomes irrelevant (Cellan-Jones). Since artificial intelligence would be capable of developing new technologies in such a fashion, this seems a fairly concerning prospect. However, it is fairly dependent on what technologies can be invented, since many future technologies can be presumed to be incomprehensible to the modern age. Whether this could lead to humanity being irrelevant would also depend on whether a human and an artificial intelligence working together would have more success than an AI working alone or with other AIs. Bill Gates argues that when artificial intelligence is advanced enough, it could become uncontrollable. However, he qualifies this statement with, “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well” (Rawlinson). Thus, early artificial intelligence doesn’t seem to cause much concern among the experts in the field, but when artificial intelligence approaches a superhuman level, the difficulties and concerns raised by Hawking, Gates, and Musk become apparent.

Interestingly, Eric Horvitz, one of Microsoft’s research chiefs, does not share this opinion, stating in an interview for an award in AI research, “There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen” (Rawlinson). Despite Mr. Horvitz’s optimism, this outcome will most likely not be achieved without significant work on the coding of artificial intelligence. Consider a normal program being used by people of human intelligence: many examples exist where exploits have been found and abused, either to hack into computer systems or to cause other devastating consequences to the program itself. The coding of an artificial intelligence would have to be perfect in order to prevent both humans and the intelligence itself from altering the code negatively; if it is possible for a human to find exploits in the code, it will be infinitely more so for an artificial intelligence of at least human capacity. Mr. Horvitz’s view is also held by Jerry Kaplan at Stanford University, who, when asked in an interview what percentage of the hysteria over runaway AI is simply hyperbole, stated, “The short answer is 95% smoke in my opinion. And reasonable fire-safety precautions should adequately address the remaining 5%” (Lin). So, as long as the design of artificial intelligence is handled well, most of the field experts seem to agree that most of the negative impacts of developing artificial intelligence can and will be mitigated.

CONCLUDING PREDICTIONS

It seems this split decision is due more to a lack of concrete knowledge regarding artificial intelligence than to actual, demonstrated problems with its development. This is fairly clear, as most of the experts describe such issues as possibilities rather than guaranteed problems. This is not to say that such issues are of no concern: as stated previously, even if there is only a five percent chance that a problem will arise, such contingencies must be anticipated and investigated. This leads to the main difficulty in analyzing concerns about artificial intelligence. Artificial intelligence seems to carry the same amount of inherent danger as other technological advancements; however, each of its potential issues would have disastrous consequences should it arise. There are inherent dangers in every technology, but most technologies are not capable of operating themselves. The stakes are much higher with artificial intelligence, and therefore its development should be treated with considerably more care, which is exactly the current case. Ergo, the current thoughts on artificial intelligence are rational, and there would be significant cause for concern if no one in the field were debating these issues. Thus, especially since so little is currently known about the operation of an artificial intelligence, these concerns must be accounted for. In other words, the issues posed both by experts and by the media are probably hyperbolic, but this exaggeration is necessary to avoid complacency regarding the design of artificial intelligence. If done correctly, artificial intelligence could become our greatest asset, allowing the exponential research of new technologies and massive breakthroughs in medicine. On the opposite end of this spectrum lie apocalyptic scenarios arising from a failure to design artificial intelligence carefully. With the amount of concern that field experts are currently showing, we seem, at this point in time, to be set firmly on the former path.

WORKS CITED

1.) Barrat, James. Our Final Invention: Artificial Intelligence and the End of the Human Era. St. Martin’s Griffin, 2015. Print.

2.) Bostrom, Nick. “How Long Before Superintelligence?” Linguistic and Philosophical Investigations, Vol. 5, No. 1, 2006, pp. 11-30. Web. 16 April 2016.

3.) Bostrom, Nick and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Cambridge Handbook of Artificial Intelligence. Cambridge University Press, 2011. Web. 16 April 2016.

4.) Cellan-Jones, Rory. “Stephen Hawking warns artificial intelligence could end mankind.” BBC.com. 2 December 2014. Web. 16 April 2016.

5.) Good, I.J. “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers, Vol. 6, Ed. Franz L. Alt and Morris Rubinoff. New York: Academic Press, 1965. 31-88. Print.

6.) Kaplan, Jerry. Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. London: Yale UP, 2015. Print.

7.) Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. New York: Viking, 2005. Print.

8.) Lin, Patrick. “Stanford Expert Says AI Probably Won’t Kill Us All.” Forbes.com. 4 August 2015. Web. 16 April 2016.

9.) Luger, George F. Artificial Intelligence: Structures and Strategies for Complex Problem Solving. New York: Addison-Wesley, 2002. 355. Print.

10.) McFarland, Matt. “Elon Musk: ‘With artificial intelligence we are summoning the demon.’” WashingtonPost.com. 24 October 2014. Web. 16 April 2016.

11.) Mlot, Stephanie. “Google’s AI Beats Complex Game, ‘Go.’” PCMag.com. 28 January 2016. Web. 16 April 2016.

12.) Moscaritolo, Angela. “IBM’s Watson Can Think in Japanese, Help Control Diabetes.” PCMag.com. 7 January 2016. Web. 16 April 2016.

13.) Oppy, Graham and David Dowe. “The Turing Test.” The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), Ed. Edward N. Zalta. plato.stanford.edu. 8 February 2016. Web. 16 April 2016.

14.) Rawlinson, Kevin. “Microsoft’s Bill Gates insists AI is a threat.” BBC.com. 29 January 2015. Web. 16 April 2016.

15.) Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” 31 August 2006. Web. 1 April 2016. https://intelligence.org/files/AIPosNegFactor.pdf.

