The Christian Identity Movement: Why Are They Not Being Racially Profiled?

Although the current war on terror, as exemplified by the US invasion and occupation of Iraq, was spurred by the radical acts of extreme religious fundamentalists, it is clearly unfair to single out Islam as the only religion that inspires such perverse devotion. The group known as Christian Identity is at least as passionate about its particular religious beliefs as any of the hijackers on those planes that took off on the morning of September 11, 2001. The Christian Identity movement traces back to British Israelism, which holds that white Christians are the inheritors of the lost tribes of Israel and are to be treated as God’s chosen people. In America, followers of the movement believe that the US is the promised land written of in Biblical texts. A key precept of their belief is that because they are God’s chosen people, they are racially superior and therefore must remain pure and separate from what are viewed as inferior races.

The similarities between the religious motivations of Muslims who commit terrorist acts and the religious motivations of members of Christian Identity cannot help but raise concerns that the future holds the prospect of extremist actions by members of that Christian sect. It is perhaps unfair to compare something as all-encompassing as “Islamic terrorism” to one single Christian group, but there do exist certain general beliefs that can be attributed to all the myriad organizations covered by the term “Islamic terrorists.” The growth of radical Islamic fundamentalism can be traced back to the westernizing of Muslim culture early in the twentieth century. British and French domination of the region, followed quickly by American political intervention, came to represent the rejection of the traditional religious morality and ethics that had marked Islamic societies since the Crusades.

Political domination and oppression create a division between the powerful elite and cultural traditions, in effect giving rise to the idea of the “Islamic solution” as a necessity for solving the problem. This is the very essence of religious fundamentalism, and it can be directly compared to many of the fundamentalist beliefs of those within Christian Identity. These people essentially hold the view that society has drifted over the centuries from the true spirit of the Bible, creating a new morality based on secular humanist precepts. The Christian Identity members, therefore, are really doing nothing fundamentally different from their radical Islamic brethren; both view the elitist secular western world as corrupting their vision of God’s will.

Another common thread that serves to unite the radical Islamic movement with the Christian Identity movement is a deep distrust of and hatred expressed toward Jews, but the motivations for that hatred are significantly different. The members of Christian Identity chose that particular word to separate themselves from generic Christianity because they possess a deeply held belief that they have been identified as the true descendants of Adam and Eve and must therefore distinguish themselves from the Jewish race. In fact, they go so far as to demonize the Jews by declaring them to be the spawn of Satan. Islamic terrorists, on the other hand, show hostility to the Jews primarily as a result of political differences rather than such racially motivated ones. It was the establishment of the Israeli state in 1948 and its close relationship with the US that drew the ire of most Islamic radicals.

Although both radical Muslims and the members of Christian Identity share certain views and approach their worldview from a similar perspective, namely that the secularization of society has contributed to the dilution of moral purity in the world, it is highly unlikely that these common bonds would ever result in an alliance. The American version of the Christian Identity movement is militantly nationalist, viewing the US as the promised land. Because Identity members place such an inestimable value on racial purity, and because Muslims come from non-Aryan lands, Muslims stand in direct opposition to any claim to be descendants of Adam. Anyone who is not of racial purity is inclined to be looked upon with extreme suspicion. Likewise, the radical Islamic fundamentalists commonly refer to America as the Great Satan.

While it is clear that many in the Christian Identity movement may very well hold the opinion that the policies of America’s government are in keeping with Satanic virtues over Godly virtues, they also persist in an effort to separate themselves from those policies; indeed, from the government of America in general. Radical Muslims differ in this view. The very fact that they are willing to target civilians and innocent citizens indicates that they hold all Americans culpable for the actions of their leaders. The Christian Identity members likewise view the actions of their leaders with suspicion, but firmly hold onto the notion that America is the promised land. For the Christian Identity members, America still holds out the promise of redemption and salvation; it will merely require a substantial reorganization based on segregation and deportation of the impure elements. Where the radical Muslims target American civilians for their leaders’ policies of supporting Israel over the Arab countries in the Middle East, Christian Identity members have thus far refrained from violence to protest that same subject of concern.

Just as it is highly unlikely that Islamic radicals and Christian Identity members would ever form an alliance to meet their oppositional ends, it is highly likely that each would reach out at least tentatively to other anti-American groups as a method of achieving its ends. The Identity members, as stated, view America as the promised land and so would not be likely to make an alliance with groups that have already taken actions to harm this country. Certain other Christian fundamentalist groups that look upon America as promoting the Jewish agenda, but who aren’t necessarily interested in harming American civilians, would certainly be considered potential allies should that alliance fill their coffers with much-needed supplies and funding. Likewise, Islamic fundamentalists would obviously not be averse to forming alliances with other anti-American organizations and groups.

Religious fundamentalism is a growing problem in America, but contrary to the popular view, it is not necessarily the Islamic variety that may potentially pose the biggest threat. The two most fatal acts of terrorism this country has ever experienced were both acts of religious fanaticism. Unfortunately, the bombing of the Murrah Federal Building in Oklahoma City is almost never referred to in that way and certainly didn’t result in the political leadership of this country seizing the moment to establish a climate of fear for their own political gain.

King Charles I, Gentrification, and Civil War in England

It would take a rare sovereign to make King James I look good by comparison, yet somehow King Charles I managed to accomplish this miracle. His utter contempt and disregard for Parliament surely must be considered right up there with Pres. George W. Bush’s utter contempt and disregard for the Democratic members of Congress in terms of tragic political miscalculations. Having dissolved Parliament, Charles I was forced to pay for his government through forms of taxation that many considered illegal.

The centerpiece of Charles’ badly mismanaged system of ruling probably lies in his insistence upon using the legal system to enforce his religious policies. Charles I succeeded only in fomenting dissent by abusing the power of the Star Chamber to crack down on his enemies. This eventually led to what may be Charles’ greatest mistake: attempting to arrest leaders of the Commons for plotting treason. Parliament was openly resistant to his abuse of power and the citizenry became more and more hostile.

Charles’ overstepping and abuse of power set the stage for a growing division among the people. While the reign of Charles I had been successful in the eyes of many because it oversaw a period of peace and, of course, there still existed the natural inclination among some to support the monarch no matter what (again, shades of the Bush Presidency, no?), there was also a growing dissatisfaction combined with strong feelings that religious reform was being systematically denied and destroyed. The stage had been set for strong feelings on both sides, and Charles’ ineffectiveness in dealing with the growing problems seemed to be all that was really needed for the two opposing sides to meet in battle.

The greatest political mistake of Charles I was probably not any particular policy or decision, but rather an intellectual incapacity for understanding the intricacies of rule. He was a believer in absolute rule and seemed disinclined to accept that compromise was even necessary. (Okay, now the comparison with George W. Bush is just getting downright eerie.)

The Stuart era had ushered in the gentrification of England, and this social change played a great part in setting up the tensions that led to the English civil war. The English gentry participated in commercial affairs that had a direct effect upon the governance of the country. Their influence in local affairs came to be mirrored in their influence within the House of Commons. Gradually the gentry and the Commons came to view their growing influence with ambition, and an ambitious Commons was seen as an enormous threat by the monarchy.

Economic interest tends to create a sense of shared purpose far more than political interest and the growing wealth of the merchant class brought them together and made them a significant force that had to be reckoned with. They rightly viewed their economic contributions to the growth of the country’s prosperity as something to be validated by recognition.

At the same time, much of this merchant class came to view the Puritan movement with some sense of sympathy. Just as the gentry felt that their voice was being stifled, so they viewed the Puritan movement to reform the church and push it even further away from its Catholic foundations as a kindred cause.

The simmering tensions between the people who were supplying the wealth that the later rulers were squandering and the religious freedom that the later rulers were eradicating can be viewed as small parts of a larger whole. The larger whole in this case was a growing discontent with the over-extensions of power adopted by the post-Elizabethan monarchs. Since the gentry made up such a large part of the House of Commons, any attempt to take the power away from Parliament was viewed with suspicion.

The acquisition of more power by the gentry, whose claims to power had been established during the Tudor era, can be viewed as a prime cause behind the events that led to the civil war.

Cardinal Wolsey and King Henry VIII

Cardinal Thomas Wolsey plays a tremendously large part in British history perhaps due more to King Henry VIII’s disinterest in politics than to any inherent genius or even ambition of his own. Cardinal Wolsey was lucky enough to find himself in the service of a king who was willing to hand over the administrative details of his government to someone other than himself.

The lasting legacy of the reign of Henry VIII—other than providing fodder for a never-ending series of movies about his love life—is, of course, his desire to get a divorce and the refusal of the Pope, resulting in the split with the Catholic Church. Cardinal Thomas Wolsey was quite familiar with the inner workings of the Church, of course, having been both a Bishop and Archbishop. Eventually, Thomas Wolsey’s influence with Henry VIII landed him the enviable post of Cardinal and then he even managed to secure special powers from Rome.

These powers effectively gave him control over every aspect of the English church. With his power, Cardinal Wolsey oversaw all legal matters that related in any way to the Church, from marriages to church appointments. Cardinal Wolsey’s direction for the church was one of extravagant expenditure. He was as arrogant as Donald Rumsfeld or Dick Cheney in displaying the wealth that his position had brought him and as a result made many enemies on all sides. It was this arrogance and display of wealth in the face of overwhelming poverty among so many that led to the spread of anti-church sentiment throughout much of the English countryside (which my lady loves, by the way).

It was Cardinal Wolsey’s intense Karl Rove-like devotion and loyalty to Henry VIII, born doubtlessly of the knowledge that unless he remained loyal he would lose all his wealth, that probably afforded him little choice but to acquiesce when King Henry VIII asked him to secure a divorce. Thomas Wolsey, as a powerful prelate of the Catholic Church, probably would have preferred privately that King Henry give up this idea, especially upon finding out that it just wasn’t destined to happen, at least not in the way Henry VIII hoped.

Cardinal Wolsey found himself caught between two sets of enemies: those who felt he was being disloyal to the Church and those who felt he was the personification of all that was wrong with the Catholic Church. Cardinal Wolsey’s alienation of people on both sides effectively eradicated any chance he possibly had of securing the divorce Henry VIII so eagerly sought, and doubtlessly his failure was a primary contributor to the eventual breakdown between Henry VIII and the Pope, leading to the split with Rome and the establishment of the Anglican Church.

Of course, the establishment of the Anglican Church in no way ended the country’s religion problem, but that’s a story for another time.

Casablanca: Film Noir or Not-Noir?

Interestingly, almost no list of classic film noir includes the Bogart classic Casablanca despite the fact that it contains nearly every single element that a film noir should contain. Although no two people are likely to agree on exactly what elements are necessary to describe a film as noir, most fans of the genre would probably find little to argue about in this assessment from Tim Dirks’ web site: “The primary moods of classic film noir were melancholy, alienation, bleakness, disillusionment, disenchantment, pessimism, ambiguity, moral corruption, evil, guilt, desperation and paranoia”. A quick overview of Casablanca reveals that of those elements the film is lacking…only one. Since Casablanca is universally regarded as a film classic, does the fact that it is almost never mentioned as a film noir classic mean that it is not actually a film noir?

Film noir is most notable for its cinematic look: black and white cinematography, heightened use of shadows and darkness, and distorted, Expressionist staging. Clearly, Casablanca was filmed in black and white, and clearly it makes effective use of shadows, especially in the key scenes that take place in Rick’s café after hours. However, one would be hard-pressed to describe the film as Expressionistic. It is a deeply realistic film, in fact, directed in an almost pedestrian manner with little in the way of unusual angles or camera movement. Since the distorted look of most film noir is intended as a manifestation of the distortion of values and morals that drive the characters in the story, perhaps this is why Casablanca is rarely regarded as film noir.

That the visual style of Casablanca does not serve to heighten the moral ambiguity should not be confused with the idea that the film doesn’t contain elements of ambiguous morality, however. Rick Blaine begins the film as one of the most disillusioned and apparently amoral leading characters to appear in a major Hollywood film since the introduction of the Hays Code. The character of Renault is generally regarded as even more amoral and disillusioned. The story revolves around a woman who is the cause of Rick’s disillusionment, and although nobody would seriously argue that Ilsa should be considered a femme fatale, she possesses enough darkness within her to qualify at least as a female lead with a profound sense of disillusionment and not just a little confusion regarding her morals. And, of course, quite clearly most of the other non-Nazi characters exhibit signs of moral ambiguity. The Nazis, naturally, represent the evil in the universe. So, then, is Casablanca not to be considered a film noir merely because its director chose to shoot scenes without a tilted camera? No, the real reason that Casablanca presents a problem to the lover of film noir goes back to that missing element from Tim Dirks’ description.

Beneath the tragic romantic triangle that drives Casablanca is the story of a neutral isolationist, a man who proudly asserts his unwillingness to stick his neck out for anyone but himself. Today it may come as a surprise to many viewers of the movie that America was essentially neutral toward the war going on in Europe even as late as 1941. Rick Blaine is the personification of that America: a neutral observer who comes to realize that he can no longer afford to watch out just for himself. The story is pure propaganda, as it places him in the position of choosing between his own selfish needs and the noble cause of fighting for the betterment of the world. He must join the cause or else face the prospect of not having another Casablanca to run to and hide in. In retrospect, Casablanca is nothing short of a miracle in the way it brilliantly escapes turning into a heavy-handed message movie, and that may be part of the reason it is typically excluded from the film noir section down at the local movie store.

It is almost impossible to imagine any film noir ever made coming so perilously close to being turned into a message movie as Casablanca. The message at the end of most noir films is that things can’t change; it is decidedly pessimistic. And it is pessimism that is the only element mentioned by Dirks that can be said to be missing from Casablanca. Despite the moral ambiguity of Rick Blaine at the beginning, and despite the presence of real evil in the form of the Nazis all around him, and despite the presence of all those other elements (guilt, alienation, paranoia), the ending of Casablanca holds out hope. It is not just that it is a “happy ending,” because other examples of film noir end with the bad guys dying and the good guys winning out, and they don’t face the same resistance to categorization as Casablanca. The problem rather seems to be that even when Rick is at his most disillusioned and alienated and the film is at its bleakest, in the scene when Rick bemoans the fact that his was the one gin joint in the world that Ilsa walks into, there is still no pessimism to be found. The Nazis are the bad guys, and what’s worse, they are real bad guys doing real bad things even as the movie was being made. Yet there is no sense of desperate fear that the Nazis could win; certainly not the kind of pessimistic fear that the Commies could win or that the McCarthyite fascists in America could win that pervades later film noir. It is quite interesting, in fact, that later film noir shows greater pessimism that fascism could win in America than that America might fail to defeat the fascist threats from overseas.

Ultimately, it remains unclear why Casablanca is not regarded as film noir. It contains at least as many defining elements of film noir as Sweet Smell of Success, while excluding at least as many elements of film noir as Touch of Evil. Of course, one person’s film noir is another person’s gangster movie or thriller, so the debate is unlikely to be settled.

America’s Federal Reserve System: How It Works and What It Does

If you’ve ever watched the bozos at CNBC, then you’ve no doubt heard multiple references to the Federal Reserve System. But do you really know what the Federal Reserve System is and why it exists and how it affects your life? The Federal Reserve System was created by the US Congress under the Federal Reserve Act in 1913 in order to establish a more solid foundation for the economic policies of the country that would potentially forestall the kinds of monetary crises that had afflicted America during the preceding decades. The concept behind the creation of the Federal Reserve was to establish an agency that would set and maintain monetary policy in the US by influencing monetary conditions.

Responsibility for establishing monetary policy is in the hands of the Federal Open Market Committee (FOMC). The FOMC is made up of the seven Federal Reserve Board governors along with five of the twelve Federal Reserve Bank presidents. It is the job of the FOMC to meet throughout the year (usually no more than eight times) to analyze the current state of the US economy and prepare monetary policies in response.

Although a variety of methods for controlling monetary policy is available, the Federal Reserve typically engages in open-market operations. Should the Fed’s analysis of economic conditions lead it to conclude that the money supply should be increased, it would then move to purchase US Treasury securities from the public. Just as the purchase of securities by an individual would raise his assets accordingly, so it is for the Federal Reserve. For instance, if the Fed bought a billion dollars worth of Treasuries, its assets would rise by a billion. To pay for this purchase, the Fed simply issues a check on the Federal Reserve itself, and the result is a billion dollar increase in the money supply of the country. More money in the nation’s supply leads to lower interest rates, and lower interest rates generally result in an increase in loans and borrowing, as well as more money being spent. Since consumption is a primary component in determining the Gross Domestic Product (GDP), the GDP also tends to increase as a result of this move. Although increasing the money supply is directed toward spurring growth in the economy through increasing employment and wages, this can’t be sustained over the long term due to inflationary pressures.
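The balance-sheet arithmetic of an open-market purchase can be sketched as a toy calculation. Everything below is a hedged illustration: the figures are invented, and the simple money multiplier (money supply = monetary base ÷ reserve ratio) is a textbook simplification, not how the Fed actually models the economy.

```python
# Toy illustration of an open-market purchase.
# Assumption: invented figures and the textbook money multiplier
# M = base / reserve_ratio; real-world multipliers are messier.

def money_supply(monetary_base: float, reserve_ratio: float) -> float:
    """Money supply under the simple money-multiplier model."""
    return monetary_base / reserve_ratio

base = 800e9          # hypothetical monetary base, in dollars
reserve_ratio = 0.10  # hypothetical 10% reserve requirement

before = money_supply(base, reserve_ratio)

# The Fed buys $1 billion of Treasury securities from the public,
# paying with a check drawn on itself: the monetary base rises by $1B.
base += 1e9
after = money_supply(base, reserve_ratio)

print(f"Money supply before: ${before:,.0f}")
print(f"Money supply after:  ${after:,.0f}")
print(f"Increase:            ${after - before:,.0f}")
```

Under these assumptions, a $1 billion purchase expands the money supply by roughly $10 billion, which is why a modest open-market operation can move interest rates across the whole economy.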

Alternatively, if the Federal Reserve felt that a reduction in the country’s money supply was necessary, it would reverse the operation and sell a billion dollars worth of Treasury securities to the public. The sold securities would result in a net loss of one billion in assets, but the checks received in exchange would be used to eliminate an equal amount in deposits at banking institutions. Therefore, the monetary base of the country would be reduced by one billion dollars. This reduction in the money supply causes interest rates to rise and borrowing levels to drop. The result here is the opposite of increasing the money supply: inflationary pressures are more easily contained, but at the cost of potentially slowing the economy and increasing unemployment rates. Even the mere appearance that the Fed may take this route can lead to fears of encroaching inflation among workers, who will seek to offset the effects by requesting a wage hike. An increase in wages can then in turn create the inflation that workers feared in the first place.

Although open-market operations are the primary method the Fed uses to control the money supply, they are not the only way; the Fed also utilizes changes in reserve requirements and discount window lending. The Federal Reserve sets a minimum fraction of each type of deposit that banks must hold as reserves, and changes in these minimums affect the reserve-deposit ratio. Any increase in reserve requirements increases the reserve-deposit ratio, which in turn reduces the money multiplier. This results in a smaller money supply for a given monetary base. Since changing reserve requirements affects banks’ long-term planning, this method is used only as a last resort.
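To see why even a small change in reserve requirements is such a blunt instrument, consider the same simple multiplier model (again a hedged, textbook sketch with invented numbers; real multipliers also depend on currency holdings and banks’ excess reserves):

```python
# Hedged sketch: effect of the reserve requirement on the money multiplier.
# Uses the simple textbook multiplier 1/r, where r is the reserve ratio.

def money_multiplier(reserve_ratio: float) -> float:
    """Simple money multiplier: each dollar of base supports 1/r of deposits."""
    return 1.0 / reserve_ratio

base = 800e9  # hypothetical monetary base, in dollars

for r in (0.08, 0.10, 0.12):
    m = money_multiplier(r)
    supply = base * m
    print(f"reserve ratio {r:.0%}: multiplier {m:5.2f}, "
          f"money supply ${supply / 1e12:.2f} trillion")
```

In this model, raising the requirement from 10% to 12% would shrink the money supply from about $8 trillion to about $6.67 trillion with no change in the monetary base at all, which illustrates why the Fed treats this lever as a last resort.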

The rate that the Fed charges banks for short-term loans is called the discount rate, and the process of the Fed lending reserves to banks is called discount window lending. If borrowing from the discount window increases, the monetary base will also rise; any decrease in discount window borrowing creates a reduction in the monetary base.

The discount rate is the basis on which real interest rates are predicated, and real interest rates are manipulated by the Fed to influence economic decisions. The level at which long-term interest rates are set will influence long-term decisions by businesses regarding such things as expansion and investment. It will also affect both the desire and the ability of individuals to obtain long-term housing loans. When interest rates are lower, families typically invest more in durable goods because of the availability of more money. Lower real interest rates also have the effect, when other aspects of the economy are going well, of increasing the willingness of banks to extend loans. Consumers who have gotten a loan may therefore have more money to spend, which strengthens the economy.

Since the Federal Reserve clearly cannot control both unemployment and inflation at the same time, the problem is how best to find a middle ground in which the economy can continue strong growth without giving in to inflationary pressures. None of the policies available to the Federal Reserve can do this, nor is there any guarantee that any specific move by the Fed will result in the desired effect. The problem stems partly from the fact that there is always a lag between the decision the Fed makes and its effect on the economy, and partly from the fact that there are so many macroeconomic factors at play. Economic conditions can turn on a dime (witness the unexpected and devastating effects of Hurricane Katrina), and there is no equitable method for the Federal Reserve to react to such factors as quickly. The Fed, therefore, must consider policy based only on what is known, what has occurred historically, and algorithmic models and potential possibilities. The best method of keeping inflation in check with acceptable unemployment and overall economic growth is a topic of much debate. Many economic theories have posited advantages and disadvantages to current monetary policies that are in complete opposition to each other. Obviously employment should be a high priority in a consumer-driven economy such as America’s, so it remains all-important to make decisions that will spur new jobs and wage growth.

There is no single policy that can ensure growth while keeping inflation at bay, but since inflation is often the result of consumer expectation, probably the best way to influence the possibility of inflationary pressure is to shape a policy in which monetary movement from the Fed is telegraphed with a stronger sense of credibility. The best way to achieve this would be to create standards for inflation targets. All too often the private sector responds to fears that the Fed is tightening the money supply to stave off inflation and reacts with higher wages, creating a self-fulfilling prophecy of sorts.

The Magna Carta and the Jury System

Trial by jury is so ingrained in the consciousness of Americans today that a jurisprudence system based on anything else is virtually inconceivable. At the same time, however, most Americans’ knowledge of the system of judgment based on a collection of twelve of one’s peers is limited to what they glean from watching one of the sixty-four reruns of Law & Order that air every day. The jury system has undergone various transformations over the centuries, but for the most part it can be traced back to one single day in England in the year 1215. Although it may border on the incomprehensible, the fact remains that the concept of twelve strangers deciding the fate of another stranger dates back to a document created at the height of the power of the British monarchy.

On June 15, 1215, rebellious barons forced King John of England to affix his seal to a list of freedoms and liberties. However, these rights were never intended to be enjoyed by the masses as some sort of democratic ideal; the charter agreed to under pressure by King John was a feudal document and therefore was only meant to provide protection to a small and exclusive group at the top of the feudal system. Despite this limited targeting of rights, it is the Magna Carta from which the term “a jury of one’s peers” springs, and not the US Constitution or the Declaration of Independence. In practice, this was done to reduce the powers of the king and restrict him from creating laws at his whim. In addition, in local cases where laws were broken, the result typically tended to be violent resolution, and the clauses in the Magna Carta were designed to provide a means whereby disputes could be solved equitably.

Despite this movement forward, the development of the jury as it is known today was a long, slow process, during which time the members of the jury served more as a body of expert witnesses who could bring some intellectual insight to the case at hand than as arbiters of justice. In fact, it would not be until the 1600s that the jury came to resemble anything like what it is today. Over time the theoretical idea of a jury made up of one’s peers moved out of that realm and into the real.

Many complaints are made about how a jury of one’s peers rarely translates into exactly that even today. Still, the average jury composition today is far preferable to those of even the relatively recent past. Consider the most famous movie ever made about jury deliberations. The title 12 Angry Men doesn’t really do it justice; it should be called 12 Angry White Men. Sadly, the utter lack of diversity on that cinematic jury is very true to life; for most of America’s history a jury of one’s peers was only truly apt if you were a white male. The result, of course, is a sad history of innocent African-Americans being convicted of crimes they didn’t commit. The passage of the Jury Selection and Service Act in 1968 went a long way toward reforming the process, mandating that the pool of jury names be drawn from registered voters and that names be randomly selected. Since the passage of that act, individual states have moved to enact various strategies to ensure a more equitable formation that comes closer to reflecting the diversity of the country at large.

Members of the jury come from all walks of life, although the chances of a person having their fate deliberated by someone famous are slim to nil. At the same time, however, a poor person can find his fate in the hands of a millionaire just as easily as a millionaire can find his fate being put into the hands of a minimum wage worker. American citizens can only serve on juries in those states in which they are official residents. Other disqualifications include not being a US citizen and not being able to understand the English language, as well as certain physical disabilities such as blindness and deafness. Obviously, those under the age of eighteen are not allowed to serve on a jury, but elderly people are also typically allowed to be exempt. Those with pressing occupational excuses typically tend to receive exemptions as well. For instance, doctors and nurses rarely serve on juries lasting more than a day. Clergymen also usually receive exemptions.

The function of the jury in the American judicial system is to attend the trial and deliberate on the evidence in order to provide a unanimous verdict. The point of a jury system is that it is supposed to provide a fair and unbiased examination of the evidence, taking away the power of a judge who may be prejudicial in his decisions. In addition, many view the use of the jury as an important defense against unchecked power in the hands of the state. When the state controls the delivery and judgment of evidence, the possibility for abuse increases exponentially. Because the jury has such power, a key component of the system is for lawyers to try to find jurors who they believe will be sympathetic to their side. Since so many trials are based on witnesses and evidence that provide no solid basis for determining guilt or innocence, a fair amount of that determination is based on juror psychology. And that is why many potential jurors are dismissed with peremptory challenges in the voir dire process. During voir dire, lawyers can dismiss potential jurors for a variety of reasons, but they also have the power to dismiss a potential juror for no reason at all. While this sounds like a magnificent abuse of power, it comes with one very important caveat: the number of peremptory challenges varies from state to state, but typically attorneys can use no more than six.

Perhaps the single most important aspect of a jury verdict is that it must be unanimous, regardless of what the verdict is. In other words, to find a person guilty all twelve members of the jury must agree on the defendant’s guilt. Likewise, before a defendant is completely freed of all liability for the charges against him, he must be found not guilty by all twelve members. Should a unanimous decision not be reached, the term used to describe that circumstance is a hung jury. A hung jury does not mean that the defendant is either guilty or innocent; the decision on whether or not to try the person again falls into the hands of the district attorney.

The very concept of trial by jury has itself been put on trial many times throughout the history of the US, often wending its way through the system all the way to the Supreme Court. One of the earliest Supreme Court cases involving trial juries, Strauder v. West Virginia, dealt with the subject of racial exclusion from juries. Duncan v. Louisiana was a landmark case involving trial by jury, deciding that the Sixth and Fourteenth Amendments to the Constitution do guarantee the right to a jury trial in state prosecutions for crimes carrying sentences as long as two years.

The right to a trial by jury is one of the Constitutional foundations of American law. So important was the concept of a trial overseen by a jury of one’s peers that it was included among the Bill of Rights. The Sixth Amendment is a codification of the rights to be granted to individuals in cases of criminal prosecution in federal courts. With the ratification of the Fourteenth Amendment, these rights were made applicable at the state level.

The trial by jury is a bedrock foundation of jurisprudence in the US. The founding fathers were heavily influenced by the Magna Carta in the development of the constitutional law of the land and viewed their own revolutionary break with King George III as analogous in some ways to the revolutionary spirit that inspired the Magna Carta in the first place. It is incredibly difficult to imagine the American legal system operating under any other method.

Despite this fundamental commitment to the jury system in America, however, much of the rest of the world has not been so quick to adopt it. Even in the birthplace of the trial jury, things are not done quite as they are in America. For instance, though England and America share the twelve-person jury, England has recently made a change that allows for the possibility of a non-unanimous verdict: at the discretion of the judge, if deliberations go past a certain point he can instruct the jury that a verdict may be returned on a 10-2 vote. In other countries there are stipulations as to which kinds of crimes are tried by jury. It may come as a surprise to many Americans to find that people of other nations actually frown upon putting their fate into the hands of their peers. To some, the very idea of letting an uneducated stranger control their destiny is absolute anathema.

The jury system in America is constantly evolving. Whether as the result of a lawsuit reaching the Supreme Court or of public outcry for reform, the best adjective to describe the American jury system is fluid. The system began as a male-only institution and has changed significantly over the years. No jury is ever really a true representation of one’s peers, and that is probably for the best; it would hardly be fair if rich white men were tried only by other rich white men. In recent years, however, many high-profile cases have led to suspicion that the system isn’t as near to perfect as most Americans would like to believe.

Although the US court system has a vicious history of jury nullification, in which all-white juries convicted blacks of crimes they not only were innocent of committing but which the jury members knew they were innocent of committing, it was several infamous cases during the 1990s that brought renewed scrutiny to the very fairness of the jury system in America today.

The stunning acquittal of the defendants in the Rodney King case—a case in which video of the crime seen by all of America instantly branded the defendants guilty in the minds of most—and the shockingly quick and, to many, bewildering acquittal of O.J. Simpson both led to an outcry for a re-examination of the jury selection process. At issue were two questions directly related to jury fairness: how can there be any pretension toward finding twelve jurors not already predisposed toward a verdict in this age of immediate media coverage, and should trials of infamous and well-covered crimes necessarily be held in the jurisdiction in which the crimes took place?

Another issue at the forefront of the jury system in America is the idea of jury unanimity. Many high-profile trials that ended in hung juries were discovered to have been nearly unanimous, with only one or two holdouts. Subsequently there have been some efforts to follow England’s move toward allowing non-unanimous verdicts. At present the concept has no real legislative pressure behind it and typically becomes an issue only when a defendant generally found guilty in the public’s eye receives the benefit of a hung jury.

The American trial by jury system has been in place since the country’s inception, and its precursors date back for centuries. Whatever flaws one may find in it, and surely it is not a perfect system, the alternatives seem far worse. Most Americans would not be willing to give up the system as it is, though certainly nearly everyone has an idea of how to better it. Imperfect as it may be, clearly no American is ready to give serious thought to scrapping it and starting over from scratch.

Emily: The Greatest Bronte of All

Emily Bronte is arguably the most famous of the Bronte sisters; certainly only Charlotte gives her any serious competition. Her fame, like Charlotte’s, rests primarily upon a single work of fiction, her gothic novel Wuthering Heights. But why should that novel elevate Emily to more fame than Charlotte when her sister’s own novel Jane Eyre has retained its popularity alongside Emily’s book? Interestingly, the key to Emily’s ascension to near-unanimous acclaim as the premier woman of letters in her family may be found in the keen observation made by Charlotte that Emily had in Wuthering Heights created a novel that was both disturbing and fascinating to the reader simultaneously.

This dichotomy in the novel may also be viewed as part of the reason that Emily Bronte the person—as opposed to the novelist—has come to have such a firm grasp on the consciousness of her readers. If the reader is fascinated by her novel because there is a certain unpleasant element contained within it, might there not also be that typical fascination the public has with the tragic artist who dies soon after creating her signature work? Emily Bronte died in 1848, just one year after the publication of Wuthering Heights. Her early death leaves readers with the forever unanswered question of what more she might have accomplished. Much in the way that contemporary readers of A Confederacy of Dunces cannot help but wonder what brilliance might have followed that novel had John Kennedy Toole not committed suicide before it was even published, or fans of James Dean cannot watch one of his movies without wondering what he might have done had he not died in a car crash, the readers of Wuthering Heights have had a shadow peering over their shoulder every time the book has been cracked open. Just as there is a significant unpleasantness to the character of Heathcliff alongside an unrelenting interest in him, so is there a natural conflict while reading Emily’s novel between a gripping interest in her story and a subdued, almost melancholy sense of loss.


Because so little about Emily Bronte is known, readers have naturally tried to find something autobiographical in her only novel and that disturbing quality that Charlotte mentioned has led many to extrapolate an equally disturbing interest in Emily. In fact, reading Wuthering Heights for many is perhaps not unlike listening to the music of Joy Division or Nirvana before the lead singers of those bands committed suicide; searching for clues about artists about whom fans may know little else.

But just how much validity is there to such an extrapolation? Can listeners of the music of songwriters who kill themselves really get a true glimpse into the psychological depths of a suicidal person? Obviously, in the case of Emily Bronte the analogy isn’t perfect; after all, she didn’t kill herself. On the other hand, a novel typically offers more opportunity for autobiographical elements to slip in, whether consciously or not, due to the nature of the medium. Clearly, many academics have looked to textual clues in Wuthering Heights for some sort of insight into Emily Bronte herself. Just as clearly, it is human nature to expect that artists reveal themselves through their work. Several critics assert that not just Emily’s masterpiece but all the novels of the Bronte sisters can be read as subconscious attempts to come to terms with the repression enforced upon the family by an authoritarian father. This reading of the novel also touches, once again, upon that disturbing quality first expressed by Charlotte Bronte. Indeed, perhaps it was Charlotte’s own repressed feelings toward her father that she found so peculiarly distressing and familiar in her sister’s novel.

An equally possible explanation for why both Emily Bronte and her novel have risen to a higher level of interest among readers than those of her sisters might lie in the fact that both are subjects of stark contrasts. Even accepting that subconscious autobiography naturally slips into every creative work, there is still no denying that nothing in her novel even remotely resembles the facts of Emily’s life. The circumstances surrounding Emily’s life bear no relation to the story she tells, yet clearly there must have been violent emotions at play within her. Equally so, the novel itself ranges unsettlingly between scenes of utter calm and scenes of almost frightening intensity.

Could it be this contrast that draws readers to Emily Bronte? Her sisters, of course, experienced the same outward circumstances as Emily, yet although there is an intensity to the character of Rochester in Charlotte’s Jane Eyre, it is for the most part an even-tempered book. Emily’s novel, on the other hand, is anything but even-tempered. Readers are often drawn to fiction for the opportunity to safely enter a world that isn’t safe and cozy, and the events that unfold in Wuthering Heights more than satisfy that desire. Combined with the possibility that reading her novel offers a glimpse into the soul of Emily Bronte herself, there may be little wonder that she is the Bronte sister who has most captivated readers throughout history.

And yet there may be one more reason for Emily’s enduring fame, one that once again harkens back to her sister’s discomfort with her novel. If it can be assumed that an element of the autobiographical is at work in the novel, that disturbing element may be located specifically within the character of Heathcliff. Although it is surely pure coincidence that Emily died so soon after the publication of her novel, it is also true that the average life expectancy for women in Emily’s day was atrociously short; in fact, the average life expectancy for a woman in England at the time Emily Bronte died was just over forty-three years. Early death and tragically short lives were the norm. Contrast this fact with Heathcliff’s death in Wuthering Heights, which seems less an end than a beginning. There is a decidedly disturbing element to the death of Heathcliff which may lead the reader to suspect that it is not meant to be taken as the tragic end of one story, but as the optimistic beginning of another—a story Emily chooses not to tell. Again, the contrast is between the characters in the novel and the author herself. If the shadow of what Emily Bronte was not able to accomplish due to her tragic death hangs over every reader of her novel, then it is also true that the shadow of what happens to Heathcliff and Cathy after death hangs over their tale. The suggestion is certainly that death is not meant to be seen in terms of absolute finality.

And perhaps that is why Emily Bronte continues to haunt the consciousness of readers more so than her sisters. The reason for Emily Bronte’s lasting fame may be that there is a sense that her story did not experience absolute finality with her own death.

Does America Need an Emergency Constitution?

Few would argue that the US Constitution is a perfect document; if it were, there would be no such thing as amendments to it. The United States began as a grand experiment in self-governance. The people would be in charge of electing their leaders and removing those leaders should they attempt to usurp powers not legally delegated to them. Even in peacetime, this experiment in government has been difficult enough, requiring the fending off of corruption, inherent flaws in the system, and grasps for power. The true test of the strength of America’s government has always been how it stands during times of crisis, especially when that crisis involves war.

The only real experience any of the framers of the Constitution had was with a monarchic form of government. While they were acquainted through history books with the idea of democratic self-rule as practiced in ancient Greece and Rome, that history may as well have been mythology. Nobody at the time of the drafting of the US Constitution knew for sure whether it could possibly work; in fact, it seemed highly dubious. The nation was young, relatively unarmed, and certainly ripe for invasion by a well-organized military from Europe that did not also have to worry about protecting its native soil. Fortunately, that particular scenario never arose during the first trembling years of the experiment.

History indicated that at some point a threat would almost certainly arise from somewhere and America would be called upon to defend itself. History also suggested that unless there was in place a strong central ruler whose word during wartime was near-absolute, internal chaos would become the greatest weapon available to any invading foe. America, it must be remembered, was not just an experiment in self-rule; it was also an experiment in rule by assembly. Although the King of England had to deal with Parliament, there was never any question that he was the sovereign head of state. The position of President was created to provide a chief executive and a commander-in-chief, but it was never intended to be endowed with the powers of a sovereign ruler. The framers of the Constitution purposely resolved not to give the President the powers of a king, not even during times of invasion.

The Constitution of the United States, the foundation of law upon which America rests, was written specifically to eliminate any route by which a single person or entity could gain so much power as to become anything even remotely like a monarch. The document delineates the separation of powers among the legislation of laws, the execution of laws, and the interpretation of laws. The idea was to create a system in which the problems inherent in the longstanding monarchic system could be avoided. Simply having a sovereign be subject to a legislative body was not enough; the legislative body and the sovereign had to be equal in power. In addition, the authors recognized that a third body needed to exist to maintain oversight over the other two. In the event of a stalemate between legislative intent and execution, a court system beholden to neither would be necessary to interpret the law.

As a theoretical construct the US Constitution is much closer to perfect than it has been in practice, and the reason comes down to the human factor. The three branches of government exist on two separate planes: the abstract and the real. As abstractions, it would be hard to come up with something better. As practiced in the real world, however, they are dependent upon the vagaries of human ambition and human error. Throughout American history all three branches of government have at times attempted to overextend the powers given them by the Constitution. Sometimes these extensions are merely the result of misinterpreting what is often viewed as an incredibly vague set of laws. Other times, of course, those extending their powers have done so with full knowledge and intent.

The problem of misinterpretation goes back to the very founding, perhaps even to the very drafting, of the Constitution. The vagueness of the document carries a twofold effect. In the first place, any foundational document of law must by nature be vague or else it would be unwieldy; no constitutional foundation of law could or should address every law with microscopic specificity. The purpose of a constitution is to constitute the idea of the law, the purpose of the law. The idea and purpose set forth in the US Constitution is that this country will be founded on democratic principles; that is, there shall be no attempt to restrict the rights of the citizens.

Interpretation of Constitutional powers became a subject of debate almost from the minute George Washington was sworn in as President, a debate personified in the opposing viewpoints of two legendary figures of the era, Alexander Hamilton and Thomas Jefferson. Hamilton, who doubted that the Constitution could ever be effectively realized in practice, felt that the problem was directly attributable to the document’s vague qualities; he rightly saw that anyone could read into the vague wording whatever they wanted and justify any action undertaken. Jefferson disagreed only with Hamilton’s conclusion. While recognizing that it was a vague document, Jefferson took the view that this vagueness enhanced its power, for it meant the document had to be adhered to with the utmost strictness of interpretation. Precisely because anyone could read into it whatever they wanted, it was important to interpret it only where it was specific. Jefferson realized that loose interpretation would strip the people of their rights and in that way destroy the ideals of America, and so he believed the Constitution should be read strictly. The problem, then, isn’t really one of vagueness but one of interpretation, and as long as interpretation is bound to upholding democratic ideals, the danger of vagueness becomes almost meaningless.

This foundation is the single most important element of the document known as the Constitution. Specific laws have been written into and out of the Constitution. Other laws have been written in by omission; slavery was deemed Constitutionally legal by virtue of its not being declared illegal. The amendments to the Constitution are designed to strengthen the ideas of freedom, liberty, and democracy. Yet this has not always been the case. The amendment commonly known as Prohibition, for instance, went directly against the idea and purpose of the Constitution. The idea of the Constitution is to protect democracy and extend freedom; what the Prohibition amendment attempted to do was the exact opposite. It was an attempt to restrict freedom and, in some ways, even an attempt to restrict democracy by writing a statute into the Constitution. The Constitution is not designed to be a list of laws; statutes are not its purpose. The wrong-headedness of the idea is revealed by the later amendment repealing that statute masquerading as a Constitutional law of the land.

Unfortunately, that lesson has not been learned. In recent years an outspoken minority has worked hard to interject another statute into the foundation of American law. These people wish to write a patently discriminatory statute into a document that exists only to achieve the opposite of their aims. An amendment barring gay marriage does not belong in the Constitution; it doesn’t even belong in a county book of statutes. Why, then, is there still the distinct possibility that it could wind up in the Constitution? Because the Constitution can be amended at the whim of lawmakers. Admittedly, the process requires that the whim be shared by a large majority of people, but 1930s Germany is proof enough that a large majority can be bent to the whims of a small minority without a gun being pointed in their faces.

Of course, one need not step into a time machine and visit 1930s Germany to see how easily the whims of a few can result in the passage of legislation detrimental to the very basis of the democratic principles contained within the Constitution. In America itself, legislation was once passed that prohibited dissenting voices from speaking out. This legislation led to people being arrested and detained without charge, all in the name of protecting the country from foreign invaders and from those within who might support foreign aims. The name of this legislation? No, not the Patriot Act, though one may certainly be forgiven for jumping to that conclusion. The Alien and Sedition Acts were the first attempt to draft security measures based on fear, but clearly not the last. They made it illegal to speak out against the government, which is considerably strange considering the government existed only on account of speaking out against a previous government. Even more disturbing, of course, is that this legislation directly contradicted the very first amendment to the constitutional document of law. The Alien and Sedition Acts were passed—supposedly, at any rate—as a response to the threat of war with France. Fear of how the US could respond to an attack from an overpowering opponent led to the drafting of legislation that may have been intended—may have—to keep the government from weakening under pressure from abroad.

Fear is probably the most primal of all human emotions. People may kill for love, but they will kill those they love if in fear for their own lives. People facing a threat to their security don’t respond intellectually; they respond with a knee-jerk reaction meant to provide a means of self-preservation. As a result, a time of fear is never a good time to pass a law. Following the September 11, 2001 terrorist attacks, the nation united as it had not in decades in the conviction that steps needed to be taken to protect American soil from invasion. This unity balanced on a shaky foundation built of fear, and if ever there was a time in American history not to pass any new legislation, it was the months following the attacks.

Alas, that was not the case. Americans across the board watched the devastation wrought by planes hijacked by terrorists who proudly proclaimed their intention to destroy the country, and responded with the knee-jerk reaction that politicians looking to expand their power so rarely get and so desperately hope for. The relaxation of intellectual engagement combined with the political realization that speaking out against the President at that moment was bound to be construed as unpatriotic to create one of the most dangerous moments in American history. Ironically, the danger was not from Islamic terrorists half a world away, but from America’s own democratically elected—and undemocratically unelected—leaders. Fear caused Americans to respond with a nearly unanimous yes when asked if they wanted laws put in place to protect them from such a tragedy ever occurring again.

The result was the worst piece of legislation designed out of fear since the Alien and Sedition Acts. The ill-named Patriot Act permits the federal government to expand the limits of both telephone and internet surveillance, loosens the laws against wiretapping citizens, allows a person’s library borrowings and video rentals to be requested without their permission, and makes it easier for federal agents to obtain previously confidential financial and medical records. One of the most egregious elements of its passage is that it was accomplished with almost no public hearings and an absolute minimum of open debate among the public at large. In fact, the Patriot Act passed with only one dissenting vote in the Senate, that of Sen. Russell Feingold. All Feingold asked was that the Patriot Act be debated openly, so that his fear that suspects with no ties to terrorism would have their civil liberties restricted in the name of enforcing security could be addressed. For his part in standing up for the democratic principles contained in the real patriot act—otherwise known as the Constitution—Feingold received death threats.

The Patriot Act was designed to protect Americans from terrorist threats from abroad, but it was quickly amended and strengthened with the definition of a new crime now known as “domestic terrorism.” Alexander Hamilton would consider the wording of the US Constitution ridiculously specific in comparison with the definition of “domestic terrorism,” which is “any action that endangers human life or is a violation of any federal or state law.” Sen. Russell Feingold’s fears appear to have been firmly grounded in reality. This intentionally vague wording has the potential for describing…well…pretty much any crime that is, was, or ever will be committed as terrorism.

There seems to be a genetic propensity among humans to want to hand over absolute power to a single individual during times of stress. America has experienced far greater levels of stress than it did on September 11, 2001. Pres. Abraham Lincoln faced accusations of overstepping his Constitutional powers during the Civil War. Pres. Franklin Delano Roosevelt likewise faced criticism for extending his Constitutional powers. The evidence appears quite solid that both Presidents did take it upon themselves to imbue their office with powers not outlined in the Constitution, but never for a moment was there any real fear that either would extend those powers to the point that the democracy would crumble. Bruce Ackerman writes that “defenders of freedom must consider a more hard-headed doctrine—one that allows short-term emergency measures but draws the line against permanent restrictions” as a response to security emergencies. The United States democracy and the civil liberties inherent in its foundational document survived a four-year civil war and an almost four-year attack by fascist armies intent on global domination without resorting to emergency measures that would co-opt those ideals, yet Ackerman and others apparently believe that fewer than two dozen hijackers warrant undoing the Constitutional protection of both liberties and securities.

There is no need for an emergency Constitution. That document has protected American civil liberties for over two hundred years, and has protected America from being co-opted by ambitious men who in many other countries would have had no trouble setting themselves up as potentates. The real strength of the United States Constitution is that while it has been amended, and its amendments have been amended, it has never had to be rewritten. The powers of the three branches of government are among the least vague elements of the Constitution. It doesn’t take tanks rolling through the streets and a military protecting the person in charge to turn a democracy into something far worse; all it takes is the willingness to lose faith in the foundation of its laws. Once a country’s citizens begin to suspect that their laws aren’t strong enough to protect them, history shows they are all too willing to rewrite those laws. And each rewrite becomes a little easier to approve. Until, finally, there is no law left to rewrite, and what remains isn’t the law itself but the gatekeepers of those laws. If Americans decide that their Constitution needs to be adapted to meet the specific needs of an emergency, the only question left will be what constitutes an emergency, and it doesn’t take a genius to figure out who will make that determination.

The Peasants’ Revolt of 1381

At the time of the Peasants’ Revolt, the King of England, indeed of any country, was viewed as ordained to leadership by God Himself; the monarch was in effect God’s appointed gatekeeper. This ideological acceptance of how things were supposed to be gradually resulted in a tightly structured societal system: God at the top, the structure spreading downward from Him, with the aristocracy near the top and the overwhelming mass of the populace at the bottom. Despite being the majority shareholders in this system, these people, the peasantry, had little to no say in the governance of their own affairs. (Does this whole system sound familiar?) This way of doing things had been in place for so long and had developed such a tradition that it came, as most societal structures eventually do, to be seen as natural, as part of God’s will for His people.

Just as the American colonists were spurred to revolution in part by taxation without representation, so can the Peasants’ Revolt be traced back to issues of taxation. And just as with so many tax-related revolts, the issue at hand was figuring out a way to finance a war without putting a drain on the incomes of the rich. The war with France was going about as well as that little mission we accomplished in Iraq, and mo’ money was needed to fund it; apparently, leaders still haven’t learned that the best way to fight a losing battle is not necessarily to throw more money into extending it. Parliament decided to raise money by imposing a poll tax of one shilling, a flat levy on every head regardless of means. The result was that those who stood to benefit least from the war would be the ones paying proportionally the most for it.

In addition to the taxes, the Black Plague also contributed to the revolt by way of what were seen as unfair wage-control laws. Rebellious activity took place across the country, and it was certainly not limited to peasants, despite the name given to the revolt; those who took part included members of the clergy as well as merchants. Eventually the success of the rebellion led to the executions of several members of the royal court and came within a hair’s breadth of reaching King Richard II himself. In fact, the rebels did meet with the King, and had they not believed his lies the history of England might well be different. (Ah, what a thing it is to consider how the history of the world might be different if leaders didn’t lie to their people.)

Instead, the rebels were placated by the words of the king just long enough for the nobles to regain the upper hand and launch a widespread attack on them, killing their leaders and effectively putting an end to any hope of achieving serious reform of the system.

Eventually many of the most ghastly components of the monarchical system disappeared, but for the most part the Peasants’ Revolt must be considered an abject failure: the status quo remained in place for a long time afterward, and the murderous attacks on those who initiated the revolt no doubt did much to curb any enthusiasm among the peasants to try anything like it again. Furthermore, the failure of the revolt probably cemented the idea that God really had ordained that this was how things should be.

Marxism, Racism, Sexism and the American Penal System

Applying Marxist theory to any social institution begins with locating the power center of that social institution. The most frustrating problem with the criminal justice system today for a Marxist is also the most overarching problem, and the easiest to describe: those who design and legislate the laws are the ones who stand to benefit the most from those laws. To deny that there is an ideological bias to punishment is to ignore the blatantly obvious; can anyone really deny a causal connection between the massive amounts of money big business contributes to politicians and the disparity in severity between robbing a convenience store and robbing a pension fund?

The corrections facilities may very well be the most explicit example of how ideology saturates the penal system. A central tenet of Marxism suggests that the dominant class has naturalized an unnatural perception of the severity of crime. White-collar crime that affects hundreds or thousands of people, but from which the criminal is detached by time and geography, is not punished in the same way as a blue-collar crime that may, in fact, harm fewer people and harm each of them less. Whereas a member of the ruling class may destroy the lives of thousands he has never seen and be sentenced to a “country club” facility, the poor person who steals that man’s car is sentenced to a violent, maximum-security prison.

If society can be viewed as an interdependent system of social institutions that all rely upon each other to maintain equilibrium, then clearly things are well out of balance. The number of prisoners in America continues to rise at a rate with which our facilities are unable to keep pace. As a result, appropriations are being diverted from other needs to maintain the prison system, since every politician, regardless of party, is disinclined to suggest raising taxes. The problem, of course, is that far too large a percentage of people in prison are there on drug-related charges. Even those prisoners for whom drug use was only tangentially related to the crime committed must still be viewed as symptoms of a much larger problem that is not being addressed. Perhaps if more rather than less money were earmarked for educational and rehabilitative programs (after all, jail time won’t solve the problem of addiction), the resulting shift in funding would produce a far more productive kind of disequilibrium. Unless the intent is to turn every prisoner into a lifer, the current system of penal correction offers no solution to the problem of recidivism, which in turn creates a revolving-door process by which the imbalance will only get worse.

The central problem of a capitalist society, when judged from a Marxist position, is the need to establish ideologically created opinions and beliefs that are then naturalized by citizens. Through literature, images, advertising and various other subtly coercive measures, the dominant ideology is inculcated into a value system that is protected mightily even when it does not serve one’s own interest.

Despite the true revolutionary ideals inherent in the founding of America, this naturalization of sexist and racist attitudes was mandated in the foundation of our legal system. Sexism and racism were institutionalized from the earliest beginnings of America, and over time those perverse beliefs became naturalized among citizens as not just the way things are, but the way things are supposed to be. Marxism’s greatest complaint about non-socialist societies is that they engender a class system based on ownership that is then perpetually recreated. From the beginning, women and non-whites were denied ownership; in the latter case, they were themselves the property to be owned. This ingrained devaluation of the worth of women and African-Americans is, for the most part, no longer sanctioned by law, but it is still a long distance away from being unsanctioned by the naturalization of ideology.