When modern observers think about World War II, Iceland hardly comes to mind. Iceland’s role in the war has not been studied as extensively as that of the major Allied and Axis powers, but that does not make it any less interesting, or any less bittersweet, to say the least. This raises a pressing question: which side of the war was Iceland allied with? The answer may be surprising. We explain below.

You can read the author’s previous World War Two related article on a love story between a Nazi SS guard and a Jewish prisoner at Auschwitz here.

A Walrus aircraft, involved in the 1940 British invasion of Iceland.


In 1874, after twenty years of nationalist fervor inspired by Romantic and nationalist movements in mainland Europe, Denmark granted Iceland a constitution and limited home rule. Before coming under foreign rule, Iceland had been independent, inhabited by people of Norse descent and governed by an assembly called the Althingi, which had created its own constitution. In 1262 a union was formed between Iceland and Norway, and when Norway and Denmark formed a union in the 14th century, Iceland passed under Danish rule. In 1918, the Act of Union was signed and Iceland became an autonomous nation united with Denmark under the same king, with Denmark handling Iceland’s foreign policy. Iceland remained a remote and little-known territory, with a barren and volcanic geography. By 1940, just over 120,000 people lived on the island, supporting themselves mainly through fishing and sheep farming and exporting their products to Europe.

 

Iceland when war broke out

When the war in Europe began in 1939, Denmark declared its neutrality, which in turn applied to Iceland. By then, the Third Reich had already taken an interest in Iceland: friendly soccer competitions, visits by gliders and an airplane in the summer of 1938, German anthropology teams arriving to survey the island, and U-boat calls at the capital, Reykjavik. Commercial trade between the two countries increased drastically. These relations did not go unnoticed. One German naval officer remarked, “Whoever has Iceland controls the entrances into and exits from the Atlantic.” London imposed stern export controls on Icelandic goods, preventing profitable shipments to Germany – in other words, a naval blockade. Then, on December 17, 1939, the decision was made in Berlin to occupy Denmark.

On April 9, 1940, Nazi Germany began the occupation of Denmark and the invasion of Norway. Denmark was swiftly overrun. As Germany gained control of the lengthy Norwegian coast, British planning shifted and Iceland grew more strategically important. For all practical purposes, Iceland was now completely independent. At this time, London offered assistance and an alliance, an offer Iceland declined, asserting its right to neutrality and believing that Hitler would respect that decision.

 

Invasion

Nevertheless, there was no doubt that an island state in the Atlantic Ocean with close ties to Denmark was desirable to both warring parties. A German presence was already noted: a small diplomatic staff, a few German residents, displaced war refugees, and 62 shipwrecked German sailors. The Allies feared an organized guerrilla force or even a coup against the Icelandic government, since the nation had only some 70 policemen armed with handguns. From the coast of Norway, Germany could at this point have quickly staged an invasion by sea or air. On May 10, 1940, British troops invaded and took over Iceland. A Walrus reconnaissance plane was launched to check for enemy submarines in the area. Despite orders not to fly over Reykjavik, the order was ignored and the British presence was revealed. Iceland had no airports or airplanes of its own, so the townspeople were alerted, ruining the element of surprise. Two destroyers, Fearless and Fortune, joined British cruisers and put 400 Royal Marines ashore. A crowd had gathered, and the British consul, Gerald Shepherd, asked the Icelandic police officer in front of the astounded onlookers: “Would you mind getting the crowd to stand back a bit, so that the soldiers can get off the destroyer?” The officer complied.

The capital of Iceland was taken without a shot being fired. The German consul was arrested along with the German citizens. The Marines managed to seize a considerable number of confidential documents even after the consul attempted to destroy them. Communication networks were disabled and strategic locations secured. That same evening, the government of Iceland issued a protest, claiming its neutrality had been “flagrantly violated” and its “independence infringed,” but it came to agree to British terms, which promised compensation, favorable business agreements, and non-interference in local affairs. All forces were also to be withdrawn at the end of the war. The troops proceeded to Hvalfjörður, Kaldaðarnes, Sandskeið and Akranes to guard against feared German attacks. The British Army divided Iceland into five sectors for strategic and defensive purposes. The southwestern corner was the smallest but most significant, with over ten thousand troops assigned to protect it. To the west, over seven thousand were stationed, covering the land and air around Reykjavík, along with air and naval anchorages. Unfortunately, rough terrain and poorly maintained roads made defending the entire island difficult.

 

The impact in Iceland

All of this military action was in preparation for a German invasion, but in fact none had been planned up to that point. After the British invasion, however, the Nazis did discuss a plan to conquer the island (Unternehmen Ikarus – “Operation Ikarus”) in order to block Britain’s and France’s sea trade routes and hasten a possible surrender, but the plan was abandoned. In the meantime, Iceland officially maintained its neutrality while cooperating with the occupiers. Prime Minister Hermann Jonasson asked over the radio that the citizens of Iceland treat the British troops as guests.

The British troops were joined by Canadians and then relieved by US forces in 1941. When the United States officially joined the Allies in World War II, the number of American troops on the island reached 30,000, equivalent to 25% of Iceland’s population and 50% of its total male population. A new issue arose for the local population: the mingling between Allied soldiers and Icelandic women, referred to as “The Situation” (Ástandið), and the 255 children born of these liaisons, the “Children of the Situation.”

Despite this, Iceland’s economy was boosted during this time after the debilitating Great Depression. Many Icelanders came to call World War II blessað stríðið – “the blessed war.” Infrastructure and technology were upgraded, and jobs, roads and airports followed, including Keflavík International Airport. Many Icelanders moved to the capital for the sudden boost in employment. Icelanders sold massive amounts of fish to Britain, defying the embargo imposed by Nazi Germany and the risk of U-boat attacks.

Reykjavík underwent a transformation during the occupation as streets, local businesses, restaurants, shops, and services bloomed. In addition to this national flourishing, Iceland was left unscathed compared to most other European nations during World War II and did not engage in combat, though roughly 200 Icelandic seamen were killed at sea in attacks by German submarines. In May 1941, the German battleship Bismarck attacked and sank the British battlecruiser Hood off the Westfjords.

 

Icelandic independence

The circumstances of the world war prevented Iceland from renegotiating the 25-year agreement of 1918 with Copenhagen. Iceland therefore terminated the treaty in 1943 and broke all legal ties with Denmark, forming an independent republic. The new state was officially founded on June 17, 1944, after an almost unanimous national referendum, with Svein Bjornsson as its first president.

In 1945, the last Royal Navy assets were withdrawn, and the last airmen of the Royal Air Force left in March 1947. Some American forces remained after the end of the war despite the provisions of their original invitation and its fifteen conditions. In 1946, an agreement was signed granting the United States use of military facilities on the island; the last US soldiers left Iceland on September 30, 2006.

 

What do you think of Iceland’s role in World War Two? Let us know below.

Sources

Chen, C. Peter. “Iceland in World War II.” WW2DB, ww2db.com/country/iceland.

Hauptmann, Katharina. “Iceland during World War II.” Wall Street International, 24 Dec. 2013, wsimag.com/economy-and-politics/6575-iceland-during-world-war-ii.

“Iceland during WW2.” History TV, www.history.co.uk/history-of-ww2/iceland-during-ww2.

“Invasion of Iceland.” Wikipedia, Wikimedia Foundation, 23 Aug. 2019, en.wikipedia.org/wiki/Invasion_of_Iceland.

Nieuwint, Joris. “10 Facts About One of the Most Notorious Figures of the 20th Century - Adolf Hitler.” WAR HISTORY ONLINE, 7 Oct. 2016, www.warhistoryonline.com/world-war-ii/25-surprising-facts-about-hitler.html.

John J. Cummings has created what he has called “America’s First Slavery Museum.” The museum is an anomaly for American plantation museums—it memorializes America’s enslaved in a style reminiscent of Holocaust memorials while also acting as a traditional (although reinterpreted) Southern plantation tour. Jackie Mead explains.

You can read Jackie’s previous article on Lewis Temple and the 19th century whaling industry here.

The Big House at the Whitney Plantation. Source: Bill Leiser, available here.


In 1991, a crumbling former plantation 35 miles outside New Orleans attracted the attention of a rayon manufacturer, Formosa. Locals commissioned an eight-volume study as a way to slow the project until rayon went out of fashion. When the property went up for sale again, it was bought by the eccentric trial lawyer John J. Cummings III. Unlike most people handed an eight-volume study of a newly purchased property, he read it.[i]

For the next several years, John J. Cummings would spend eight million dollars of his personal fortune to create what he dubbed “America’s First Slavery Museum.” The museum is an anomaly for American plantation museums—it memorializes America’s enslaved in a style reminiscent of Holocaust memorials while also acting as a traditional (although reinterpreted) Southern plantation tour.

 

A Museum Anomaly

Plantation museums in the American South have fallen into a comfortable pattern over the years. Tours focused exclusively on the lives of the white landholders (and slave owners), were limited to the “Big House,” and ignored the various “outbuildings” where slaves lived and worked.[ii] The museums stood as testaments to the conspicuous consumption of the pre-Civil War South, a world in which manicured lawns held garden parties with mint juleps, and women in hoop skirts fanned themselves beside elegant picture windows. This myth of the South has made the plantations popular venues for weddings and sorority reunions, a trend that museums encourage because of the valuable revenue it brings in.[iii] This view eliminates the people who made such grandeur possible: African American slaves.

Whitney Plantation is entirely different. Today, the plantation includes at least twelve historic structures that are open to the public. The home is interpreted entirely from the enslaved point of view, discussing the domestic tasks performed there to support the Haydel family’s needs.[iv] Slave quarters were moved from a nearby plantation in order to properly represent the homes of the enslaved. A steel-barred cell of the kind used to punish rebellious slaves has also been added to the property.[v] The final historic building exhibited on the plantation is the Antioch Baptist Church. All of these buildings are visited during the 90-minute walking tour included with the visitor ticket.

 

Memorials to Slavery

Whitney Plantation also includes several memorials, springing directly from the mind of John Cummings. One of these is the Field of Angels, a circular courtyard listing the names of the almost 2,200 enslaved babies in St John Parish who died before their third birthday in the 40 years leading up to the Emancipation Proclamation. Surrounded by child-sized pink and blue benches stands a statue of a black angel tenderly embracing a tiny baby in its arms, about to bring the child to heaven.[vi] The bronze was cast by Rod Moorhead, a Louisiana native who has worked on other African-American memorials. David Amsden of the New York Times called the statue “at once chastening and challenging, beautiful and haunting.”[vii] The memorial is meant to draw attention to the exceptionally high mortality rates among enslaved children, as well as to mourn their passing.

Whitney Plantation’s most recognizable memorial sits within Antioch Baptist Church. John Cummings commissioned the well-known African American artist Woodrow Nash to create forty life-size casts of enslaved children to stand and sit within the pews of the church. Affectionately called “The Children of Whitney” by the museum staff, they represent the lost childhoods of Whitney’s former residents.[viii] Cummings was inspired to create the exhibit by listening to the interviews of former slaves collected by the Works Progress Administration (WPA) in the 1930s. “The best expression I have heard about slavery is: ‘Those who viewed cannot explain, only those who endured should be believed,’” he said to The Australian.[ix] Inspired by these words, Cummings has placed great emphasis on the interviews collected by the WPA, and intends recordings to be played on a loop in both the church and the slave cabins at a later date.[x] Many of the former slaves interviewed by the WPA were children at the time of emancipation, and their interviews therefore recall their lives as children and teenagers. The Children of Whitney depict these people as they were: children.[xi]

There are two memorials that feature names carved in stone: the Wall of Honor, dedicated to the more than 350 slaves who worked at Whitney Plantation, and the Allées Gwendolyn Midlo Hall Memorial, dedicated to the 107,000 slaves in Louisiana compiled by its eponymous historian.[xii] Both were inspired by Maya Lin’s Vietnam Veterans Memorial in Washington, D.C. Because of difficulties in dating the various documents the names were drawn from, the names have been placed on the plaques in no particular order, conveying the chaos of slavery. Many slaves lacked family names, so the walls are dotted with repeating lines of Mary, Bob, Amelia, and Joseph with no way to distinguish individuals.[xiii] In continued dedication to firsthand accounts, Cummings requested that sections from the WPA’s interviews with former slaves be carved onto the memorial to give visitors an idea of what these individuals suffered.

 

Grappling With The Past

John J. Cummings III believes it is important for America to follow the example of countries like Germany and South Africa in dealing with national trauma. Both nations built museums and memorials acknowledging their unsavory pasts as a way of grappling with them. “In Germany today, there are hundreds of museums and memorials dedicated to the Holocaust, and the Germans are not proud of that history,” said Cummings to The New Yorker. “But they have studied it, they have embraced it, and they own it. We haven’t done that in America.”[xiv]

In fact, the opposite has occurred. In an ethnographic study of 138 south-eastern plantation museums, two academics found the African-American presence to be “annihilated.”[xv] This is because many of these plantation museums have white administrative staff, curators, and interpreters who cater to the white perspective.[xvi] As a result, museum tours focused almost exclusively on the privileged lives of the white landowners, reducing slaves to nameless laborers identified by the tasks they performed for the white family. Such museums were especially popular during the Civil Rights Movement, when white Southerners yearned to remember a “simpler” time.[xvii]

This is no longer the case. In the past twenty years, twenty-four museums in south Louisiana have opened slave exhibits. These exhibits have increased tourism for plantation museums, both private and public, with 1.9 million visitors to historic sites across the state.[xviii] School groups are especially frequent visitors.[xix] Museum administrators have cited growing interest in common people and a desire to show a more integrated version of American history as reasons for adding these kinds of exhibits.[xx]

 

Mourning Slavery

Whitney Plantation is a new approach to the plantation museum. Instead of offering additions to an already existing tour, Whitney is a plantation tour with slavery-based interpretation combined with a memorial museum. This is a far more effective way to convey the true tragedy of race-based slavery. According to Silke Arnold-de-Simine, a British expert on memory and author of Mediating Memory in the Museum, memorials are intended to make visitors identify with history’s victims. By establishing an environment that encourages visitors to imagine themselves experiencing these atrocities, visitors can empathize with the people of the past. Arnold-de-Simine refers to this as “prosthetic memory.”

This principle is important for memorial museums because they inspire feelings of guilt and grief rather than pride, and must channel those negative feelings into a personal commitment to pluralism and tolerance.[xxi] This is done through a combination of first-person testimonials, visual recreations of the conditions the individuals experienced, and memorials where collective grief can be expressed. All of these techniques were pioneered during the building of Holocaust memorials, and they are what allow the plantation to have such a profound emotional effect on visitors. “Everything about the way the place came together says that it shouldn’t work,” says Laura Rosanne Adderley, a Tulane history professor. “And yet for the most part it does, superbly and even radically. Like Maya Lin’s memorial, the Whitney Plantation has figured out a way to mourn those we as a society are often reluctant to mourn.”[xxii] Although Whitney Plantation might seem mismatched, this combination of techniques is very effective.

 

Taking a Risk Pays Off

The plantation received 34,000 visitors in its first year, double the projected turnout and a respectable number for a new museum.[xxiii] Whitney Plantation has managed to attract African-American tourists at a rate unmatched by other Louisiana plantation museums; roughly half the people present on opening day were black.[xxiv]

Whitney Plantation has also seen considerable tourism from school groups, especially secondary schools. The direct and unfiltered depiction of slavery, rarely seen in school curriculums, has a profound effect on students. One visitor left a comment card reading, “I learned more in an hour and a half than I have in any school.”[xxv]

The inability of the American school system to deal adequately with slavery was one of John J. Cummings III’s many reasons for establishing Whitney Plantation. “Without knowledge about how slavery worked and how crushing the experience was — not only for those who endured it, but also for their descendants — it’s impossible to lift the weight of the lingering repercussions of that institution. Every generation of Americans since 1865 has been burdened by the hangover of slavery,”[xxvi] he wrote in the Washington Post. Cummings believes that only when Americans are properly educated about the abuses and legacy of slavery can we hope to move forward.

John J. Cummings III understands how unusual it is for a white former trial lawyer to establish America’s first museum dedicated to slavery. In an attempt to explain, he said of his research: “You start understanding that the wealth of this part of the world — wealth that has benefited me — was created by some half a million black people.”[xxvii] Whitney stands as a tribute to those black people, but it does far more than that. It memorializes them in a style reminiscent of Holocaust memorials, and uses the restored landscape and first-person narratives to create empathy with those who suffered under slavery. It seeks to create an emotional response in its visitors so that America can finally remember its wounds openly, because it is only then, according to John J. Cummings, that America can finally start to heal.

 

 

What do you think of Whitney Plantation? Let us know below.


[i]Amsden, “First Slavery Museum.” 

[ii]Julia Rose, “Collective Memories and the Changing Representations of American Slavery,” The Journal of Museum Education 29, no. 2/3 (Spring/Summer 2004): 27. 

[iii]Amsden, “First Slavery Museum.” 

[iv]Whitney Plantation, “The Big House and the Outbuildings,” 2015, http://whitneyplantation.com/the-big-house-and-outbuildings.html

[v]Margaret Quilter, “Lest We Forget: Louisiana’s Slavery Museum,” The Australian, February 7, 2015, http://www.theaustralian.com.au/travel/lest-we-forget-louisianas-slavery-museum/story-e6frg8rf-1227210481228

[vi]Quilter, “Lest We Forget.”

[vii]Amsden, “First Slavery Museum.”

[viii]Whitney Plantation, “The Children of Whitney,” http://whitneyplantation.com/the-children-of-the-whitney.html

[ix]Quilter, “Lest We Forget.” 

[x]Amsden, “First Slavery Museum.” 

[xi]Whitney Plantation, “The Children of Whitney.” 

[xii]Amsden, “First Slavery Museum.” 

[xiii]Jared Keller, “Inside America’s Auschwitz: a new museum offers a rebuke—and an antidote—to our sanitized history of slavery,” Smithsonian Magazine,  April 4, 2016, https://www.smithsonianmag.com/history/inside-americas-auschwitz-180958647/

[xiv]Kalim Armstrong, “Telling the Story of Slavery,” The New Yorker, February 17, 2016, https://www.newyorker.com/culture/culture-desk/telling-the-story-of-slavery

[xv]Rose, “Collective Memories,” 27.

[xvi]Rose, “Collective Memories,” 27.

[xvii]Keller, “America’s Auschwitz.” 

[xviii]Keller, “America’s Auschwitz.”

[xix]Rose, “Collective Memories,” 26. 

[xx]Rose, “Collective Memories,” 28.

[xxi]Silke Arnold-de-Simine, “The ‘Moving’ Image: Empathy and Projection in the International Slavery Museum in Liverpool,” Journal of Educational Media, Memory & Society 4 (Autumn 2012): 24.  

[xxii]Amsden, “First Slavery Museum.” 

[xxiii]Keller, “America’s Auschwitz.”

[xxiv]Amsden, “First Slavery Museum.”

[xxv]Keller, “America’s Auschwitz.” 

[xxvi]Cummings, “35,000 Museums.” 

[xxvii]Amsden, “First Slavery Museum.”

For nearly all the world, the Second World War finally ended on 15 August 1945, when Japan announced its surrender, or on 2 September 1945, when Japan formally signed its surrender. For some soldiers, however, this was not the case. Many were left with psychological scars that would haunt them until they died. Others had life-altering injuries that prevented them from living as they had before the war.

Neither of these was true for Hiroo Onoda. For him, the war did not end until 1974. JT Newman explains.

Hiroo Onoda (on the right) with his brother Shigeo Onoda (on the left).


Hiroo Onoda was a Japanese intelligence officer who, in 1944, was sent to Lubang Island in the Philippines. He was ordered to stay on the island and disrupt Allied activity in any way he could. With these orders came one final command: he was never to surrender, and he was never to take his own life.

Even though his orders clearly stated he was to disrupt Allied activity in whatever way he could, his higher command prevented him from sabotaging a nearby Allied airfield. According to reports, his senior officers were eager to surrender when American forces arrived on the island in February 1945. In the fighting that followed their arrival, Onoda and three other soldiers - Private Yuichi Akatsu, Corporal Shoichi Shimada, and Private First Class Kinshichi Kozuka - escaped capture by fleeing into the local mountains.

For many months, Onoda and his three soldiers survived by rationing their food supplies and, when those ran short, foraging through the jungles for food. Occasionally, they would covertly kill a local citizen's cow for meat. It was during one of these raids that one of Onoda's soldiers found a leaflet that read: "The war ended on August 15. Come down from the mountains!"

 

With war over

Onoda and his soldiers dismissed it as Allied propaganda. Their belief was reinforced when police spotted them and immediately opened fire.

Over the years, more leaflets would reach them, some even signed by former Imperial Army generals. But each time, Onoda and his soldiers dismissed them as propaganda.

Throughout their time on Lubang Island, Onoda and his soldiers conducted guerrilla operations against the local population. Anyone they saw was assumed to be an Allied spy, so they engaged them in combat. They got into gunfights with the police and the armed search parties sent to retrieve them, burned down rice stores, and generally caused havoc among the local population.

In 1949, Private Akatsu decided that he had fought for long enough. Without saying a word to any of the others, he slipped into town and turned himself in to the local authorities. This intensified the calls for the others’ surrender. The families of the soldiers were contacted, and letters and photographs from them were dropped in the area, urging the men to come out of hiding and surrender. Onoda would not hear of it, as he refused to believe that the war was actually over.

By the early 1950s, the remaining three were considered criminals on the island. Corporal Shimada was shot non-fatally in the leg in 1953, nursed back to health over several months, then shot again - this time fatally - in 1954 during an engagement with the police. This left only Onoda and Kozuka alive to continue a mission that, unknown to them, had ended years before.


Decades pass

Almost two decades passed, with Onoda and Kozuka continuing to raid their "enemies." They lived in makeshift shelters, continued to steal food from the islanders, and engaged in occasional skirmishes with the local police and others in the area. They still believed the war was on, and that their guerrilla tactics would be invaluable when the Imperial Japanese Army came to retake the island.

By 1972, Onoda and Kozuka were both reviled and feared on the island. Then, while they were burning a village's rice silo, police spotted them and fired a few shots. In this clash, Kozuka was shot and killed. Onoda escaped back into the jungle and continued hiding.

Now on his own, Onoda realized that it was unlikely he could continue his operations. He settled down and chose instead to focus on survival.

Norio Suzuki was a college student and an adventurer. He set out for Lubang Island in 1974 with the intent of finding Onoda. Suzuki located and befriended Onoda, but was unable to convince him to come out of hiding. For that, Onoda insisted, he would have to hear from his commanding officer.

With this information in mind, Suzuki did just that. He arranged to meet Onoda two weeks later and returned to the island with Onoda's former commander, Major Taniguchi. Onoda arrived wearing a tattered and dirty Imperial uniform and carrying his sword, his still-working Arisaka rifle, several hand grenades, and roughly five hundred rounds for the rifle. Major Taniguchi read out the orders for Onoda to return home: the war had ended.

 

Surrender and later life

After this, Onoda formally surrendered to President Marcos of the Philippines. Even though Onoda had killed roughly thirty people and wounded many more, President Marcos granted him a pardon on the grounds that he had believed he was still at war.

Onoda returned to Japan a celebrity, as his story had spread around the world. However, he found it difficult to adjust to life in post-war Japan. After writing a memoir titled No Surrender: My Thirty-Year War, Onoda moved to Brazil and led a modest life raising cattle. Some time later, he returned to Japan and founded the Onoda Nature School, a survival skills camp for young people. In 1996, he returned to Lubang Island and donated a large sum of money to a school there.

Little else was publicly heard from Onoda until January 16, 2014, when it was reported that he had died of heart failure due to complications from pneumonia.

Onoda remains a divisive figure: some view him as the ultimate patriot, while others regard him as something far less for the damage he did to the community of Lubang.

 

This article was brought to you by Affordable Papers.

 

Editor’s note: That external link is not affiliated in any way with this website. Please see the link here for more information about external links.

 

 

Sources

https://www.thevintagenews.com/2018/03/23/hiroo-onoda/  

https://www.damninteresting.com/the-soldier-who-wouldnt-quit/  

https://allthatsinteresting.com/hiroo-onoda

While boxing today is a lucrative, codified sport, it was not always so. Boxing was popular in ancient times, then faded away before re-emerging in 18th-century England. Here, Henry Esterson tells us about boxing in 18th-century England and the establishment of the rules of the sport in their wider social context.

A boxing match between John Broughton and Jack Slack in the mid-18th century.


Boxing is an amorphous activity, one that occupies the space between primal instinct, violent expression and organized sport. As such, it has had a long and complex history. Boxing scenes are found in archaeological evidence throughout the ancient world: Egypt, Sumeria, Greece and Rome. By the later medieval period, organized boxing had fallen back into obscurity (in Western Europe at least), only to re-emerge at a very specific time and location in early modern England, when Jack Broughton created the “first rules for the sport of boxing”, published in London in 1743.[1] Broughton’s creation has been given a weighty historical significance. Robert Crego has described how “boxing didn’t begin to take the form of organized sport until the early 18th century, when the first rules of prizefighting were set forth. Jack Broughton… first introduced these rules… Prior to this time, a fighter would often clasp or wrestle an opponent, and there was no provision against hitting after he went down.”[2]

This transition from lawlessness to regulation has received a multiplicity of historical interpretations. Most famously, Norbert Elias and his disciples used boxing as a poster-child for the ‘civilizing process’, in which this formerly erratic and dangerous act of folk-play became subject to laws and regulations governing its performance. In doing so, boxing became civilized, and as such could be consumed by the upper classes – individuals like the Duke of Cumberland and George II, who would become patrons of boxers at mid-century. This interpretation yields a clear and well-defined narrative: boxing, formerly an unruly and unregulated activity associated specifically with the working classes, became a legitimate and well-organized sport consumed by “prince and ploughman alike” following the creation of a distinct rule set in 1743. 

Such an idea is predicated on the assumption that boxing carried no legitimacy prior to this point. In the words of the 19th-century boxing historian Pierce Egan: “previous to the days of Broughton, it was downright slaughtering”. There are, however, a range of contemporary written accounts that indicate that this was not the case, and instead point to the existence of an implicit and unwritten set of rules predating those introduced in 1743. In order to properly understand the significance of this, and the change (or lack thereof) that occurred at mid-century, it is worth exploring exactly what regulations were prescribed by Broughton’s original 1743 rule set.

 

Broughton’s rules

These were set out in a pamphlet entitled Rules to Be Observed in All Battles on the Stage. There were seven rules regulating various aspects of the boxing match. The first three governed the beginning of a fight: how boxers should enter the ring, the duties of seconds, and certain formalities. For example, it was decreed that “no person whatever shall be upon the Stage, except the Principals and their Seconds”. The fourth rule is as follows:

“That no Champion be deemed beaten, unless he fails coming up to the line in the limited time, or that himself declares him beaten.”[3]

 

The fifth rule covered the distribution of prize money while the sixth prescribed the use of three umpires, who would resolve any dispute that arose during the match. The seventh and final rule was that:

“No person is to hit his Adversary when he is down, or seize him by the ham, the breeches, or any part below the waist.”[4]

 

From Rules to Be Observed we can distil a set of regulations that essentially governed the fundamental performance of the 18th-century boxing match. They are as follows: a match would finish when a fighter was rendered incapable of continuing or capitulated to his opponent; a fight would be refereed by a group of three umpires; and a fighter could not hit his opponent when he was down or attack him below the waist. 

 

Unwritten Rules

Despite the perceived originality of these rules, evidence indicates that similar regulations were in place before 1743. The law that forbade striking a downed opponent can be found in a range of contemporary accounts. For instance, in a 1710 issue of Review of the State of the British Nation, the writer claimed that, “from a Boxing young English Boy I learnt this early piece of generosity, Not to strike my enemy when he was down”.[5] When Richard Norris was killed in a boxing match in 1740, a witness reported that he had told his opponent: “You don’t do fair for you Strike me where I am down.”[6] The Swiss traveller Béat Louis de Muralt observed a similar custom when he visited London in 1726, describing how “by the laws of the play (as they call it) a man is not to strike his adversary on the ground” and “a man… must give his adversary time to rise…”[7] The law that governed the ending of a match through defeat or capitulation is also present in pre-Broughton boxing matches. For instance, in 1719 the Original Weekly Journal reported: “Sometime last week two boys boxing for half a crown near Red-Lyon Square; after they had fought about a quarter of an hour, gave it over, one of them yielding, as they call it.”[8]

Although the introduction of umpires was new, these too had an earlier equivalent: before 1743, onlookers regulated boxing matches. Muralt described in 1726 how “the standers-by take care to see these laws strictly observed”. If an individual transgressed the laws of boxing, for example by striking a downed opponent, he ran the risk of being “knock’d down by the mob”.[9] Another foreign traveller (this time a Frenchman), César De Saussure, observed a similar phenomenon in 1727, describing how onlookers gather around boxing matches “not in order to separate them, but on the contrary to enjoy the fight, for it is a great sport and they judge the blows and also help to enforce certain rules in use for this mode of warfare”.[10] Prior to the governance of Broughton’s rule set, boxing matches were socially regulated; while there was no explicit referee in the bouts, the public were expected to enforce a set of implicit and unwritten rules. 

It seems, then, that although Broughton did introduce certain formalities, the rules that governed the essential performance of the boxing match itself were unchanged. Throughout the 18th century, boxers had been expected not to strike a downed opponent, and the end of a fight had always been governed by the yielding system; Broughton had simply put these common practices into writing. The functional aspects of Broughton’s rule set were, then, articulations of pre-existing socially enforced laws. It is not only the originality of Broughton’s rules that has been over-emphasized; the historical narrative built up around them should also be questioned. Boxing did not move, as historians have previously held, from an era of chaos and lawlessness to one of legitimacy and order at mid-century. Rather, where the rules of boxing had once rested on popular regulation, they became codified, systemized and delegated to specific individuals.

 

Part of a larger cultural movement?

Parallels can be drawn here with the enforcement of English law in general. At the beginning of the 18th century, the ‘common people’ took an active role in maintaining public order. Private citizens were expected to assist an officer of the peace in arresting criminals, or to apprehend a suspected criminal independently and bring them before a constable. Popular morality was enforced in a similar way, and riots designed to ‘disturb or defame’ individuals accused of sexual misconduct became common in the Quarter Sessions in the 1690s and 1700s.[11] By the 1720s, this individual responsibility for law enforcement was beginning to erode. The emergence of thief-takers such as the notorious Jonathan Wild meant that the catching of criminals was often charged to specific individuals. By 1748, John and Henry Fielding had taken over the rotation offices in Bow Street; they systemized the apprehension of criminals, hired a network of thief-takers, and organized foot patrols on major roads. By this point, the popular justice of the early 18th century had been all but forgotten. This pattern follows almost the exact chronology of boxing regulation, which moved from a system of popular law enforcement in the first decades of the 18th century to one that was codified and devolved by the 1740s. Indeed, it is not insignificant that Broughton’s rules were written only five years before the Bow Street Runners were officially established. Like many things in the 18th century, boxing became part of that great age of English jurisprudence. It is important that we understand the development of boxing in this way: a set of rules did not simply materialize in 1743 but rather grew out of the popular regulation that was endemic to English society in the early 18th century.

 

What do you think of the article? Let us know below.


[1]Broughton's Rules: Rules to be Observed in All Battles on the Stage: as Agreed by Several Gentlemen at Broughton's Amphitheatre, Tottenham Court Road, August 16, (1743)

[2]Crego, R, Sports and Games of the 18th and 19th Centuries, Greenwood Publishing Group, (2003), pp.51

[3]Broughton's Rules: Rules to be Observed in All Battles on the Stage: as Agreed by Several Gentlemen at Broughton's Amphitheatre, Tottenham Court Road, August 16, (1743)

[4]Broughton's Rules: Rules to be Observed in All Battles on the Stage: as Agreed by Several Gentlemen at Broughton's Amphitheatre, Tottenham Court Road, August 16, (1743)

[5]Review of the State of the British Nation (London, England), Thursday, March 9, 1710; Issue 144. 17th-18th Century Burney Collection Newspapers. Gale Document Number: Z2000102198, pp.1

[6]Middlesex Sessions: Sessions Papers - Justices' Working Documents, September 1749, London Metropolitan Archives, LL ref: LMSMPS503970163

[7]Muralt, B, Letters Describing the Character and Customs of the English and French Nations, London, (1726), Eighteenth Century Collections Online, Gale Document Number: CW3304026130, pp.2

[8]Original Weekly Journal (London, England), Saturday, August 8, 1719. 17th-18th Century Burney Collection Newspapers, Gale Document Number: Z2000086751, pp.2

[9]Muralt, B, Letters Describing the Character and Customs of the English and French Nations, London, (1726), pp.42

[10]Letters of de Saussure, pp.180, quoted in Malcolmson, Popular Recreations in English Society, Cambridge University Press, (1973), pp.42

[11]Shoemaker, Robert, The London ‘Mob’ in the Early Eighteenth Century, Journal of British Studies, vol. 26, no. 3, 1987, pp. 278


The theater in Ancient Rome was an important form of entertainment. With its origins in the plays of Ancient Greece, over time Roman theater found its identity, customs - and grand arenas. Jamil Bakhtawar tells us about Ancient Roman theater.

You can read Jamil’s previous article on the theater in Ancient Greece here.

Ancient Roman playwright Plautus.


A thriving and diverse art form that ranged from street performances, acrobatics and nude dancing to situational comedies and elaborately articulated tragedies, the theater of Ancient Rome evolved over time. The Romans drew on the influence of Greek theater and shared many of its distinct features. At the time, the neighboring Etruscans were noted for their performance arts, many of which were used as part of religious ceremonies. In fact, Romans were later known to hire Etruscan performers to visit Rome during times of famine and crisis.

As the Roman state developed, Roman plays were performed by professional actors at virtually every public and religious festival. From the beginning, Romans valued all sorts of spectacles and entertainment, and one of the oldest events was an athletic competition in honor of the god Jupiter known as the 'Ludi Romani'. By the 3rd century BCE, this event routinely featured pop-up plays performed by professional actors, funded by a local politician or wealthy businessman. Considering that the calendar contained over 200 days of these events, the Romans had good access to theater.

 

Adaptations and Inspiration

Most historians associate melodramatic performances, mime, circus and comedies with Ancient Roman theater. The Romans were fond of theatrical spectacles such as gladiatorial combats, dances and stage performances. Early Roman theater used plots and characters inspired by the Greeks, with many concepts adapted to a Roman context. Archetypal characters, stereotypes and clowns were common in those plays. Many provinces were essentially bankrupt by the end of the late Republic, and plays became more expensive and grand. Because most dramas were connected to key features of Roman life such as worshipping the gods, glorifying oneself, and honoring the dead, they likely encouraged the grand displays and expenditures normally associated with these parts of Roman life.

According to the ancient historian Livy, the earliest theatrical activity in Rome took the form of dances with musical accompaniment, introduced to the city by the Etruscans in 364 B.C. The literary record also indicates that 'Atellanae', a form of native Italic play, were performed in Rome by a relatively early date. In 240 B.C., full-length, scripted plays were introduced to Rome by the playwright Livius Andronicus, a native of the Greek city of Tarentum. The earliest Latin plays to have survived are adaptations of the Greek New Comedy. Latin tragedy also flourished during the second century B.C. While some examples of the genre treated stories from Greek myth, others were concerned with notable episodes from Roman history. After the second century B.C., the composition of both tragedy and comedy declined precipitously in Rome. During the imperial period, the most popular forms of theatrical entertainment were mime and pantomime with choral accompaniment, usually re-creating tragic myths. Mimes were comic productions with sensational plots, whereas pantomimes were performed by solo dancers.

 

Notable Playwrights and Their Plays

Some Roman comedies that have survived are based on Greek subjects (also known as fabula palliata) and come from two exceptional dramatists: Titus Maccius Plautus (Plautus) and Publius Terentius Afer (Terence). In adapting the Greek originals, the Roman comic dramatists abolished the role of the chorus in dividing the drama into episodes and introduced musical accompaniment to the dialogue.

Plautus, the more popular of the two, wrote between 205 and 184 BC, and twenty of his comedies have survived. He was admired for the wit of his dialogue and his use of a variety of poetic meters. Plautus was prolific, writing around 50 plays. Among the most famous that survive are Amphitryon, Bacchides, The Casket Comedy, Mercator and Persa. A good sense of his comedy is probably evident in the modern play and film A Funny Thing Happened on the Way to the Forum.

Terence produced six comedies in his brief life, all of which have survived: The Andrian Girl (166 BC), The Mother-in-Law (165 BC), The Self-Tormentor (163 BC), The Eunuch (161 BC), Phormio (161 BC), and Adelphi: The Brothers (160 BC). The complexity of his plays, in which he frequently combined several Greek originals, was sometimes denounced, but his double plots enabled a sophisticated presentation of conflicting human behavior.

The most famous Ancient Roman playwright for tragedy was Seneca (4 BC - 65 AD), who adapted plays from the Greek playwrights. His plays pushed the boundaries of Ancient Rome, and in 65 AD he was forced by Nero to commit suicide due to offensive commentary in one of his plays. Seneca complied and slashed his wrists, but this proved too slow and painful, so he called for poison. This also failed to kill him, so his servants placed him in a hot copper bath and the steam suffocated him. Nine of Seneca's tragedies survive, all of them based on Greek originals; Phaedra, for example, was based on Euripides' Hippolytus.

 

Tragedies and Comedies

The first significant works of Roman literature were the tragedies and comedies that Livius Andronicus wrote from 240 BC. Five years later, Gnaeus Naevius also began writing drama. Unfortunately, none of the plays from these writers has survived. While both dramatists composed in both genres, Andronicus was most appreciated for his tragedies and Naevius for his comedies. Their successors tended to specialize in one or the other, which led the two genres to develop separately. By the beginning of the 2nd century BC, drama was firmly established in Rome and a guild of writers (known as the collegium poetarum) had been formed. No early Roman tragedy survives, though it was highly regarded in its day; historians know of three early tragedians - Quintus Ennius, Marcus Pacuvius, and Lucius Accius.

 

Characters in Roman Comedy

Like commedia dell'arte (which is derived from Ancient Roman comedy), the comedy of Ancient Rome often used recognizable stereotypes or stock characters. Here are some of the most common from Ancient Roman plays:

Adulescens: the young, love-struck and not too brave lover.

Senex: normally the overly strict father or the miser. He sometimes carries a stick or staff.

Leno: the amoral deviant. Sometimes owns a brothel or house of disrepute.

Miles gloriosus: the braggart soldier, a character that is still especially familiar today.

Virgo: the young maiden, the love interest of the adulescens, though she does not get much stage time. She is beautiful and virtuous but sometimes a little dim.

 

Masks and Costumes

Masks were one of the essential conventions of Ancient Roman plays. They usually covered the whole head and the designs were quite simple. The masks were made from cheap materials such as linen or cork and had holes for the mouth and eyes. Some masks were large and portrayed exaggerated expressions that could be seen from the back of the theater, so the audience could tell how a character was feeling. The masks thus conveyed simple emotions such as happiness, sadness, regret and fear. All masks were color-coded: brown for male characters and white for female ones. Later Roman comedy used half-masks for certain characters.

The costumes were simple, and color was the main feature used to distinguish between characters and their types. Purple was used for rich male and female characters; however, since women were mostly forbidden from acting, men performed the female parts. A red toga represented a poor character, and a striped tunic marked a slave boy, since tunics typically showed that a character was a slave.

 

An Architectural Wonder

Probably the first permanent Ancient Roman theater was the Theater of Pompey, and most theaters based their structure and design on this stunning example. Roman theaters were traditionally built on their own foundations. The seating was set up quite high so as to shut out the noise of the city and to enclose the performance. The audience, however, was seldom quite like a modern one, and masks were therefore used to make it easier for people to follow the performance clearly.

As in the case of theatrical entertainment, the earliest venues for gladiatorial games at Rome were temporary wooden structures. According to Livy, as early as 218 B.C. gladiatorial contests were staged in the open elongated space of the Roman Forum, with wooden stands for spectators. These temporary structures probably provided the prototype for the monumental amphitheater, a building type characterized by an elliptical seating area enclosing a flat performance space. The stone amphitheater at Pompeii, for example, was constructed in 80–70 B.C. and, like most early amphitheaters, has an austere, functional appearance, with the seats partially supported on earthen embankments.

The earliest stone amphitheater in Rome was constructed in 29 B.C. by T. Statilius Taurus, one of the most trusted generals of the emperor Augustus. However, the structure burned down during the massive fire of 64 A.D. and was replaced by the Colosseum, which remains one of Rome’s most prominent landmarks. Unlike earlier amphitheaters, the Colosseum featured elaborate basement amenities, animal cages, and mechanical elevators, as well as a complex system of vaulted concrete substructures. The facade consisted of three stories of superimposed arcades flanked by engaged columns of the Tuscan, Ionic, and Corinthian orders. Representations of the building on ancient coins indicate that colossal statues of gods and heroes stood in the upper arcades. The inclusion of Greek columnar orders and copies of Greek statues may reflect a desire to raise the amphitheater, a uniquely Roman building type, to a level in the architectural hierarchy similar to that of the theater, with its venerable Greek precedents.

In addition to gladiatorial contests, the amphitheater provided the venue for spectacles involving the slaughter of animals by trained hunters called venatores or bestiarii. These venationes were expensive to mount and hence served to advertise the wealth and generosity of the officials who sponsored them. The inclusion of exotic species (lions, panthers, rhinoceroses, elephants, and so on) also demonstrated the vast reach of Roman dominion. A third type of spectacle that took place in the amphitheater was the public execution: condemned criminals were crucified, burned, or attacked by wild beasts, and were also forced to re-enact gruesome myths. The final days of the Republic saw the beginning of extensive theater construction, and today the ruins of these theaters are some of the most magnificent archaeological sites in the world. 

To the people of both the Roman Republic and the Roman Empire, an expected privilege of citizenship was access to free entertainment. Whether it was a gladiatorial combat, a chariot race or a theatrical spectacle, senators, governors, and emperors could always get the people back on their side by paying for a few days of public events. Roman theater borrowed from Greek precedents, but held a unique role in Roman culture. After all, Romans loved a good performance.

 

What do you think of the theater in Ancient Rome? Let us know below.

Much of modern Irish identity is drawn from the belief that Ireland is “Celtic.” This is evident in the many Irish art styles, music, symbols and sports clubs that take the name “Celtic.” But what is “Celtic”? And does it have anything to do with Ireland? Jackie Mead explains.

You can read Jackie’s previous article on Lewis Temple and the 19th century whaling industry here.

Jesus Christ as shown in the 9th century Book of Kells.


Who Were the Celts?

The word “Celt” comes from the Greek “Keltoi,” a term the Greeks used for the barbarian peoples beyond their borders. It is unlikely that these groups used the name to refer to themselves. For many years, academics believed that this group, loosely affiliated through culture, language, and art style, conquered much of mainland Europe and Ireland in the Late Bronze Age. On this view, the incoming culture became the foundation of modern Irish culture, since the Irish natives of the time would not have been as strong as these continental invaders. However, those same academics were unsure as to when this group arrived, where it originated, and what technologies it brought.[1]

 

Social Darwinism and Archeology 

The Celtic invasion theory was able to take hold so effectively because invasion was already believed to be a common theme in Irish history. Early medieval pseudo-history stated that the modern Irish were the descendants of Mil, a biblical figure who traveled to the Iberian Peninsula and started the race that would eventually rule Ireland.[2] Continuing in this vein of thinking, nineteenth-century archeologists believed that a new material culture in the archeological record indicated the arrival of a new, invading group (because, of course, it was simply not possible that one group could have two art styles).

There were contemporary political motivations for this. At the time, Ireland was in a colonial relationship with England, and English scientists were attempting to rationalize Britain’s colonial empire through Social Darwinism. While the English asserted that they were of a superior Germanic race, they searched for an inferior origin for the Irish.[3] This led to a frantic hunt for evidence that the Irish were descended from a barbaric continental tribe.[4]

 

Debunking the Myth

Archeology is a major player in the academic debate surrounding the Celts. Armies drop a lot of stuff, and some of that stuff would have ended up in the ground. But archeologists have found significant continuity throughout Irish prehistory. There is no sudden change in technology in the Late Bronze Age. The first iron objects were made to resemble traditional bronze objects, suggesting that they were made by the same people. Living conditions were similar as well; many Iron Age sites rest on reused Bronze Age hill forts.[5] Religious practices such as ritual deposition (purposefully dropping valuable objects into bogs and lakes) continued into the Iron Age, along with the burial traditions of cremation and single-grave burial.[6] With all of these continuities, it seems highly unlikely that an invasion on a large scale could have taken place.

One of the most commonly cited pieces of evidence for a Celtic invasion is the spread of the La Tène art style. This highly stylized curvilinear style was very popular in the Late Bronze Age, spreading from the Austrian-Swiss region to Hungary, France, Germany, and Ireland. The English academics believed the Celts invented La Tène and dropped it like a business card wherever they conquered a new territory. The antiquities expert John Collis calls this kind of association “dubious in the extreme.”[7] The theory completely ignores the fact that art can spread because people like it, not because it was brought by an invading army. It also fails to account for the fact that La Tène was almost exclusively a commodity for a very small group of wealthy people. An empire built on this art style would have been a very lonely one, devoid of lower classes.[8]

It is far more likely that La Tène was brought to Ireland through contacts in Britain and on the Continent. Several pieces of the art have been shown to be imports from these places, although the majority of it is Irish-made.[9] Based on this evidence, La Tène is no longer considered the basis for a Celtic empire.

 

Pollen Evidence

Some of the most convincing evidence against a Celtic invasion comes from pollen. Pollen diagrams show a resurgence of tree growth during the period, which indicates a significant decrease in farming. They also show that parts of Ireland were experiencing bog growth, making them unfit for human habitation.[10] Archeologists have also had difficulty finding Iron Age sites to study, which suggests the population was smaller during the period. A diminished population could not have maintained the booming economy of the Late Bronze Age. An invading army, especially one that supposedly possessed great advances in weaponry and art, would have boosted the economy and increased the population.[11]

 

Why Do the Irish Embrace Being Celtic?

If the ideas behind Ireland’s Celtic identity are not only wrong but also racist, why have the Irish embraced it so thoroughly? Because, oddly enough, the English attempts to separate themselves from the Irish backfired spectacularly. Ireland had spent much of its history politically divided, and the new nationalist movement required a shared history. Thus the idea of the Celt was created: a race that was separate from the British and that the descendants of Saxons had no right to colonize.[12] Douglas Hyde, the first President of Ireland, once wrote: “The sense of nationhood among the Irish stems from the half unconscious feeling that the Celtic race, which at one time held possession of more than half of Europe, is now making its last stand for independence on this island of Ireland.”[13]

 

The Celts of Today

Shared history is a powerful ingredient of nationalism, and the Celts became that shared history for the Irish.[14] They fully embraced the biased literature of the period, adopting the so-called Celtic art style, music, and spirituality. Today, the idea of the Irish Celt has been debunked in academia, but it lives on in popular Irish culture.

As the idea became better known to the wider population, especially to the Irish themselves, the definition of a Celt changed. The far-fetched notion of an invasion by a continental group was replaced by a much vaguer definition of “Celt,” meaning simply of Irish or Scottish origin. Although the original intent was to disenfranchise, the Irish have taken pride in their new identity. After all, the idea of the Celt did not take off in the popular imagination until the Irish were able to define it for themselves.

 

What do you think of the article? Let us know below.


[1]John Waddell, “Celts, Celticisation and the Irish Bronze Age,” in Ireland and the Bronze Age, ed. J. Waddell and E. Twohig (Dublin, 1995), 160.

[2]J.P. Mallory and Barra Ó Donnabháin, “The Origins of the Population of Ireland: A Survey of Putative Immigrations in Irish Prehistory and History,” Emania 17 (1998): 47.

[3]Barra Ó Donnabháin, “An Appalling Vista? The Celts and the Archeology of Later Prehistoric Ireland,” in New Agendas in Irish Prehistory, ed. A. Desmond (Cork, 2000), 192.

[4]John Collis, “Celtic Myths,” Antiquity 71 (1997): 197.

[5]Tomás Ó Carragáin, “Early Iron Age - The Celts” (presentation, Boole Lecture Theater, 2016).

[6]Ó Carragáin, “Early Iron Age - The Celts.”

[7]John Collis, “Celtic Myths,” 199.

[8]Mallory and Ó Donnabháin, “The Origins of the Population of Ireland,” 61.

[9]Mallory and Ó Donnabháin, “The Origins of the Population of Ireland,” 61.

[10]Ó Carragáin, "Early Iron Age - The Celts.”

[11]Ó Carragáin, "Early Iron Age - The Celts.”

[12]Ó Donnabháin, “An Appalling Vista?” 192.

[13]Ó Carragáin, "Early Iron Age - The Celts.”

[14]Chris Morash, “Celticism: Between Race and Nation,” in Ideology and Ireland in the Nineteenth Century, ed. T. Foley and S. Ryder (Dublin, 1998), 192.

The origins of humanity are a regular topic of debate in much of the world. Here, Steven Keith considers Christian and Hindu texts, and 'haplogroups’ as the basis for a less well-known argument on the origins of cultures and civilizations in the world.

You can read Steven’s article on the origins of Scotland here, and the origins of the Picts, Gaels and Scots here.

An 18th century depiction of Vishnu, one of the principal deities of Hinduism.

An 18th century depiction of Vishnu, one of the principal deities of Hinduism.

From the creation of Adam until the birth of Noah (as a significant proportion of the world believes), there are eleven generations of his line: Adam, Cain and his brother Abel, Seth, Enosh, Cainan, Mahalaleel, Jared, Enoch, Methuselah and Lamech, before we arrive at Noah, born, it is written and said, in 3300 BC. Although eleven generations are documented between the dawn of civilization and the great flood (believed by almost the entire world’s population to have been a historical event) that would drown every living creature other than Noah, his three sons and their wives, twelve individuals are named specifically, Cain always being mentioned together with his sibling, Abel.

So, as the story goes, for the first score of centuries after this perfect creation by an omnipotent creator, these twelve men lived longer lives than any of their contemporaries on this infant earth, and they possessed extraordinary, superhuman powers that each had inherited as a consequence of his blood connection to the original manifestation of human consciousness, their ancestor, Adam. Were these men the guides for the burgeoning human population, blessed by birth into a bountiful and boundless world?

 

The Adityas

May I suggest that it is not a coincidence that, from one of the Vedic perspectives, according to the Vishnu Purana specifically, there were twelve Adityas, or divine holy men, born from the womb of the goddess Aditi, the wife of Kasyapa, the son of Marichi (son of Brahma) and his wife Kala, and that from their twelve sons grew the human race, and its civilizations. In Book 3, Chapter 134, verse 18 of the Hindu epic, the Mahabharata, Ashtavakra declares, ‘and twelve, according to the learned, is the number of the Adityas.’

Completing this triumvirate of twelves are the haplogroups (human genetic family groupings) beginning with ‘A’ and running alphabetically (though not always chronologically) through to ‘L’. At group ‘K’, the most recent, there was a fracturing, the very same splintering that our friends from both the ancient Hebrew and Hindu traditions taught had happened in the long-forgotten past, when the earth was still being populated by migrating clans of families, forging new nations and creating cultures that would over time develop into civilizations, some of which would shine for thousands of years as material entities, and some of which continue to influence our existence, as individuals and as civilizations, today.

Could not these twelve individuals, irrespective of whether their names are spelled and pronounced according to the Hebrew or the Hindu tradition, correspond to the twelve haplogroups, each emerging one after the other, until clade ‘L’ split from its sibling, ‘K’, leaving it alone, the earth having already been peopled through the migratory nature of the human ‘beast’?

 

The Groups

Group A emerged as an outburst of consciousness, allowing group BT to emerge later as its sibling: two initials together (unlike the other clades, in which, as a rule, each capital letter represents a particular genetic line) that contained the necessary genetic information for all the mutations that would follow over time. Similarly, in the line of descent from Adam to Noah, only Cain and Abel, the second generation, are mentioned together as a pair. The men who came after are mentioned alone, as individuals, as themselves, irrespective of whether or not they had any brothers. Like BT.

From BT came B, before a mutation (M168) gave rise to group CT. All the haplogroups that emerged after B retain the evidence of this mutation, which occurred some 65,000 years ago. Group B had emerged in central Africa about eighty thousand years ago and spread across the continent, sharing it with men carrying the genetic information that we associate with group A. Today the B lineage is found in significant proportions almost exclusively among the men of the Pygmy tribes of the Congo rainforest in tropical Africa. These people are still dominated today by the Bantu African (E) population, who overwhelmed and displaced them many millennia ago and who continue to surround their forest home with the pasture needed for the livestock that remains the mainstay of their economies and societies.

There is next to no trace of group B outside Africa. They seem to have been fixed to the land that they worked, unlike group C, which emerged around 60,000 years before the present and whose men would become the aboriginal Australian clans upon reaching that continent, after moving along the coastlines of the Arabian, Indian and Malaysian peninsulas, as well as reaching and first populating the New World. They arrived so early that Australia had not yet taken the form of the island continent that we know today. The ‘C’s have left their genetic imprint all along their journey, their clade still significantly represented in communities throughout the Middle East, the subcontinent and South-East Asia. This suggests that they migrated, looked for and found places to settle and develop. Along the way, clade D had arrived on what are known to us today as the archipelagos of Japan and the Andaman Islands, each of which may have been connected, or at least partially so, to the Asian mainland at that time.

Group D had also arrived on the plateau of Tibet, an unlikely destination for anyone looking for lands with hospitable conditions on which to settle. One clade in three distinct groups, far apart from each other geographically and linguistically, and each community survives to this day in relative genetic isolation. The ancient indigenous Ainu people of Japan (and southern Russia) and the indigenous peoples of Tibet and the Andaman Islands have each retained their cultural heritage to some degree and, more importantly, the knowledge of the great antiquity of their ancestors. Their lands are still visited today by anthropologists and the like, all trying to unravel and decode their deeply held knowledge. The immediate sibling of the Ds is the E group. They spread south through Africa, conquering all before them, seeding almost all of the continent with their genetic sequences and displacing the A and B groups that had previously had that massive continent to themselves, rather as the Aryan tribes are assumed to have done many millennia later. Clade F would give rise to the group GHIJK approximately 50,000 years before the present. At the point of F’s emergence, there was a mutation (M89) that is carried by all men who would follow, until today.

 

The Groups’ Impact on the World

Could it be that these were the original castes to populate the world and make the growth and development of civilization both possible and inevitable? In one of the Hindu traditions, God had pulled four castes of human beings out of his own body, each designed and imbued with the skills necessary for humanity as a collective to progress. Building a society requires a collection of skills to be present simultaneously in place and time, working in harmony. These four clades emerged at around the same time as each other, roughly sixty to sixty-five millennia ago. Several more tens of thousands of years would pass until the conglomerate of clades that had grown out of F, GHIJK, would bring a third diffusion of four castes into society: the castes that would provide us with revolutions in farming (perhaps haplogroup G) and commerce (perhaps haplogroup J, the Semitic peoples) and would become the ancestral Europeans (haplogroup I).

By the time that clades I and J had split away from K, groups L and T (previously known as K2) had emerged in their own right, having lain dormant in the jumble of letters (clades), biding their time. Clade L is now found at its highest density along the Malabar Coast of India (Kerala), in the area of the delta of the Indus River, and in the high mountains from which that river emerges into the plains: the heartland of the former Indus/Harappan civilization. By the time clade K had seen its siblings grow up and leave the GHIJK family nest, each of the earth’s land masses had been colonized, if not every landscape. That would come with the virtual disintegration of K, sending pioneering new clades into unexplored virgin territories.

Can these disparate understandings of the Divine be reinterpreted as one and the same story, and can that story, which appears to fly in the face of modern scientific theory on the origins of the human race, be demonstrated to be a true testament of our common culture by the knowledge the scientists themselves have gained in the field of genetics? Has the earth actually been formed complete with man and beast? Populated by successive outbursts of consciousness, representing the four castes necessary for the evolution of society? Guided by divine sages, each a guardian of an age? Does the movement on the axis of the earth itself every 23,000 years, known as the precession cycle, correspond with the mutation of clades? Coincidence always seems an unlikely answer when trying to explain away these connections, in a world that appears fundamentally magical from whichever perspective you choose to look at it.

 

What do you think of the article? Let us know below.

 

Steven Douglas Keith is a Scotsman who has lived for twenty years in the mountains of India; he is an essayist, an artist and a poet. His work seeks to find the commonalities shared by cultures, specifically between the traditions of the orient and occident.

He can be found on Twitter @k_el_phand http://twentythirstcenturynet.wordpress.com/.


Modern society is often compared to past times and ages. Here, Daniel Smith returns and argues that the economic control exerted by a small elite today is similar to that of seventeenth-century capitalism.

You can read Daniel’s past articles on California in the US Civil War (here), Medieval Jesters (here), How American Colonial Law Justified the Settlement of Native American Territories (here), Spanish Colonial Influence on Native Americans in Northern California (here), and differences in Christian ideology in the USA (here).

The Declaration and Standard of the Levellers, a 17th century English movement. Gerrard Winstanley was one of the founders.

The Declaration and Standard of the Levellers, a 17th century English movement. Gerrard Winstanley was one of the founders.

In light of our modern renaissance, we as human beings, in our own moral and ethical weakness, have forgotten that history can repeat itself. With the invention of the television, computer, cell phone, Internet, and social media, Americans have become comfortable – complacent – and have forgotten about their own vulnerabilities in society. Most of the aforementioned gadgets were invented within the last 30 years![1] In the last 100 years, government and wealthy private entities have slowly re-aligned the way that society is structured. We tend to forget the history behind how we even came to be standing here today. Of course, who really wants to remember their old classes from school?

Well, if you have not yet noticed that there are distinct “social classes” in America and that we as Americans are run extremely hard in the workforce, then this should help explain how we are living in a society that can be compared to the seventeenth century! It is pretty eye-opening, actually. What has happened with the development of the global corporate world is just today’s version of the Hudson’s Bay Company, or even the East India Company.[2] I learned that a man by the name of Gerrard Winstanley (1609-76) was a clothier and laborer who resided in England. Winstanley was said to have had “religious visions,” and in 1649 he wrote of these dreams. In arguing his ideas, he mentioned that on our earth, man was destined for slavery and to be “kept in the hands of the few.”[3]

This last quotation is, of course, his recognition of the fact that the top 2% of wealth holders were the established elite.[4] That sounds eerily similar to today’s 1%, doesn’t it? This hierarchy is based upon newly structured global societal standards. So where do we all fit into this historical, and not terribly surprising, sociopolitical hierarchy? Most of us sit squarely on the bottom two levels, as peasants and vassals. In a contemporary context, the vassals are the top professionals: doctors, large business owners, entertainers, and so on. The rest of us sit at the peasant level: laborers, farmers, retail and grocery workers, students, soldiers, truckers, and the like. It is important to see that most of these careers existed in the old world. The differences today lie in how information is delivered and in how the work gets done.

While the restructuring of global society has happened over the last half-century, the most important part of this whole process is directing how money is controlled. Central banking systems were established: entities such as the Federal Reserve, the Bank of Japan, the European Central Bank, just to throw out some names. These are the modern counterparts of the kings, queens, and emperors of the medieval and early modern eras. There are a few ways to move up the societal ladder, such as a decent education and a solid career track. For the most part, though, we are all here together, stuck doing our part, playing the hands we have been dealt in life. Further, regardless of spiritual belief, we can all agree as human beings never to stop doing the best we can, as individuals, both morally and ethically, while we are alive on planet earth.

 

What do you think of the author’s arguments? Let us know below.

Finally, Daniel Smith writes at complexamerica.org.


[1]Cross, Gary. An All-Consuming Century: Why Commercialism Won in Modern America. New York: Columbia University Press, 2000. p. 17.

[2]Ziegler, Herbert, and Jerry Bentley. Traditions & Encounters, Volume 2: from 1500 to the Present. New York: McGraw-Hill Education, 2014. p. 462.

[3]G.H. Sabine, The Works of Gerard Winstanley [Ithaca, NY: Cornell University Press, 1941], pp. 251-4, 288.

[4]Wiesner, Merry E. "Politics and Power, 1600-1789." In Early Modern Europe, 1450-1789. Cambridge: Cambridge University Press, 2013. p. 343.


Fifty years after Apollo 11 landed astronauts on the moon, what is the enduring legacy of humanity’s ventures beyond the Earth? As explained by Harlan Lebo, author of 100 Days: How Four Events in 1969 Shaped America (Amazon US, Amazon UK), the answer is much broader and deeper than President Kennedy’s original vision for achievement in space.

Buzz Aldrin salutes the US flag during Apollo 11, on July 20, 1969.

Buzz Aldrin salutes the US flag during Apollo 11, on July 20, 1969.

On a warm summer night in 1961, two months after President John Kennedy declared a mission to the moon as a national goal, presidential advisor Theodore Sorensen sat on the front steps of his home in Washington, staring up at the heavens, and wondered about the wisdom of creating a program to send humans into outer space.

“Was it really possible,” Sorensen remembered thinking, “or was it all crazy?” 

Crazy or not, eight years later the goal was realized with the journey of Apollo 11 in July 1969 – five months before Kennedy’s deadline of reaching the moon “before the decade is out.”

While the specific goal of reaching the moon was achieved, Kennedy’s broader intention – to demonstrate to the world America’s supremacy in technology and national will – was also more than satisfied.  And had the United States not been engaged at the same time in a hopeless, endless war in Vietnam, the benefits to the nation might have been even more pronounced.

 

Looking back at a deeper legacy

Now, 50 years later, we can look back and ask: did Apollo spawn a lasting legacy?  The most obvious answer is yes – the US reached the moon, and with that achievement firmly established the United States as the pre-eminent leader in science and engineering of the 20th century.

Thanks to Apollo, America still supports a vigorous space program – even without a current schedule of manned missions – that engages both the public and private sectors.  And we can, of course, itemize the direct benefits of our efforts in space with a tally of specific products as diverse as fire prevention fabric, improved solar cells, freeze-dried food, and medical monitoring, among hundreds of others.

But beyond those individual achievements, the enduring advances are less tangible, yet even more profound.  

 

The jolt of inspiration

The greatest value of Apollo to the American experience emerged from the sudden, abrupt focus of technological inspiration required to create the lunar mission – the largest financial outlay ever made by a peacetime nation.  

While one can point to the growing needs of national defense in the Cold War as a catalyst for economic growth, it was the research and development across the spectrum of science required for the Apollo Program, compressed from decades into a few years in the 1960s, that acted as a formidable accelerator in advancing the nation.  The jolt supplied by the manned space program produced a trail of benefits – not only in the results achieved in space, but in the technical possibilities that the mission illuminated.

Transcending individual inventions and products, Apollo stimulated the broad expansion of advances over a wide range of industries and fields – including many enlightened enterprises that are both profitable and progressive, such as organizations involved in precision medical equipment or alternative energy sources. 

For example, the process of creating the Apollo Guidance Computer, with its razor-thin margin of capabilities needed to support the moon missions, became a high-profile inspiration within the computer industry to create new generations of components that were more powerful, smaller, and cheaper.  

The country’s growing needs for digital technology in space programs created a thriving market – and competition – in the creation of semiconductors and related hardware for the computing industry. U.S. government projects – primarily defense and space – were the world’s largest purchasers of semiconductors – accounting for almost 70 percent of all sales – spurring production and shrinking prices. In 1962, the average price of a computer chip was $50; by 1973, the price had fallen to 63 cents. 

Beyond just shrinking the costs of technology, Apollo proved to be a powerful catalyst for the digital realm long after the missions were over – with important links to the growth of Silicon Valley and other tech crucibles. The path was clear for the development of new types of computers that did not yet exist, including computers created for individuals. Soon to come were the first personal computers in the 1970s and 1980s; the internet was not far behind. 

 

New leaders, new progress

This progress was possible largely because of growth in technological leadership – a new generation that rose in American business, science, and engineering thanks to the flourishing of the space program.

“Many people point to guys working in their garages in the Silicon Valley as the starting point for the technology industries of the 1980s,” said space historian Roger Launius. “But much of the innovation of that era had already come from scientists and engineers trained to work in the space program; after Apollo, these people dispersed and went everywhere – to companies, to universities, to think tanks – taking with them the knowledge they had gained from working on the space program. 

“We saw a blossoming of technology in the 1970s,” said Launius, “that was in no small part the result of the base of knowledge that built up during the space program, and that was pushed by Apollo.”

The Apollo 11 landing on the Moon was the most important peacetime achievement of the 20th century.   But even more important is the broad range of change inspired by Apollo that continues to touch the American experience.

 

Harlan Lebo’s book, 100 Days: How Four Events in 1969 Shaped America, is available here: Amazon US, Amazon UK

The theater in ancient Greece was a place where politics, religion, popular figures, and legends were all discussed and performed with great enthusiasm. People came from all across the Greek world to attend the popular performances held in open-air theaters. In the so-called 'glory days', some theaters could accommodate crowds of up to 15,000 people, and some were so acoustically precise that a coin dropped at the center of the performance circle could be heard perfectly in the back row.

The origin of the dramatic arts in Greece was in Athens, where ancient hymns were chanted in honor of the gods. These hymns were later adapted into choral processions where participants would dress up in costumes and enact the narratives. Eventually, certain members of the chorus evolved to carry out exceptional roles within the procession and, hence, Greek theater came to life.

Jamil Bakhtawar explains.

An ancient Roman painting from the House of Vettii in Pompeii, showing the death of Pentheus from Euripides’ Bacchae.

An ancient Roman painting from the House of Vettii in Pompeii, showing the death of Pentheus from Euripides’ Bacchae.

A festival for the gods

One of the Greek festivals was called the 'City Dionysia’. It was a festival of entertainment held in honor of Dionysus, the god of wine and fertility, and featured competitions in music, singing, dance, and poetry. The revelry-filled event was conducted by drunken men dressed up in rough goat skins (goats were thought to be sexually potent). The Greeks entertained large crowds during these festivals by dramatizing scripted plays, often with only one person acting and directing the transition of each scene. As the craft evolved, a handful of actors produced on-stage performances with a live chorus and musical accompaniment.

One particular theater, built to honor Dionysus, was called Epidaurus. It was among the greatest theaters of the western world and is often considered a marvel of engineering even by today’s standards. Fifty-five semi-circular rows of seats were built into the hillside with such precision that the theater had near-perfect acoustics. Standing within the sanctuary of the god of medicine, Asklepios, the site was believed (like theaters in general) to have beneficial effects on mental and physical health. It was regarded as an important healing center and is considered a cradle of the medicinal arts. Two and a half thousand years later, it is still in use and is among the largest of the surviving Greek theaters.

 

The Greek tragedy

Little is known about the origins of Greek tragedy before Aeschylus (c. 525-c. 455 B.C.), the most innovative of the Greek dramatists. His earliest surviving work is 'Persians', produced in 472 B.C. The roots of Greek tragedy, however, likely lie in the Athenian spring festival of Dionysus, which included processions, religious sacrifices, parades, and competitions. Early Greek theater focused on tragic themes that still resonate with contemporary audiences. The word “tragedy” translates as “goat song,” a phrase rooted in the Dionysus festival, at which participants danced around sacrificial goats for a prize. The original Greek tragedies centered on mythology or events of historical significance and portrayed the protagonist’s search for the meaning of life. At other times, playwrights focused the tragedy on the nature of the gods and goddesses.

Of the few surviving Greek tragedies, all but Aeschylus’ Persians draw from heroic myths. The protagonist and the chorus portrayed heroes who were the objects of religious cult in Attica in the fifth century B.C. Often, the dialogue between actor and chorus served a didactic function, linking it to a form of public discourse akin to debates in the assembly.

Each surviving tragedy begins with a prologue that sets out the background to the action. The chorus then enters with the parodos, a transition through which the audience becomes familiar with the characters, exposition, and overall mood of the setting. Finally, the exodos marks the departure of the chorus and of the characters developed over the play’s duration.

Some of the oldest surviving tragedies in the world were written by three renowned Greek playwrights. Aeschylus composed several notable tragedies, including “The Persians” and the “Oresteia” trilogy. To this day, drama in all its forms still functions as a powerful medium for transmitting ideas.

 

Ancient comedies

The exact beginnings of Greek comedic plays are not known. Some historians believe they may have started with actors mimicking one another and making jokes about current plays. During the 6th century BCE, plays started to incorporate scenes involving actors dressed in exaggerated costumes, mostly of animals, who would then perform a dance, much to the audience’s delight. Various humorous poems and songs would also be performed during plays.

Unlike Greek tragedy, the comic performances produced in Athens during the fifth century B.C., the 'Old Comedy', ridiculed mythology and prominent members of Athenian society. There seems to have been no limit on speech or action in the comic exploitation of sex and other bodily functions. Terracotta figurines and vase paintings dated to around the time of Aristophanes (450–ca. 387 B.C.) show comic actors wearing grotesque masks and tights with padding on the rump and belly, as well as a leather phallus.

In the second half of the fourth century B.C., the 'New Comedy' of Menander (343–291 B.C.) and his contemporaries presented fresh interpretations of familiar material. In many ways comedy became simpler and tamer, with very little obscenity. The grotesque padding and phallus of the Old Comedy were abandoned in favor of more naturalistic costumes that reflected the playwrights’ modern style. Subtle differentiation in the masks worn by the actors paralleled the finer delineation of character in the texts of the New Comedy, which dealt with private and family life, social tensions, and the triumph of love in a variety of contexts.

 

Major playwrights of the time

There were many Greek playwrights, but only the major works of three dramatists have survived: Aeschylus, Sophocles and Euripides. They all wrote plays for the City Dionysia, but the central ideas of their plays differed.

The plays of Aeschylus explore the dangers of arrogance, the misuse of power and the bloody consequences of revenge. Aeschylus was the first to introduce a second actor during on-stage performances. His trilogy, the Oresteia, explores the chain of revenge set into motion by king Agamemnon’s decision to sacrifice his daughter in return for a fair wind to take his ships to Troy. 

Sophocles wrote seven surviving tragedies, including “Antigone,” “Electra,” and “Oedipus Rex.” His plays focus on the redemptive power of suffering. A good example is the character of Oedipus in Oedipus Rex: a good-hearted but headstrong young man who kills his father without knowing who he is, and marries his mother without realizing that she is his biological mother. When he discovers what he has done, he blinds himself in remorse. Sophocles introduced a third actor to on-stage performances and was the first dramatist to use painted backdrops.

Euripides, the last of the three, belongs to a somewhat later generation of Greek thought and is a far more troubled, questioning and unsatisfied spirit. He was thought of as the most direct of the three in his questioning of Athenian society and its established beliefs. He composed over ninety plays, of which roughly eighteen survive to be studied and adapted by contemporary playwrights, including “Medea,” “Hercules,” and “The Trojan Women.” Critics lambasted the questionable values presented in Euripides’ on-stage performances, which often depicted psychological archetypes not explored by previous playwrights. Many authors modeled their work on Euripides’ experimentalism centuries after his death.

The Grecian playwrights also injected humor into certain aspects of theater. Popular comedians competed during the Athenian festivals, including Aristophanes, who authored more than forty plays. Among his eleven surviving plays is the controversial “Lysistrata,” a tale about a strong, independent woman who leads a female coalition against the war in Greece.

Each of these playwrights introduced something new to Athenian drama when their plays were chosen as the best, and it is largely because of these writers that theater developed into what it is today. Despite the limited number of surviving tragedies and comedies, the Greeks greatly influenced the development of drama in the Western world.

 

The art behind a mask

It was common practice for Greek actors to use masks. These theater masks were thought to amplify the actor’s voice and contribute to the theatrical ambiance. They have since become icons of ancient Greek culture and sought-after collectors’ items. Highly decorated masks were worn during feasts and celebrations as well as during funeral rites and religious ceremonies. Stage masks were constructed out of lightweight organic material, such as linen or cork, and copied from marble or bronze faceplates. Often a wig was attached to the top of the mask. The mask was then painted, usually brown to represent a man and white for a woman. There were two holes for the eyes, large enough for the actor to see the audience but small enough to keep the audience from seeing him. The shape of the mask amplified the actor’s voice, making his words easier for the audience to hear.

There were several practical reasons for using masks in the theater. Due to the sheer size of the amphitheaters they were performing in, exaggerated costumes and masks with vivid colors were much more visible to a distant member of the crowd than a regular face. Masks were also worn for a transformation into character. There were only two or three actors present in each production, so masks allowed for quick character changes between scenes. Masks were tools for the audience to learn something about the character, whether it be a huge beard and roaring mouth to represent the conquering hero, or curved nose and sunken eyes to represent the trickster. Tragic masks carried mournful or pained expressions, comic masks were seen smiling or leering. 

Many masks have survived, as well as literary descriptions of the masks and artistic recreations in frescoes and vase paintings. One can see evidence of the importance of masks at almost any surviving ancient Greek theater. Statues depicting the grotesquely laughing, crying, or raging masks stare down at innocent viewers, their lips hugely engorged and their eyes so rounded and saucer-like that one might think the mask itself had a mind of its own.

 

Theatrics of the stage

The Greek theater stage consisted essentially of the orchestra, a flat dancing floor for the chorus, and the actual structure of the theater building, known as the ‘theatron'. Since theaters in antiquity were frequently modified and rebuilt, the surviving remains offer little evidence of the nature of the theatrical space available to Classical dramatists in the sixth and fifth centuries B.C. There is no physical evidence for a circular orchestra earlier than that of the great theater at Epidauros, dated to around 330 B.C. Most likely, the audience in fifth-century B.C. Athens was seated close to the stage in a rectilinear arrangement, such as appears at the well-preserved theater at Thorikos in Attica. During this initial period in Greek drama, the stage and most probably the skene (stage building) were made of wood. Vase paintings depicting Greek comedy from the late fifth and early fourth centuries B.C. suggest that the stage stood about a meter high with a flight of steps in the center. The actors entered from either side or through a central door in the skene, which also housed the ekkyklema, a wheeled platform used to roll out set scenes. A crane, located at the right end of the stage, was used to hoist gods and heroes through the air onto the stage. Greek dramatists made the most of the extreme contrasts between the gods up high and the actors on stage, and between the dark interior of the stage building and the bright daylight.

 

Athens

The city of theater was, indeed, Athens. Athens birthed drama, bred drama, and ultimately was responsible for cultivating it into the most important art of the Classical and modern worlds. Greek theater has proven itself timeless, as it continues to entertain audiences with its ability to portray universal themes. Although many of the plays have been lost through the ages, many originals from the 5th and 6th centuries BCE are regularly performed around the world and are still regarded as the pinnacle of the craft.

 

What do you think of ancient Greek theater? Let us know below.