David Hamilton’s forthcoming book The Enigmatic Aviator: Charles Lindbergh Revisited finds parallels between current events and an earlier era and looks at the ever-changing verdict on Lindbergh.

Here, the author considers American isolationism in the context of his new book.

Charles Lindbergh shown receiving the Distinguished Flying Cross from President Calvin Coolidge in June 1927.

The American Founding Fathers counseled that the nation should ‘avoid foreign entanglements’, and President Trump's recent hesitation in support of Ukraine brings back memories of earlier, similar debates. In the 1930s, the mood in Congress and the country was that American involvement in World War I had been a mistake: it had failed to make the world ‘safe for democracy,’ and too many lives had been lost or damaged. But by 1940, President Roosevelt had started trying to convince America to get involved in the new war in Europe. Public opinion was divided, and although there was majority support for giving beleaguered Britain help of some kind, the polls were against putting ‘boots on the ground’.

Leading the opposition to deeper involvement was the America First Committee (AFC), the most significant grassroots movement in American history; it preferred the term ‘anti-intervention’, which did not suggest total withdrawal from the rest of the world. The AFC had its strongest support in the Midwest, while FDR and the hawks in his cabinet had the backing of the anglophile East Coast. The AFC had bipartisan political support and was joined by writers and historians. Eventually, its star speaker at the regular nationwide rallies was the American aviator hero Charles Lindbergh (1902-1974). After his famous solo flight from New York to Paris in 1927, he had retained a remarkable mystique, since he coupled his success in the world of commercial aviation with a policy of avoiding the still-intrusive press, particularly the tabloids, by using the European royalty’s strategy of ‘never complain, never explain.’ He traveled widely in Britain, France, Germany, and Russia and was proudly shown their military planes; it was his confidential reports on Luftwaffe strength, sent via the American embassy in Berlin to G2 intelligence in Washington, that eventually convinced President Roosevelt in 1938 to order a rapid expansion of the American Air Corps.

From 1939, Lindbergh added his voice to the anti-intervention movement, starting with historically based, closely argued radio broadcasts and later speeches at the large AFC rallies. His emergence was doubly uncomfortable for FDR, who not only feared Lindbergh’s contribution to the debate but knew that his close connection to the Republican Party (he had married the daughter of Dwight Morrow, the American ambassador to Mexico) meant he could be a formidable populist political opponent should he run for president, as many had urged. In response, FDR and his inner cabinet, aided by compliant congressmen and friendly columnists, mounted an unpleasant campaign against Lindbergh; rarely debating the issues he raised, they preferred ad hominem attack. His travels in Germany and interest in the Luftwaffe made him vulnerable, and the jibes included, but were not limited to, claims that he was a Nazi, a fifth columnist, an antisemite, a quisling, and even, mysteriously, a fellow traveler.

 

World War Two

It is often said that Lindbergh and the AFC lost the intervention argument to FDR, but in truth Pearl Harbor brought abrupt closure to a still evenly balanced debate. Thereafter, during the war, Lindbergh worked in the commercial aviation sector and then flew 50 combat missions with the Marines in the Pacific. After FDR’s death, the unpleasantness of the intervention debate was admitted and regretted (‘there was a war going on’), and some private apologies reached Lindbergh. Even the FBI was contrite. FDR had brought the Bureau in to investigate Lindbergh, even using illegal wiretaps, yet when J. Edgar Hoover closed its huge file on him, he added a summary noting that ‘none of the earlier allegations had any substance.’

Lindbergh was welcomed back into the post-war military world. As a Cold War warrior, he worked with the atomic bomb air squadrons and served on crucial ballistic missile planning committees. From the mid-1950s, he successfully took up many conservation issues. He was now a national icon again, though a reclusive one, and his book on the Paris flight, The Spirit of St. Louis, sold well. From Truman’s administration onwards he was in favor at the White House, and the Kennedys sought the Lindberghs’ company, invitations which the couple occasionally accepted. Now on the White House’s social A-list, Lindbergh was also put on important conservation committees by Nixon. When he died in 1974, President Ford expressed national sympathy. Later, Reagan’s addresses to young people often invoked Lindbergh as a role model.

 

Lindbergh disparaged

But by the end of the century something changed, and his place in history became uncertain. This was not the result of new scholarly work or an adverse biography: all the post-war literature had been favorable to him, including Berg’s thorough Pulitzer Prize-winning biography of 1998, which cleared him of any Nazi leanings or antisemitism.[1] The damage to Lindbergh instead came from historical fiction. The premise of Philip Roth’s best-selling novel The Plot Against America (2004) was the well-worn ‘what if’ trope of Hitler winning the European war; Lindbergh, elected US president, aligns with him and acts against the Jews. Roth's usual disclaimer was that his story was not to be taken seriously, but it was. Historical fiction may be harmless when sales are low and the author obscure, but its inventions can be dangerous in the hands of a distinguished author. An HBO television series of the same name, based on the book, followed in 2020, and it often felt like a documentary. Serious-minded reviewers of the series took the opportunity to reflect widely on fascism and antisemitism, with Lindbergh still featured as a central figure.

The mood at the time was ‘wonkish,’ looking again at figures of the past and seeking feet of clay or swollen heads, or both. Those seeking any justification for Roth’s allegations returned to the smears and insults directed at Lindbergh during the intervention debate. The old 1940-1941 jibes were revisited, and, yielding to presentism, the charge of ‘white supremacist’ was added to the dreary list, an accusation which at the time had escaped even Lindbergh’s most vocal opponents. Evidence for all the old labels was lacking, and to prove them corners were cut even by serious historians, leading to a regrettable number of mangled or false quotations. The most vivid tampering with the historical record was the misuse of a newspaper picture taken at an AFC rally in 1941. It shows Lindbergh and the platform party with arms raised, and the caption at the time noted that they were loyally saluting the flag. The gesture was the so-called Bellamy salute, which was soon officially discouraged and replaced in 1942 by the present hand-on-heart version because of its similarity to the Nazi salute.

Washington’s Smithsonian Institution was now also revisiting Lindbergh, and although it had proudly used Lindbergh’s plane Spirit of St Louis as its star exhibit since 1928, it now deserted him. An article in its Smithsonian Magazine, after denigrating the AFC, described Lindbergh as ‘patently a bigot’ and used the image suggesting a Nazi salute.[2] The Minnesota Historical Society, also with long-standing links to the Lindbergh heritage, likewise turned on him, answering inquiries about Lindbergh by directing them mainly to the Roth novel and the television program based on it. It also recommended a shrill new book on Lindbergh subtitled ‘America’s Most Infamous Pilot.’ Lindbergh had not been ‘infamous’ until 2004.

The 100th anniversary of Lindbergh's classic flight arrives in 2027. The damage done by Roth’s mischievous historical fiction should be met with good, evidence-based history that restores the story of this talented man.

 

David Hamilton is a retired Scottish transplant surgeon. His interest in Lindbergh came from the aviator’s volunteer laboratory work with the Nobel Prize-winning transplant surgeon Alexis Carrel in New York.[3] His forthcoming book is The Enigmatic Aviator: Charles Lindbergh.


[1] A. Scott Berg, Lindbergh (New York, 1998).

[2] Meilan Solly, ‘The True History Behind “The Plot Against America”’, Smithsonian Magazine, 16 March 2020.

[3]   David Hamilton, The First Transplant Surgeon (World Scientific, 2017).

One of the most devastating conflicts in history, the Second World War touched the lives of millions, and it played a huge role in the life of Oscar-winning actress and beloved style icon Audrey Hepburn. Audrey’s early life was spent in Holland in the midst of the Nazi occupation, where she witnessed the best and worst of humanity and developed the ideals that would influence her later life.

Erin Bienvenu explains.

Audrey Hepburn in 1952. Available here.

Audrey was born in Brussels, Belgium, on May 4, 1929, to an English father and a Dutch mother. Her mother, Ella van Heemstra, was from an aristocratic family and already had two sons, Alexander and Ian, from a previous marriage. She had met Audrey’s father, Joseph Ruston, in the Dutch East Indies. Through a genealogy study, she came to believe her husband was a descendant of James Hepburn, the third husband of Mary Queen of Scots. Excited by this royal connection, Ella insisted the family adopt the name ‘Hepburn-Ruston.’

When Audrey was six her father walked out on the family, an event that would haunt her for the rest of her life. He returned to England, where Audrey was also sent to school. Despite their close proximity, Joseph never visited his young daughter, and the lonely Audrey immersed herself in the world of ballet. It enriched her life, and she was determined to become a prima ballerina.

 

War Begins

Audrey’s life was uprooted once again when the Nazis invaded Poland and Britain declared war. Ella believed her daughter would be safer in Holland, which had a history of neutrality, and genuinely thought that Hitler would respect the country’s stance. Audrey was driven to the airport by her father; it was to be the last time she would see him until she was an adult.

Little Audrey had largely forgotten how to speak Dutch during her time away, and she found school difficult; once again, dance became her escape. She lived with her mother and brothers in Arnhem, where they were close to extended family.

All hopes of safety were dashed when the Nazis invaded the Netherlands in May 1940. At first, Audrey remembered, life seemed to go on as normal. The soldiers behaved politely in an attempt to win over the Dutch people. Audrey continued to go to school, though her lessons increasingly became focussed on the war and Nazism. That same year Audrey enrolled in the local dance school, where her teachers were impressed with her passion and gracefulness.

Despite their initial conciliatory behaviour, the Nazis soon revealed their true colours, and life for the citizens of Arnhem began to change. Food was rationed, and day-to-day life became increasingly dangerous. Audrey’s brother Alexander was determined not to be forced into work by the Germans, and he went into hiding; Ian, however, was not as lucky. To his family’s immense distress, he was rounded up and forced to work in a Berlin munitions factory.

Audrey was also a witness, on multiple occasions, to the local Jewish population being herded onto cattle cars at the train station, their destination then unknown. The horror of these scenes became a recurring theme in her nightmares; she was appalled at the way the Nazis treated people. She saw the Nazis shooting young men in the streets, the violence becoming a constant in people’s lives.

Then her beloved Uncle Otto was arrested as a reprisal for an underground attack on a German train. Otto was held hostage in the hope that the real perpetrators would come forward. They did not, and he and four other men were executed some weeks later.
Adding to her distress, Audrey’s parents had a complicated relationship with the Nazis. Like many in their social circle, both Joseph and Ella had initially been attracted to the ideas of fascism; they even met Hitler in 1935. But as the war went on, Ella’s beliefs began to change: she had seen too much cruelty and suffering. Joseph, meanwhile, spent the war years imprisoned in England for his fascist sympathies.


Helping the Resistance

Distraught by what had happened to Otto, Ella and Audrey went to live with his wife, Miesje, Ella’s sister, and their father in the town of Velp, just outside of Arnhem. Audrey held a special place in her heart for her grandfather, with whom she spent many hours doing crossword puzzles; he became the father figure she had so longed for.

It was also in Velp that Audrey began doing volunteer work for a local doctor, Hendrik Visser t’Hooft, a man with close ties to the resistance. Through the doctor, Audrey and her mother became involved in events known as ‘black evenings’, concerts organised to raise money for the resistance. In private homes, sometimes her own, Audrey danced for a select audience with the windows blackened and the doors guarded so that no Nazi could look in. It was a family affair: Ella made her daughter’s costumes and Audrey choreographed her own routines. It was a welcome, though risky, distraction from the events going on outside. Audrey was to remember fondly how, “The best audience I ever had made not a single sound at the end of my performance.”

This was not the only way Audrey helped the resistance. At least once she delivered copies of the underground newspaper, Oranjekrant. She hid copies in her socks and shoes and then cycled off to deliver them. On another occasion the doctor sent her into the woods near Velp with food and a message to a downed allied airman. No doubt Audrey’s fluency in English made her valuable in this role. On her way home however, she ran into a German police patrol. Thinking quickly and remaining calm, Audrey began picking wildflowers which she offered to the men. Seeing such a young, innocent girl, they sent her on her way without a second thought.

As the war continued, food became an ever-increasing problem, and in order to supplement their meagre rations many were forced to forage in the countryside for additional supplies. The van Heemstras ate nettles and grass and made flour from tulips, but it was never enough, and Audrey was soon suffering from the effects of malnutrition.

Another problem arose when she turned fifteen. In order to continue dancing she was required to register as a member of the Dans Kultuurkamer, an institution created by the Nazis to control the arts in Holland. Audrey wouldn’t consider joining such an organisation, and this, coupled with her poor health, led her to give up her dance lessons temporarily. But dance was vital to Audrey’s well-being, so she began teaching others instead, offering small private lessons where she could pass on her knowledge and enthusiasm.

Operation Market Garden

In September 1944 the allies launched Operation Market Garden – what was supposed to be the beginning of the successful liberation of the Netherlands. They landed near Arnhem, and in the fierce fighting that followed the town was all but destroyed. From her home in Velp, Audrey could hear the almost continuous sound of gunfire and explosions. The Germans ordered the complete evacuation of Arnhem, and many of the displaced made their way to nearby Velp. The van Heemstras, who also had an unwelcome Wehrmacht radio operator working in their attic, opened their home to about forty refugees. The scenes all around evoked a strong response in the compassionate Audrey. She later said, “It was human misery at its starkest.” She was eager to help, offering dance lessons to the anxious citizens of Arnhem in an effort to distract them from the horror outside. She also continued to help Dr. Visser t’Hooft with the endless stream of wounded who came pouring in. Soon even local schools were converted into makeshift hospitals, but conditions were desperate.

During this time Audrey’s family also hid a British paratrooper in their cellar. If discovered, they would all have paid with their lives, but for Audrey the situation was also exciting. The paratrooper was a kind of knight in shining armour; he represented liberation and freedom to her. It’s not known how long he remained with the family before the resistance could spirit him away, but eventually the Nazis ordered all the refugees from their temporary homes.

 

Surviving

When Operation Market Garden did not succeed, the Dutch were forced to endure what became known as the ‘hunger winter.’ Disease and starvation were rife and Audrey developed jaundice. Then in March 1945 she was rounded up on the street with several other girls, destined to work in the understaffed German military kitchens. Thankfully Audrey had the presence of mind to run off when the soldiers had their backs turned. She made it home and hid in the cellar until it was safe to come back out.

Not long after, the allies again began to close in on the Germans, and Arnhem was once again under siege. The van Heemstras spent much of their time in the safety of their cellar, occasionally resurfacing to assess the damage to their home and to try to gain any news of the invasion. They lived as best they could, never quite sure what each day would bring, and then, finally, after weeks of fighting, the constant barrage of noise stopped.

Hearing voices, Audrey and her family cautiously emerged from their hiding place. At their front door they discovered a group of English soldiers; Audrey was overjoyed. She recalled, “Freedom has a bouquet, a perfume all its own – the smell of English tobacco and petrol.” The soldiers were equally delighted to have liberated an English girl! The war was finally over.

Audrey was just sixteen years old, malnourished and suffering from jaundice, asthma, edema and anemia – but she was alive, and that was what mattered most to her. So was her immediate family: her two brothers had also survived the war.

Audrey resumed her ballet studies, which took her to Amsterdam and then London, and ultimately to a career as an actress. However, she never forgot her war years; they shaped her as a person and led to the role she most valued, helping underprivileged children in war-torn countries as an ambassador for UNICEF.

 


 

 

References

Diamond, Jessica Z & Erwin, Ellen (2006), The Audrey Hepburn Treasures: Pictures and Mementos from a Life of Style and Purpose. New York: Atria Books

Dotti, Luca (2015), Audrey at Home: Memories of My Mother’s Kitchen. New York: Harper Design

Hepburn Ferrer, Sean (2003), Audrey Hepburn: An Elegant Spirit. New York: Atria Books

Matzen, Robert (2019), Dutch Girl: Audrey Hepburn and World War II. Pittsburgh: GoodKnight Books

Paris, Barry (1996), Audrey Hepburn. New York: Berkley Books

From the moment Germany sought an armistice in November 1918, the populace was in total disbelief that the Imperial Reich could have been defeated. For many, the answer lay outside military reality and was instead deeply rooted in conspiracy: that at the decisive hour, the German army had been betrayed at home, and that the betrayal had been led by Jews and socialists. The myth would prove impossible for the fledgling democratic republic to shake off, and the Nazis would subsequently make it part of their official history. How did it emerge, and why did it prove so persuasive?

James Russell explains.

A 1924 cartoon showing the leaders Philipp Scheidemann and Matthias Erzberger stabbing the German army in the back. Available here.

The Roots of the Stab-in-the-Back

The ‘stab-in-the-back’ myth can be first traced to a growing wartime notion that Germany’s war effort was being weakened by strikers and shirkers. These arguments were not unique to Germany: Allied cartoons, for example, often accused strikers of weakening the nation’s war effort.

However, in Germany these arguments began to take on overtly political and racialist undertones, often encouraged by the wartime government. As the last German offensive of the war descended into failure, its collapse was blamed on strikes denying the soldiers what was required in their moment of need. Supposedly treasonous elements within German society were blamed, primarily Jews and socialists.

The key to understanding how the myth took hold lies in the nation’s widespread wartime narrative: that Germany was fighting a just war, and that it was winning. German propaganda, under the military dictatorship of Erich Ludendorff and Paul von Hindenburg, repeatedly hammered home these messages.

The absence of enemy occupation, and the populace’s distance from the war’s front lines, supported these beliefs. The vast majority of fighting on the Western front took place in France and Belgium, only reaffirming to the people the false belief that Germany could not be losing.

With such a perception of Germany’s apparent strength, the scene was set for the conspiracy to proliferate when news of defeat emerged. The first official declaration utilising the ‘stab’ metaphor probably occurred on 2nd November 1918 when a member of the Progressive People’s Party announced to the German parliament:

“As long as the front holds, we damned well have the duty to hold out in the homeland. We would have to be ashamed of ourselves in front of our children and grandchildren if we attacked the battle front from the rear and gave it a dagger-stab.” (1)

 

The German Defeat

The news that Germany had sued for an armistice, signed on 11 November 1918, shattered the nation’s existing assumptions. Given the existing narratives, many believed that Germany could not have been defeated militarily. For many, the only explanation for defeat was that, inspired by revolts at home, the newly empowered socialist government had committed treason by unnecessarily suing for peace.

Indeed, it was an elaborate plan by Ludendorff and Hindenburg to pin the blame on the new democratic government. By making no official declaration of defeat themselves, and by ceding the responsibility of suing for peace to the new republican government, they successfully deflected much of the blame from themselves and onto the democratic politicians.

Ludendorff claimed that Germany’s strikes constituted ‘a sin against the man at the front’ and an act of ‘high treason against the country’. They were the ‘work of agitators’ who had infatuated and misled the working class, and both were made the culprits of the German defeat. (2) The comments were entirely hypocritical: Ludendorff himself had privately pressed both the Kaiser and the politicians for an armistice, given Germany’s imminent military collapse.

Meanwhile, whilst testifying before a parliamentary committee investigating the causes of the German defeat, Hindenburg remarked: “An English general said with justice: ‘The German army was stabbed in the back.’ No guilt applies to the good core of the army.” (3) Given the enormous prestige won by both Hindenburg and Ludendorff in the wartime struggle, especially the former, their testimonies lent powerful weight to the myth.

The situation was not helped by the republic’s first President and leader of the Social Democrats, Friedrich Ebert. Intended as public recognition of soldierly effort and sacrifice rather than any conspiratorial suggestion, his declaration from the Brandenburg Gate that no enemy had vanquished the returning soldiers nonetheless added greater legitimacy to the myth’s claim.

Historians unanimously agree that, faced with a dramatic shortage of supplies, the flood of US soldiers and materiel into the Allied ranks, a collapsing home front, and the possibility of an Allied march through Austria, Germany was in a position where defeat was inevitable. Furthermore, the responsibility for the collapse of morale on the home front rested squarely on the German government, which had prioritized the needs of the front line at the expense of civilian well-being.

 

The Myth that Never Dissipated

Throughout its existence, the Weimar Republic witnessed an unhealthy deployment of the ‘stab-in-the-back’ – a myth which challenged the very foundations of the state. Matthias Erzberger, head of the German delegation which signed the armistice in November 1918, would pay for such a signing with his life. He was assassinated in 1921, a death welcomed by many. Many right-wing groups refused to recognise anything other than the total complicity of all democratic politicians in the German humiliation. This was the case even when these politicians vehemently protested the perceived severity of the Versailles Treaty.

Adolf Hitler heavily utilised the myth with his unremitting denunciation of the ‘November Criminals’ who had sued for an armistice in November 1918. Such castigations became a constant feature of Nazi propaganda, with their accusations of ‘high treason against the country’ being particularly virulent in their antisemitism. The Jews had ‘cunningly dodged death on the plea of being engaged in business’, and it was this ‘carefully preserved scum’ that had sued for peace at the first chance presented. (4)

Unlike the events in the Russian Empire in 1917, the revolution in Germany’s political landscape over the course of 1918 and 1919 was partial. The key party in deciding Germany’s future, the Social Democrats, compromised, pursuing their ideals whilst maintaining many continuities from the old regime. Hence Germany’s courts, army and educational system underwent little change despite the new republican setup. These institutions, still populated by many individuals loyal to the old regime, empowered the myth’s proliferation. When Hitler faced charges of treason for launching a coup in 1923, the Munich court he faced was lenient to say the least. It allowed him an uninterrupted three-hour tirade to defend his actions and expound the illegitimacy of the Republic. Despite being found guilty of treason, Hitler was imprisoned in pleasant conditions for only a year. (5)

One of the most destructive implications of the myth transpired in the Second World War: Hitler declared in 1942, “the Germany of former times laid down its arms at a quarter to twelve. On principle I have never quit before five minutes after twelve.” (6) Unlike the First World War, Hitler’s Germany would not surrender until the bitter end, with all the death, ruin and misery resulting therefrom.

 

What role did the Socialists and Jews actually have in the First World War?

Contrary to prevalent assumptions and prejudices, the German-Jewish population was overrepresented in the army, rather than ‘shirking’ as antisemites consistently argued during and after the war. Many Jewish Germans saw the war as an opportunity to demonstrate, once and for all, their allegiance to the nation and to eliminate the remaining traces of antisemitism. In 1916 the authorities, subscribing to the shirking argument, ordered a census of Jews in the army. The results indicated Jewish overrepresentation rather than underrepresentation, but they were never released to the public. This concealing of the truth only fuelled antisemitic conspiracy.

Meanwhile, German socialists found themselves in an awkward position throughout the war. Its outbreak in 1914 divided them, culminating in a fractious split later in the war. Yet for the most part, German socialists remained loyal to the nation’s war effort, as part of a wider German political truce. Naturally, the political leadership of the Social Democrats attempted to balance the more radical elements of Germany’s workers against the state’s demands for contributions to the war effort.

Germany’s strikes of January 1918 signified a particularly divisive episode, with major ramifications for the post-war scene. By mediating between the strikers and the state, the Social Democrats were blamed by the more radical left-wing parties for unnecessarily prolonging the war, and, on the other hand, blamed by the right wing for denying the resources needed by German soldiers at the eleventh hour. In 1924, President Ebert would be found technically guilty of treason by the German courts for his role in the mediation. It is, however, worth noting that Germany lost far fewer total days to strikes than Britain did during the war.

The stab-in-the-back myth remains a powerful reminder that Germany’s first experience of democracy played out against a fundamentally unhealthy backdrop throughout its existence. It also warns of the dangers of unfounded claims in politics – and of the importance for any democracy of thoroughly combating such falsehoods.

 


 

 

Sources:

(1)   Ernst Muller, Aus Bayerns schwersten Tagen (Berlin, 1924), p.27.

(2)   Erich Ludendorff, My War Memories, 1914-1918, vol. 1 (London, 1919), p.334.

(3)   German History in Documents and Images, Paul von Hindenburg's Testimony before the Parliamentary Investigatory Committee ["The Stab in the Back"] (18 November 1919). Accessed 19 March 2025. https://ghdi.ghidc.org/sub_document.cfm?document_id=3829

(4)   Adolf Hitler, Mein Kampf (Boston, 1943), p.521.

(5)   The Manchester Guardian, 27 February 1924, 7, ‘Ludendorff Trial Opens: "A Friendly Atmosphere." Hitler denounces Marxism and Berlin’s Timidity.’

(6)   Jewish Virtual Library, Adolf Hitler: Speech on the 19th Anniversary of the “Beer Hall Putsch” (November 8, 1942). Accessed 19 March 2025.

As the conclusion of World War II approaches its 80th anniversary, the memories of this historic event are at risk of fading into darkness. As the number of surviving veterans from the war diminishes, the responsibility of preserving their history falls to the next generations.

Here, Dallas Dores considers this in the context of film depictions of World War II.

John Wayne in the 1962 World War II film The Longest Day.

Historians the world over work diligently to protect the legacies of World War II from being forgotten, but they are not the only ones. Entertainment media such as film and television have also sought to preserve the memory of World War II through a more visual process. Movies and television series in America have captured the attention of younger generations who could not otherwise have ‘experienced’ the events of World War II. While these films and series may serve as a means of preserving the historical legacy of World War II, they are not a reliable source of historical accuracy. Movies and television shows, no matter how educational they may attempt to be, are designed for entertainment. As such, these modern depictions of World War II often sacrifice much of their historical accuracy for attention-catching action or even ulterior agendas, such as national patriotism. Such depictions can even fall on the wrong side of the fine line between fact and fiction for the sake of public reception. This favoring of entertainment over education has dangerous consequences for the modern-day public memory of World War II. As such films and series become increasingly popular, much of the American public, unfamiliar with or otherwise disconnected from the generation of the 1940s, is at risk of accepting these false film narratives of the war as facts, leading to a fictionalized, homogenized interpretation of World War II.

 

Motivations

The greater acceptance of film interpretation over historical research of the war traces back to the growing popularity of such films. It is no secret that war films, particularly World War II films, have a large and enthusiastic audience in the 21st century, especially in the United States. The modern-day film is able to engage the viewer and bring them subconsciously into the action being depicted in ways typically more powerful than the average textbook. As Anton Kaes discusses in his article History and Film, historical films are able to play with certain aspects of the story being told and translate it into the present tense, giving the viewer a stronger connection with the events unfolding before them. This ability to reshape history, however, is where the primary concern with such historical films begins. Although a film may be more ‘engaging’ than a basic textbook, the historical accuracy of a film is at far greater risk of corruption. As Richard Godfrey observes in his work Visual Consumption, historical films in the US, particularly those concerned with World War II, are rarely without ulterior motives. Many of these films strive to affirm a desired national identity, one that invokes a militaristic patriotism, by elevating the role of the United States above its more accurate standing while simultaneously minimizing the more negative aspects of the past. Historical films concerning the war, particularly from the American perspective, often seek to create their own interpretations of the war rather than present the original story, with all its flaws and mistakes. This is also done on the theory that American audiences do not want to see the negatives of their nation’s past, only the positives. As Barry Schwartz puts it in his article Memory as a Cultural System, “We cannot be oriented by a past in which we fail to see ourselves”. This logic plays a critical role in the development of the historical film narrative. If filmmakers want viewers to identify with the main character of their movie, they remove as many flaws associated with that character as possible, or at the very least give the character a sense of redemption for past mistakes. Thus begins the creation of the distorted memory. As films gain greater attention than books, the average viewer begins to accept what is depicted on the screen as fact. Although there are films that clearly represent fiction, others leave the viewer questioning reality versus imagination.

 

Semi-authentic stories

With entertainment taking priority over education, these films often take liberties in their interpretations of historical events. This can take a simple form, such as a semi-authentic story based on true events, or a more extreme form in which the film becomes more fictional than historical. The Quentin Tarantino film Inglourious Basterds is a prime example of the latter. The film’s director himself made it clear that the film is a work of fiction, with the events and characters therein created out of pure imagination. A film such as this would not be looked to for historical accuracy or information. On the opposite side of the coin is Steven Spielberg’s Saving Private Ryan, arguably one of the most historically accurate World War II films. The film has been hailed as one of the most authentic depictions of the American soldier’s experience in World War II put on film. This authentic feel contributed to the film’s popularity, and, as Lester Friedman discusses in Citizen Spielberg, it is also the source of the film’s distortion of memory. The popularity of the film carried far beyond simple entertainment, as viewers began to confuse the events of the movie with real life. A number of accounts from the Omaha Beach memorial in France tell of tourists needing to be told that there is no grave for Captain John Miller, the fictional main character of the film. Although there were incidents during World War II where multiple brothers served and died in the war, the story of Private James Ryan is fictional. Even though the film is a work of fiction, it is viewed by many as fact. As John Whiteclay Chambers explains, “the public memory of war…has been created less from a remembered past than from a manufactured past”. As the collective memory of World War II becomes shaped by film rather than by firsthand account, it is distorted and altered in a way that does not clearly distinguish fact from fiction.

 

Women

Not only is the film narrative of World War II inaccurate, it is also incomplete. The stereotypical American World War II film, which in turn is connected with the stereotypical American narrative of the war, features stoic, Caucasian male soldiers on the frontlines of battle. This image contributes to many of the national identities that filmmakers and studios attempt to emphasize in such films. However, in elevating certain images and narratives, others are either diminished or left out altogether. The story of American women is rarely depicted in World War II films, and when it is, as Victoria Tutino examines, it does not tell the entire story. Tutino agrees that “Society needs these films in order to understand the context of the wartime era”; however, she warns that “society must be wary as this medium only explores one side of women’s multi-dimensional roles”. The role of American women during the war is often limited in its film depiction, either to that of a field nurse or a patriotic homefront worker. The Life and Times of Rosie the Riveter by Connie Field attempts to overcome this underrepresentation by revealing the untold story of women workers. In creating the film, Field collected the accounts of 700 American women who worked during the war and presented five main speakers on screen, two Caucasian and three African American, all from different backgrounds. The purpose of Field’s work was to challenge the notion that American women joined the workforce solely out of patriotism, as depicted in many films, and to reveal their true desires for economic gain in a male-driven workplace. Although the film attempts to fill in the incomplete narrative of women’s role during the war, it still falls into the same trap of trying to convince the audience that this is the complete story. While 700 individual accounts is certainly a substantial source of information, it is only a small portion of the larger image of millions of women who entered the workforce and the military, all from different backgrounds and for different purposes. In attempting to correct the shortcomings of many films, The Life and Times of Rosie the Riveter falls into the same trap of trying to create its own narrative.

 

African American soldiers

Just as with women in World War II, the legacy of African American soldiers is severely underrepresented in film history. Countless African Americans served in the US military during the war and fought for their nation. While this piece of history is remembered in text, it is all but forgotten in film, with the overwhelming majority of American World War II films focusing primarily, if not entirely, on the more commonly seen Caucasian soldier. This once again falls under the umbrella of a national identity, one that chooses to overlook past mistakes rather than accept them. Though some films have in recent years attempted to shed a stronger light on the African American soldier, they should not be taken without caution. Just as with the narrative of women, Clement Alexander Price questions whether “moving images of black soldiers enhance an understanding of the black experience in war, or do they, like so many written documents, reflect a circumscribed view”. The film can only encompass so much of history accurately before it becomes infected by the imagined narrative. In 2008, director Spike Lee released Miracle at St. Anna, a film he hoped would draw some much-needed attention to the experiences of African Americans during the war. The film addressed racial situations that many African Americans faced on the homefront, presenting a subject matter which other similar films typically shy away from. However, the film’s realism does not last, and the all too common element of fiction distorts the narrative. Near the end of the film, a scene takes place in which a commanding German officer takes pity on one of the main African American characters, handing him a pistol and offering words of encouragement in English. A situation such as this, in which a ranking officer of the Nazi military would not only spare but arm an enemy soldier, let alone one of non-Caucasian descent, is inconceivable from a historical standpoint. As such, the audience is left questioning whether the film’s content should be taken literally or metaphorically, as fact or fiction. This creates a paradox in which accepting the film as fact leads to belief in false narratives, while interpreting the film as fiction dismisses the true realities as exaggerations. In either case, the film’s credibility as a reliable source of historical memory is tarnished.

 

Conclusion

As World War II continues to be a subject of modern popular culture, the memory of its past becomes further entangled in a web of distortion. The use of film and television as a source of memory is increasing, and in the process factual evidence is replaced by imagined narrative. As the generation of the 1940s rapidly diminishes, their memories are left in the hands of those who use and warp them for ulterior purposes. The desire to promote a national agenda over less-than-comfortable details creates an altered narrative of the past, one that magnifies only small portions of the war and replaces the rest with imagination. As filmmakers take liberties in substituting certain aspects of history with more media-friendly interpretations, the public memory of these events is changed and distorted into an imagined fiction. These films place entertainment over education, leaving viewers wondering how they should be interpreted and oftentimes failing to discern between fact and fiction. Furthermore, as the narratives presented in films are accepted, the facts they exclude are forgotten. The true experiences of groups such as women and African Americans are more often than not either misinterpreted and altered or left out of the greater image entirely, leaving these aspects of the past to be lost to history. The use of film over text as historical reference is a dangerous path, one that homogenizes the public memory into a synthetic image so detached from reality that the true memory of the past is all but erased.

 

 

References

Lester Friedman, Citizen Spielberg, (University of Illinois Press, Urbana, Chicago)

Richard Godfrey, Simon Lilley, Visual Consumption, Collective Memory and the Representation of War, (Consumption Markets & Culture, Vol. 12, No. 4, Taylor and Francis Online, 2009)

Anton Kaes, History and Film: Public Memory in the Age of Electronic Dissemination, (History and Memory 2, no. 1, 1990)

Spike Lee, Miracle at St. Anna, (Walt Disney Studios, 2008).

Barry Schwartz, “Memory as a Cultural System: Abraham Lincoln in World War II”, (American Sociological Review, 1996)

Steven Spielberg, Saving Private Ryan, (Dreamworks Pictures, 1998).

Victoria Tutino, Stay at Home, Soldiers: An Analysis of British and American Women on the Homefront during World War II and the Effects on Their Memory Through Film, (Of Life and History, College of the Holy Cross, 2019)

John Whiteclay Chambers, David Culbert, World War II, Film, and History, (Oxford University Press, 1996)


Most Americans are disgusted by politics. Asked in 2023 for one word to describe politics, they responded, “divisive,” “corrupt,” “polarized.” For many, polarization is the root of the problem. Writers lament polarization’s dysfunctional consequences, and a national organization devoted to bridging the partisan divide is flourishing.

Yet the 2024 election only deepened polarization. Despite a divisive, topsy-turvy campaign, the polls changed little throughout 2024, and the results were within the margin of error. Most voters were locked in, and few changed their minds.

Many assume that our current predicament goes back to the 1960s. After all, the ’60s was a decade of dissent and division. It generated bitter conflict over foreign policy, race, women’s rights, sexuality, and a host of highly charged moral issues that would dominate American politics for the next half century.

Author Don Nieman’s recent book The Path to Paralysis: How American Politics Became Nasty, Dysfunctional, and a Threat to the Republic, challenges that assumption. Partisanship may be endemic, but polarization is a recent development.

Jefferson attacked as an Infidel, available here.

How Partisanship and Polarization Differ

There is a big difference between partisan conflict and polarization. American politics has always been contentious. That’s the nature of democratic politics in a country as big, diverse, and dynamic as the U.S. A positive vision may inspire, but negative campaigning and appeals to fear mobilize voters. Federalists charged that Jefferson was a godless Jacobin. Andrew Jackson’s managers alleged that John Quincy Adams had served as a pimp for the Russian Czar. LBJ suggested that Goldwater would unleash nuclear war. George H.W. Bush used Willie Horton to appeal to White fear of Black men.

However, there is much more to polarized politics than bitter partisanship and negative campaigning. Politics become polarized when support for the two major parties is closely divided and upwards of 90% of voters have decided which side they support. When voters get their news from sources that reinforce their prejudices and can’t agree on basic facts. When wild, baseless conspiracy theories become widely accepted and fear and loathing of the opposition motivates voters more than support for their party’s position on the issues. When politicians favor political theater that thrills their base over making the compromises necessary to govern.

In 1968, the U.S. was bitterly divided over race and the Vietnam War. Richard Nixon and George Wallace waged divisive presidential campaigns that appealed to fear and promised law and order. It was the opening salvo in a succession of culture wars that would define American politics for the next half century and counting.

Yet the country wasn’t polarized. Ticket-splitting was common. States routinely sent Democrats to the U.S. Senate while casting their electoral votes for Republican presidential candidates. And vice versa. Most states were in play in presidential elections and swung back and forth between red and blue control. Some Republicans were moderate and some Democrats conservative. Politicians knew they had to reach the middle, valued compromise, and got things done.

Richard Nixon used racially coded language to appeal to White Southerners, but he became the architect of affirmative action.

Ronald Reagan was the face of conservative resurgence, but he cut a deal with Democrats to raise taxes, reduce deficits, and save Social Security (a program he hated).

President Ronald Reagan with Thomas "Tip" O'Neill.

George H.W. Bush worked with Democrats to strengthen environmental regulations. He incorporated cap-and-trade policies championed by Democratic senator Al Gore into the Clean Air Act of 1990. He also recognized the threat of climate change and signed the U.N. Framework Convention on Climate Change—the precursor to the Kyoto, Copenhagen, and Paris Climate Agreements.

After surviving scurrilous attacks from the right, Bill Clinton joined his nemesis Newt Gingrich to forge a grand compromise on Social Security that was only derailed by Clinton’s affair with Monica Lewinsky and subsequent impeachment. 

George W. Bush worked closely with Senator Ted Kennedy to pass sweeping education reform that combined the accountability Republicans demanded with a massive infusion of federal support for schools that served poor children.

 

The Tipping Point

Partisanship hardened into polarization following Barack Obama’s election in 2008, when seven long-developing trends converged in a perfect storm.

US President Barack Obama taking his Oath of Office.

First, massive changes transformed the media beginning in the 1980s. Cable TV and talk radio, then the internet and social media, ensured that more and more Americans got their news from sources that confirmed their biases. News outlets proliferated. Many were fact-free, spreading lies and wild conspiracies. Debates became hotter because Americans couldn’t agree on basic facts, much less the best solutions to problems.

Second, the transition from an industrial to a service economy, coupled with trickle-down economic policy, led to a sharp increase in income inequality. After 1980, the top 10% did very well, the top 1% better, and the top .1% enjoyed wealth that put the Robber Barons to shame. But middle and working-class Americans struggled. That left many angry, alienated, and suspicious. The 2008 recession and the bank bailouts that followed stoked their anger.

Third, the Republican Party became more conservative as it made big gains in the South in the mid-1990s and after. By 2008, well over half of Republicans in the House and Senate came from the South—long the most conservative region of the U.S. The Democratic Party shifted modestly leftward while the GOP took a hard right turn. With moderate Republicans and conservative Democrats endangered species, those most inclined to compromise were missing in action.

Fourth, beginning in the late 1960s, immigration surged. By 2020, immigrants constituted 15% of the population, a proportion not seen since the 1910s. Approximately 11 million had entered the country illegally. Many White Americans worried that they were losing their country. The election of a Black president in 2008 reinforced that fear. In 2011, polling revealed that a majority of Republicans believed the baseless birther conspiracy that alleged that Obama hadn’t been born in the U.S. (and therefore wasn’t qualified to serve). It was a sure sign of growing anger, alienation, distrust, and willingness to believe the worst about the enemy.

Fifth, the Great Recession of 2008 damaged the Republican brand, and Obama’s convincing victory rattled Republican leaders. They feared a political realignment that would make Democrats the dominant party for a generation. They decided to dig in, oppose everything Obama proposed, refuse to compromise, create gridlock, and make the Democrats look ineffective.

Sixth, gerrymandering created safe congressional districts. By the early 2000s, few seats in the House were competitive. Republican incumbents had more to fear from the right flank of their own party than from Democrats. Moving toward the center to compromise with Democrats was unnecessary to sway undecided voters in the general election, and it might invite a conservative challenge in primaries.

Finally, and fatally, the GOP embraced populism. The party that had traditionally appealed to fiscal conservatives, the college educated, and the country club set found that by appealing to discontented rural and working-class Whites without a college education they won new recruits. Sarah Palin offered a glimpse of the power of populism in 2008, and the Tea Party Revolt of 2010 confirmed that it worked as Republicans re-took the House.

Populism brought new recruits to the party. Many were angry, hostile to establishment politicians they believed had sold them out, and got their news from outlets that traded in conspiracy theories. They didn’t want civil debate or politicians who compromised. They wanted leaders who would fight. Plenty of politicians—including many with Ivy League credentials—eagerly obliged.

After the 2010 mid-term elections, politics were polarized. Government regularly faced shutdowns and even default. What little the federal government accomplished was done through executive order.

Mainstream Republicans led by alumni of the George W. Bush administration sought to pull the party back after Mitt Romney’s loss in 2012. They produced a major report—the Growth and Opportunity Project—that insisted the party broaden its appeal to the young, people of color, and immigrants. It demanded a return to the center.

That didn’t happen. Donald Trump understood how the Republican Party and American politics had changed. Appeals to moderates and undecided voters had become less important in a polarized polity. There were too few of them. Mobilizing his base with a polarizing, populist campaign full of invective, exaggeration, lies, and racist and sexist language worked. It disgusted many Republicans, but Trump’s success and threats of retribution by his base against those who bucked him brought them around.

Trump captured the Republican nomination, won the White House in 2016, and ultimately made the Republican Party his own. Even after two impeachments, unsteady leadership during a global pandemic, incitement of an attempted coup on January 6, 2021, and conviction of a felony, Trump’s base never wavered. They supported him as he waltzed to the Republican nomination and won re-election on November 5, 2024.

 

Where Do We Go from Here?

Since 2016, the U.S. has experienced three presidential and two mid-term elections. The margins of victory have been tight, and power in Washington has shifted between the two major parties. The result has been wild swings in policy exemplified by withdrawing, then re-entering, and once again withdrawing from the Paris Climate Agreement. Congress is gridlocked, and executive orders have taken the place of legislation to address critical issues. American politics remains polarized, to the disgust of most voters, even as they refuse to budge from their commitment to one side of the great divide.

Those hoping for a way out of divisiveness and gridlock might look to history. From the 1840s to the early 1860s, conflict over slavery created bitter animosity and stalemate that led to civil war. The war ended slavery and realigned American politics, producing two decades of Republican hegemony. The end of Reconstruction, industrialization, and the agrarian crisis of the 1880s and 1890s once again left the two major parties closely divided with control in Washington shifting frequently. Political conflict was bitter and gridlock the order of the day. The election of 1896 broke the logjam and ushered in over 30 years of Republican dominance.

Only a fool would predict how or when our current impasse might end. If history is our guide, the most likely scenario is a crisis that scrambles political loyalties and permits one of the parties to achieve dominance. We can only hope that it’s more like the crisis of the 1890s than the cataclysm of the 1850s.

 

Author Don Nieman’s recent book is The Path to Paralysis: How American Politics Became Nasty, Dysfunctional, and a Threat to the Republic. Available here: Amazon US

Nestled in the northeastern frontier of India, the region now known as Arunachal Pradesh has long been a mosaic of cultures, languages, and traditions. Historically referred to as the North-East Frontier Agency (NEFA), this area has been home to myriad indigenous tribes, each with its distinct identity and way of life. Among these, the Tani people have etched their legacy into the land, enduring the harsh terrains and the tumultuous tides of history. It is from this resilient stock that Tako Mra, a name now synonymous with courage and foresight, emerged as a warrior, leader, and symbol of cultural preservation. His life and vision, shaped by personal experiences and key alliances, offer profound insights into the challenges of nation-building and cultural integration.

By Tadak Esso and Pupy Rigia.

The North-East Frontier Agency in 1954.

The Integration of Northeast India

The history of NEFA, and by extension Arunachal Pradesh, is deeply intertwined with the broader narrative of India's struggle for independence and its subsequent nation-building efforts. The northeastern territories, with their strategic significance and rich cultural diversity, presented a unique challenge to the nascent Indian state. As the British Empire began its retreat from the Indian subcontinent, the future of these remote regions became a focal point of India's territorial consolidation efforts.

Following the Partition of India in 1947, the integration of the northeastern territories was pursued with vigor. The region, characterized by its complex ethnic tapestry and relative isolation, had largely remained unadministered during British rule. This lack of formal governance left a vacuum that the Indian government sought to fill, albeit with significant resistance from the indigenous tribes.

Prime Minister Jawaharlal Nehru, recognizing the strategic importance of NEFA, emphasized the need for its integration, stating, "We must win the hearts of the frontier people and make them feel a part of India." However, this approach often clashed with the aspirations of the local tribes, who viewed these efforts as a continuation of colonial domination.

 

Early Life and Leadership of Tako Mra

Tako Mra was born in 1925 in the rugged hills of NEFA, an area teeming with the vibrant cultures of its various tribes. Growing up in the Sadiya region, Mra was exposed to the rich traditions of the Tani people from an early age. His education, marked by brilliance and an innate sense of leadership, set him apart. From his youth, it was evident that Mra was destined for a path that transcended the ordinary.

The tumultuous backdrop of World War II brought Mra into the fold of the British Indian Army. In 1943, he enlisted and soon found himself leading an infantry unit in the dense jungles around Yangon (in present-day Myanmar). His strategic prowess and courage in the face of adversity earned him high honors from the British. However, the war left a lasting impact on Mra—both physically, as he suffered paralysis in his left arm, and mentally, as it sharpened his resolve for the autonomy of his people.

Reflecting on his wartime experiences, Mra later wrote, “The forests taught me resilience, and the war showed me the cost of freedom. We, too, must fight for our own freedom, not against a foreign empire but against the loss of our identity.”

The post-war period was a transformative time for Tako Mra, marked by his political awakening and growing involvement in the struggle for indigenous autonomy. A pivotal moment in this journey was his encounter with Zapu Phizo, the charismatic Naga leader who championed the cause of a free and autonomous Northeast. The relationship between Phizo and Mra was not merely one of ideological alignment; it was a deep and strategic partnership forged in the crucible of shared struggle and vision.

Phizo, known for his sharp intellect and persuasive oratory, saw in Mra a kindred spirit—a leader with the military acumen and grassroots connection necessary to galvanize resistance. For Mra, Phizo represented a broader framework for the aspirations of the Northeast. Their discussions, often held in secret amidst dense jungles and remote villages, touched on the preservation of tribal cultures, resistance to forced integration, and the dream of a unified hill tribe nation.

Mra’s later writings reveal the profound influence of these exchanges: “Phizo opened my eyes to the possibility of unity among the hills. He believed in a nation not defined by borders but by the spirit of its people.” This partnership was instrumental in shaping Mra’s political strategy, as he began to envision a Northeast where cultural preservation was not just a goal but a right.

Buoyed by his alliance with Phizo, Mra’s growing concern for the cultural and political future of his people led him to engage in correspondence with key Indian leaders and colonial authorities. In 1947, Mra wrote to the Viceroy of India, advocating for the exclusion of the hill tribes from the Indian Union and their establishment as a Crown Colony. He argued that the unique cultural and geographic realities of the region necessitated a different approach, warning that forced integration would only lead to unrest.

In 1948, Mra followed this with a letter to Prime Minister Jawaharlal Nehru, cautioning against the incorporation of NEFA into the Indian Union. He warned Nehru that if India persisted in its efforts to incorporate the Abor Hills, his people would resist. Mra’s words were unambiguous: “If India pushes to incorporate the Abor Hills, my men will fight back. We cannot go from being ruled by an elite in Britain to one in New Delhi.”

These letters underscore Mra’s foresight and his deep-seated belief in cultural preservation and political autonomy. His assertion that NEFA was never Indian to begin with highlighted the distinct identity of the region. Scholars today recognize this as an early articulation of what has become a persistent tension in Indian nation-building—the challenge of integrating diverse cultural identities without erasing them.

 

The 1953 Achingmori Incident

The tensions between the indigenous tribes and the Indian government culminated in the Achingmori incident of 1953, a defining moment in the history of NEFA. The incident occurred when a group of Daphla tribals from the Tagin community, under Mra’s leadership, attacked an Indian government party. The assault resulted in the deaths of 47 individuals, including Assam Rifles personnel and tribal porters, during an administrative tour in Achingmori, in present-day Arunachal Pradesh.

Mra’s leadership in this incident was shaped by his military experience and his unwavering commitment to the autonomy of his people. His war-time tactics were evident in the precision and coordination of the attack, reflecting his deep understanding of guerrilla warfare.

To many in NEFA, the Achingmori incident was not merely an act of rebellion; it was a statement of defiance against the imposition of external authority. It was, in Mra’s words, “a fight to ensure that our children inherit a culture, not a colony.”

Prime Minister Nehru, addressing the Parliament in 1953, acknowledged the complexities of administering such remote regions. He stated, "The fact that that place is not an administered area does not mean that it is outside the territory of the Indian Union. These are virgin forests in between, and the question does not arise of their considering in a constitutional sense what their position is."

The aftermath of Achingmori saw further internal strife among the tribal communities. The Galong (now Galo) tribe, who were also affected by the massacre, sought retribution. In a tragic turn of events, Mra was betrayed by a Galo girl who poisoned his drink. This act of betrayal, possibly stemming from the complex inter-tribal dynamics and the perceived short-lived victory of Achingmori, led to Mra’s untimely death in 1954 at the age of 29.

His death marked the end of an era, but it also cemented his place in the annals of history as a symbol of resistance and the quest for autonomy.

 

Legacy and Contemporary Relevance

Despite his premature demise, Tako Mra’s legacy endures as a symbol of resistance and the quest for autonomy. His warnings about cultural assimilation and the loss of identity resonate with contemporary struggles faced by indigenous communities across the globe. The Citizenship Amendment Act (CAA) and the ongoing push for greater autonomy under the Sixth Schedule of the Indian Constitution are contemporary manifestations of the tensions Mra foresaw.

Mra’s vision of a unified, autonomous hill tribe nation remains a poignant aspiration. His life serves as a reminder of the importance of cultural preservation and the right to self-determination in the face of modern state-building efforts. His story, though often relegated to the margins of history, offers valuable insights into the broader narrative of nationhood and the enduring quest for identity.

As historian A.K. Baruah aptly puts it, “Tako Mra was not just a leader of the Tani people but a visionary who understood the fragility of cultural identity in the face of political assimilation.” His life and vision underscore the importance of self-determination and the preservation of cultural identity in the ever-evolving narrative of nationhood.

 


 

 

Suggested Reading

  1. Mamang Dai, “Escaping the Land”.

  2. The Assam Rifles: Securing the Frontier, 1954–55.

  3. Bhargava, The Battle of NEFA: The Undeclared War.


The Ancient Romans left the legacy of a sprawling empire through ruins, architecture, and a history firmly rooted in Italian culture. This legacy has inspired artists and writers for generations, and the archaeological site of Pompeii, located in southern Italy, has fascinated historians as a place frozen in time after the tragic eruption of Mount Vesuvius in 79 CE.

Here, Amy Chandler considers the history of tourism at Mount Vesuvius.

The eruption of Vesuvius in 1794 by Alessandro D'Anna.

The eruption of Mount Vesuvius buried both Pompeii and Herculaneum, and the sites were left largely undisturbed until 1748. In the immediate aftermath of the eruption, efforts were made by the new Emperor Titus to relocate survivors to nearby cities such as Nola, Naples and Capua. (1) After these initial efforts, human activity at the site dwindled, with only looters attempting to dig through the newly formed volcanic rock to retrieve valuables. Soon the site became overgrown and forgotten, leaving Pompeii frozen in tragedy. In 1863, under the newly unified Kingdom of Italy, Giuseppe Fiorelli was appointed director of the excavations to uncover Pompeii’s lost history. (1)

The story of Pompeii and the fateful days leading up to the eruption fascinates visitors and draws 2.5 million tourists to the site each year, with 1 million visiting Mount Vesuvius. Visitors are eager to walk in the footsteps of history, to understand the civilisation that was destroyed, and to connect with the past in a meaningful way. The reasons why so many visit the historic site and climb a mountain with such a volatile reputation have changed over the centuries, from spiritual enlightenment to an insatiable form of consumerism. Arguably, climbing Mount Vesuvius became what modern society would call a bucket-list item. The attitude and needs of the tourist shifted from the pursuit of spiritual enlightenment to the desire for comfort and speed, with travellers focusing on the destination rather than the journey. The types of tourists changed as new ways of travelling to and interacting with the area evolved. This article explores how tourism to Mount Vesuvius and Pompeii changed from an elite Grand Tour excursion to a destination attracting tourists from all over the world.

 

The eighteenth-century tourist

In the eighteenth century, the Grand Tour of Europe was a common undertaking among young men of the English aristocracy, intended to broaden their knowledge and experience of the world through cultural enrichment in Europe and beyond. (2) Seen as a rite of passage, many would bring back souvenirs from their travels. (2) Travel was largely confined to the elite and the wealthiest, as it was neither financially nor physically accessible to most people; the only way others could experience these locations was through travel writers and the paintings of those privileged enough to cross Europe. It wasn’t until the 1840s, with the rise of the middle class and the boom in industrialisation and railways, that this elite activity became accessible to a wider group of people who could afford to travel for leisure. This soon broadened the type of tourist and diluted the exclusivity of the elite Grand Tour. Grand Tourists visited the main cities of Europe, travelling by boat and horse-drawn carriage on a lengthy and often challenging journey that could easily last a year or longer. There was no set route, but the tour would begin by crossing the English Channel by boat and entering France. From there the route varied, but it would usually lead into Italy either via Lyon and the Alps or by sea from Marseille to Livorno. Once in Italy, the tour would drift through Florence for the Renaissance art, Venice for partying and the annual carnival, and detour to Rome to visit the ancient ruins. It wasn’t until the excavations of Herculaneum in 1738 and Pompeii in 1748 that tourists ventured further south to Naples to visit the ruins.

 

The journey up Mount Vesuvius in the eighteenth and early nineteenth centuries was made either by purchasing a private carriage, which provided a leisurely journey, or, most commonly, by taking a communal horse-drawn carriage called a corricolo or calesso. (3) The journey became easier after the opening of the railway to Portici in 1839, one of Italy’s first railway lines, which followed the coastline beneath Vesuvius and offered a picturesque ride for visitors. Before this, the journey was often an exciting and treacherous adventure in which the visitor became involved with the locals. Most tourists arrived at the Piazza della Fontana, which comprised 12 buildings and a stable. (3) This was usually where many would haggle and bargain with local tour guides. Once visitors had acquired a guide, they began the ascent up the mountain, where the landscape suddenly transformed from rich volcanic soil to a “realm of death and the slain earth’s dust alone slips beneath your unassured feet”, as described by Madame de Staël in her 1807 novel Corinne, ou l’Italie. (3) Travellers usually rode on donkeys until they reached what many referred to as a “half-way house to heaven”, “Casa Bianca” or, most commonly, the Hermitage of San Salvatore. (3) Built in the 1650s by fugitives from the plague, 600 metres above sea level and close to an earlier hermitage destroyed by the 1631 eruption, the Hermitage offered travellers rest and food before the final push towards the summit. Galignani’s Guide (1824) described the lodgings as a “neat plain white building of two stories” with a chapel. (3) The Hermitage offered more than just a rest for travellers: it was also an opportunity to change tour guides. It kept a Visitor’s Book for travellers to sign, and many recorded details of their stay. While there was no fee for staying, those who ran the Hermitage expected a suitable reward in return for their hospitality.

From that point on, tourists travelled on foot for about an hour, followed by a strenuous ascent to the crater. As eruptions occurred, tourism increased; the Hermitage was particularly busy in the autumn after 1822, which meant a need for more staff and tour guides. Many illustrations depicted flocks of wealthy tourists in inadequate attire climbing the mountain, emphasising the mass interest in the area. Because of the hot and rough terrain, a cobbler was stationed at the Hermitage to mend worn-out and damaged shoes. The higher the climb, the tougher it became; one traveller likened the ascent to “climbing a sand hill”, made worse by the sulphurous fumes from the volcano. (3) At this point, the guides would wrap belts around their group and drag each other up the mountain. On some occasions, sedan chairs carried those who were unable to walk on the rough terrain. Much of the allure of visiting the volcano came from the thrill and unpredictability of nature. No two visits were the same, with the terrain altering after each eruption and activity ranging from occasional explosions to full eruptions depending on the timing and environmental conditions of the climb. The greater the danger, the greater the thrill, creating a mixture of fear, awe and apprehension at the strength of nature. Mount Vesuvius was a reminder that this volcano had wiped out a civilisation in one swoop, leaving history frozen in time.

 

Modern intervention

By the 1820s, the process of finding and bargaining with a guide was seen as a rite of passage and a fixed part of the itinerary when visiting Naples. However, by 1862 the process was streamlined through a ticketing system, aided from 1864 by Thomas Cook tours, which organised excursions across Europe and the UK. (3) Thomas Cook introduced Pompeii into the wider itinerary for European travel and created structured visits, transforming the whole experience. (4) This streamlining allowed structured and accessible exploration without reliance on the local knowledge of tour guides. The structured approach to visiting Naples and the volcano was condensed into visitor guidebooks that provided details, logistics and descriptions of the site, so that visitors arrived prepared rather than entering the area blind. In many ways, this approach allowed greater control, but it took away the thrill of foreign travel and the stories of unknown territory that had once been so alluring to those undertaking the Grand Tour. Meanwhile, with an influx of tourists and the instability of volcanic eruptions, the topography changed, as man-made interventions and eruptions cut new paths to the summit. For example, during the nineteenth century a road was built leading to the Hermitage, and by 1844 an Observatory had been built that increased accessibility and scientific interest, once again changing the way visitors interacted with the area. (3) The road allowed easy access for carriages and turned the path up Mount Vesuvius into a commodity: anyone who could pay enough could simply ride up in a carriage and be taken to the summit. Some travellers referred to the lines of carriages flocking to the summit as ‘Derby Day’. Even the way visitors experienced the climb changed, from those embracing the dirt, the ash and the physical toll the climb took on their bodies, to those who did not need to exert themselves at all.

As tourism flourished, the need for tighter regulations also increased. The local government began to regulate the guides and control their numbers and activities on the mountain. (3) The Ordinance of 1846 reduced the number of official guides to 16, with the requirement to speak at least one foreign language and be of “good character”. (3) These official guides were issued permits and given fixed prices to charge for tours to the summit, eliminating the need for the bargaining that so many travellers associated with early visits to Naples. The intervention of the local government undermined the select few who had previously held a monopoly on activities in the area and controlled the routes to and from Mount Vesuvius. Visitors now had a variety of options and better communication while travelling. Despite these improvements, a handful of tourists still preferred to travel through the Resina area for the adventure rather than choose the convenience of the railway. Ultimately, haggling and bargaining for a guide became unnecessary, and the production of widely accessible guidebooks warning of the dangers of guides who bargained eliminated this step altogether. The intervention of travel agencies like Thomas Cook played a major part in the flourishing of tourism and its accessibility to a wider audience. It also sanitised the experience, and today it is very common for travel agencies to employ English-speaking reps to work in hotels abroad and act as a point of contact and excursion organiser for British tourists. This arrangement removes the need to integrate with the local culture and is far removed from the experience of those who undertook the Grand Tour in the eighteenth century.

 

Growing strains

Furthermore, the greater the number of visitors, the greater the strain on the surrounding infrastructure to accommodate their growing needs. It is only natural that, as such attractions become popular, the way visitors interact with the area must be modernised and transformed to meet new demands. The development of photography and the ability to produce souvenirs from the excursion created a heightened awareness that reached a wider audience. By 1880, the introduction of a funicular railway on Mount Vesuvius eased the journey and reduced the reliance on guides: they were now needed only for the final climb to the summit. The funicular cut a 1-hour-and-30-minute ascent to a 12-minute ride and could transport 300 passengers a day. (3) Despite improving the visitor experience, it made little profit and cost too much to run. By 1887, Thomas Cook’s son John Mason Cook had swooped in to save the funicular, but he refused to accept the guides’ request for a concession payment. (3) This created major tension and protests from the tour guides, resulting in the burning of the station, the cutting of the track, and the throwing of one of the funicular cars into Vesuvius’s crater. The guides interfered with repairs and damaged the line, prompting a six-month closure until an agreement was reached. The final settlement gave the guides a portion of every funicular ticket sold in exchange for their services over the final 100 yards from the upper station to the top of the volcano. (3) This agreement changed the guides’ status from independent operators to employees. The funicular eventually closed in 1955, after the completion of a road made it redundant. Tourism changed rapidly because of the work of travel agents like Thomas Cook, who created itineraries and holidays with strict structures focused on stress-free experiences, avoiding the difficulties past travellers had to negotiate. Before long, the volcano was surrounded by a hotel, a restaurant, a railway and toll roads that levied charges on non-Cook customers. (3) These facilities provided stable income for the local community, and the romantic idea of struggle and enlightenment through a treacherous climb was replaced by comfort, ease and convenience. Tourists were distanced from the physical challenges of the environment and from immersion with the locals. The volatile volcano could now be conquered with little effort, a far cry from the gruelling path taken by many Grand Tourists of the eighteenth century.

Both Pompeii and Herculaneum preserve history at a fixed point in time, though some historians argue that this moment may not fully reflect daily life, since many residents evacuated and took personal possessions with them. Even so, these sites offer historians and visitors a unique opportunity that no other historical or archaeological site can match, even if many of the original structures were severely damaged during the eruption. The excavation of Pompeii opened up a human and emotive narrative that connected with visitors on a deeper level than a mere event in the past. However, with the growing number of tourists visiting Naples today, there is increased concern for the safety of both Pompeii and Herculaneum. Pompeii has a lasting legacy, and it appears many tourists wish to leave one of their own. Reports include tourists vandalising and purposefully damaging the frescos, with one Dutch visitor writing their name in permanent marker in bold letters and others scratching their initials into the stone. (5) There must come a point where local governments and heritage bodies such as UNESCO evaluate the safety of the sites in the face of growing visitor numbers. Arguably, some visitors view the site as a tourist attraction or commodity for their personal consumption rather than a place of immense historical value and a memorial to those killed by the eruption; something is lost through commercialisation and open public access. Italy has started to restrict the number of tourists entering Pompeii by issuing only 20,000 tickets per day and using timed slots at peak summer times to help ease the human pressures placed on the fragile site. (6) It is not just Pompeii that is struggling under the number of visitors; popular locations like Venice, Portofino, Capri and Rome also experience immense strain during peak season. (6)

                                                                                         

Conclusion

The awe and unpredictability of nature have captivated visitors for centuries and still offer an unmatched experience. Visiting Mount Vesuvius and walking around the site of Pompeii have only grown in popularity thanks to the timeless preservation of history. Pompeii is a haunting reminder that the natural world cannot be domesticated, irrespective of the technology that monitors and tries to predict the next natural disaster. Tourism to the area, and to many other UNESCO World Heritage Sites in Europe, was made possible by industrialisation, railways and other transport options that connected remote areas once accessible only by carriage. Large travel agents, with their structured excursions, have replaced the control that select local families once held over visitor routes and territory. The Grand Tourists of the eighteenth and nineteenth centuries embellished a daring and treacherous experience through writing, artwork and word of mouth, creating a fear of missing out. Social media is simply a more advanced way of distributing these stories about Pompeii and other cities, replacing the old-fashioned Grand Tourists. However, as with all major destinations that become shrouded in a romanticised or embellished version of themselves, the reality of visiting can often be underwhelming. This is especially evident when heritage sites become flooded with tourism that threatens the preservation of heritage and culture. What is most evident is that while museums, writers and artists can attempt to capture the feeling and atmosphere of cities and heritage sites, they cannot always replicate the feeling of being there in person.

 


 

 

References

(1) J. Renshaw, In Search of the Romans (London: Bloomsbury Academic, 2012), pp. 267, 273.

(2) Royal Museums Greenwich, ‘What was the Grand Tour?’, 2025, RMG <https://www.rmg.co.uk/stories/topics/what-was-grand-tour> [accessed 20 January 2025].

(3) J. Brewer, Volcanic: Vesuvius in the Age of Revolutions (USA: Yale University Press, 2023), pp. 12, 14-15, 21-23, 152-154.

(4) Pompeii Archaeological Park, ‘How Tourism in Pompeii Boomed Through Photography and Middle Class Enthusiasm’, 2025, Pompeii Archaeological Park <https://pompeiiarchaeologicalpark.com/tourism-in-pompeii/> [accessed 22 January 2025].

(5) Reuters, ‘Dutch tourist accused of graffitiing Ancient Roman villa in Herculaneum’, 2024, CNN Travel <https://edition.cnn.com/2024/06/05/travel/dutch-tourist-defacing-roman-scli-intl/index.html> [accessed 30 January 2025].

(6) G. Dean, ‘Pompeii to cap daily tourist numbers at 20,000’, 2024, BBC News <https://www.bbc.co.uk/news/articles/cjdl1njj1peo> [accessed 22 January 2025].


The Roaring Twenties were a time period filled with tales of adventure and glamour. Prohibition fueled a party lifestyle - and made available a dangerous but adrenaline-fueled life to some of the more enterprising members of the underworld. In Chicago, Illinois, the Twenties have become a time of legend and usually call to mind one man, Al Capone. But Capone, for all intents and purposes, was only a figurehead during the Beer Wars. He ran his gang and racket, but he delegated the dirty work.

To the north of him was a group that was, as one newspaper of the time called them, Modern Day Pirates: the North Side Gang. Consider Capone the Prince John to their Robin Hood and his Merry Men, an analogy that Rose Keefe introduced in her book, Guns and Roses: The Untold Story of Dean O’Banion. This Robin Hood doesn’t quite steal from the rich to give to the poor, and you’ll need to give Little John a temper and a thirst for vengeance that was unrivaled. Also, make the Merry Men a little crazier and a lot more deadly. You get the picture.

Three years, three bosses dead. The North Side track record was less than desirable, and George Moran would have been well aware of this when he took over after the death of Vincent Drucci in April 1927. He had said goodbye to three of his good friends; the flower shop was gone, Mr. Schofield having turned the gang out after Hymie Weiss’s assassination; and, having run from the past at least once already in his life, George Moran took stock and probably thought about throwing in the towel. But Chicago was home, and he couldn’t just forget everything that had happened. A part of him still wanted revenge, and leaving the North Side would have felt like letting his friends down. So Moran did what he did best: he carried on.

Erin Finlen continues her series.

Part one is here, part two is here, and part three is here.

Note: An image of Moran is available here.

 

Minnesota Years

George Moran, the Prohibition gangster best known as the arch-enemy of Al Capone and, by extension, of Chicago, was actually from St. Paul, Minnesota, and was born Adelard Cunin. Born on August 21, 1893, to a French immigrant named Jules and his wife Marie, he was, like his friends, enrolled in a Catholic school. And, also like his friends, he turned to crime at a young age; in fact, he had served time three times before he reached the age of twenty-one.

He and his father did not get along, and Adelard was regularly hit with a belt for his behavior at home. His school also believed in corporal punishment, and by the time he got home his father could be waiting to punish him again. Strong-willed and resilient, he let the beatings change neither his personality nor his willfulness. He turned to crime as an outlet for his frustration. At the age of eighteen, he escaped from jail and made his way south to Chicago. His father refused to have anything to do with him, but his mother still kept in touch.

It was after arriving in Chicago that Adelard started adopting different names, including George Gage, George Morrisey, George Miller and, of course, George Moran.

In photos, Moran is typically wearing something that covers his neck. When he was living in Chicago in 1917, he got in the face of someone heckling a public speaker. A fight broke out and Moran was cut several times on the neck with a knife. He was rushed to the hospital, where doctors managed to stop the bleeding and save his life. He was lucky, but he was also self-conscious about the way the scars looked and would do his best to hide them throughout his life. Some good came of the incident, though: during his recovery he would meet Dean O’Banion.

 

The Beginnings of the North Side and Rise to Leader

In 1917, Dean O’Banion was working as a waiter at McGovern’s Tavern, charming customers with his beautiful singing voice. It was here that Moran became a regular during his recovery. There he met a man named Charles Reiser, who introduced him to bigger kinds of burglary. For the most part, George would steer clear of bootlegging, at least at first; he preferred to stick with thieving and safecracking.

One of Reiser’s safecracking proteges was O’Banion, and the two were drawn to each other, both possessing independent, stubborn spirits, though Moran was much quieter and kept his cards close to his chest. They were joined shortly after by Hymie Weiss, and the three became a trio of safecrackers. Drucci joined last and, though he too was readily accepted, it was likely less for his thieving skills than for his charm and reckless bravery.

They were well on their way to becoming the North Side Gang of legend when Moran was sent to jail again. This time, after an escape attempt that was going well until he got caught, Moran would be absent from Chicago until 1921 as he served his sentence at Joliet Penitentiary.

When Moran got out, his friends were waiting with good news: they were big shots in the bootlegging business, and Moran was happy to help. He even went to Canada to see about a shipment for O’Banion. That isn’t to say that bootlegging was his only occupation. He was arrested at least once with O’Banion and Weiss for burglary. And at one point Weiss and Moran were both involved in a police chase that ended when the police fired on the car and the pair decided it was safer to pull over.

Also in 1921, Moran met a woman with whom he fell instantly in love, Lucielle Logan. Lucielle was worried that George would run when he found out she had a son, but George was just as smitten with the boy and adopted him, spoiling him and helping him learn English, since Lucielle and her son, who would go by John George Moran for the rest of his life, spoke French. Surprisingly, he loved being a family man, and when one reporter asked him what was next after a funeral, he probably wasn’t lying when he said he just wanted to live with his wife and kid in peace.

When O’Banion was murdered in 1924, Moran was fully on board with Hymie Weiss’s plans for revenge. There was also another item of business that Moran could not wait to handle. He had never been a fan of O’Banion’s bodyguard, Louie Alterie. So, when Alterie talked to the media about shooting O’Banion’s murderers and, strangely, followed Torrio and Capone to New York after the funeral, Moran sent him packing, saying there was no place for him in the North Side Gang. With that taken care of, it was time to get to the real business of getting even, even if the boss was in jail.

 

While Weiss was in jail in the summer of 1925, Drucci and Moran attempted several hits on the Gennas. They weren’t exactly subtle about it, though.

Neither Moran nor Drucci was known for thinking revenge plans through. With Weiss in jail, and their grief over losing O’Banion mixed with disdain for the Gennas, they were more gung-ho than usual. Amatuna, who had been one of Dean O’Banion’s shooters, had agreed to hand over to Moran and Drucci the other two men believed to be responsible: John Scalise and Albert Anselmi. They believed Amatuna and went to the rendezvous, where they were promptly shot at; both had to be treated at a nearby hospital.

After Weiss’s death, Moran agreed with Drucci that peace was the best option, but he wasn’t happy about it. And when Drucci died, he kept the peace, but he could feel his nagging hatred for Capone, the man who had stolen O’Banion and Weiss from him, itching at him. Then Capone, after battling with other men, started eyeing a Northwest gang whose territory he wanted and had its leader bumped off. The man, John Touhy, was an old friend of Moran’s. Seeing another of his friends dead at the hand of Capone reopened the wounds that had never closed after O’Banion and Weiss’s deaths. The war was back. And this time it was going to take a massacre to end it.

 

Checkmate

After the death of Touhy, Moran and Capone continued to battle. The murders continued until Capone had had enough. Somehow word got back to him that Moran was holding a meeting at the North Side’s garage on Clark Street. Al Capone was never one to do anything quietly, a fact which irritated his friends back in New York, who found his ostentatiousness too attention-seeking for their comfort. And what Capone had planned was nothing short of attention-grabbing. Unfortunately for him and the seven men who would be in the garage, it would not spell the end of his arch-enemy.

On February 14, 1929, Moran was late to his meeting at the Clark Street garage. Running late to your first meeting on a very cold, snowy morning is the kind of thing that makes you think your day isn’t going to go well. So, when he turned onto Clark Street and saw a black police vehicle sitting outside his garage, he changed course and went into a nearby diner to wait.

Men had been waiting across the street for Moran to enter the garage. When they thought they saw him enter, the signal was given and two men dressed as police officers went in. They had the men surrender their weapons and face a wall with their hands raised. Then they pulled out Thompson submachine guns and opened fire. Six of the men were killed instantly, but one was still alive when the real police arrived, although in the short time he had left he refused to identify the killers. The carnage was unlike anything Chicago had ever seen, and the police and medical examiners were sickened by it. The lone survivor was Highball, the dog of the garage mechanic, John May. When the police finally arrived they found him howling and shaking; he was later euthanized, unable to recover from what he had witnessed.

Word of what happened reached Moran, and in a rare show of emotion he checked himself into a hospital for exhaustion and a stomach issue. When police eventually found him, the only thing he would say was “Only Capone kills like that.” The man killed in Moran’s place was Al Weinshank, who looked uncannily like Moran in build and facial features. He was not a criminal; he simply associated with them.

Moran didn’t stay long in Chicago after that. And the North Side Gang was no more. Capone had won the Beer Wars.

 


 

 

Sources:

Binder, J. J. (2017). Al Capone’s Beer wars: A Complete History of Organized Crime in Chicago during Prohibition. Prometheus Books.

Burns, W. N. (1931). The one-way ride: The Red Trail of Chicago Gangland from Prohibition to Jake Lingle.

Keefe, R. (2003). Guns and roses: The Untold Story of Dean O’Banion, Chicago’s Big Shot Before Al Capone. Turner Publishing Company.

Keefe, R. (2005). The Man Who Got Away: The Bugs Moran Story: A Biography. Cumberland House Publishing.

Kobler, J. (2003). Capone: The Life and World of Al Capone. Da Capo Press.

Sullivan, E. D. (1929). Rattling the cup on Chicago crime.

The Republic of Lebanon has had a sad history, one marred by religious hatred, conflict, and, in recent years, a financial catastrophe that has impoverished most of its citizens. But there was a time when the state experienced an age of great elevation, one that stands out as an example of the kind of nation Lebanon could be if it followed a similar path today. That period was the Chehab Era.

 Vittorio Trevitt explains.

Fouad Chehab.

September 2024 marked the 60th anniversary of the end of the presidency of Fouad Chehab, who rose to power following a civil war in 1958. That war was precipitated by the attempt of the incumbent president, Camille Chamoun, to obtain a second term, a move that went against the constitution. In a tactful decision that went down well with the nation’s Muslim community, Chehab, then commander of the Lebanese Army, declined to use the military against the rebels, believing that doing so would lead to mutiny amongst Muslim soldiers.

Chehab’s rise to the presidency took place against the backdrop of enormous upheaval in the Middle East. Although during the second half of the twentieth century Jordan and most of the Gulf States (Qatar, Oman, Bahrain, Kuwait, Saudi Arabia and the UAE) maintained monarchical structures of government, a series of coups throughout the Fifties and Sixties brought to power authoritarian socialist leaders in Egypt, Iraq, Syria, and Libya, while a military conflict in Yemen led to the formation of a radical left-wing state in the south of that country. Fearful that Lebanon’s turn would be next, Chamoun asked for help from the United States, which subsequently sent thousands of troops to the country, although their presence was a largely passive one. At the end of the war, which cost thousands of lives, Chehab was elected president by the national legislature. What made Chehab different from many of his regional contemporaries was that, instead of establishing a one-party state and (as dictators have often done throughout history) altering the constitution to prolong his tenure, he relinquished his office at the end of his full six-year term.

 

Quality of life

A striking feature of Chehabism (the name given to his political movement) was the emphasis its founder placed upon the quality of life of ordinary Lebanese. A major programme of reform and state-supported development was rolled out that sought to tackle headlong the underlying causes of the 1958 civil war, namely the sectarian social divisions that had long been festering sores on the body politic of Lebanese society. Following the Arab-Muslim conquests of the 7th century, Christians had essentially lived as second-class citizens, but by the time of the conflict the situation had reversed itself to the point where Muslims found themselves at a disadvantage compared to members of the Christian community in terms of personal wealth, education and career opportunities, such as in the civil service. Adding to this disparity, uneven regional development under Chamoun meant that a rich Muslim minority and Christians were the primary beneficiaries of economic progress. The seeds of the conflict had therefore been planted long before its inevitable outbreak.

The extent of these inequities was highlighted when a French research institute (IRFED), commissioned by Chehab’s government to examine the roots of the war, estimated that half of the nation’s people lived in poverty. The findings culminated in a series of measures designed to bring about a more just and prosperous Lebanon. Multiple schemes aimed at improving the quality of life in rural areas were launched, with government-operated hospitals and pharmacies set up and several villages provided with basic services like electricity and drinking water. Agricultural cooperatives were encouraged, and a Green Plan was promulgated under which many farmers were supported through land reclamation. Efforts were made to enforce health and safety requirements in the workplace, while a law aimed at stimulating the supply of affordable homes was enacted. During Chehab’s second year as president, an Office of Social Development was founded that improved the provision of social aid for vulnerable and elderly citizens. This was followed in 1963 by a landmark National Social Security Fund designed to provide workers and their families with a range of benefits such as health and workplace accident insurance and maternity support. The economy flourished, and workers received a larger slice of the economic pie, with the buying power of average earnings going up and, by 1964, the percentage of the nation’s gross national product accruing to labour outstripping that held by capital.

 

Education

Apart from poverty alleviation, the hand of reform reached other aspects of Lebanese life. Many educational initiatives were carried out during the Chehab Era, including the establishment of free primary schooling and new facilities, the encouragement of teacher training and vocational education, a new law school, and grants for overseas study. Joint bank accounts were enabled by law, May Day became a public holiday, and an array of new rights for women came into being, including local political representation, choice of citizenship, and equal inheritance for non-Muslims. A package of measures was introduced that sought to provide a 50-50 share for Muslims and Christians in the civil service, along with new universities and opportunities for state employment that benefited Shia Muslims. Chehab’s pragmatism towards relations between religious communities was additionally demonstrated in the international sphere, where he endeavoured to build bridges with both Arab and Western nations rather than favour one side over the other.

However, the tangible progress attained under Chehab, which continued to some extent under his successor Charles Helou, was not sustained, and the strong economic growth Lebanon experienced during their presidencies proved to be a double-edged sword. While developmental initiatives undoubtedly helped many people, big commercial farms replaced smaller ones and precipitated an exodus of peasants into squalid urban areas, while income distribution remained deeply unequal. Despite real wage gains, low pay and inflationary pressures fuelled multiple strikes. Although leading government figures expressed sympathy for workers’ grievances and presided over an improved minimum wage, Chehabist administrations at the same time used legislative powers to dismiss striking workers and passed legislation curbing the ability of workers to strike. Additionally, the treatment of Palestinian refugees during the Chehab Era proved to be a black spot on the period.

 

Security

Seen as a threat to national stability owing to growing levels of armed and political activity, Palestinians had their lives effectively controlled and monitored by the security services, with imprisonment, deprivation, restrictions on movement and even murder amongst the horrors experienced by refugees. Despite Chehab’s concern for the poor and commitment to social justice, the approach taken towards Palestinian refugees during his tenure was one of moral bankruptcy.

In spite of these moral and economic failings, the Chehab Era had many good points and offers important lessons that Lebanon’s political leaders would be wise to learn from. In his use of the state as an instigator of social betterment, religious equality and economic expansion, Chehab left Lebanon a better country than he found it, showing what expanded government can do when used for public benefit rather than self-enrichment. In a nation wracked by financial hardship and sectarian tension, the more positive aspects of Chehabism serve not only as lessons from history, but as signposts for what Lebanon could potentially become.

 



Modern-day Germany is an image of 21st-century globalization and multiculturalism; however, immigration is still a relatively recent phenomenon. Eager to fill the labor force shortages threatening Germany’s post-World War Two economic miracle, the West German government turned to foreign personnel and made Gastarbeiter, or guest worker, agreements with numerous countries during the 1950s and 1960s. This marked the start of Germany’s multiethnic diversity.

Holly Farrell explains.

An Italian Gastarbeiter family in 1962. Source: Bundesarchiv, B 145 Bild-F013071-0001 / Wegmann, Ludwig / CC-BY-SA 3.0, available here.

What was the Gastarbeiter program and why was it implemented?

In the aftermath of Germany’s defeat in the Second World War and the fall of the Third Reich, the Allied powers found it imperative for Germany to undergo a process of democratization, with institutions resilient enough to prevent a repeat of the Nazi dictatorship. This included a process of re-education to address undue respect for authority, as well as denazification. As Germany was divided into four zones of occupation, one for each Allied power, these processes were not uniform throughout the country; from 1949 they also differed between West and East Germany.

However, the Allies were also very aware of the failures of the punitive approach after the First World War and wanted to avoid leading Germany into economic ruin that could fuel extremist groups. Consequently, a robust economy and a well-functioning welfare state became further pillars of post-war stability. West Germany received extensive financial aid through the Marshall Plan, which fueled an unexpectedly quick post-war economic recovery (East Germany did not receive Marshall aid and underwent a socialist transformation). Soon there were not enough personnel to support West Germany’s growing industry, owing to the high casualty rate amongst German men during the war and the broad consensus that women should remain at home. After the construction of the Berlin Wall in 1961, the significant flow of East German workers into the West also dried up, leaving a shortfall of labor. The government subsequently turned to non-German workers. On December 22, 1955, West Germany signed an agreement with Italy for Gastarbeiter, or guest workers, to temporarily join the German labor force. Further agreements were later signed with countries like Spain (1960), Greece (1960), Turkey (1961), Portugal (1964), and Yugoslavia (1968). The arrival of Turkish workers was especially significant: by 1973, Turkish employees were the largest immigrant group, making up one-third of non-Germans and providing the foundations for the growth of Germany’s current Turkish community.

The Gastarbeiter’s countries of origin were also keen to cooperate. They hoped that the transfer of employees’ wages back to their families would benefit their balance of payments, whilst the loss of workers would relieve pressure on their own labor markets.

 

Life for the Gastarbeiter

By the fall of 1964 the number of foreign workers in West Germany exceeded 1 million, and it rose to 2 million five years later. Although the acceptance of foreign workers seemed to symbolize a strong break from the ethno-racial nationalism of the Third Reich, Germany’s steps towards greater diversity did not yet extend to social integration. The authorities tried to hire single men (and eventually women) because of their greater flexibility and mobility. Workers were housed in isolated barracks, usually owned by the company, with four to six beds per room. Contact with the native German population was therefore limited. The 1965 Ausländergesetz (Foreigners Law) also categorized Gastarbeiter as foreigners, which determined their rights to work, social security, and residence but did not grant a right to naturalization; this was only granted in 2000. Gastarbeiter were also frequently subject to discrimination and prejudice within German society. As divisions intensified between the Western Allies and the Soviet Union, West Germany’s economic recovery and entry into NATO took priority over denazification efforts. Consequently, denazification focused mainly on Nazi party membership and failed to give enough attention to social attitudes. A 1947 survey by the US Office of Military Government (OMGUS) found evidence that a significant minority of the population still held lingering antisemitic and racist attitudes, which fueled an ‘othering’ of the Gastarbeiter.

Labor contracts also took the concept of a guest worker rather literally. Workers were initially given only one-year contracts, after which they were supposed to be exchanged for other workers under the so-called rotation principle. However, this was not applied consistently. Industrial firms valued having trained permanent staff, as frequent turnover required expensive training for new workers, who typically had little knowledge of the language. Employers desired longer stays, and their requests for an extension of a foreign employee’s work permit were usually granted. Relatives of the Gastarbeiter were then often able to join the company on the worker’s recommendation. However, the hiring of guest workers remained flexible, depending on the needs of the labor market. For example, following the recession of 1966/67, foreign employment fell from 1.3 million in 1966 to 0.9 million by January 1968.

Gastarbeiter typically took unpopular and low-paying positions in heavy industry, road, or underground construction. This led to stratification within the workplace. Whilst migrants filled positions with lower wages and higher health risks, German employees moved up to the better-paid higher positions.

 

The position of female Gastarbeiter

In presentations of the Gastarbeiter scheme, female workers have remained largely invisible. Although there were initially fewer female Gastarbeiter, women made up approximately 30% of foreign employees in the German labor market by 1973. This was especially significant considering that less than one-third of West German women were employed. The employment of female Gastarbeiter saw a positive shift in the 1970s due to the influence of the women’s emancipation movement and a growing demand for labor that could no longer be met solely by the male workforce.

Like their male counterparts, women were assigned the least attractive jobs in industry and services, but they were often preferred for factory jobs involving stockings, porcelain, and electronics because of their smaller, more delicate hands. From the 1950s women also filled labor demands within nursing and healthcare, which particularly attracted women from South Korea, the Philippines, and India.

However, female Gastarbeiter faced additional challenges compared to the men. They were particularly exposed to racist stereotypes and exoticism from their coworkers or other sections of the population, and they were assigned to ‘light wage groups’ where they earned 30% less than the male Gastarbeiter.

Nevertheless, women did not remain passive. They often took instrumental roles in labor movements and strike action, and eventually achieved the abolition of discriminatory wage groups. At the Pierburg factory in Neuss, for example, women made up 1,700 of the 2,000 employees who initiated a general strike in June and August 1973 to demand the abolition of the low wage group and pay rises of 1 Deutsche Mark per hour for all workers. They succeeded in having the wage group abolished and won a wage increase of 30 Pfennig for all workers. This was one of over 300 ‘wildcat strikes’ (‘wildcat’ because they were not started or supported by a trade union) in which foreign workers and Germans cooperated to improve working conditions.

 

The end of recruitment

By 1973 the oil crisis had triggered a stagnation in West German economic growth, so the government passed a ‘recruitment freeze’ in November 1973 to relieve the labor market, marking the end of the Gastarbeiter program. Although 12 million of the 14 million Gastarbeiter had returned to their countries of origin by 1973, 2 million decided to remain in Germany. Returning would have meant the loss of their residence or labor permit, and many Gastarbeiter faced economic or political uncertainty in their home countries. This fueled the migration of the Gastarbeiter’s family members to Germany, marking the beginning of Germany’s transformation into a multicultural country of immigration.

 


 

 

