The origins of the 1918-19 German Revolution, or the November Revolution, can be traced back to the hubris of the German hereditary system embodied in Wilhelm II. A fierce naval arms race with Britain, steeped in German chauvinism, threatened the might of the Royal Navy and helped escalate World War One into the global conflict it became, whilst defeat in the Great War divided the Kaiser’s subjects. Plagued with mutinies and insubordination that contrasted with the pride of soldiers spouting the stab-in-the-back myth, the First World War provided the battleground for a fierce contest between democracy and autocracy that fundamentally transformed German political society.

Tom Cowling explains.

Leftist soldiers fighting at the Berlin City Palace in 1918 during the German Revolution. Source: Bundesarchiv, Bild 146-1976-067-30A / CC-BY-SA 3.0, available here.

World War One

Armed with five naval laws aimed at challenging British dominance of the seas, the Kaiser and his court were gearing up for war. Totalling hundreds of new ships, and accompanied by an increase of 136,000 men in the army in 1912 (1), this military expansion pushed Britain into formal alliances with Japan, Russia, and France in the face of German hostility. War was inevitable. Victories in the east against a flailing Russian Empire proved irrelevant by the time the American Expeditionary Forces landed in Europe; with hundreds of thousands of men arriving from the US each month, the German army was simply awaiting its fate. On the domestic front, the origins of a revolutionary movement were brewing as it became evident that this was a war of imperialism, with Germany occupying vast swathes of Eastern Europe. A split in the SPD, which had initially supported the war effort, saw the establishment of the Independent Socialists, fundamentally opposed to the war. The far-left Spartacists led a wave of strikes in January 1918, forcing a declaration of martial law (1). The age of insubordination had begun, and a fierce sense of chaos swept across Germany.

At President Wilson’s indirect request, Germany made itself a constitutional monarchy and kickstarted the Revolution from Above. Governmental positions were granted to members of the Reichstag rather than the Kaiser’s comrades (1), the chancellor was made responsible to the Reichstag, and war could no longer be declared without parliament’s approval (1). The Kaiser’s abdication came in November, at the insistence of Wilson’s men (1). The empire had shifted from feared European titan to republic at the mercy of democracy. Friedrich Ebert, a moderate SPD politician, was named chancellor (1). Wilson and his Fourteen Points had set upheaval in motion in Germany.

The start

Indiscipline marked the beginning of the Revolution from Below, the new republic’s first threat to its existence. The left had an insatiable appetite for dictatorship, authoritarianism and control; the gravity of the situation was profound. Orders for an arrogant, and unwinnable, attack on the Royal Navy inevitably culminated in mutinies, which spread unstoppably to numerous ports. With the military refusing to accept the orders of the state, revolution was imminent. Communists seized power in Bavaria and workers’ councils snatched control of fourteen cities within days (2). Germany was on the brink of collapse, and of submission to the left. Masses gathered in the capital as Karl Liebknecht, a key antagonist of democracy and prominent figure in the Spartacist League, stood on the balcony of the Berlin City Palace and unabashedly called for a socialist republic (2). In a flurry of panic, Ebert sent in the Freikorps, a paramilitary group of veterans desensitised by the experiences of war, their loyalties resting firmly with the Kaiser, to quell such left-wing dissent (1). The Spartacist leaders Rosa Luxemburg and Karl Liebknecht were captured and killed (2). Their revolution had failed completely to lay the foundations of an undemocratic, communist system. But the effort to transform Germany into the democratic state of the Weimar Republic succeeded to the extent that the political, governmental and constitutional framework of Germany was revolutionised by the events of 1918.

Success or failure?

From a Marxist perspective, the revolution was an abject failure. Capitalist institutions remained firmly in place, and the bourgeois tendencies of the army raged on. Democracy was entrenched in the new Weimar constitution, with proportional representation and universal suffrage (1). The results of 1918 were a far cry from Marx’s ideal of a ‘dictatorship of the proletariat’; Germany was well and truly a liberal state, with an institutional rejection of communist beliefs. Capitalism was central to the workings of Weimar democracy, with unions agreeing with industrialists not to disrupt production; the German workers were barred from seizing the means of production. Industrialists such as Hugo Stinnes presided over a huge share of industry in the new Germany, much to the dismay of Marxists. To the German far-left, the events of 1918 deserved only scorn as useless incremental change.

To the social democrats of the German left, the revolution and its impact were a resounding success. They had swept away an antiquated system that kept people under the monarch’s thumb and truly suppressed the popular will that social democracy so prizes. The left, in the form of the SPD, held power, with Ebert as chancellor, and the political extremes had been dealt with. The military was committed to upholding the new order, having made a deal with Ebert in return for the suppression of violent, extremist uprisings. The Freikorps, however brutal their methods, were a reliable counterweight to left-wing rebellion; they would never let Germany fall to the communists, as their crushing of the Spartacist uprising proved. Democrats across Germany were undoubtedly intoxicated by the democracy the new republic offered in such abundance.

The right was naturally infuriated by the news of political change. Conservative doctrine could not accept such sweeping changes, nor such a rejection of ‘stability’. Conservatives had lost their deity in the form of the emperor, and had surrendered control and power to their natural enemy, the centre-left. Despite this attack on the conservative order, they begrudgingly accepted the new political framework. The army was protected from democratization (1), which meant the institution conservatives most adored was left untouched by the transformation of 1918. In spite of the new government’s rejection of nationalism, and the armistice, the conservative right more or less accepted the position it found itself in.

Conclusion

To many aligning themselves with the political extremes, the revolution was something to look upon with great disdain. Marxists and conservatives alike were sworn enemies of democracy, and both looked upon the revolution as a ‘failure’. But the democrats won. They won democracy, they won freedom from the Kaiser, and they won power. To them, the revolution was a blessing that delivered what they wanted. Where Marxists got too little change and conservatives far too much, Germany’s social democrats were granted their wish of democracy and accountability as a direct result of the 1918 revolution.

What do you think of the 1918 German Revolution? Let us know below.

Bibliography

  1. Kitchen, M. (2006) A History of Modern Germany 1800-2000. Oxford: Blackwell Publishing

  2. Sewell, R. (2018) The German Revolution of 1918, In Defence of Marxism. Available at: https://www.marxist.com/the-german-revolution-of-1918.htm (Accessed: 24 July 2023)

The Trans-Siberian railway was an 8,400km line built on the desire to unite Russia under a single culture and to strengthen the autocratic rule of the Tsar. The idea of building a railway into Siberia was first toyed with in the mid-1870s, mostly in proposals for short routes into the region. The Russo-Turkish War halted railway development as funding was diverted to the war effort. A decade later, after Russia’s finances had recovered from the war, discussions returned. The proposed ideas, however, were much grander: a route spanning the entire continent, connecting east to west. Fueled by new leadership at the Ministry of Transportation and by the Tsar’s desire to ensure his autocratic rule reached every corner of his empire, the idea of the Trans-Siberian railway came to fruition.

Kyle Brett explains.

Construction work on the Eastern Siberian Railway near Khabarovsk, circa 1895.

Origins of the Idea for a Railway

The idea of a railway connecting Siberia to European Russia was proposed in the 1870s by an American entrepreneur, Perry McDonough Collins, to the Minister of Transport Communications, Konstantin Nikolayevich Posyet. Collins wanted to connect America to Russia via telegraph and proposed a route for doing so to Posyet. Posyet liked the idea, as he was eager to develop the Far East, but the state had neither the finances nor the infrastructure to facilitate such a project.

In the latter half of the 1870s the Russian state decided that the mineral and political benefits of a railway into Siberia were worth pursuing and settled on a short route from Nizhny Novgorod to Tyumen. Posyet had originally proposed a railway of similar length further north, and saw this as the Russian state disregarding his position as Minister of Transportation. None of it would matter in the end: the outbreak of the Russo-Turkish War in 1877 shut down all state-sponsored railway construction. The state diverted the majority of its finances to the war effort, which left Posyet with the satisfaction of not having to build a railway he disagreed with. The unfortunate side effect was that war spending, combined with the poor harvests of the early 1880s, hurt the Russian ruble, delaying discussions of a Trans-Siberian railway until 1884.

In 1881 Alexander III took power after his father, Alexander II, was assassinated by a socialist terrorist group. His father had passed many radically liberal laws, and that had made him a target. Alexander III spent his time as Tsar undoing many of these liberal reforms and reestablishing autocratic rule over Russia. One of his main focuses was Siberia, and how he could unite it with the rest of Russia and spread Russification and autocratic rule throughout the empire. Upon hearing Posyet’s proposal for a Trans-Siberian railway stretching from Samara to Vladivostok, he decided it was the best way to accomplish his goals.

The Borki Train Disaster

On October 29, 1888, Alexander III and his family were traveling on the Kursk–Kharkov mainline from Crimea to St. Petersburg when a combination of speeding and faulty track caused the train to derail at high speed. When the dust had settled, Alexander found his family trapped under the collapsed roof of the dining car. He lifted the roof, allowing his family to escape uninjured. In total around 20 people died and around 15 more were injured in the derailment. The trauma from the crash is widely blamed for the kidney failure that led to Alexander’s eventual death.

Blame for the crash fell not on the railway manager Sergei Witte but on the Russian government. Alexander wanted the case closed as quickly as possible, and this led to Posyet stepping down as Minister of Transportation. The man who replaced him was Witte. Imperial officials had chastised Witte before the crash, telling him that only the lines he managed were slow while all the other lines ran at express speed. His response was that he would prefer not to bash in the emperor’s head by increasing the speed of his rail lines. This exchange is why he was chosen to replace Posyet as Minister of Transportation.

The Project Begins

In March 1891 the Russian government announced its plan to build a railway spanning all of Siberia. Ground was broken in Vladivostok a few months later, and the building of the railway was underway. The head of the project was Sergei Witte, who in the years since the Borki train disaster had risen through the ranks of the government. In 1892 he was selected to become Minister of Finance, on top of being Minister of Transportation. He used these positions of power to turn the clunky and slow bureaucracy of the Russian government into a well-oiled machine.

His first order of business was to create the Committee for the Siberian Railroad. The committee was created with one goal in mind: to fast-track decisions that would otherwise have been slowed by the clunky bureaucracy. It accomplished this by getting approval from a higher power, such as the Tsar, and then going around local administrators to keep the project moving. This ensured that the project kept going at a steady pace.

As Finance Minister, Witte also had a ready means of financing the railway: he could raise taxes as high as the project demanded. As a result, he neglected his duties as Finance Minister, disregarding complaints and concerns from the peasantry as he stayed laser-focused on building the railway. Alexander turned a blind eye to these affairs because Witte got results, which was good enough for him not to intervene.

The Material Cost of the Largest Railway

The Committee for the Siberian Railway faced a massive challenge in getting an immense quantity of materials to the far reaches of Siberia. Its solution was to use rivers to transport the materials to the building sites. Many of the rivers, however, could not accommodate the boats used to move the materials, so the Committee had them widened and strengthened for the purpose. Special attention was paid to Lake Baikal, both because of its immense size (it is the deepest lake in the world) and because it too would soon be used to transport materials. Surveyors studied weather conditions, all the port facilities on the lake, and how ice formed on it in order to understand how best to use the lake for transportation.

Production of the railway parts was originally to be done in Siberia for convenience, but Witte soon discovered that Siberia had nowhere near enough infrastructure to accommodate a project of that size. Production was therefore outsourced to western Russia, the UK, and Poland. This meant it took longer for materials to arrive at the rail lines, as some had to travel all the way from the UK into deep Siberia.

The labor committed to the project was also immense: the Committee for the Siberian Railway estimated that between 57,000 and 80,000 workers migrated to Siberia to assist in the building. Much of the labor came from Russia, but some came from China, and a good amount of convict labor was used as well. The convicts were treated poorly and routinely harassed by their overseers. Conditions were harsh for everyone: many laborers slept on the cold ground right up until it froze, and only then would the Committee send people out to build mud huts to live in. This, as one can imagine, led to many deaths from the elements. It also made it hard for laborers to perform complex tasks such as building bridges and using dynamite to clear the way for track.

The Final Stretch

Despite the harsh conditions, by 1898 the track was mostly complete. It began in Moscow and ran to Lake Baikal, where a four-hour ferry ride across the lake connected to the next station, at Ulan-Ude. From there the line ran straight through Chinese Manchuria to Vladivostok. To solve the problem of the rail line passing through China, a different route from Ulan-Ude to Vladivostok was built along the Amur River. This line never left Russian territory, allowing passage to Vladivostok without entering Chinese territory in the event of a territorial dispute. The desire to keep the railway within Russia saw the Amur River route completed in 1916.

In 1904 development of the railway hastened with the outbreak of the Russo-Japanese War. The Circum-Baikal route around the lake was still being brainstormed; ideas for getting past the treacherous terrain surrounding the lake had been toyed with, but nothing definitive had come to fruition. With the outbreak of war, the need to use the railway to move troops and supplies around Lake Baikal became apparent. The only way across was by two steamships, one a freight-car hauler and one a passenger vessel, which took four hours to cross Lake Baikal and could not accommodate the volume of movement needed to shift an army across Russia. The ships were also stuck whenever the water froze over, rendering them useless. Several solutions were proposed, the most popular being sledges that towed supplies to the Ulan-Ude station on the other side of the lake. There were attempts to lay track straight on the ice, but the first locomotive placed on it went straight through, plunging into the depths of the lake. This further reinforced sledges as the solution.

The terrain on the shores of Lake Baikal was treacherous to build on. It was rocky and rugged, with cliffs that were very dangerous to work on. The original plan was to tunnel through the rock, but when it was calculated that around 30 tunnels would be needed, it was decided to build the track along the shore instead. A single day’s progress along the rocky terrain could consume an entire cart of dynamite, grinding the work to an extremely slow pace even with the urgency created by the Russo-Japanese War. The track was eventually completed in 1905, finally connecting east and west Russia and completing the longest railway in the world.

What do you think of the Trans-Siberian Railway? Let us know below.

Sources

Marks, Steven G. Road to Power: The Trans-Siberian Railroad and the Colonization of Asian Russia, 1850-1917. Cornell University Press, 1991.

Tupper, Harmon. To the Great Ocean: Siberia and the Trans-Siberian Railway. Little, Brown & Company, 1965.

Africa held an important place in the Cold War. Hardly had its nations freed themselves from colonial bondage than they were made into a battlefield. It was here that the United States and the Soviet Union, neither daring to fight in a Europe that had already shed so much blood, fought for supremacy. The Soviet Union tried to appeal to Africans as fellow revolutionaries and paraded capitalism as the enemy, but many of the new leaders understood that the Soviets would merely absorb Africa into a new colonial empire. Kennedy, meanwhile, appealed to Africans’ newfound taste for democracy. These rival ideologies met head-on in the Congo Crisis, and the clashes, unlike in much of the rest of the world, were bloody.

Ayrton Avery explains.

US President Richard Nixon meets President Mobutu Sese Seko of Zaire in 1973 at the White House.

Tempting Ideologies

As soon as Ghana achieved independence in 1957, its people were turning to socialism. Guinea, which gained independence the following year, followed suit. The Soviet Union viewed these countries as a gold mine: it found similarities between Russian and African history and thought the implementation of communism there was only logical. However, Kwame Nkrumah, Ghana’s president, viewed things differently. He preferred a version of socialism that emphasized pan-Africanism, though he admired Russia’s ideology of Marxism-Leninism. This meant the Russians would have to fight for Africa, through diplomacy or otherwise.

Later, during the Portuguese Colonial War (1961-1974) in Angola, Kennedy was tempted to take a stand against the colonizers, probably because he feared Soviet influence in the region. Like the Russians, he tried to appeal to Africans ideologically, preaching anti-communism and democracy, at one point even meeting with the Angolan politician Holden Roberto. However, many Africans viewed the United States as a colonial power, and Europeans feared the Angolans would turn communist despite U.S. support. In the end, the U.S. too had to fight for Africa.

Cold War Not-So-Cold

Clashes, naturally, made up the Cold War. The Cuban Missile Crisis, the U-2 Incident and other confrontations were ways the two powers tried to gain supremacy without resorting directly to the gun. But the conflict was more than just political; it was also economic. The United States needed money to fund its own wars, while the Soviet Union was in economic decline. Both nations saw Africa, rich in resources, as a source of funds and diplomatic superiority.

Even so, there was no genuine need for the wars in Africa to turn bloody. But new African countries took Nkrumah’s lead in viewing the Soviets as a colonial power: they accepted money from both sides while refusing to ally with either, which pleased neither the Russians nor the Americans. Eventually the Americans got Guinea and Ghana more or less under their thumb. But the Congo, a confused bag of warring factions in 1960 that also boasted iron, zinc, copper and tantalum, was even more tempting.

A Fight for Tantalum

In the Congo, no one was in power. Shortly after the country gained its independence, a series of rebellions broke out between ethnic groups and those who supported the colonizers. At first the United States blamed the socialist leader Patrice Lumumba for the fighting and refused to send forces at all. But then the Soviets intervened in August 1960, setting the stage for yet another clandestine battlefield of the Cold War.

The United States put down the communist rebels, but soon new ones appeared, inspired by the Chinese leader Mao Zedong. Belgium and the United States, realizing the threat, intervened directly this time. Bloody fighting began, provoked by Russia, China, the U.S., and Belgium, but by the spring of 1965 the Maoists had been crushed as well.

Although an authoritarian dictator was installed, ever since then the West, not the Soviet Union, has controlled the Congo and all its exports. The defeat also undermined Soviet influence in Africa and resulted in most governments handing over power, indirectly, to the West. This was possibly one of the greatest factors leading to the collapse of the Soviet Union: the Soviets lost huge amounts of revenue, and African nations slammed them for not providing better support to the rebels. It also did not help that the U.N. was now giving money to the Congo’s corrupt and authoritarian leadership. Once again, the West had won on a major battlefield of the Cold War.

Conclusion

Of course, the violence did not end; the Cold War was not yet over. Russia tried, with some success, to gain control in Angola, but the victory was not enough: the West had tightened its grip on the continent far too firmly. After the 1970s, though, instability in the continent soared. The First Congo War broke out, then came the Rwandan genocide, and the Second Congo War carried the bloodshed into the 21st century. Much of the diamonds and tantalum are now sold to Russian mercenaries. The West never truly won in Africa, just as in Korea; it was all an illusion. Both powers have rendered the continent more or less useless for their goals.

What do you think of the Soviet Union and America in Africa during the Cold War? Let us know below.


References

Schmidt, Elizabeth. Foreign Intervention in Africa: From the Cold War to the War on Terror. Cambridge University Press, 2010.

Brzezinski, Zbigniew. Africa and the Communist World. Stanford University Press, 1963.

Nkrumah, Kwame. Challenge of the Congo. International Publishers, 1967.

Reno, William. Warfare in Independent Africa. Cambridge University Press, 2011.

Elbaum, Max. Revolution in the Air: Sixties Radicals Turn to Lenin, Mao and Che. Verso Books, 2002.

A rival nationalist government formed on the island of Taiwan following the Chinese Civil War in 1949. This separation from communist-controlled mainland China has been a source of international tension ever since. Here, Victor Gamma looks at how and why mainland China and Taiwan separated. He continues the series by looking at the Chinese Civil War and how China and Taiwan grew apart.

Chiang Kai-shek and Mao Zedong meeting in 1945 in Chongqing, China.

Initially the political left (communist) and right (nationalist) wings of the KMT continued to cooperate in the United Front, but it was not long before the conflict that would ultimately lead to today’s China-Taiwan dispute began. On May 30, 1925, a crowd of Chinese students staged an anti-foreign protest at the International Settlement in Shanghai. The incident turned deadly when the Shanghai Municipal Police opened fire on the protesters. This sparked outrage throughout China, including the Canton-Hong Kong Strike. The CCP reaped the greatest benefit from these events and attracted many members, and conservatives and moderates grew alarmed at the growing power of the leftists. Right and left also clashed over policy: the left pushed the strike, while the Nationalists wanted to end it because much of their financial support came from foreign trade. Moreover, Chiang was trying to consolidate his control in anticipation of the coming campaign to unify China and did not want political disunity in the ranks. For this reason, as well as suspicion of a possible communist takeover of the Nationalist movement, on March 20, 1926, in what is called the Canton Coup, he purged communist elements from the Nationalist army. Chiang then moved to limit the fallout by taking actions to conciliate the Soviets and the remaining leftists; he still desired Soviet support, as well as help from the CCP, for the coming campaign against the warlords.

Shortly thereafter, Chiang launched his long-awaited campaign against the warlords. By March 1927 he had taken Nanjing, where the fall of the city was accompanied by widespread looting and rioting, with foreign warships bombarding the city. This led to open conflict between Nationalists and communists. Chiang believed that the Russians and the communists had instigated the riots and deliberately stirred up anti-foreign feeling to increase their own power and weaken the KMT. Therefore, on April 12, 1927, he ordered the violent purging of communists in Shanghai. This marked the official beginning of all-out war between the two sides and the start of the Chinese Civil War. Meanwhile, the Nationalist government had moved to Wuhan, where leftists took control, acting largely independently of Chiang’s authority. By April the Wuhan government had gone further, actively working against Chiang: it issued a series of edicts reducing his authority and began constructing a parallel government in KMT territory. Chiang clearly could not move forward against the warlords and felt it necessary to halt his advance in order to deal with the communists. This marks a pattern that appeared throughout Chiang’s career: no matter how great the problem, he always treated the communists, or internal threats generally, as the greatest danger and would cease all other operations to deal with them. And so, in the spring of 1927, he halted the anti-warlord campaign and turned violently on the communists, beginning with the purge in Shanghai.

On August 1, 1927, the Communist Party launched an uprising in Nanchang against the Nationalist government in Wuhan. Around 20,000 communist members of the Kuomintang revolted and took over the city. The incident, known as the Nanchang Uprising, resulted in the formation of the People’s Liberation Army and is still celebrated today as “Army Day.” Ultimately, however, the communists withdrew to remote territory to rebuild their strength. Chiang launched several offensives in an attempt to destroy them once and for all, but they eluded his pursuing armies to reach the safety of a remote city in Shaanxi Province called Yenan. Once settled in their new base, the communists carried out an intensive training and indoctrination program to “correct unorthodox tendencies,” mold the peasantry into the communist model and become an effective force.

Anti-communism

Scholars have debated the reasons Chiang turned on the communists; there are several. Chiang was a reformer but also a traditionalist. Although he recognized the need for modernization, he was deeply connected to the past. He was, in fact, a neo-Confucianist and an ardent admirer of Tseng Guo Fan, the 19th-century paragon of Confucian virtue who, like Chiang, led government forces in restoring unity to China by quelling the Taiping Rebellion. One of Tseng’s superiors said: “The Taiping Rebellion is a disease of the heart, Russia is a disease of the elbow and axilla, England is a disease of the skin; we should exterminate the Taiping first, then Russia and England.” Chiang repeated this phrase almost word for word in an interview years later, substituting “communist” for “Taiping”: “Remember, the Japanese are a disease of the skin, but the communists are a disease of the soul.” He was alarmed by ideologies that he felt threatened traditional Chinese culture. Chiang had observed a communist regime up close while in Russia for training and rejected it as an appropriate system of government for China, feeling it to be an alien ideology that undermined Chinese traditions. He attempted to unify China both politically and ideologically; part of that ideological effort became the “New Life Movement,” a civic campaign promoting Confucian values and cultural reform, launched partly as a counter to communist ideology. He also was not interested in sharing power. He believed one of China’s greatest needs at this time was a single leader firmly in control, and the communists had demonstrated that they would not submit to him. One of the first objectives the communists pursued when they gained power in Wuhan during the Northern Expedition, for instance, was an attempt to strip Chiang of his power.

World War II

The state of civil war continued until 1937, when the Japanese invasion forced the two sides into the Second United Front for the duration of the Second Sino-Japanese War (1937-45). Although technically allies in the struggle against Japan, the Front never functioned as a firm alliance, at times resembling a hostile competition more than a partnership; in practice, cooperation between the two factions was minimal. Chiang, instead of pursuing an aggressive strategy against the Japanese, hoarded his forces for the post-war showdown with the communists.

At the end of World War II, although technically on the winning side, the Nationalists were psychologically the losers in the eyes of many Chinese, especially the peasants. They were seen as putting more energy into trying to exterminate the communists than into fighting the rapacious foreign invader; some even blamed Chiang for Japanese depredations, since forces he used against his internal political foes could have been used against the Japanese. Chiang, in fact, had had to be forced at gunpoint to agree to the Second United Front in the first place. Even before the guns fell silent in 1945, he had lost the war for the hearts and minds of the peasants, who made up 90% of the population. His alliance with the mercantile and landowning classes tied him to conservatism, and he had little understanding of the plight of the peasants. His communist rivals, meanwhile, worked feverishly and brilliantly to build a powerful following based largely on peasant support, including a military force that numbered around 600,000 by 1945. While Chiang’s Nationalist movement was riddled with corruption and lacked real reform, the communists won hearts and minds in vast numbers through training, land reform, and a fierce, consistent commitment to the struggle against Japan and against the injustices the peasants had traditionally suffered.

In 1945 both Nationalist and communist forces accepted the surrender of Japanese troops. Sovereignty had been restored, but not unity. Both Chiang and Mao knew that the long-awaited showdown was about to commence. After a brief period of post-war cooperation, the old animosities erupted into civil war again. This time the communists were the winners. The Nationalists retreated to Taiwan but never surrendered, just as the communists had refused to surrender despite a succession of defeats in the late 1920s and 1930s.

After the Civil War

For some time after the Nationalists fled to Taiwan, both sides insisted that they alone were the official government of China, and a strict policy of no contact followed. Chiang reformed the corrupt Nationalist Party and, with American aid, set Taiwan on the path of economic modernization and growth. After Chiang’s death in 1975, political reforms also took place, and by the 1990s Taiwan was not only an economic powerhouse but a full-fledged democracy. Meanwhile, Taiwan largely gave up its claim to the mainland, and in 1991 it declared that the war with the PRC was over.

In 2000 Taiwan transitioned to a multi-party democracy when the Democratic Progressive Party (DPP) won the presidency. Although the KMT is still important, it now shares power with other parties. The DPP backs full independence, so Beijing viewed the election results with alarm. The PRC backed up its disapproval with the “anti-secession law,” which flatly states that Beijing will use force if Taiwan “secedes” by declaring full independence. The DPP returned to power when Tsai Ing-wen became Taiwan’s first female president in 2016. More importantly for the mainland, she is a firm supporter of independence. In words sure not to warm the heart of Beijing, Tsai declared, “Choosing Tsai Ing-wen... means we choose our future and choose to stand with democracy and stand with freedom.”

China has offered a “one country, two systems” arrangement in which Taiwan would enjoy significant autonomy while still under Beijing’s control, and under which the mainland would also promise not to use force in resolving the issue. Taiwan turned the proposal down.

Differences

Why doesn’t Taiwan want to be under Beijing’s control? At times the two Chinas have seemed to draw closer together: beginning in the late 1970s the mainland launched economic reforms and thus appeared to be becoming more similar to Taiwan. However, the mainland never changed its authoritarian one-party political system. Taiwan, along with the whole world, watched the 1989 Tiananmen Square massacre. Hong Kong was promised a “one country, two systems” arrangement in 1997 as China prepared to take back the British colony, including a 50-year guarantee that Hong Kong would keep its capitalist system as well as its political freedoms. In 2020, though, Beijing cracked down on basic freedoms with a security law that allows the government to punish or silence critics and dissenters; as of this writing, well over a hundred individuals have been arrested for political reasons. Taiwan was itself once an authoritarian dictatorship, but it has since diverged ever further from communism, evolving into a free-market economy and a genuine democracy.

This contemporary dispute reflects China’s painful journey from its time-honored old ways to modernity. A struggle for stability, prosperity and self-respect consumed the nation in the 20th century. That journey involved the fundamental question of how China should be organized: the nationalist/traditionalist view, which eventually evolved into today’s democratic Taiwan, and the communist vision (now with a semi-capitalist economy) ruling the mainland. These two paths represent the right and the left: one looked to the West, its liberal traditions and traditional Chinese culture; the other turned to the distinctly antiliberal doctrines of Marx and hostility towards the past. The two approaches struggled over whose vision would prevail. In a sense that struggle has never truly ended, and it continues to threaten global stability. The world watches to see how far Beijing will go in achieving its goal of one China.

What do you think of the China and Taiwan separation? Let us know below.

Now read Victor’s article on the explosive history of the bikini here.

References

Chiang Attacks Warlords and Reds

Timeline: Taiwan’s road to democracy, Reuters

The essence of propaganda is to spread a manipulated message with the aim of influencing the masses; the truth is not the most important thing. Over the centuries the tools for making and spreading propaganda have changed considerably, but the goal has always remained the same: to influence as many people as possible. Bram Peters explains.

British World War One recruiting poster, 1914.

Already in Roman times, emperors used propaganda to spread the message of who held power throughout the empire. Roman emperors had themselves portrayed on coins to reach as many citizens as possible. In a time without the modern mass media as we know them today, this was quite an effective way to circulate information within an empire the size of the United States (the Mediterranean Sea included). The method is still used to this day: many countries print their heads of state on their coins or bills.

The disintegration of the Roman Empire in the early Middle Ages resulted in a much more locally oriented society; cities minted their own coins. The invention of the printing press in the late Middle Ages, however, gave propagandists a whole new opportunity to spread their message. Texts no longer had to be copied by hand but could be produced by machine. In addition, the message could be proclaimed in the vernacular instead of Latin, making it possible to reach a much larger audience.

The industrial revolution gave a huge boost to paper production. With the use of steam engines and the switch from cotton paper to pulp paper, production costs fell significantly, and more people than ever had access to printed information.

20th century

In the twentieth century other mass media made their appearance: radio and film. Sound and moving pictures could now be used to spread propaganda. The Nazi regime is an excellent example of a government that made optimal use of new technologies. Famous are the speeches of the specially appointed Minister of Propaganda, Joseph Goebbels, which the whole country could follow on cheap radios provided by the regime. Citizens could watch propaganda films in cinemas that aimed to influence the masses. Much attention was paid to national symbolism (with a special role for flags), military parades, cheering crowds worshiping Hitler and theatrical music. In the second half of the twentieth century the role of film was increasingly taken over by television; from then on, propaganda came straight into the living room.

The Internet made its appearance at the end of the century, revolutionizing the way messages are conveyed to the general public. Although initially a fairly static medium, in the twenty-first century the internet has evolved into a platform where new digital technologies have forever changed the way propaganda is created and used. Smart algorithms offer users personalized content based on their search behaviour. Manipulated images combined with the framing of information have led to the kind of reporting known as fake news. Artificial intelligence (AI) is used to generate deepfake videos capable of making people appear to say things they have never said. Thanks to AI, anyone with relatively little knowledge can spread propaganda that reaches the entire world. The line between what is real and what is not has become more blurred than ever.

Propaganda has been a way of influencing people for thousands of years. Propaganda makers want to convince their target group and do not take the truth too seriously. What has changed throughout history are the possibilities for reaching an ever larger public; with the rise of the internet, the whole world is now the audience. At the same time, AI is creating, more than ever, the dilemma of what is real and what is not. It is of great importance that young generations learn the purpose of propaganda and how to recognize it. Who made something, and for what reason? Examples from the past can be useful to study. In a time when it is easier than ever to manipulate everything, we should all take an extra critical look at the information presented to us.

What do you think of propaganda history? Let us know below.

Now read Bram’s article on an approach to racism and Black Pete here.

About the author: Bram Peters is a historian from the Netherlands. He has an MA in political history from one of the major Dutch universities and specialized in national identity and traditions, as well as parliamentary history, the Second World War and war propaganda. He worked for years as a curator at one of the largest war museums in the Netherlands. He likes to get involved in public debate by writing articles for national and regional newspapers and websites.

A rival nationalist government formed on the island of Taiwan following the Chinese Civil War in 1949. This separation from communist-controlled mainland China has been a source of international tension ever since. Here, Victor Gamma looks at how and why mainland China and Taiwan separated. He starts by looking at early 20th-century China.

A 1920s portrait of Sun Yat-sen.

When Vladimir Putin recently claimed that Taiwan belonged to the People's Republic of China (PRC), he triggered a withering rebuke from Taipei. The Taiwan Ministry of Foreign Affairs fired back: “The Republic of China (ROC/Taiwan) is an independent, sovereign nation… The ROC and the autocratic PRC are not subordinate to each other. The regime of the Chinese Communist Party has never ruled over Taiwan for one day and does not enjoy any sovereignty over Taiwan… The future of Taiwan can only be determined by the Taiwanese people and Taiwan will never surrender to any threats from the PRC government.”

The communist (PRC) regime, on the other hand, like Putin, sees Taiwan as part of its territory. Thus, in its view, it has every right to demand reunification, by force if necessary. Why are there “two Chinas” anyway? What lies behind this threat to peace, one that has even Japan ramping up its military muscle? Let’s see what history has to tell us.

Background

The current Taiwan-China conflict grew out of the crisis of the “Century of Humiliation,” as the Chinese call it: a period from roughly 1840 to 1949 when China fell victim to foreign aggression and internal division. By 1900, after fifty years of one disaster after another, it was clear to many that the imperial Qing Dynasty was hopelessly inept and corrupt. It had long proven itself incapable of coping with the challenges of modernization.

With chaos and humiliation swirling around them, increasing numbers of Chinese became convinced that they needed major change. Numerous reform and anti-Qing movements arose with the goal of solving China’s problems. Many Chinese realized the need to copy Western techniques if China were to survive. As reformer Kang Yu Wei put it in 1906, “We need, too, governmental and political reforms and a reorganization of our political machinery.” 

Among the many organizations seeking to help was the Revive China Society (Xingzhonghui). Today’s Kuomintang Party, or Guomindang (GMD), traces its history to this movement, founded on November 24, 1894. The next year the Society adopted an official flag, the blue sky with a bright sun; this emblem remains the Kuomintang flag and adorns the national flag of Taiwan to this day. In 1905 the Revive China Society was merged into the Revolutionary Alliance, aka the Tongmenghui. By this time Dr Sun Yat-sen had enunciated his famous “Three Principles of the People”: Nationalism, Democracy and the welfare of the people. The Three Principles were partly influenced by his travels in the United States; especially influential was Lincoln’s philosophy of government “by the people.” The Principles included civil rights and limited government, termed “popular sovereignty” in the US. Dr. Sun explained that the people should control their government through means such as elections, referendum, recall and initiative. These principles remain foundational to the Kuomintang and the Constitution of the Republic of China, and they are the values Taiwan espouses today. Taiwanese revere Sun Yat-sen as “father of the nation”; his portrait, in fact, hangs in the main legislative chamber in Taipei.

Revolution

Finally, on October 10, 1911 (the “double tenth”), an uprising triggered an anti-Qing revolution. There was nothing remarkable about an uprising, but then something incredible occurred: within a few short months, a system that had lasted 2,000 years collapsed like a house of cards. The ROC (Republic of China) was established by the Chinese people through the Provisional Presidential Election held on December 29, 1911. Dr. Sun won a whopping 94% of the vote to become the first president in his country’s history. On January 1, 1912 he was sworn in and announced the official beginning of the Republic of China, and on February 12, 1912 the last Qing monarch abdicated the throne, formally beginning China’s troubled venture as a republic.

At the time of the Revolution, Sun Yat-sen was the acknowledged leader of the Chinese revolutionary movement. In 1912 the Revolutionary Alliance and several other parties merged to form the Kuomintang (Nationalist) Party, KMT for short, aka the “National People’s Party.” But it was one thing to overthrow a government, quite another to assert authority. By 1913 Sun had lost the power struggle and fled to Japan in exile, not to return until 1916. China’s infant experiment in parliamentary democracy collapsed. In practical terms, this meant the dissolution of China into a state of anarchy, with regional rulers exercising control.

Mao

Meanwhile, another pivotal event had taken place in 1893: the birth of Mao Zedong, son of a prosperous farmer in Hunan Province. Although reared in the ways of traditional China, including the Confucian Classics, Mao rebelled against all of it at an early age. He was expelled from more than one school and briefly ran away from home. When he was 14 a marriage was arranged for him and the young woman moved into the family home; Mao refused even to acknowledge her. Instead, he moved to Changsha to continue his studies. When the 1911 Revolution came, Mao quickly joined the anti-Qing military and did everything he could to overthrow the hated Manchu. Having tasted the wine of politics, Mao became insatiable. Between 1913 and 1918, as a student at the Changsha Teacher’s Training College, he devoured works on political ideologies; especially impressive to him were the 1917 Russian Revolution and the ancient Chinese Legalist philosophy. Upon graduation he took a job at the Beijing University Library. It so happened that his boss at the library, Li Dazhao, was a budding communist and soon exerted a major influence on the young Mao. He was one of many who became convinced that the solution to China’s problems lay in Marxism.

By 1919, while Mao was still a lowly librarian, a new revolutionary ferment broke out. Seven years after Dr Sun had proclaimed the Republic, China was still mired in political and economic chaos; warlords and bandits ruled their own territories in defiance of any national government. Sun had returned to China in 1916, but his authority was limited to a small area around Canton. To make matters worse, although China had joined the Allied cause in hopes of ending its semi-colonial status, it was betrayed at the Versailles Peace Conference: Japan was allowed to keep the territory in Shandong Province it had captured from Germany in 1914. This was a massive slap in the face to China. On May 4, 1919, a crowd of students gathered at Tiananmen Square to voice their frustrations, part of a resurgence of nationalism in which leftist ideologies gained momentum. Movements like Sun’s now expanded into a more grass-roots effort. Leaders such as Li Dazhao and Chen Duxiu emerged from the May 4 movement. These two, like many others, began to abandon Western-style democracy and turned to leftist ideology, looking to the new Bolshevik government in Russia as an example. In 1920 Li was head of the library at Peking University and a professor of economics. Captivated by the Russian Revolution, he began to study Marxism, and like many others he was impressed with the apparent success of the Bolsheviks. Li founded a study group to discuss Marxism, which evolved into the Chinese Communist Party, founded in July 1921. Mao Zedong was among the founding members.

Sun

Meantime Sun and his Kuomintang, lacking military support, had been unable to build a political organization strong enough to assert their authority. Sun began to realize that his movement needed help if he was to unify China; the Kuomintang had proved no match for ruthless warlords and helpless to end the foreign concessions. Sun had tried to enlist the aid of Japan and the West. He even wrote to Henry Ford, imploring his help. In a letter to the famous auto manufacturer he wrote: “There is much more to hope, in my opinion, from a dynamic worker like yourself, and this is why I invite you to visit us in South China, in order to study, at first hand, what is undoubtedly one of the greatest problems of the Twentieth Century.” The request came to nothing. Rebuffed by the West, he took a step that would have momentous consequences. By 1921 the Bolshevik revolutionaries in Russia had proven they could take and hold power: they had established themselves and were carrying out their reform program, accomplishing in four short years what the Chinese revolution had been floundering over for a decade. Sun invited Russian help in building his party. The Russians were only too glad to assist, but they attached a price tag: Sun must allow the communists to join his Kuomintang. Mikhail Gruzenberg, known as Borodin, was sent to Canton in 1923 to advise Sun. Here was a seasoned agent of the newly formed Comintern who had already been to several countries to spread Bolshevism. He and Sun established a formidable partnership as Borodin put his considerable political skills to work. It would hardly be an overstatement to say that he almost single-handedly turned the Kuomintang into an effective force: he gave it a tight party organization, drafted a constitution and taught effective revolutionary and mobilization techniques. Borodin also convinced Sun to admit the small Communist Party, then numbering around 300, into his nationalist movement, creating the first United Front between the KMT and the CCP. This was a potentially powerful move to bring unity and stability to China, combining its conservative and leftist political movements; unity was essential to overcome the warlords, who dominated all of north China. Nonetheless, this is where the conflict between the two Chinas begins. For all their cooperation, the two ideologies, communism and nationalism, would prove incapable of working together for long. Some consider this Sun’s greatest mistake: once given legitimacy, the communists would be very difficult to control.

Additionally, Sun and his followers established a military academy to train officers for the struggles to come. Known as the Whampoa Military Academy, it played a critical role in the century’s major conflicts. In 1924 Sun Yat-sen appointed the general Chiang Kai-shek as its first commandant. Chiang had met Sun in Japan and become a devoted follower; over the years he had proved his commitment to Sun, even at the risk of his own life. Several Academy members, including Chiang, were subsequently sent to Russia for training. Chiang remained, at least in word, dedicated to Dr. Sun’s principles throughout his career; in a 1942 message to the New York Herald Tribune Forum on Current Problems he asserted that “(our) Revolution is the attainment of all three of Dr Sun’s basic principles.” After the death of Sun Yat-sen in 1925, Chiang continued his rise to power. He became commander-in-chief of the National Revolutionary Army (NRA) and in July 1926 began the long-awaited “Northern Expedition” with the objective of destroying the warlords and reuniting the country.

What do you think of the early 20th century in China? Let us know below.

Now read Victor’s article on the explosive history of the bikini here.

World War Two caused untold misery and was far more of a truly global conflict than World War One, with battles fought across Europe, Asia, and Africa and in seas the world over. Here, Richard Bluttal concludes his three-part series on the impacts of trauma during wars by looking at World War Two.

If you missed it, read part one on the American Civil War here, and part 2 on World War 1 here.

Advert encouraging sign-ups to the Army Nurse Corps during World War 2.

Lawrence McCauley was a member of the 65th Armored Field Artillery Battalion, trained to drive trucks armed with .50-caliber machine guns, halftracks and landing craft, just in case. In England, preparing for the D-Day invasion, he became fast friends with Otto Lutz, a tall Chicagoan. “We were all very close,” he said of his unit when he was interviewed in 2020, at the age of 97 and living in Lewis Center. “You knew about their wives and children — everything you could know about your buddy, because there was nothing else to talk about.”

He and Otto were next to each other on a landing craft as it approached Omaha Beach. The front door dropped open and a bullet hit Otto in the forehead. McCauley remembers looking back and seeing his friend’s face sink beneath the water. “There was no stopping,” he said. “Our orders were ‘Don’t stop,’ because you’re better off as a moving target. That’s hard.”

The purpose of military medicine during World War II was the same as in previous wars: to conserve the strength and efficiency of the fighting forces so as to keep as many men at as many guns for as many days as possible. What transpired between 1939 and 1945 was a cataclysmic event made worse by the nature of the weapons the combatants used. The use of machine guns, submarines, airplanes, and tanks was widespread in World War I; but in World War II these weapons reached unimagined perfection as killing machines. In every theater of war, small arms, land-and sea-based artillery, torpedoes, and armor-piercing and antipersonnel bombs took a terrible toll in human life. In America's first major encounter at Pearl Harbor, the survivors of the Japanese attack could describe what modern warfare really meant. Strafing aircraft, exploding ordnance, and burning ships caused penetrating injuries, simple and compound fractures, traumatic amputations, blast injuries, and horrific burns, to name just a few. Total U.S. battle deaths in World War II numbered 292,131 with 671,801 reported wounded or missing.

Conserving fighting strength and enabling armies and navies to defeat the enemy also meant recognizing that disease, more than enemy action, often threatened this goal. For example, during the early Pacific campaign to subdue the Solomon Islands, malaria caused more casualties than Japanese bullets. Following the initial landings on Guadalcanal, the number of patients hospitalized with malaria exceeded all other diseases. Some units suffered 100 percent casualty rates, with personnel sometimes being hospitalized more than once. Only when malaria and other tropical diseases were controlled could the Pacific war be won.

World War II service members lived through an inflection point in the history of medicine and warfare. In all previous US wars, non-battle deaths—related to conditions like smallpox, typhoid, dysentery, yellow fever, tuberculosis, and influenza—outnumbered battle-related fatalities. During the Spanish-American War, more than 2,000 of the approximately 2,400 deaths were due to causes other than battle. During World War I, 53,000 died due to battle versus 63,000 who died due to other causes. World War II marked the first time the ratio was reversed. Of 16.1 million who served, 405,399 died—291,557 of them in battle, and 113,842 due to other causes. A variety of factors contributed to the shift. Crucially, during World War II, the government mobilized expansive public, professional, and private resources to enhance health-related research and development, as well as services offered by the Army Surgeon General’s Office, which oversaw care for soldiers. Also, rather than creating mobilization and treatment plans from scratch, the military health apparatus built on knowledge and administrative infrastructure developed during and after prior conflicts.

Organization of Battlefield Medical Care

The military's top priority was to organize its medical services to care for battlefield casualties, make them well, and return them to duty. The systems developed by the army and navy worked similarly. In all theaters of war, but particularly in the Pacific, both army and navy medicine faced their greatest challenge dealing with the aftermath of intense, bloody warfare fought far from fixed hospitals. This put enormous pressure on medical personnel closest to the front and forced new approaches to primary care and evacuation.

Army medics or navy corpsmen were the first critical link in the evacuation chain. From the time a soldier suffered a wound on a battlefield in France or a marine was hit on an invasion beach at Iwo Jima, the medic or corpsman braved enemy fire to render aid. He applied a battle dressing, administered morphine and perhaps plasma or serum albumin, and tagged the casualty. Indeed, one of the lingering images of the World War II battlefield is the corpsman or medic crouched beside a wounded patient, his upstretched hand gripping a glass bottle. From the bottle flowed a liquid that brought many a marine or soldier back from the threshold of death. In the early days of the conflict that fluid was plasma. Throughout the war, scientists sought and finally developed a better blood substitute, serum albumin. Finally, in 1945, whole blood, rich in oxygen-carrying red cells, became available in medical facilities close to the battlefield.

If he was lucky, the medic or corpsman might commandeer a litter team to move the casualty out of harm's way and on to a battalion aid station or a collecting and clearing company for further treatment. This care would mean stabilizing the patient with plasma, serum albumin, or whole blood. In some cases, the casualty was then evacuated. Other casualties were taken to a divisional hospital, where doctors performed further stabilization including surgery, if needed. In the Pacific, where sailors, soldiers, and marines were doing the fighting, both navy and army hospital ships, employed mainly as ambulances, provided first aid and some surgical care for the casualties' needs while ferrying them to base hospitals in the Pacific or back to the United States for definitive care. As the war continued, air evacuation helped carry the load. Trained army and navy nurses, medics, and corpsmen staffed the evacuation aircraft.

Combat Related Injuries

The experience of a battle casualty in the Second World War was not radically different from that of the First World War. The most common injuries were caused by shells and bullets, and a casualty was evacuated through a similarly organized chain of medical posts, dressing stations and hospitals. Common combat injuries included second- and third-degree burns, broken bones, shrapnel wounds, brain injuries, spinal cord injuries, nerve damage, paralysis, loss of sight and hearing, post-traumatic stress disorder (PTSD), and limb loss.

Non-Combat Related Death and Injuries

Not all wounds are physical. The psychologically wounded had suffered from "nostalgia" during the Civil War and from "shell-shock" in World War I; in World War II this condition was termed combat exhaustion or combat fatigue. Although the World War I experience of treating men at the front had been successful, military psychiatrists and psychologists at the beginning of World War II had to relearn those lessons. Nevertheless, the caregivers soon recognized that, given a respite from combat, a safe place to rest, regular food, and a clean environment, 85 to 90 percent of patients could again become efficient warriors. The more psychologically damaged received therapy in military hospitals.

In the Southwest Pacific, where death rates due to disease were highest, soldiers faced scourges like malaria, as well as tsutsugamushi fever, poliomyelitis, and diseases of the digestive system. In the northern theater—Alaska, Canada, Greenland, Iceland—threats included cold injuries like frostbite and trench foot. Neuropsychiatric disorders and venereal disease were widespread, regardless of where one served, including among those in the United States.

Army doctor Paul F. Russell recalled after the war an earlier statement from General Douglas MacArthur, who had reported that he “was not at all worried about defeating the Japanese, but he was greatly concerned about the failure up to that time to defeat the Anopheles mosquito,” the vector for malaria. By war’s end, more than 490,000 soldiers had been diagnosed with malaria, equating to a loss of approximately nine million “man-days.”

Between 1941 and 1944, more than 10 percent—roughly two million of 15 million examined men—were excluded from service; 37 percent of those dismissals were made based on neuropsychiatric findings. Still, diagnoses of mental “disorders” within the military catapulted well beyond expectations. A total of one million soldiers were admitted for neuropsychiatric illness, constituting approximately 6 percent of all wartime admissions. Within two years of American entry into the war, it was clear that so-called combat stress or “exhaustion” would pose a major threat to soldiers and the army they served—as it had during prior generations. Experiences and realizations of the World War II period had important implications for the future of military medicine.

Army officials began devoting more resources to neuropsychiatric treatment because of an imperative to increase return-to-duty rates, but the long-term impacts of care on individual service members were questionable. In early 1943, military psychiatrists noted that men in the Tunisian campaign diagnosed as “psychiatric casualties” were generally lost to their units after being transferred to distant base hospitals. To increase retention, they instituted principles of “forward psychiatry” that had been adopted by World War I-era armies—and largely disregarded by World War II planners in the United States: treat patients quickly, in close proximity to battle, and with the expectation that they would recover. After army psychiatrist Frederick Hanson reported in the spring of 1943 that 70 percent of approximately 500 psychiatric battle casualties were returned to duty thanks to this approach, it was gradually adopted in other theaters. Still, military psychiatrists acknowledged the method was hardly a panacea. Systematic follow-up studies were lacking, but one contemporary account noted that many who underwent treatment were unable to return to combat, and some who did “relapsed after the first shot was fired.”

Medical Advancements and Improvements

Battlefield medicine improved throughout the course of the war. At the beginning, only plasma was available as a substitute for lost blood. Serum albumin was later developed as a more effective substitute, and by 1945 whole blood, rich in the oxygen-carrying red blood cells and considerably more effective than plasma alone, was available close to the front. This was also the first major war in which air evacuation of the wounded became available.

During the war, surgical techniques such as the removal of dead tissue resulted in fewer amputations than in any previous war. To treat bacterial infections, penicillin and streptomycin were administered for the first time in large-scale combat.

Service members with combat fatigue, which later became known as post-traumatic stress disorder, were given a safe place to stay away from battle zones with plenty of food and rest. This resulted in about 90% of patients recovering enough to return to the fight.

The war also brought about the mass production of antibiotics, especially sulfanilamide and penicillin, both of which emerged from it with widespread respect, production, and use.

In 1928, when Scottish bacteriologist Alexander Fleming noticed a weird mold had taken over his Petri dishes and eliminated the bacteria on them, his findings didn’t get much notice. But Fleming continued his research and kept talking up what he called “mold juice” (he didn’t come up with “penicillin” until later), eventually winning a Nobel Prize and attracting the attention of drug maker Pfizer. The company soon began mass-producing the drugs for distribution to medics during WWII, and ultimately, to doctors and hospitals across the country.

In 1932, German biochemist Gerhard Johannes Paul Domagk discovered that the compound sulfanilamide could vanquish deadly strains of bacteria, like the streptococcus in his lab mice and in his first human test subject, his gravely ill young daughter. The wide distribution of so-called “sulfa drugs” began when World War II soldiers carried powdered sulfanilamide in their first-aid kits. By the end of the war, doctors were routinely using these antibiotics to treat streptococcus, meningitis, and other infections.

In the tropical islands of the Pacific, malaria was a serious threat. Service members received atabrine, an antimalarial drug, before going into affected areas.

Service members were also inoculated with vaccinations for smallpox, typhoid, tetanus, cholera, typhus, yellow fever and bubonic plague, depending on where they were sent.

Other improvements during World War II included better crash helmets. Because of improvements like these and others, the survival rate for the wounded and ill climbed to 50% during World War II from only 4% during World War I, according to Dr. Daniel P. Murphy, who published a paper on "Battlefield Injuries and Medicine."

As medical advancements progress, so does the capability of our medical teams to treat our service men and women when they are injured in the field.

What do you think of trauma during World War II? Let us know below.

Now read Richard’s piece on the history of slavery in New York here.


World War One caused more deaths than any war up until that time. The trenches of that war caused great horror and misery for many. Here, Richard Bluttal continues his three-part series on the impacts of trauma during wars by looking at World War One.

If you missed it, read part one on the American Civil War here.

A depiction of French surgeon Théodore Tuffier.

The pocket diary of Rifleman William Eve of 1/16th (County of London) Battalion (Queen’s Westminster Rifles):

“Poured with rain all day and night. Water rose steadily till knee deep when we had the order to retire to our trenches. Dropped blanket and fur coat in the water. Slipped down as getting up on parapet, got soaked up to my waist. Went sand-bag filling and then sewer guard for 2 hours. Had no dug out to sleep in, so had to chop and change about. Roache shot while getting water and [Rifleman PH] Tibbs shot while going to his aid (in the mouth). Laid in open all day, was brought in in the evening”, unconscious but still alive. Passed away soon after.

The war caused 350,000 total American casualties, of which over 117,000 were deaths. The best estimates today are 53,000 combat deaths, and 64,000 deaths from disease. (Official figures in 1919 were 107,000 total, with 50,000 combat deaths, and 57,000 deaths from disease.)  About half of the latter were from the great influenza epidemic, 1918-1920.  Considering that 4,450,000 men were mobilized, and half those were sent to Europe, the figure is far less than the casualty rates suffered by all of the other combatants.

World War 1 represented the coming of age of American military medicine.  The techniques and organizational principles of the Great War were greatly different from any earlier wars and were far more advanced.  Medical and surgical techniques, in contrast with previous wars, represented the best available in civilian medicine at the time.  Indeed, many of the leaders of American medicine were found on the battlefields of Europe in 1917 and 1918.  The efforts to meet the challenge were often hurried.  The results lacked polish and were far from perfect.  But the country can rightly be proud of the medical efforts made during the Great War.

The primary medical challenges for the U.S. upon entering the war were “creating a fit force of four million people, keeping them healthy and dealing with the wounded,” says Diane Wendt, curator of medicine and science at the Smithsonian's National Museum of American History. “Whether it was moving them through a system of care to return them to the battlefield or take them out of service, we have a nation that was coming to grips with that.”

The First World War created thousands of casualties. New weapons such as the machine gun caused unprecedented damage to soldiers’ bodies. This presented new challenges to doctors on both sides in the conflict, as they sought to save their patients’ lives and limit the harm to their bodies. New types of treatment, organization and medical technologies were developed to reduce the number of deaths.

In addition to wounds, many soldiers became ill. Weakened immune systems and the presence of contagious disease meant that many men were in hospital for sickness, not wounds. Between October 1914 and May 1915 at the No 1 Canadian General Hospital, there were 458 cases of influenza and 992 of gonorrhea amongst officers and men.

Wounding also became a way for men to avoid the danger and horror of the trenches. Doctors were instructed to be vigilant for ‘malingering’, where soldiers pretended to be ill, or wounded themselves deliberately, so that they did not have to fight. It was a common belief in the medical profession that wounds on the left hand were suspicious.

Wounding was not always physical. Thousands of men suffered emotional trauma from their war experience. ‘Shellshock’, as it came to be known, was viewed with suspicion by the War Office and by many doctors, who believed that it was another form of weakness or malingering. Sufferers were treated at a range of institutions.

Organization of Battlefield Medical Care

In response to the realities of the Western Front in Europe, the Medical Department established a treatment and evacuation system that could function in both static and mobile environments. Based on their history of success in the American Civil War, and on the best practices of the French and British systems, the Department created specific units designed to provide a sequence of continuous care from the front line to the rear area in what they labelled the Theater of Operations.

Casualties had to be taken from the field of battle to the places where doctors and nurses could treat them. They were collected by stretcher-bearers and moved by a combination of people, horse and cart, and later on by motorized ambulance ‘down the line’. Men would be moved until they reached a location where treatment for their specific injury would take place.

Where soldiers ended up depended largely on the severity of their wounds. Owing to the number of wounded, hospitals were set up in any available buildings, such as abandoned chateaux in France. Often Casualty Clearing Stations (CCS) were set up in tents. Surgery was often performed at the CCS; arms and legs were amputated, and wounds were operated on. As the battlefield became static and trench warfare set in, the CCS became more permanent, with better facilities for surgery and with accommodation for female nurses situated far away from the male patients.

Combat Related Injuries

For World War I, ideas of the front lines entered the popular imagination through works as disparate as All Quiet on the Western Front and Blackadder. The strain and the boredom of trench warfare are part of our collective memory; the drama of war comes from two sources: mustard gas and machine guns. The use of chemical weapons and the mechanization of shooting brought horror to men’s lives at the front. Yet they were not the greatest source of casualties. By far, artillery was the biggest killer in World War I, and provided the greatest source of war wounded.

World War I was an artillery war. In his book Trench: A History of Trench Warfare on the Western Front (2010), Stephen Bull concluded that on the Western Front artillery was the biggest killer, responsible for “two-thirds of all deaths and injuries.” Of this total, a third resulted in death, two-thirds in injuries. Artillery wounded the whole body; when not entirely obliterated, the body was often dismembered, losing arms, legs, ears, noses, even faces. Even when there was no superficial damage, concussive injuries and “shell shock” put many men out of action. Of course, shooting—in combat as well as from snipers—was another great source of wounding. Gas attacks were a third. Phosgene, chlorine, mustard gas, and tear gas debilitated more than they killed, though many victims ended up suffering long-term disability. Overall, the war claimed about 10 million military dead and about 20–21 million military wounded, with some 5% of those wounds life-debilitating, that is, about a million persons.

August 1914 would dramatically alter the paradigm of casualty care. Gigantic cannons, high explosives, and the machine gun soon invalidated all pre-war suppositions and strategy. More than eighty percent of wounds were due to shell fragments, which caused multiple shredding injuries. "There were battles which were almost nothing but artillery duels," a chagrined Edmond Delorme observed. Mud and manured fields took care of the rest. Devitalized tissue was quickly occupied by Clostridia pathogens, and gas gangrene became a deadly consequence. Delays in wound debridement, prompted by standard military practice, caused astounding lethality. Some claimed more than fifty percent of deaths were due to negligent care. And the numbers of casualties were staggering. More than 200,000 were wounded in the first months alone: far too many for the outdated system of triage and evacuation envisioned just years before. American observer Doctor Edmund Gros visited the battlefield in 1914:

If a soldier is wounded in the open, he falls on the firing line and tries to drag himself to some place of safety. Sometimes the fire of the enemy is so severe that he cannot move a step. Sometimes, he seeks refuge behind a haystack or in some hollow or behind some knoll…. Under the cover of darkness, those who can do so walk with or without help to the Poste de Secours. . . . Stretcher-bearers are sent out to collect the severely wounded . . . peasants' carts and wagons [are used] . . . the wounded are placed on straw spread on the bottom of these carts without springs, and thus they are conveyed during five or six hours before they reach the sanitary train or temporary field hospital. What torture many of them must endure, especially those with multiple fractures!

Non-Combat Related Death and Illness

In the course of the First World War, many more soldiers died of disease than by the efforts of the enemy. Lice caused itching and transmitted infections such as typhus and trench fever. In summer it was impossible to keep food fresh, and food poisoning was rife. In winter men suffered from frostbite, exposure, and trench foot. There were no antibiotics, so deaths from gangrenous wounds and syphilis were common. Others died by suicide as a result of psychological stress.

Battlefield Wounded and Surgery

In the early years of the war, compound lower limb fractures caused by gunshots in trench warfare sparked debate over the traditional splinting practices that delayed surgery, leading to high mortality rates, particularly for open femoral fractures.

Femoral fractures stranded soldiers on the battlefield, and stretcher-bearers reached them only with difficulty, leaving many lying wounded for days or enduring rough transport, all of which left soldiers particularly vulnerable to gas gangrene and secondary hemorrhage. Australian surgeons in France reported injury-to-treatment times ranging from 36 hours to a week and averaging three to four days. Fracture immobilization during transport was poor, and in the early war years surgeons reported about 80% mortality for soldiers with femoral fractures transported from the field.

By 1915 medics and stretcher-bearers were routinely trained to apply immobilizing splints, and by 1917 specialized femur wards had been established; during this period mortality from all fractures fell to about 12%, and to below 20% for open femoral fractures.

Théodore Tuffier, a leading French surgeon, testified in 1915 to the Academy of Medicine that 70 percent of amputations were due to infection, not to the initial injury. “Professor Tuffier stated that antiseptics had not proven satisfactory, that cases of gas gangrene were most difficult to handle,” Crile wrote. “All penetrating wounds of the abdomen, he said, die of shock and infection. … He himself tried in fifteen instances to perform immediate operations in cases of penetrating abdominal wounds, and he lost every case. In fact, they have abandoned any attempt to operate penetrating wounds of the abdomen. All wounds large and small are infected. The usual antiseptics, bichloride, carbolic, iodine, etc., fail.”

Every war has its distinctive injury. For World War I, it was facial injuries, which affected 10–15% of casualties, or over half a million men. The nature of combat, with faces often exposed above the trench line, contributed to this high incidence. Most countries founded specialist hospitals, where surgeons like Johannes Esser in the Netherlands and Hippolyte Morestin in France dedicated their practices to developing new techniques to repair facial trauma.

World War I presented surgeons with myriad new challenges. They responded to these difficulties not only with courage and sedulity but also with an open mind and active investigation. Military medicine practiced in 1918 differed substantially from that in 1914. This shift did not occur by happenstance. It represented collaboration between some of the brightest minds in academia and professional military doctors, combining their expertise to solve problems, take care of patients, and preserve fighting strength. It required multiple inter-allied conferences both to identify common medical problems and to determine optimal solutions. Reams of books and pamphlets buttressed the in-person instruction consultants provided to educate young physicians on best practices. Most significantly, this change demanded a willingness to admit a given intervention was not working, creatively try something new, assess its efficacy using data from thousands of soldiers, disseminate the knowledge, and ensure widespread application of the novel practice. No step was easy, and executing them all while fighting the Great War required a remarkable degree of perseverance, intellectual honesty, and operational flexibility.

Medical advances and improvements leading up to World War 2

With most of the fighting set in the trenches of Europe and with the unexpected length of the war, soldiers were often malnourished, exposed to all weather conditions, sleep-deprived, and often knee-deep in mud along with the bodies of men and animals. In the wake of the mass slaughter, it became clear that the “only way to cope with the sheer numbers of casualties was to have an efficient administrative system that identified and prioritized injuries as they arrived.” This was the birth of the Triage System. Medicine, in World War I, made major advances in several directions. The war is better known as the first mass killing of the 20th century—with an estimated 10 million military deaths alone—but for the injured, doctors learned enough to vastly improve a soldier’s chances of survival. They went from amputation as the only solution to being able to transport soldiers to hospital, disinfect their wounds, and operate on them to repair the damage wrought by artillery. Ambulances, antiseptics, and anesthesia, three elements of medicine taken entirely for granted today, emerged from the depths of suffering in the First World War.

Two Welshmen were responsible for one of the most important advances - the Thomas splint - which is still used in war zones today. It was invented in the late 19th century by pioneering surgeon Hugh Owen Thomas, often described as the father of British orthopedics, born in Anglesey to a family of “bone setters”; his nephew, the surgeon Robert Jones, championed its widespread use during the war.

In France, vehicles were commandeered to become mobile X-ray units. New antiseptics were developed to clean wounds, and soldiers became more disciplined about hygiene. Also, because the sheer scale of the destruction meant armies had to become better organized in looking after the wounded, surgeons were drafted in closer to the frontline and hospital trains used to evacuate casualties.

When the war broke out, the making of prosthetic limbs was a small industry in Britain. Production had to increase dramatically. One of the ways this was achieved was by employing men who had amputations to make prosthetic limbs – most commonly at Erskine and Roehampton, where they learnt the trade alongside established tradespeople. This had the added advantage of providing occupation for discharged soldiers who, because of their disabilities, would probably have had difficulty finding work.

While it was not an innovation of war, the process of blood transfusion was greatly refined during World War I and contributed to medical progress. Previously, all blood stored near the front lines was at risk of clotting. Anticoagulant methods were implemented, such as adding citrate or using paraffin inside the storage vessel. This resulted in blood being successfully stored for an average of 26 days, simplifying transportation. The storage and maintenance of blood meant that by 1918 blood transfusions were being used in front-line casualty clearing stations (CCS). Clearing stations were medical facilities that were positioned just out of enemy fire.

One of the most profound medical advancements resulting from World War I was the exploration of mental illness and trauma. Previously, any individual showing symptoms of neurosis was sent to an asylum and consequently forgotten. World War I brought forward a new type of warfare that no one was prepared for in its technological, military, and biological advances.

Another successful innovation came in the form of the base hospitals and clearing stations. These allowed doctors and medics to categorize men as serious or mild cases, and it came to light that many stress-related disorders were the result of exhaustion or deep trauma. “Making these distinctions was a breakthrough…the new system meant that mild cases could be rested then returned to their posts without being sent home.”

What do you think of trauma during World War I? Let us know below.

Now read Richard’s piece on the history of slavery in New York here.


World War Two was full of terrible atrocities, foremost among them the murder of six million Jews during the Holocaust. In this article, Felix Debieux looks at how the sheer number of people murdered during the Holocaust was possible, with a particular focus on the role of the company IBM.

Edwin Black, author of the book IBM and the Holocaust. Source: Juda Engelmayer, available here.

The Convention on the Prevention and Punishment of the Crime of Genocide, better known as the Genocide Convention, represents a landmark in the field of international law. It was the first human rights treaty adopted by the UN General Assembly, and the first legal apparatus used to codify genocide as a crime. Since 1948, it has signified the international community’s commitment to ‘never again’ after the atrocities committed during the Second World War.

Ensuring that genocide is never repeated means providing the crime with a tight, verifiable definition. The treaty has this covered. “Genocide means any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group”:

  • Killing members of a group.

  • Causing serious bodily or mental harm to members of a group.

  • Deliberately inflicting on a group conditions of life calculated to bring about its physical destruction in whole or in part.

  • Imposing measures intended to prevent births within a group.

  • Forcibly transferring children of a group to another group.

A legal framework for genocide, however, has not prevented the murder of countless innocents since the end of the Second World War. From Rwanda to Cambodia, history is littered with appalling episodes of human-inflicted suffering which meet the technical threshold for genocide. Each episode is unique in its origins and execution. Also unique are the experiences of those who have survived genocide, each group having fought for justice with varying degrees of success.

Anyone who has read even a little into the subject of genocide is very likely to have stumbled into the, at times, vociferous debate surrounding the uniqueness of one genocide in particular: the murder of six million Jews during the Holocaust. This article isn’t about to intervene in the debate; a morbid contest of ‘who-suffered-the-most' is neither enlightening nor sensitive to the victims of genocide. It will, however, agree with those who attest to the uniqueness of the Holocaust on one thing: that the sheer number of people murdered would not have been possible were it not for the unprecedented application by the Nazis of advanced industrial, scientific and technological capabilities.

Where did the Nazis obtain these capabilities, the logistical capacity to manage the identification, transportation, ghettoization and extermination of so many? A full answer to this question means looking beyond the Nazi government itself, and considering the partnerships the regime forged with private companies. Indeed, companies implicated in the Holocaust range from Audi and BMW - who maximised the opportunities afforded by slave labour - to Deutsche Bank, who provided loans for the construction of Auschwitz. One company which perhaps contributed more than any other was the US multinational company IBM (International Business Machines Corporation), whose tabulation technology was used to track individuals, monitor their movements, and ultimately facilitate their transportation across a network of prison, labour and extermination camps. IBM technology, quite literally, ensured that the trains to Auschwitz ran on time. How did the company become involved in the Holocaust, how much deniability can it claim, and what does this tell us about corporate complicity in human rights abuses?

IBM’s origins

To understand IBM’s part in the Holocaust, we first need to take a look at the company’s roots in early data processing and the US census. This is not as dull as it might sound. Back in the 1880s, the US Census Bureau employed a young German-American statistician named Herman Hollerith. Hollerith would go on to make a name for himself as a seminal figure in the development of data processing, eventually founding a company that in 1911 was amalgamated to form the Computing-Tabulating-Recording Company (CTR) - renamed in 1924 as IBM. The young statistician’s role in this story is critical.

While working for the US Census Bureau, Hollerith conceived the idea that would make his company rich: machine-readable cards with standardised perforations, each representing individual traits such as nationality, sex, and occupation. When produced in their millions, these punch cards could be counted in the national census and tabulated based on the specific information they contained about each citizen. This innovation promised the US government a quantified snapshot of its population, filterable using demographic characteristics such as sex or occupation. The US Census Bureau was among the first customers for Hollerith’s technology, contracting with him to tabulate the 1890 census.
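To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python of Hollerith-style tabulation, including the cross-tabulation by pairs of traits that appears later in this article. Every field name and card value below is invented for illustration; a real card encoded traits as punched holes in fixed columns, and the tabulating machine advanced electromechanical counters rather than running software.

```python
from collections import Counter

# Each "card" stands for one person, one punched value per column.
# The fields and values are hypothetical, chosen only for illustration.
FIELDS = ("sex", "occupation", "nationality")

cards = [
    ("male",   "farmer",    "US-born"),
    ("female", "teacher",   "US-born"),
    ("male",   "machinist", "foreign-born"),
    ("male",   "farmer",    "foreign-born"),
]

def tabulate(cards, field):
    """Count cards by the value punched in a single column,
    as a tabulator's counters did for one census question."""
    i = FIELDS.index(field)
    return Counter(card[i] for card in cards)

def cross_tabulate(cards, field_a, field_b):
    """Count cards by a pair of columns: the sort-and-recount step
    that broke one total down by a second trait."""
    i, j = FIELDS.index(field_a), FIELDS.index(field_b)
    return Counter((card[i], card[j]) for card in cards)

print(tabulate(cards, "occupation"))
# Counter({'farmer': 2, 'teacher': 1, 'machinist': 1})
print(cross_tabulate(cards, "sex", "nationality"))
# Counter({('male', 'foreign-born'): 2, ('male', 'US-born'): 1, ('female', 'US-born'): 1})
```

The point of the sketch is only this: once traits are encoded as discrete card columns, filtering and counting a population by any combination of traits becomes a purely mechanical operation. That is precisely what made the technology so valuable to a census bureau, and, as we shall see, so dangerous in the wrong hands.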

Fast forward to the 1930s, and IBM had established itself as a major player in the global computing industry with a number of offices across Europe. Chief among them was Dehomag, IBM’s German subsidiary, headed by Chief Executive and enthusiastic Hitler supporter Willy Heidinger. The ability to quantify and analyse entire populations like never before would, naturally, greatly interest a regime hellbent on purifying its citizenry of undesirables. But how did the latest tools and techniques in data processing fall into Nazi hands? For a second time, we find that a national census provided the opportunity for IBM to showcase its technology.

A lucrative partnership

Hitler’s rise to power in 1933 was met with a spectrum of reactions. Where some saw a threat to peace, others quickly grasped at the business opportunities presented by regime change. Among those who sought to capitalise was IBM president Thomas J. Watson, who from the very first days of the Nazi government manoeuvred to form a partnership. Despite widespread international calls to boycott the new regime, Watson inserted himself extremely closely into the management of IBM’s German operation. Indeed, between 1933 and 1939, Watson travelled to Berlin at least twice annually to personally supervise Dehomag’s work. In this period, the Nazi government would become one of IBM’s most important overseas clients.

On April 12, 1933, Dehomag was presented with a huge opportunity to cement the partnership. This was the date on which the Nazis announced plans to conduct a long-delayed national census, a project which would enable identification of Jews, Roma and other minority groups deemed subhuman by the new order. First in line to offer their services was Dehomag, backed at every step by IBM’s US headquarters. Indeed, Watson personally travelled to Germany in October that year, and drastically expanded investment in Dehomag from 400,000 Reichsmarks to a staggering 7,000,000. This injection of capital gave Dehomag the means to purchase land in Berlin, and to start construction of IBM’s first German factory. The scaling up of operations in Germany would prepare IBM to take on a bigger role in Nazi atrocities. Indeed, it was tabulated census data that enabled the Nazis to expand their estimate of 400,000 to 600,000 Jews living in Germany to 2,000,000.

Some part of Watson must have known that his company's partnership with the Third Reich was immoral, if not embarrassing. Tellingly, he took great pains to ensure deniability through his continued insistence on direct verbal instructions to his German staff. Nothing was written down, even in the case of high-value contracts. And yet there was no denying the tight leash with which Watson directed business. For instance, correspondence written in German was translated by the IBM New York office for Watson’s personal comment. In one anecdote, German staff recalled having to wait for Watson’s express permission before they were allowed to paint a corridor. Watson’s tenure as CEO would see IBM’s partnership with the Nazis grow more intimate still.

Business gets intimate

Writing at a time in which multinational corporations are heavily scrutinised in the public eye for any role – no matter how small – in human rights abuses, we might be forgiven for assuming that IBM maintained at least some semblance of distance from the atrocities taking place across Nazi-occupied Europe. The reality, however, is much more disturbing. IBM was the regime’s sole supplier of punch cards and spare parts, and its trainees (or sometimes authorised dealers) were required to be physically present when servicing the tabulation machines – even those located at infamous sites like Dachau. More chilling still, each IBM machine was tailor-made not only to tabulate inputted information, but also to produce the data which the Nazis were interested in analysing. There were no universal punch cards, and so IBM’s role in servicing the machines ensured that they continued to operate at maximum efficiency.

To give a sense of how it worked, it might be helpful to describe an application of IBM tabulation technology in action. One set of punch cards, for example, recorded religion, nationality and mother tongue. By creating additional columns and rows for ‘Jew’, ‘Polish language’, ‘Polish nationality’, ‘Berlin’, and ‘fur trade’, the Nazis were able to cross-tabulate at a rate of 25,000 cards per hour to identify precisely how many Berlin furriers were Jews of Polish origin. Train cars, which previously would have taken two weeks to mobilise, could be quickly dispatched in just two days by means of an immense network of IBM punch card machines. This same technology was also put to use in concentration camps. Each camp maintained its own Hollerith-Abteilung (Hollerith Department), assigned with keeping tabs on inmates through the use of IBM's punch cards. The machines were so sophisticated that they were even capable of matching the skills of prisoners with projects that needed slave labour. Chillingly, IBM’s code for a Jewish inmate was “6” and the code it used for gas chamber was “8”.

While Nazi Germany extended its domination across Europe, there is no evidence to suggest that IBM paused at any point to reflect on its role in facilitating industrial-scale murder. On the contrary, each nation that fell to the Nazi war machine was subjected to a census, which relied on the machinery and punch cards supplied by IBM. At the same time as Europe’s Jews were murdered in their millions, IBM decision-makers in New York were gleefully carving up sales territories. Edwin Black, whose 2001 book first brought to light the company’s instrumental role in the Holocaust, warns us not to think of IBM’s partnership with the Nazis as some rogue corporate element operating out of a basement. Far from it. This was a carefully micro-managed alliance spanning twelve years, which generated profit up until the last gasp of Hitler’s monstrous regime.

Legacy: IBM’s reaction and the role of big tech in genocide today

Revisiting his book twenty years later, Edwin Black makes the point that – with or without IBM – there would always have been a Holocaust. ‘Einsatzgruppen murder squads and their militia cohorts would still have heinously murdered East European Jews bullet by bullet in pits, ravines, and isolated clearings in the woods’. The question, however, is whether the Nazis would have been able to annihilate as many victims as they did without the data processing power offered by IBM technology. For Black, the answer to that question is never in doubt. IBM is responsible for facilitating the ‘industrial, high-speed, six-million-person Holocaust, metering ghetto residents out to trains, then carefully scheduling those trains to concentration camps for murder and cremation within hours, thus clearing the way for the next shipment of victims—day and night’. Put another way: without IBM, the death toll of the Holocaust would be measured in the hundreds of thousands, not in the millions.

To date, IBM has never directly denied any of the evidence of its role in the Holocaust. The company has previously insisted that most of its records from Europe were lost or destroyed during the war, and that it has no other information it can share about its operations during that time. It would seem IBM sees little benefit in attempting to refute or downplay its part in the Holocaust: as Black reminds us, in the twenty years since his book was published, ‘IBM has never requested a correction or denied any facts’. Since 2001, each edition of the book has provided further evidence of the company’s guilt.

Are there any lessons that we can draw from IBM’s role in the Holocaust? Above all, the company’s facilitation of mass murder is a stark reminder of the power of data in the wrong hands. Indeed, we do not have to look far to find examples of authoritarian regimes using data to perpetrate genocide today. From China’s use of facial recognition technology to monitor and persecute its Uighur population, to the use of social media in Myanmar to incite violence against Rohingya Muslims, we are bearing witness to new and alarming ways in which data is weaponised to inflict human rights abuses. While we of course need to be vigilant about the ways in which governments – our own or further afield – might use data, we also need to remain extremely wary of non-governmental actors. If IBM’s story shows us anything, it is that large multinational corporations are adept at evading accountability and continuing to operate with impunity. Despite the millions that such organisations spend on PR management and glossy marketing campaigns, it is critical that we remain suspicious of what big tech can do to surveil, censor and unduly influence our lives.

What do you think of the role of IBM in the Holocaust? Let us know below.

Now read Felix’s article on Henry Ford’s calamitous utopia in Brazil, Fordlandia, here.

In 1961 Yuri Gagarin went to space – but, more importantly, he didn’t visit the United States immediately after. John F. Kennedy personally barred him from entering, scared of his popularity – or so the Telegraph, Wikipedia, and countless blogs say. It has all the makings of a classic Cold War conspiracy theory: John F. Kennedy, fear of the Soviet Union, and the Space Race. There’s just one problem: it isn’t true. Yet while the evidence refutes this Cold War truism, it also explains why the story was so readily accepted. The myth says much more about the nature of the United States during the Red Scare than it does about Yuri Gagarin.

Steve Ewin explains.

Yuri Gagarin in Warsaw, Poland in 1961.

There are two main versions of the Gagarin Myth. The first, as stated in Britain’s Telegraph, is that John F. Kennedy was so alarmed by Gagarin’s popularity that he barred him from the United States. The second, an extension of the first, is that Kennedy did so via executive order.

The second version is the easier to disprove: no executive order or proclamation exists that barred Gagarin from the United States.(1) The only references to Gagarin in Kennedy’s official actions are congratulatory messages for his achievement.

Expanding the search to other offices of the executive branch also produces no evidence. Responsibility for enforcing bans on specific individuals fell to the Immigration and Naturalization Service. Thousands of pages exist regarding Charlie Chaplin, barred from entering the United States in 1952.(2) Likewise, United States Citizenship and Immigration Services (the INS’s successor agency) holds thousands of pages of documents related to the attempted barring of John Lennon.(3) A FOIA request for records related to Gagarin, by contrast, produced none. The stories of Chaplin and Lennon, however, are inseparable from the Red Scare and Cold War politics.

The politics of it all

The Red Scare is what makes the first version of this myth seem plausible. In 1952 the United States Congress passed the Immigration and Nationality Act – the McCarran-Walter Act – over President Truman’s veto. This act effectively barred any Soviet citizen from entry to the United States. A win trumpeted by American Cold Warriors, it quickly became a disaster for the United States abroad. A National Security Council report dated March 26, 1955 states that the general travel restriction:

placed [the US] in a paradoxical position, which is being exploited by Communist propaganda. Despite its traditional policy favoring freedom of travel and its record of having favored a liberal exchange of persons…the U.S. is being accused of maintaining an “Iron Curtain”; and these accusations are being made not only by representatives of international Communism but also by otherwise friendly persons in the free world.(4)

These restrictions were still in place during Gagarin’s post-flight goodwill tour. Kennedy would not have needed to personally bar Gagarin from the United States after his historic 1961 flight: Gagarin would not have been allowed in by default.

There was, however, a way around this. The Immigration and Nationality Act provided exemptions for official and diplomatic business. As the United States and the Soviet Union maintained diplomatic ties, an exemption built into the act allowed members of “deportable” affiliations to be in the United States if on official business from their home governments. If Gagarin had been invited to the United States as an official representative of the Soviet Union (or sent by the Soviet Union as one), the act would have allowed it. In the immediate aftermath of Gagarin’s flight, such an invitation was recommended by the American Ambassador to the Soviet Union.(5)

Official discouragement

The timing of Gagarin’s flight was not opportune for an invitation. Five days after his triumphant flight came the ill-fated Bay of Pigs invasion; the American-backed attempt to invade a major Soviet ally greatly damaged American prestige. Yet by the time Gagarin was on his goodwill tour, America had an answer: Alan Shepard became the first American in space on May 5, 1961. According to John Logsdon’s award-winning book, John F. Kennedy and the Race to the Moon, worldwide reaction to Shepard’s flight was more favourable than to Gagarin’s. According to a May 1961 report of the U.S. Information Agency, the United States was already winning the propaganda battle of space flights.(6)

A June 1961 State Department telegram is a not-quite-smoking gun. The formerly classified document states that “no invitation for Gagarin to visit [the] US” had been made, and that the United States government “has made efforts to discourage invitation.”(7) This is the closest any document comes to suggesting that Gagarin was banned from the United States: a discouragement. With the United States riding the wave of international support brought by Shepard’s flight, there was nothing to fear from Gagarin. Within a year, even this discouragement would be moot.

Kennedy himself lifted the general travel restrictions in 1962, on the recommendation of Secretary of State Dean Rusk and in consultation with the Central Intelligence Agency.(8) In April 1962, White House Press Secretary Pierre Salinger wrote a memorandum stating that Gagarin was expected to be in Washington, DC that summer.(9) On July 6, 1962, the United States informed the Soviet Ambassador that the travel restrictions had been removed.(10) And on October 16, 1963, Yuri Gagarin appeared before the United Nations General Assembly in New York City.

While Gagarin’s purported banishment from the United States makes for a good Cold War story, the evidence simply does not support it. Legislation and governmental opinion would have allowed Gagarin entry into the United States at any point, had it been politically expedient. However, in the political climate of the Cold War and the rivalry between the United States and the Soviet Union, the myth took root and flourished.

What do you think of Gagarin and JFK? Let us know below.

References

1 “Written Presidential Orders | The American Presidency Project,” n.d. https://www.presidency.ucsb.edu/documents/app-categories/presidential/written-presidential-orders.

2 Electronic Reading Room - USCIS. “Charlie Chaplin,” December 25, 1977. Accessed April 11, 2023. https://www.uscis.gov/records/electronic-reading-room?ddt_mon=&ddt_yr=&query=Chaplin&items_per_page=10.

3 Electronic Reading Room - USCIS. “John Lennon,” December 8, 1980. Accessed April 11, 2023. https://www.uscis.gov/records/electronic-reading-room?ddt_mon=&ddt_yr=&query=john+lennon&items_per_page=10.

4 U.S. Department of State, Office of The Historian. “National Security Council Report NSC 5508/1,” March 26, 1955. Accessed April 11, 2023. https://history.state.gov/historicaldocuments/frus1955-57v24/d94.

5 17 April 1961, US Department of State Staff Summary, Papers of John F. Kennedy. Presidential Papers. President's Office Files. Departments and Agencies. State, 1961: April-May, pg 166. https://www.jfklibrary.org/asset-viewer/archives/JFKPOF/088/JFKPOF-088-001?image_identifier=JFKPOF-088-001-p0001.

6 John Logsdon. 2016. John F. Kennedy and the Race to the Moon. Palgrave Macmillan. pp. 96-97.

7 State to Paris, Telegram 1839, June 26, 1961, 033.6140/6-2461, 1960-63 CDF, RG59, USNA.

8 U.S. Department of State, Office of The Historian. “Memorandum From Secretary of State Rusk to President Kennedy,” April 25, 1962. Accessed April 11, 2023. https://history.state.gov/historicaldocuments/frus1955-57v24/d94.

9 Papers of John F. Kennedy. Presidential Papers. White House Central Subject Files. Outer Space (OS). OS: 4-1: Astronauts: General, 1962: 26 March-31 May, page 38. https://www.jfklibrary.org/asset-viewer/archives/JFKWHCSF/0655/JFKWHCSF-0655-007

10 American Foreign Policy: Current Documents, 1962. Department of State, 1966, pp. 740-741.
