On March 25, 2021, the Modern Greek State celebrated the 200th anniversary of the War of Independence, which ultimately led to its establishment. It is thus an excellent opportunity to reconsider some of the main events of Greek history over these 200 years and how they shaped the character of modern Greece.

This series of articles on the history of modern Greece started when the country was celebrating the 200th anniversary of the War of Independence. This article looks at what happened from the late 1990s through the following decade – and how institutions led to failure in Greece. Thomas P. Papageorgiou explains.

You can read part 1 on ‘a bad start’ 1827-1862 here, part 2 on ‘bankruptcy and defeat’ 1863-1897 here, part 3 on ‘glory days’ 1898-1913 here, part 4 on ‘Greeks divided’ 1914-22 here, part 5 on the issues of clientelism here, part 6 on World War 2 and a new divide here, part 7 on the road to dictatorship and retreat here, and part 8 on the changing 1980s and 1990s here.

Costas Simitis, Greek Prime Minister from 1996 to 2004, with U.S. President Bill Clinton.

The central claim in this series of articles on the history of the modern Greek state is that at the core of its problems stand its political and economic institutions. I have often referred to the theory of Acemoglu and Robinson regarding extractive institutions and their effect on a nation’s growth and prosperity. (Acemoglu & Robinson, 2013) The period before the most recent economic crisis of Greece, starting in 2008, offers an excellent opportunity to present this claim in detail.

 

I Definitions

Institutions are defined as the rules of the game in a society or, otherwise, as the humanly devised constraints that shape human relationships. (Petrakis, 2012, p. 157)

Different patterns of institutions today are deeply rooted in the past because once society gets organized in a particular way, this tends to persist. (Acemoglu & Robinson, 2013, p. 44)

The political institutions of a society are the rules that govern incentives in politics. They determine how the government is chosen, and which part of the government has the right to do what. Political institutions determine who has power in society and to what ends that power can be used. (Acemoglu & Robinson, 2013, p. 80)

If the distribution of power is narrow and unconstrained, then the political institutions are absolutist. Under absolutist political institutions those who can wield this power will be able to set up economic institutions to enrich themselves and augment their power at the expense of society. (Acemoglu & Robinson, 2013, p. 80) Thus, we call the latter extractive political institutions. The term exclusive has also been used in this series to point out the exclusion of the rest of the society from power.

In contrast, political institutions that distribute power broadly in society and subject it to constraints are pluralistic. Institutions that are not only pluralistic, but also sufficiently centralized to guarantee the rule of law and order, provide public services and encourage and regulate the economic activity are called inclusive political institutions. (Acemoglu & Robinson, 2013, pp. 80-81)

It is the political process that determines what economic institutions people live under, and it is the political institutions that determine how this process works. (Acemoglu & Robinson, 2013, p. 42)

Economic institutions can be described in terms of three main concepts: property rights, the quality of market functioning (i.e., good, moderate or poor) and contractual organization methods. There are two types of economic institutions: contracting institutions and property rights institutions. The first type of institutions facilitates the establishment of relations between lenders and borrowers, of which the financial system is the most typical example. The other type concerns the institutional structures that limit the imposition of the government and powerful oligarchies and their exploitation of the less powerful. These institutions protect property rights. (Petrakis, 2012, p. 157) 

Inclusive economic institutions are those that allow and encourage participation by the great mass of people in economic activities that make best use of their talents and skills and that enable individuals to make the choices they wish. To be inclusive, economic institutions must feature secure private property, an unbiased system of law, and a provision of public services that provides a level playing field in which people can exchange and contract; it also must permit the entry of new businesses and allow people to choose their careers. (Acemoglu & Robinson, 2013, pp. 74-75)

Institutions with properties opposite to those we call inclusive we call extractive economic institutions (the term exclusive has also been used interchangeably with extractive in this series): extractive because such institutions are designed to extract income and wealth from one subset of society to benefit a different subset, with the former thus excluded from prosperity. (Acemoglu & Robinson, 2013, p. 76)

Extractive economic institutions naturally accompany extractive political institutions. (Acemoglu & Robinson, 2013, p. 81)

 

II Political institutions

After the collapse of the dictatorship in 1974, the restoration of democratic institutions, the adoption of a new Constitutional Charter (1975), the smooth transition of governments (1981), the further deepening of democracy and the country’s accession to the European Union provided a stable system of political institutions. In fact, this stability was unprecedented in Greek history, considering what we have seen in this series so far. (Petrakis, 2012, p. 194)

Stability was not, however, accompanied by political pluralism, as is evident in the country’s bipolar party system. On one pole are the two ‘power parties’, as we have seen, (Papageorgiou, 2025) PASOK and New Democracy (ND), alternating in government and converging on similar ideological and political positions. On the other pole, we find the parties that are smaller in terms of electoral influence, which do not affect the formation of governments and are characterized by divergent, clear, ideological and political positions. (Petrakis, 2012, pp. 195-196)

The reproduction of the dominant positions of the two ‘power parties’ is favored by the electoral system. Indeed, as we have also often seen in this series, adopted systems have complied with the rationale of a powerful government as a fundamental condition for a smooth and stable political life. (Petrakis, 2012, p. 196)

Nevertheless, the political power of the ‘power parties’ is mainly due to their connection to the state and the sequential distribution of economic resources. In fact, despite their initial theoretical differentiations, the two power parties consolidated their dominance through clientelism by managing the state resources towards which they are constantly oriented. Their gradually increasing influence and prevalence are not the result of clear ideological and political plans designed to express specific class and social interests but of their ability to handle public funds. (Petrakis, 2012, pp. 196-197)

The way the public sector is staffed is a characteristic example of this approach. It has already been discussed how the ‘power parties’ have used the public sector and public enterprises to ‘accommodate’ (hire) their ‘clients’. (Papageorgiou, 2025) By 2008, the number of civil servants was estimated at approximately 1,250,000 (27.4% of the workforce), of whom 550,000 held fixed-term contracts (Petrakis, 2012, p. 205) whose renewal depended directly on the government. Data for the 2000 – 2004 period show that most civil servants (approximately 70%) were secondary or compulsory school graduates. (Petrakis, 2012, pp. 207-208) Their educational level is a critical variable affecting quality, which in turn is reflected in the efficiency of the public sector. Indeed, in the period 2002 – 2008 average general government expenditure as a percentage of GDP was 45.2% and general government revenue 40%, against EU-27 averages of 46.4% and 44.6% respectively. At the same time, the contribution of general government to GDP was 18.5%, close to the Mediterranean countries’ average but significantly lower than the 25% of the more effective Northern European countries. (Petrakis, 2012, pp. 201-202) Public enterprises were no more efficient, showing cumulative losses of more than 5.5 billion Euro between 2005 and 2008. (Petrakis, 2012, p. 211)

The goal of political dominance for PASOK and ND was thus achieved through clientelism. The system was in fact so effective that in the period 1981 – 2007 more than three quarters of the electorate supported the two ‘power parties’. (Petrakis, 2012, p. 196) Thus, to the extent that their electoral influence depends primarily on the distribution of state resources and benefits, there is no strong incentive to divert these resources to promote a particular political and economic direction. Their actions are directed mostly at balancing conflicting interests rather than at methodically planning and implementing reform plans, which involve costs for some social groups and thus political costs for the ‘power parties’ themselves. Such reform plans, to the extent that they exist, represent the top of the party hierarchy rather than collective projects formed in conditions of political participation. The personalization of politics, which is enhanced by modern ‘telecracy’, culminates in the case of political leaders, who traditionally enjoy a hegemonic position within PASOK and ND. (Petrakis, 2012, p. 197)

In fact, a primacy of the executive power (government) was established, which creates significant institutional problems. The degradation of the parliament (legislative power) and weaknesses regarding the function of the justice system (cases of corruption or biased decisions) constitute an unbalanced institutional reality with significant effects on the equitable distribution of political power. (Petrakis, 2012, p. 195)

Indeed, the inability of the parliament to exercise substantial control over the executive power has dire consequences for political life, such as poorly designed legislation and, consequently, disrespect for the law. This is reflected in the general disregard for the members of parliament. They seem to merely ratify laws, some of which they admit they have not even studied, and they appear in parliament only when necessary (the phenomenon of empty seats). Paradoxically, political parties reinforce skepticism about the capacity of their members of parliament, especially in times of crisis, when they seek individuals outside parliament for important ministries. The role of the opposition in parliament is similarly extremely limited. David Close, in his book on the post-World War II period, notes that between 1974 and 1987, the percentage of laws proposed by opposition parties and approved by the parliament in Greece was only 0.1% (!) compared to 30.2% in Italy, 60.2% in Portugal, and 10.5% in Spain. (Close, 2006, p. 227) There has been no significant change since then.

The imbalance in the distribution of political power and its impact on decision making increases if the weak nature of the domestic ‘civil society’, the limited presence of truly independent administrative authorities and the partisanization of all institutional expressions of collective action (real syndicalism) are considered. In fact, considering the hegemonic position of the party leaders within PASOK and ND discussed above, the primacy of the executive power is summarized in a model of decision making centralized around the Prime Minister. (Petrakis, 2012, p. 195)

However, accession to party leadership and, following an electoral success, to premiership has often been a family business, as nepotism is ever-present in the Greek political scene. Costas Simitis succeeded Andreas Papandreou in the leadership of PASOK and premiership in 1996 (Papageorgiou, 2025) only to be replaced by the latter’s son Georgios in 2004. As we have seen, Georgios’ grandfather with the same name was also Prime Minister. The latter collaborated with Sofoklis Venizelos, the son of former Prime Minister Eleftherios Venizelos, who, in turn, also became Prime Minister. And if we go back to the foundation of the state, Harilaos Trikoupis, the son of Spiridon Trikoupis, the first Prime Minister of modern Greece, also became Prime Minister. Today’s Prime Minister Kyriakos Mitsotakis, coming from ND, is the son of former Prime Minister Konstantinos Mitsotakis, who was the arch-rival of Andreas Papandreou. (Papageorgiou, 2025)

In a nutshell, the convergence of the ‘power parties’ is characterized, among other things, by the continuing weakening of the ideological and political character of the party, the eminence of the party leadership compared to ordinary party members, nepotism, the development of links with specific interest groups, the parties’ conversion into supporters of government policy and an ever-growing reliance on the mass media to disseminate positive images, making it easier for them to win and maintain government authority. (Petrakis, 2012, p. 198) Indeed, the absence of distinct political plans promoted consistently by candidates and party staff means that candidates are evaluated predominantly on their ‘visibility’, which is effectively cultivated by television and other mass media. (Petrakis, 2012, p. 197)

The above discussion indicates that political institutions in Greece are of poor quality. Organizations like the World Bank use indicators such as ‘voice and accountability’ (the extent to which a country’s citizens are involved in the government selection process and the degree of freedom of expression, of the press and of association), ‘political stability’, ‘government effectiveness’, ‘regulatory quality’, ‘rule of law’ and ‘control of corruption’ to measure the quality of institutions. Studying the evolution of Greece’s performance from 1996 to 2008, we observe that (except for the political stability index) the quality of all domestic political institutions progressively worsened. Indeed, in 2008 Greece presented worse quality indicators for political institutions than other EU countries. (Petrakis, 2012, pp. 199-200)

As already noted, the quality of political institutions has a significant impact on economic growth. Policies that channel the social product to groups with disproportionate political influence over others contribute to the devaluation of the political system. This effect annuls any efforts to reform the economy to allow it to adjust to the demands of international competition. The possibility of long-term growth is restricted. This vicious cycle is completed by the formulation of incentives for counterproductive and shadow economic activity as positive expectations are lacking. Economic institutions and human incentives will be the subject of the following sections.

 

III Economic institutions

As stated in Section I above, political institutions determine the economic institutions people live under. Thus, the lack of pluralism discussed for political institutions is also evident in Greece’s economic institutions. Indeed, applying the sectoral concentration ratio to data on companies operating in the Greek economy, we observe high concentration in sectors of industry (tobacco, tobacco products, petroleum products and coal, liquefied petroleum gas bottling, drinks, footwear), in the trade of minerals and ores, postal services, energy, telecommunications, entertainment (cinemas, theaters), radio-television companies and the banking sector. (Petrakis, 2012, p. 168)

This oligopolistic concentration has serious implications for the economy. It may be related to rising energy and fuel prices, which in turn push prices in the Greek economy upward. In the first quarter of 2006, for example, while the gasoline price in the EU-25 dropped, in Greece it rose by 5%. In Greece, two refineries control 100% of refining and market supply, thus imposing their own prices. The increase in their profits by 90% in the fourth quarter of 2007 and by 77% in the first quarter of 2008 is another illustration of the consequences of this oligopolistic structure. (Petrakis, 2012, pp. 169-170)

The banking sector – including alternative forms of financing such as factoring, leasing and mutual funds, investment trusts and real estate investment companies – also presents a high degree of concentration. (Petrakis, 2012, p. 178) This is important because the Greek economy relies almost exclusively on the intermediary function of the banking system and much less on the ‘invisible hand’ of the market. In fact, given the country’s growth (Leounakis & Sakellaris, 2014) in the period studied here and before it, one would expect the relative importance of the money market compared to the banking sector to increase. In reality, exactly the opposite happened. A study by the International Monetary Fund showed that the relative importance of money market transactions in the decade 1995-2004 increased in all of the examined economies except Greece, where the role of the banking system in providing financing to the economy was strengthened. (Petrakis, 2012, pp. 175, 177)

On the other hand, economies that rely to a greater extent on the ‘invisible hand’ of the money market are better equipped to respond to major technological changes and to adopt innovations. Typically, these economies are more dynamic and enjoy higher growth rates because of their ability to invest in more promising technologies, often changing the structure of their productive activity. (Petrakis, 2012, p. 175) In fact, the inability of the Greek financial system to channel funds to the most dynamic part of the business sector was confirmed by a 2008 study by the Foundation for Economic & Industrial Research, in which 30% of responding prospective entrepreneurs highlighted the difficulty of finding funding as one of the major problems when starting a new business venture (Petrakis, 2012, p. 179) (startups, for example, typically lack the collateral and guarantees required by banks for financing). This percentage is far higher than the values recorded for barriers to entrepreneurship such as bureaucracy and the level of tax and social security contributions. (Petrakis, 2012, p. 161)

The difficulty of ensuring proper financing for the private sector, especially for small and medium enterprises, forces businesses to look elsewhere for the necessary support. Apart from the friends and relatives of the prospective entrepreneur, it is therefore not unlikely that a significant proportion of economic activity depends on financing from funds derived from shadow activities – the production of goods and services, legal or illegal (e.g. drug trafficking), which escape detection and consequently are not counted in the official Gross Domestic Product (GDP) (Petrakis, 2012, p. 58) – and thus feeds back into the huge parallel economy. (Petrakis, 2012, p. 179) Indeed, as we have seen previously, the size of the shadow economy in 1987, when VAT was introduced in Greece, was estimated at 40% of GDP, (Papageorgiou, 2025) whereas more modest estimates for 2008 put this figure at 20.97%, which is still very significant. (Petrakis, 2012, p. 58) In fact, estimates of the size of the shadow economy in Greece and various OECD countries suggest that Greece has the largest shadow economy among the examined countries, with its estimated size increasing over time. (Petrakis, 2012, p. 59)

The shadow economy reduces tax revenue (losses estimated at 4.9% of GDP in 2005 and 4.7% of GDP in 2008), limiting the government’s capacity for public expenditure, and deprives the insurance system of the resources that secure its viability. Data on the extent of contribution evasion are enlightening: it is estimated to have reached 3% of GDP in 2005 and 2.8% in 2008. Consequently, the overall fiscal effect should be calculated at 7-8% of GDP, or 18 billion Euro. (Petrakis, 2012, p. 60) This loss of revenue from tax and contribution evasion increases the tax burden on the official economy. Indeed, the tax system in Greece is characterized by frequent restructuring and complicated transaction methods. The frequent changes (usually increases) of tax rates for natural and legal persons are perhaps the simplest form of ‘expropriation’ of income rights. (Petrakis, 2012, p. 166)

The whole situation concerning property rights is even more problematic, as indicated by the fact that in the World Bank’s ‘Doing Business 2011’ report Greece ranked among the ten countries with the most procedures for registering property. (Petrakis, 2012, p. 164) Confusion is not limited to personal property (with real estate being another characteristic case) but spreads across the whole spectrum of economic activity. Apart from the frequent changes and difficult transactions of the tax system, mentioned above, other critical cases of confusion over property rights include:

(i) The establishment of a restricted number of jobs for certain professions (e.g. taxis and public use trucks for the domestic and international transport of goods) that leads to the formation of property rights on those jobs by persons, who, for whatever reason, were given access to them (excluding the rest). (Petrakis, 2012, p. 165)

(ii) Copyright infringements. In a study on the use of pirated software, for example, Greece ranked first among the examined countries. (Petrakis, 2012, p. 166)

(iii) The development of legal entities of public jurisdiction. The rationale for ‘producing’ these organizations was to consolidate the efficiency of public spending through a greater division of labor and the development of executives who would represent long-term choices for the management and implementation of specific projects. The number of these entities expanded rapidly with the need to manage the allocation of European Structural Funds, i.e., after 1980. Nevertheless, the results of such organizations have always been a source of confusion over property rights. This problem is magnified in agencies managing public funds, particularly funds originating from the European Structural Funds. Essentially, this confusion has led to the development of mechanisms for the management of funds outside public control. These were used to channel funds to the political parties’ clientelist audiences and away from the rightful recipients, with detrimental effects on the achievement of sustainable growth. (Petrakis, 2012, pp. 167-168)

(iv) Areas of deliberate obfuscation of property rights, with the most typical case being that of the Greek television stations after 1989. Indeed, the operation of private television was based on the institutional conception of ‘temporary legitimacy’, introduced for political reasons in 1989. This means that, in the period under consideration here, television stations making up a market of 1 billion Euro annually were tolerated as if lawful while formally operating illegally. This is definitely unique and unprecedented in the global political, economic and television reality. Overall, the landscape of illegally operating media was governed by the powerful laws of the so-called ‘interwoven interests’ between media and political powers. (Petrakis, 2012, pp. 165-166) The situation did not allow for a more open and pluralistic system in which independent media would flourish. Thus, it was impossible for groups with an interest in the development of inclusive institutions to become aware of and organize against threats to such institutions. (Acemoglu & Robinson, 2013, p. 309)

 

IV Human incentives

In a society, the economic and political institutions shape the incentives that structure people’s behavior. What kind of incentives, then, are formed by the institutions described in the previous sections? In a nutshell, Greek society is characterized, among other cultural dimensions, by uncertainty avoidance, orientation towards the present, projection of collectivism to the detriment of privacy, acceptance of power distance, masculinity and, of course, lack of confidence. The existence of this set of stereotypes does not mean that the Greek cultural background lacks contradictory dimensions (e.g., preference for the future, confidence). However, the above dimensions seem to prevail in terms of values in the cultural background of the members of Greek society. (Petrakis, 2012, p. 238) In the following, we will discuss these features and some of their consequences in more detail.

 

Uncertainty avoidance

Acemoglu and Robinson point out that extractive institutions, by creating unconstrained power and great income inequality, increase the potential stakes of the political game. Because whoever controls the state becomes the beneficiary of this excessive power and the wealth it generates, extractive institutions create incentives for infighting in order to control power and its benefits. (Acemoglu & Robinson, 2013, p. 344) In this light, the pattern of modern Greek history, characterized by events that mainly increased the country’s systemic risk, becomes clear. The title of the book by G. B. Dertilis says it all: ‘Seven Wars, Four Civil Wars, Seven Bankruptcies 1821-2016’. From the related suffering and uncertainty comes the longing for certainty.

It goes without saying, that uncertainty avoidance feeds and is fed by the clientelist state described above, as individuals, for example, bargain with those in power for a permanent position in the public sector in exchange for their support.

It is also worth noting that the need for assurance against future developments is covered to some extent through the particular preference of Greek society for a specific investment form: housing. This is why the percentage of owner-occupied dwellings in Greece is very high and, of course, a great part of personal wealth takes the form of investment in housing. (Petrakis, 2012, p. 240)

 

Orientation towards the present

It becomes clear from the above descriptions that the Greek political institutions are mostly oriented towards short term gains. Similarly, we saw that the organization of the economy favors established entities rather than the realization of future oriented innovative ideas. These and the resulting often political and economic upheavals described in the previous section made the Greeks present oriented. Indicative resulting consequences are discussed below.

First, savings as a percentage of disposable income are particularly low. Although the level of income relative to basic subsistence needs cannot be ignored, the perception of the future plays an important role. If this perception is limited, then the need to smooth consumption over the individual’s life span does not seem significant. Thus, this savings behavior may be explained mainly by the preference for the present and the existence of high levels of uncertainty in Greek society. Under these circumstances, saving for the future makes no particular sense. This situation has a significant negative impact on the balance of payments and, of course, on the Greek economy’s self-financing capacity. From one point of view, the percentage of income placed in savings could be one of the key factors explaining both the Greek economic problem and its future development. (Petrakis, 2012, pp. 239-240)

Second, innovative activity presupposes a clear orientation towards the future: since the results of innovation are not immediately perceived, the businessman must have a certain interest in the future, when the results of present business actions will become visible. At the same time, individuals engaged in innovative business activities are expected to undertake reasonable levels of risk. (Petrakis, 2012, p. 245) Thus, orientation towards the present, combined with the previously discussed uncertainty avoidance, has devastating effects on entrepreneurial activity.

Third, orientation towards the present also results in time immobilization. A typical example is the delayed response to changes in education opportunities. For instance, it is quite clear that medical (not nursing) studies produce degree holders who, due to over-production, are hard to absorb under the circumstances of the Greek market. Nonetheless, the pressure to enter such schools remains particularly high. In other words, even though for years there has been a clear signal that this particular choice entails many difficulties, this signal has not been transformed into a guiding force that would change the pattern of demand for educational services. Thus, Greek society, having no future horizon and being characterized by time immobilization, uses the projection of the past as an exclusive substitute for predicting the future. The explanation of this behavior lies in the fixation on the present and the predominance of uncertainty about the future. In such a context, the past becomes a valuable source of information because it bears, above all, the element of certainty. (Petrakis, 2012, p. 241)

 

Projection of collectivism to the detriment of privacy

The above analysis explains why the Greeks distrust governments, parties, television, ministries and banks. (Petrakis, 2012, pp. 199-200) They therefore need to build and rely on support networks outside the official institutions. In the section on economic institutions, for example, we noted the importance of friends and family for the prospective entrepreneur, given the difficulty of ensuring proper financing for the private sector. A more distorted result of this distrust is clientelism, the attempt to reap benefits from a political party in exchange for the voter’s loyalty and support. These are examples of in-group collectivism.

The aspect of in-group collectivism relates to the different treatment of members of a group versus individuals who do not belong to it. In-groups could include family, relatives and friends, party members or supporters, and in-group members enjoy protection, trust and support while providing faith, devotion and sacrifice in exchange. On the other hand, individuals who do not belong in this group are treated with suspicion and animosity. (Petrakis, 2012, p. 143)

Thus, as explained above, extractive institutions create incentives for infighting to control power and its benefits and, at the same time, through in-group collectivism, create the factions that participate in the fighting. Other, less lethal consequences of in-group collectivism include, for example, the entrapment of youngsters in their family circles, where they are forced to continue a family business against their inclinations or potential talents. This is another expression of the time immobilization described in the previous section.

 

Acceptance of power distance

To understand power distance, let us consider for a moment the role of family networks in Greece, mentioned previously as examples of in-group collectivism. Family networks in Greece play a crucial role. The absence of an extended social state and its services is, in essence, compensated for by family care, leading to an important decrease in the demand for public services such as kindergartens, old-age homes and unemployment coverage. This structure broadens social coherence and intra-family social capital to the degree that trust and the mutual accommodation of family members increase, while offering grounds for the exercise of family entrepreneurship based on autonomous intra-family planning. However, family networks also engage an important part of the active workforce, such as women providing child care, who are cut off from the labor market, with multiple social and financial consequences. (Petrakis, 2012, p. 147) In fact, the GLOBE research on Greek culture demonstrates that there is extensive inequality between men and women in Greek society. (Petrakis, 2012, p. 143) The term ‘power distance’ refers exactly to the inequalities arising from extractive institutions.

Studies actually show that Greece can be considered a typical Mediterranean country in terms of its cultural values. The analysis furthermore compares the cultural models of Greece, Turkey, and the Mediterranean, Northern European and Arab countries. It shows that the Turkish cultural model is very similar to that of the Mediterranean countries. Moreover, the Mediterranean countries' model is much closer to that of the Arab countries than it is to that of the Northern European countries, which are less accepting of inequalities. (Petrakis, 2012, pp. 139-141)

 

Masculinity

The domain of ‘masculinity/femininity’ captures the degree to which ‘masculine’ values, such as good performance, success and competition, dominate over ‘feminine’ values, e.g. quality of life, preservation of good personal relationships, convenience, care for the feeble and solidarity. (Petrakis, 2012, p. 138)

A typical aspect of the masculinity of Greek society is the in-group collectivism described above, which promotes competition towards other groups. Indeed, if one looks at institutional collectivism this time, that is, the degree to which society as a whole favors collective over individual behaviour, research reveals the individualistic nature of the Greeks: in institutional collectivism, Greece is at the bottom of the list of the countries investigated. Greeks find it difficult to operate as a team. (Petrakis, 2012, p. 142)

Another example relates to the 'feminine' societies' emphasis on social relationships and helping one another, as might be reflected in public policies that favor income redistribution and increased social spending. Income redistribution is effected by different mechanisms that fall primarily within fiscal policy. More specifically, these mechanisms are divided into: (i) transfer payments: unemployment benefits, disability benefits, social security schemes, pensions; (ii) progressive taxation, whereby higher incomes correspond to higher tax levels; and (iii) public provision of social services: the main examples of social services in Greece are education and health care. According to OECD statistics, Northern and Central European countries have higher levels of social transfers as a percentage of GDP and comparatively lower levels of uneven income distribution. Greece, in relation to the other European countries investigated, has a highly uneven distribution and lower social spending. (Petrakis, 2012, pp. 226-227)

 

Lack of confidence

Coming to the close of this article: from what has been described so far, Greeks do not have much to be confident or optimistic about. The pessimism observed in Greek society is connected to the lack of future orientation (although it is difficult to tell whether pessimism and the lack of future orientation are causally connected or simply coexist as two conditions with probably common origins). Specifically, significant pessimism towards the future is observed in Greece, expressed through the predominance of bleak opinions about the financial situation and unemployment. As portrayed in Eurobarometer research between 2000 and 2008, the percentage of Greeks expecting improvement on all issues decreased steadily. (Petrakis, 2012, p. 153)

 

V Conclusions

The period just before the most recent, and therefore best-known, Greek economic crisis, which started in 2008, offers an excellent opportunity to study more clearly what I have been suggesting throughout this series of articles as the reason behind the maladies of the modern Greek state: extractive political and economic institutions.

But is it reasonable to suggest that Greece – a member of NATO and the European Union (including the Eurozone), whose per capita GDP 'exploded' after 1953 from ca. 2,000 US dollars to more than 16,000 US dollars in 2008 (Petrakis, 2012, p. 129) – is a failed state, as the title of Acemoglu and Robinson's book (Acemoglu & Robinson, 2013) at least suggests for countries governed by extractive political and economic institutions?

Certainly, Greece is not like Zimbabwe or Sierra Leone. Greece is a case of a country that shows growth under extractive political and economic institutions. Indeed, the theory of Acemoglu and Robinson does not hold that extractive political and economic institutions are inconsistent with economic growth. On the contrary, every elite would, all else being equal, like to encourage as much growth as possible in order to have more to extract. Extractive institutions that have achieved at least a minimal degree of political centralization are often able to generate some amount of growth. What is crucial, however, is that growth under extractive institutions will not be sustained, for two key reasons. First, sustained economic growth requires innovation, and innovation cannot be decoupled from creative destruction, which replaces the old with the new in the economic realm and also destabilizes established relations in politics. Because the elites dominating extractive institutions fear creative destruction, they resist it, and any growth that germinates under extractive institutions will ultimately be short-lived. Hence the multiple economic crises over the modern state's history. Second, the ability of those who dominate extractive institutions to benefit greatly at the expense of the rest of society implies that political power under extractive institutions is highly coveted, making many groups and individuals fight to obtain it. Hence the multiple upheavals (including civil wars) over the modern state's history. (Acemoglu & Robinson, 2013, p. 430)

Indicative proof of the above claim is the fact that, despite the high growth rates of the period studied here, Greece still has one of the highest poverty rates in the European Union. In fact, between 1994 and 2007 the percentage of the population below the poverty line remained relatively stable at between 20% and 23%, among the highest in the Eurozone. This indicates that high growth rates have had little impact on the levels and risk of poverty and on the uneven distribution of income in Greece. (Petrakis, 2012, p. 228)

 


 

References

Acemoglu, D., & Robinson, J. A. (2013). Why Nations Fail. London: Profile Books Ltd.

Close, D. (2006). Greece since 1945: Politics, Economy and Society. Thessaloniki: Thyrathen (in Greek, available also in English by Routledge).

Dertilis, G. B. (2020). Seven Wars, Four Civil Wars, Seven Bankruptcies 1821-2016. Athens: Gutenberg (in Greek).

Leounakis, N., & Sakellaris, P. (2014, December). Athens University of Economics and Business. Retrieved from https://www.dept.aueb.gr/sites/default/files/econ/dokimia/AllDP162014.pdf

Papageorgiou, T. P. (2025, May 8). History is Now Magazine. Retrieved from https://www.historyisnowmagazine.com/blog/2025/5/8/the-modern-greek-state-19751996-three-elephants-two-tigers-and-a-lioness?rq=Papageorgiou

Petrakis, P. (2012). The Greek Economy and the Crisis, Challenges and Responses. Berlin Heidelberg: Springer-Verlag.


Just after 9pm on a cool September evening in 1943, a large group of soldiers calmly walked the mile home to camp, where they armed themselves with tommy guns, ammunition and bayonets. Putting themselves into formation, they marched back into town, three abreast. The sound of their army-issue boots striking the road for nearly a mile echoed heavily in the inky pitch of the blacked-out night-time and is something witnesses remember to this day. It seemed as if a 'whole company' of troops was moving through the night, it was said later.

Here, author Kate Werran tells us about African American servicemen in Britain during World War Two.

In England, Major Charity E. Adams, of Columbia, South Carolina, and Captain Abbie N. Campbell, of Tuskegee Institute, Tuskegee, Alabama, inspect the first members of the African American Women's Army Corps assigned to overseas service.

Undoubtedly the troops who were on the move were ready for the fight of their lives – it just wasn't the official enemy they had in their sights. Because, unbelievably, this was not happening in Nazi-occupied mainland Europe, but on Britain's home front – specifically the market town of Launceston in Cornwall. And these were American soldiers. Military police patrolling the town could sense impending danger. 'Everything was so tense that evening that we thought that something might start,' said one. Another added that all evening '…you could feel the tenseness in the air.' Even publicans working in the town's many drinking houses felt this was the calm before the storm. One shut early that evening, saying he just sensed '…something brewing.'

Suddenly the marching troops appeared ‘in a body’ from out of the darkness to encircle a group of military policemen, fellow Americans, who were standing chatting next to a jeep parked near the town’s war memorial. ‘We saw forty to fifty soldiers coming up the street. They had overcoats on. They walked up almost in formation, and straight toward us… and [we] thought trouble was about to begin,’ said one of the surrounded. A man, who seemed to be spokesman for the group, said very quietly: ‘Why don’t you let us come into town, come into the pubs?’. Flashlights snapped on. ‘Hands up!’ was shouted. The military police raised their arms and backed up. As they did, ‘I heard bolts open on rifles,’ said the jeep’s driver. There was just time for the terrifying realisation to sink in that their compatriots were not only armed but already taking aim when: ‘I heard a bolt crack and a shot landed at our feet. Someone hollered ‘DUCK’. I jumped in behind the wheel of a jeep.’ Next, a volley of fire. ‘I felt a bullet whizz past me.’ A flashlight revealed a soldier ‘with a denim hat and overcoat firing a rifle from the hip and he was really pumping them out.’ A pause. Then chaos as British soldiers, civilians, WAAFs and Land Army girls, as well as the Americans under fire, scrambled for cover amid ricocheting bullets. One old man told the Daily Mirror the next day: ‘There hasn’t been anything like this since the days of the smugglers.’

No-one knows for sure exactly how many soldiers were armed and fighting that night. What is universally acknowledged is that a large number was involved – soldiers from the 581st Ordnance Ammunition Company firing at soldiers from the 115th Infantry's Second Battalion. It was all over in five minutes, before the shooters melted away into the night. What they left behind was a shot-up town centre, shaken soldiers and citizens, shattered store windows, two hole-ridden US army jeeps (it subsequently took 20 soldiers to lift them bodily away), two sergeants with mashed-up legs, a visiting US army with its reputation hugely dented, and bullet holes in Cornish bricks and mortar which for more than seventy years were the sole reminder of an all-American gunfight army authorities wanted forgotten and tried their best to obscure. The inconvenient truth was that these were members of an African American ordnance company taking on the white soldiers who policed them. The level of injuries, given the firepower on hand that night, shows that wholesale slaughter was certainly not the intent, although military prosecutors defied their own investigators' recommendations and insisted on bringing attempted murder charges alongside mutiny and other counts. The 'mutineers' were making a point, and it was one that needed to be made.

 

African American servicemen

There were around 130,000 African Americans among the 1.5 million US servicemen who were in the United Kingdom at any one time in World War Two – altogether 4 million Americans would come to Britain. But this segregated army had an inherent racial friction which began to spill over into violence with increasing frequency whenever the two races met in Britain’s ‘green and pleasant’ land. Riding on the tide of simmering racial tension in US training camps and explosive riots in five American cities during the long hot ‘bloody’ summer of 1943, this enmity inevitably floated across the Atlantic with each wave of arriving servicemen.

At first it baffled the British. Despite ruling an empire upon which 'the sun never set', there were surprisingly few people of colour, roughly 15,000, in Britain during World War Two. Undoubtedly, in such a mono-cultural society, racism was bound to thrive, as proved by race riots in 1919 and exemplified by the experiences of Learie Constantine, the West Indian cricketer who came to live in Britain in the 1920s and described how 'personal slights' were 'an unpleasant part of life in Britain for anyone of my colour.' But evidence from thousands of censored letters, secret reports from the Ministry of Information's Home Intelligence division, surveys for Mass Observation (the nascent polling organisation), as well as editorials and letters to newspapers and government departments, shows this confusion amongst ordinary Brits soon morphed into outright rejection of the 'colour bar' – and decided support for African American troops. The sentiment ranged from George Orwell, who kicked off his first article for Tribune with 'The general consensus of opinion seems to be that the only American soldiers with decent manners are the Negroes', to a Blackpool factory worker raging against how '…the American troops literally kick, and I mean kick, the coloured soldiers off the pavement.' Whatever British and American officials would have people believe, displays of discrimination and violence shamelessly paraded on British cobbles and village greens provoked general sympathy amongst ordinary British people for the African American soldiers who came to trial and train for D-Day – and put an invisible wedge in Anglo-American relations.

An American Uprising in Second World War England: Mutiny in the Duchy tells the story of the soldiers, the trial and what this meant for Britain, America and what has subsequently been dubbed the ‘special relationship’.

 

Court

Turning that first page of the original court martial transcript, which arrived courtesy of a freedom of information request, was like beginning a film script. So too was the narrative that developed behind why the shooting happened, which I pieced together using once-secret government documents from various sources, including the National Archives, the National Archives of America and the British Library. By the time I found out that the targeted soldiers in Launceston happened to be tasked with Omaha Beach on D-Day, it felt almost inevitable. The extraordinary timeline around this Launceston uprising made the 581st Ordnance Ammunition Company the men – and 26 September 1943 the hour. It was a slam dunk of a story and needed to be told, especially since, nearly 80 years on, nobody knew what had really happened here, why, or the ultimate fate of those involved.

The story began with the United States army that came to trial and train for D-Day, which was segregated, mimicking the 'Jim Crow' separation of society in the American south. One in every ten of its soldiers was African American, and eventually 130,000 came to the UK before journeying to France after D-Day. With the rare exceptions of units such as the Tuskegee airmen and the 320th Barrage Balloon Battalion, these servicemen soon discovered they would be fighting from the supply side of things – the decidedly more inglorious face of battle, incorporating the Quartermaster Corps, the Corps of Engineers and the Transportation Corps. Their training experience was universally discriminatory, oppressive and – all too often – violent. This fractious rubbing alongside of African American and white soldiers was happening in camps across the nation. The time the Launceston 'mutineers' spent in training was embarrassingly typical, according to Walter White, secretary of the National Association for the Advancement of Colored People (NAACP), who discovered they were repeatedly denied the chance for rest and recreation.

Outside United States Army camps, general racial tension spilled over in the long hot 'bloody' summer of 1943, when full-blown fights, riots and clashes flared in five American cities. The Second World War had heightened inequality between black and white communities over housing, work and even who got plaudits for fighting, and feuding broke out first in the streets of Los Angeles. Next it exploded in Detroit, leaving 34 dead – 25 of whom were black – before ending in New York, when rioting erupted after a policeman killed a black soldier. The ripple effect in the military was almost tangible and inevitably floated across the Atlantic to Britain with each wave of arriving servicemen. It was on this swell that the 581st Ordnance Ammunition Company came riding into Cornwall.

Although ruling ‘an empire on which the sun never sets’, there were surprisingly few people of colour in Britain itself when the Second World War started. The black British community was no bigger than about 15,000 and centred mainly in port cities such as Bristol, Cardiff and Liverpool. Unsurprisingly, in such a mono-cultural nation, racism was bound to exist. Race riots broke out in 1919 around those same British port towns leaving five dead, hundreds injured and 250 arrested. Learie Constantine, the cricketer who moved from the West Indies to Britain in 1923, described how ‘personal slights’ were ‘an unpleasant part of life in Britain for anyone of my colour.’ And this only increased, with depressing predictability, probably more frequently in the upper than lower echelons of British society, once the segregated Americans arrived.

Plentiful anecdotal evidence of American scuffles being played out on English cobbles and greens proliferated as black soldiers were pushed out of pubs, off buses and away from cinemas. However, a fresh look at the evidence shows this in fact inspired a powerful British feeling about the visiting American army and race – recorded everywhere from Mass Observation and weekly secret Home Intelligence Division reports to newspaper editorials and letters picked up by the censors. The story was nearly always the same. As George Orwell wrote in his first piece for Tribune: 'The general consensus of opinion seems to be that the only American soldiers with decent manners are the Negroes.' Mass Observation, the nascent polling organisation, concluded that feelings about Americans 'can be fairly sharply divided into feelings about white and coloured troops. As a general rule…the latter have made themselves more liked in this country.' The feeling stretched from Blackpool, where one report stated: 'I have personally seen the American troops literally kick, and I mean kick, the coloured soldiers off the pavement', to Essex, where a 'particularly disgusted' father protested angrily to the Foreign Office that American white soldiers had set upon a black soldier who 'dared' to take to the floor with a white woman at a dance.

 

American disputes

Put simply, the British sided with the underdog and were beginning to involve themselves in American disputes up and down the country. One of the most extreme cases was in Bamber Bridge, Lancashire, where one soldier was killed and several MPs and soldiers injured in an armed incident sparked by heavy-handed military policing in June 1943. Here, the British servicewomen and locals drinking at Ye Olde Hob Inn public house backed the African Americans. Two hundred-odd miles away in Corsham, Wiltshire, just a few days later, the violence of American military policemen towards African American soldiers again caused a near riot. Head of Southern Command Sir Harry Haig reported: 'A large group of civilians gathered and were heard saying: "They don't like the blacks"; "Why don't they leave them alone?"; "They're as good as they are"; "That's democracy." The situation eventually developed into one of mass insubordination by the coloured troops, and at one point a coloured sergeant who had been ordered to bring his Company Commander replied: "We aint no slaves, this is England."' The clash in Launceston is a perfect reflection of both the emotions within the US Army and its outward-facing relations with the British home front at that precise moment in time. After that, things only got worse. A month before D-Day, a US Army morale report noted tersely that 'the whites dislike the Negroes and the Negroes dislike the whites…The predominant note is that if the invasion doesn't occur soon, trouble will.' Clearly, the 'colour bar' was a wedge in the American army, and it was something the authorities were determined to obfuscate.

By a quirky twist of fate, the 581st Ordnance Ammunition Company arrived in Cornwall, slap bang in the heart of GI country, in the dying ebbs of that scratchy summer. The 29th Infantry Division had relocated from Tidworth Barracks, Wiltshire, to Devon and Cornwall in May 1943, where it planted its three principal units, and it was the Second Battalion of the 115th Infantry Regiment that came to Launceston and built a base for itself at a farm on top of a hill nearly a mile from the town's market square – and half a mile from the African American soldiers' base at Pennygillam. It is difficult to exaggerate just how much swing and glamour forced its way through the cobbles and winding country roads of this market town edging Bodmin Moor as a result of their arrival. Clinging to the coat-tails of the incoming US Army were untold luxuries, from Hershey's chocolate and Lucky Strike cigarettes to visits from big band leader Artie Shaw and boxing legend Joe Louis. But underneath all the glamour pulsed a racial tension beating at the heart of the US Army, which turned some British people against white GIs and hurled an invisible lance into Anglo-American relations.

Curiouser still, the men arrived days after events in Britain polarised feeling about 'the colour bar', or segregation, once and for all, starting with that most quintessential bastion of British sport – cricket. On September 3, news leaked that Learie Constantine, captain of the West Indies and a professional cricketer in England since the 1920s, had been thrown out of a London hotel because of American complaints. Newspapers had a field day. The response was a national outcry, monitored secretly by the British government's Home Intelligence Unit. Hot on its heels came the case of Amelia King, a young black British woman from Stepney, who was refused entry to the Women's Land Army because it was felt white farmers would reject her help solely because of her ethnicity. Instead of taking the rejection lying down, she coolly raised it with her MP, who voiced the outrageous situation in Parliament four days after the 581st arrived in Britain – and barely a week after the Constantine scandal erupted. It was the deciding blow. What followed was an almighty row about the blindingly unfair treatment of Constantine and King, which rumbled on throughout September and October, culminating, just days before the Paignton court martial opened, in a volcanic poll for Mass Observation revealing that 75 per cent of respondents felt 'definite disapproval' of the colour bar.

When the 581st Ordnance Ammunition Company arrived in September, they had been confined to their last two camps in America. At their first roll call in Cornwall, they were told they were to be restricted for a third time, as they did not have the correct 'dress uniform' to go into town – although that didn't seem to stop their fellow white soldiers. It was the final straw. The American authorities tried repeatedly to censor the reporting of the shooting that followed: firstly, by trying to ban the reporting of race in the Paignton court martial, which had, by law, to be held in public – a move foiled by a plucky objection from the Daily Mirror. Next, it banned the public reporting of the sentence. This was precisely because of what the episode said about the state of its army's internal relations and the truth it revealed at the heart of what would subsequently be dubbed the 'special relationship'. George Orwell was alive to it, the trial itself caused Churchill 'grave anxiety', and it was something the authorities wished would just go away. But it couldn't, because the shootout that happened in Launceston one September night in 1943 was both a result and a reflection of race relations in Britain in that tunnel of time – of the enmity between white and black Americans and the sympathy of Brits for African Americans. It explained the court martial's bulging press benches – and why it made headlines all over the United Kingdom and the United States.

 

An American Uprising in Second World War England: Mutiny in the Duchy recounts what happened next in this fascinating episode that crosses over between military and social history. It is available here in the UK through Pen & Sword Books. The book is available to US readers here.

There is a particular kind of thrill that comes from reading a historical document in its original language. The words land differently — heavier, more alive, stripped of the buffer that translation provides. For anyone drawn to the Spanish Civil War (1936–1939), one of the twentieth century's most passionate and heartbreaking conflicts, learning Spanish is not merely a practical skill. It is an act of historical immersion. The slogans, the poetry, the propaganda, the letters home from the front — none of them survive translation entirely intact.

The war itself was fought in language as much as in trenches. Republicans rallied around ¡No pasarán! — "They shall not pass!" Nationalists answered with ¡España, una, grande, libre! Both sides understood that words could mobilize, terrify, and outlast bullets. To understand the war, you need to understand the language in which it was waged.

Catherine Hryhorenko explains.

The Language of the Republic: Solidarity, Struggle, and Soil

Republican Spain drew on a rich vocabulary of labor and revolution, much of it borrowed from anarchist and socialist traditions. Terms like compañero (comrade), milicia (militia), and frente popular (popular front) were not just descriptors — they were identity markers that signaled allegiance and ideology. George Orwell, who fought with the POUM militia in Catalonia, wrote extensively about how the language of the Republic shaped daily life in Homage to Catalonia. Even the act of addressing strangers with the informal tú rather than the formal usted was, for a time, a political statement.

Dr. Helen Graham, one of Britain's leading historians of modern Spain and author of The Spanish Republic at War, has argued that understanding Republican Spain requires grappling with the fragmented and contested nature of the movement itself. "The Republic was not a single thing," she has noted, "but a coalition of sometimes opposing visions — and that fracturing was visible in the language different factions used to describe the same events." Reading Republican pamphlets and manifestos in Spanish reveals these fault lines with a precision that no secondary source can fully replicate.

 

Franco's Words: The Rhetoric of the Nationalist Uprising

The Nationalist side deployed a different lexicon, one rooted in Catholic nationalism, military honor, and the language of "crusade." Franco's regime famously described the coup as a cruzada — a term loaded with centuries of Reconquista mythology. The word patria (homeland) and orden (order) appeared incessantly in Nationalist broadcasts and newspapers, framing the uprising as a defense of timeless Spanish values against a foreign "red" menace.

Paul Preston, the preeminent English-language historian of the Franco era and the author of the exhaustive biography Franco: A Biography, has spent decades analyzing how the regime constructed and weaponized language. Preston emphasizes that Francoist rhetoric was deliberately designed to make the war's violence seem both necessary and righteous — and that this rhetorical architecture is most visible when read in the original Spanish, where the cadence and connotation of the vocabulary carry a weight that translations inevitably flatten.

The effects of this linguistic policy stretched well beyond the war itself. As scholars have documented, Franco's censorship regime — which required every book published in Spain between 1936 and 1966 to pass through a national board of censors — distorted the country's cultural memory for generations. Understanding that distortion requires reading both what was published and what was suppressed, and for that, you need the language.

 

Poetry, Song, and the Cultural Memory of the War

No discussion of the Spanish Civil War's relationship to language would be complete without acknowledging the extraordinary literary culture it produced. Federico García Lorca, murdered by Nationalist forces in the opening weeks of the conflict, had already established Spanish poetry as a living, politically charged art form. Miguel Hernández, the shepherd-poet from Orihuela who fought for the Republic and died in Franco's prisons, wrote verses whose power rests entirely on the music of Castilian Spanish.

The Republican anthem Ay Carmela, sung by soldiers at the front, draws on folk traditions that make immediate sense to a Spanish speaker but that require lengthy annotation for anyone relying on a translation. The same is true of the labor anthem A las barricadas and of the countless corridos and coplas that circulated through both camps. These texts are primary sources. They record what people believed, feared, and hoped for. Learning Spanish means being able to encounter them on their own terms.

 

Regional Languages: Catalan, Basque, and the Complexity of Spanish Identity

Castellano — standard Spanish — is only part of the picture. The Civil War also played out through Catalan, Basque, and Galician, languages whose suppression under Francoism became a form of cultural erasure. Barcelona, the anarchist heartland, was a Catalan-speaking city. The Basque Country, despite being predominantly Catholic, largely sided with the Republic in defense of its regional autonomy — a political irony that makes little sense without understanding the linguistic and cultural stakes involved.

Dr. Mary Vincent, a historian at the University of Sheffield whose work focuses on religion, gender, and the Spanish Civil War, has written about how Francoist language policy — the brutal suppression of regional languages in favor of a monolithic Castilian identity — was itself a form of violence. The history of this suppression is still being written. Spain's ongoing excavation of mass graves and the parallel effort to recover silenced languages and cultures are, in many ways, the same project. To read Spanish is to begin understanding this landscape; to learn something of Catalan or Basque alongside it is to understand why the war's wounds have never fully healed.

 

Primary Sources: What You Miss in Translation

The Spanish Civil War is one of the most extensively documented conflicts in modern European history. The archives in Salamanca, Seville, and Barcelona hold millions of documents — interrogation transcripts, military orders, personal correspondence, newspaper archives — the vast majority of which have never been translated into English. For any serious student of the period, Spanish is not optional. It is the key to the room where the real evidence lives.

Beyond the archives, there is a rich tradition of Spanish-language scholarship on the war. The most nuanced contemporary research, including work on memory, trauma, and the ongoing excavation of mass graves, is being produced by Spanish academics writing in Spanish. Figures like Julián Casanova, whose studies of violence and religion in the Civil War are foundational, write for a Castilian-speaking audience. Reading them in the original is a different experience from reading them in translation — there is a specificity of tone, a precision of vocabulary, that inevitably gets smoothed away.

 

Why Language Learning Is Worth the Effort: The Science Makes It Clear

If the historical case for learning Spanish isn't enough on its own, the cognitive one adds compelling weight. A study published in Nature Aging and covered by National Geographic analyzed data from more than 86,000 adults across 27 European countries and found that people who regularly use more than one language are half as likely to show signs of biological cognitive aging. "It's never too early or too late to start learning another language," Northwestern University professor Viorica Marian told National Geographic.

There are professional returns too. A Forbes analysis of workforce data found that multilingual workers earn an average of 19% more than their monolingual counterparts, with 40% reporting that their language skills directly helped them secure their job. But for the history enthusiast, the personal return is arguably richer than any salary premium: the ability to read a 1937 Republican pamphlet, or a Francoist newspaper editorial, or a letter from a miliciano to his family in Andalusia — in the language in which it was written.

 

Learning Spanish for History: Where to Start

For history enthusiasts who want to deepen their engagement with the Spanish Civil War, getting started in Spanish can feel daunting. A sensible first step is to check your Spanish level — knowing where you stand makes it far easier to build a focused study plan. And purpose-built learning platforms have made that entire process more accessible than ever — for a subject as rich and specific as this one, having the right tool matters.

One platform worth considering is Promova, a Ukrainian-founded language learning app that now serves over 20 million learners in 190 countries. Promova offers structured Spanish courses built around real-life scenarios and designed by professional linguists — making it well-suited for adult learners who want to build vocabulary with purpose rather than memorize phrase lists. The platform includes AI-powered conversation practice, bite-sized lessons, and dedicated programs for professional learners through Promova for Business — useful for organizations or academic departments looking to build language skills across a team.

Promova has also worked to make language learning feel culturally grounded, including through a course developed in partnership with Oleksandr Usyk, a reflection of the platform's commitment to connecting language learning to real human stories.

Elly Kim, e-learning lead at Promova and a linguist with extensive experience designing language courses, reflects on the connection between language and historical understanding: "Learning a language is never just about grammar or vocabulary — it's about understanding the people who lived and breathed that language, what they cared about, what they feared. For students of history, that connection is everything." You can explore her work and educational writing at promova.com/author/elly-kim.

 

Expert Tips: Making Language Learning Work for Historical Research

Historians and language educators who work at the intersection of these two fields tend to share a few common pieces of practical advice:

Start with the era's vocabulary. Paul Preston has noted in interviews that Civil War Spanish has its own register — terms from military organization, anarchist theory, and Catholic nationalism that won't appear in a standard beginner's course. Building a glossary of period-specific vocabulary early pays dividends when you encounter primary sources.

Read newspapers from the period. The Hemeroteca Digital of the Biblioteca Nacional de España has digitized thousands of newspapers from the 1930s, many of them freely accessible. Even reading headlines is useful for building historical context and period-appropriate vocabulary.

Don't neglect Catalan. Helen Graham has consistently emphasized that Catalonia's role in the conflict is underappreciated by English-language readers. Even a basic familiarity with Catalan helps orient you in Barcelona-centered primary sources and in the broader political landscape of the Republic.

Use literature as a bridge. Julián Casanova has suggested that fiction — particularly novels by contemporary Spanish authors like Almudena Grandes, whose Episodios de una guerra interminable series reimagines the war and its aftermath — is one of the most effective ways to absorb historical vocabulary in context. The stories carry you along; the language does its work quietly.

 

A Living Archive

The Spanish Civil War ended in 1939, but it has never really stopped being fought — in memory, in politics, in the ongoing excavation of unmarked graves across Castile and Andalusia. Spain's 2007 Law of Historical Memory and its 2022 successor have made the recovery of Republican victims an active national project, generating new documents, testimonies, and debates in Spanish every week.

For anyone who cares about this history, learning Spanish is an investment that compounds with every document you open. Each new word is a small key. There are more doors in the archive than any one person could open in a lifetime — but every one of them is worth trying.

 

 

Please note that this piece contains sponsored links. These help us with site running costs and are in no way affiliated with the site.

Posted by George Levrier-Jones

In the long and often turbulent history of the Medal of Honor, one name stands entirely alone: Mary Edwards Walker. She remains the only woman ever to have received the United States' highest military decoration, and her life was as unconventional as the distinction itself. A surgeon, reformer, prisoner of war, and tireless advocate for women's rights, Walker's story is inseparable from the upheaval of the American Civil War, a conflict that reshaped the nation and, in her case, opened a narrow but historic path into military service.

Terry Bailey explains.

Mary Edwards Walker.

Mary Edwards Walker was born in 1832 in Oswego County, New York, into a household that quietly defied many of the era's expectations. Her parents were progressive thinkers who believed firmly in education, self-reliance, and physical health. Her father, a farmer with reformist views, insisted that his daughters receive the same rigorous schooling as his sons—an unusual stance in mid-nineteenth-century America. From an early age, Mary absorbed the idea that intellectual capacity was not determined by gender. She also rejected restrictive clothing, later arguing that heavy skirts and corsets were both unhealthy and symbolic of women's social confinement.

Her determination led her to pursue a medical career at a time when female physicians were rare and frequently dismissed. She enrolled at Syracuse Medical College, one of the few institutions willing to admit women, and graduated in 1855 with a Doctor of Medicine degree. Even with credentials in hand, she struggled to establish a practice. Patients were hesitant to trust a woman doctor, and professional networks largely excluded her. Yet she persevered, convinced that her skills would eventually find their proper arena.

That arena emerged with the outbreak of civil war in 1861. As the Union and Confederate states mobilized for what would become a four-year struggle of unprecedented scale, medical services were rapidly overwhelmed. Disease—typhoid, dysentery, pneumonia—claimed more lives than bullets, and battlefield surgery was often conducted in makeshift tents or barns with limited anesthesia and rudimentary sterilization. Determined to serve, Walker travelled to Washington, D.C., and petitioned the War Department for a commission as an army surgeon. She was refused solely because she was a woman.

Undeterred, she offered her services as a volunteer and began working in Union hospitals. Over time, her persistence and demonstrated competence earned her a contract appointment as an acting assistant surgeon with the Army of the Cumberland. This placed her in the Western Theatre of the war, where campaigns through Tennessee and Georgia were marked by relentless maneuvering and ferocious engagements. The struggle for control of strategic rail hubs such as Chattanooga and the drive toward Atlanta produced waves of wounded soldiers, and medical personnel worked under constant strain.

Walker frequently placed herself near the front lines, tending not only to Union troops but, when possible, to civilians caught in the crossfire. Her medical practice was guided by both professional duty and humanitarian conviction. In April 1864, during operations connected to the Atlanta Campaign, she crossed into territory controlled by Confederate forces to treat wounded men and suffering civilians. It was a bold and dangerous act. Confederate soldiers arrested her, suspecting that a woman in modified military attire moving between lines must be a spy. She was transported to Richmond, Virginia, and held as a prisoner of war. Confinement was harsh, food scarce, and uncertainty constant. Yet Walker endured several months of captivity before being exchanged in August 1864 as part of a formal prisoner swap. Her experience gave her a rare distinction: she was one of the few women formally held as a prisoner of war during the conflict.

In 1865, after the war had drawn to a close, President Andrew Johnson signed the order awarding Mary Edwards Walker the Medal of Honor. The citation recognized her meritorious service, devotion to the wounded, and steadfastness during captivity. Although the criteria for the award were broader in the nineteenth century than they would later become, her work near the front lines and her imprisonment under enemy authority were extraordinary by any standard.

Decades later, in 1917, a review board reassessed earlier awards and rescinded hundreds of Medals of Honor deemed inconsistent with newly tightened combat requirements. Walker's medal was among those revoked. She refused to surrender it, asserting that her service had been honorable and that no bureaucratic revision could erase lived reality. She continued to wear the medal daily, a small but potent act of defiance. In 1977, long after her death, the U.S. government restored her award, reaffirming her singular place in American military history.

Walker's postwar years were as combative in their own way as her time in uniform. She became a prominent advocate for women's suffrage, lecturing across the country and arguing that the Constitution already guaranteed women the right to vote. Her reformist zeal extended to dress reform; she adopted tailored jackets and trousers, insisting that practicality and health should outweigh social convention. For this she was ridiculed and occasionally arrested for "impersonating a man," yet she remained resolute. While she sometimes clashed with more cautious leaders within the suffrage movement, her independence and courage commanded respect.

Mary Edwards Walker died in 1919, just one year before the ratification of the Nineteenth Amendment secured women's suffrage nationwide. She did not live to cast a ballot in a federal election, but her life had already redefined the boundaries of possibility. In war, she proved that medical skill and personal bravery transcended gender. In peace, she continued to challenge the assumptions that had once barred her from a commission. Her Medal of Honor—awarded, revoked, and restored—serves as more than a military decoration. It stands as a testament to endurance in the face of prejudice, to professional commitment under fire, and to a lifetime spent pressing against the limits imposed by society. In Mary Edwards Walker's story, the upheaval of civil war intersected with the broader struggle for equality, and from that intersection emerged a legacy unlike any other in American history.

 

The site has been offering a wide variety of high-quality, free history content since 2012. If you’d like to say ‘thank you’ and help us with site running costs, please consider donating here.

Posted by George Levrier-Jones

Early in the U.S. Civil War, a prisoner exchange system was developed by agreement between the two sides. It called for the exchange of all captured soldiers on terms based on rank. Once exchanged, these soldiers could return to their units. Any balance remaining after equal exchanges was to be paroled and not to take up arms again until formally exchanged.

Lloyd W. Klein explains.

John A. Dix.

The Dix-Hill Cartel

On July 22, 1862, Union Major General John A. Dix and Confederate Major General Daniel Harvey Hill concluded an agreement for the general exchange of prisoners between the Union and Confederate armies. A scale of equivalents was developed: an officer could be exchanged for a set number of enlisted men, with higher ranks worth more privates, and men awaiting exchange were paroled, barred from serving in any military capacity until officially exchanged. The scale operated as follows:

Soldiers of equivalent rank were exchanged one for one,

Corporals and sergeants were worth two privates,

Lieutenants were worth four privates,

A captain was worth six privates,

A major was worth eight privates,

A lieutenant colonel was worth 10 privates,

A colonel was worth 15 privates,

A brigadier general was worth 20 privates,

A major general was worth 40 privates, and

A commanding general was worth 60 privates.
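The scale above amounts to a simple lookup table valuing every rank in privates. As a purely illustrative sketch (the dictionary, function, and rank spellings below are my own shorthand, not drawn from the period documents), it could be modeled like this:

```python
# The Dix-Hill scale of equivalents, expressed as a lookup table:
# each rank's exchange value, measured in privates.
DIX_HILL_SCALE = {
    "private": 1,
    "corporal": 2,
    "sergeant": 2,
    "lieutenant": 4,
    "captain": 6,
    "major": 8,
    "lieutenant colonel": 10,
    "colonel": 15,
    "brigadier general": 20,
    "major general": 40,
    "commanding general": 60,
}

def privates_equivalent(prisoners):
    """Total exchange value, in privates, of a list of (rank, count) pairs."""
    return sum(DIX_HILL_SCALE[rank] * count for rank, count in prisoners)

# Example: one colonel plus ten privates is worth 25 privates.
print(privates_equivalent([("colonel", 1), ("private", 10)]))  # 25
```

On these terms, for instance, one captured major general (40) could offset a colonel, a lieutenant colonel, a captain, and nine privates (15 + 10 + 6 + 9 = 40).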

 

The exchange system worked well in 1862, but there were irregularities on both sides, with paroled men nevertheless rejoining their units. Edwin Stanton wanted to suspend the exchanges because he felt that Southern soldiers weren't following the rules of the parole. Secretary Stanton also saw a potential for Union soldiers to abuse the parole system. The Confederates had begun paroling a number of Western prisoners unilaterally, including some two thousand taken at the April 1862 battle of Shiloh. The violations continued with the parolees from Vicksburg and Port Hudson.

Storming Fort Wagner.

Cessation of the Cartel and Its Implications

In September 1862, President Lincoln called for the enlistment of black soldiers into the Union armies as part of the preliminary draft of the Emancipation Proclamation. In December 1862, President Davis responded by issuing a proclamation that neither captured black soldiers nor their white officers would be subject to exchange, on the grounds that the black soldiers were fugitive slaves and subject to capital punishment.

In January 1863 the Emancipation Proclamation became official and the United States began the active recruitment of black soldiers. Jefferson Davis was incensed by this, and threatened severe actions.

President Davis made an official proclamation that black POWs were fugitive slaves. In May 1863, the Confederate Congress passed a joint resolution that formalized Davis' proclamation that black soldiers taken prisoner would not be exchanged: “That all commissioned officers in the command of said Benjamin F. Butler be declared not entitled to be considered as soldiers engaged in honorable warfare but as robbers and criminals deserving death, and that they and each of them be whenever captured reserved for execution.” Find the proclamation here: http://www.freedmen.umd.edu/pow.htm

 

The Lieber Code

The Lieber Code, issued as General Orders No. 100 in April 1863, was Lincoln's response to this crisis. The Code was not written as a direct reaction to the collapse of the Dix–Hill Cartel or Jefferson Davis's refusal to recognize Black Union soldiers as lawful combatants. But its final form, timing, and political purpose were heavily shaped by those crises. In effect, the breakdown of prisoner exchange and Confederate policy toward Black soldiers turned the Lieber Code from a general effort to codify the laws of war into a strategic and moral response to the Confederacy's stance.

Work on the code had begun in 1862, well before the collapse of the prisoner-exchange system.

Henry Halleck had long wanted an American codification of the laws of war. Francis Lieber had been thinking about such a code for years, drawing on European theory. The War Department’s Law of War Committee met in late 1862—months before Davis’s December 1862 proclamation threatening to treat captured Black Union soldiers as slaves and their white officers as criminals.

This was in essence the first major revision of the 1806 Articles of War.

While not the original motivation, the timing and urgency of the Lieber Code reflect the breakdown of the Dix–Hill Cartel (mid–late 1862) and Davis's policy. The Union needed a principled basis to suspend the Cartel. One of the Code's articles stipulated that the United States government expected all prisoners to be treated equally, regardless of color. By late 1862 it was clear that the Confederacy would not exchange or treat Black Union soldiers as POWs. The cartel was therefore unworkable. The Lincoln administration needed a legal and moral justification for halting exchanges without appearing to commit retaliation for its own sake.

The Lieber Code provided it—explicitly authorizing retaliation when the enemy violates the laws of war; the equal treatment of all lawful combatants regardless of race; the duty of the U.S. government to protect all its soldiers. This was crucial. It offered a codified, internationally resonant legal framework for the Union’s stance. The full document can be found here: https://avalon.law.yale.edu/19th_century/lieber.asp#sec3

Most of its ideas were incorporated into the Hague Convention of 1907, and remain among the fundamental rules of war to this day as an antecedent of the Geneva Conventions.

Francis Lieber was a German-American legal scholar. He had fought with the Prussian Army and been wounded at Waterloo. He later moved to the United States and taught for 20 years in South Carolina, where he was repulsed by slavery. In 1861 he became professor of law at Columbia University in New York City. Two of his sons fought for the Union; a third fought for the Confederacy and was killed in action. Halleck, a lawyer with an interest in international law, consulted Lieber regarding ethical dilemmas early on and, together with Stanton, invited him to undertake this project.

The Lieber Code expressly forbade giving "no quarter" to the enemy (i.e. killing prisoners of war), except in such cases when the survival of the unit that held these prisoners was threatened. It forbade the use of poisons, stating that use of such puts any force who uses them entirely outside the pale of the civilized nations and peoples; it forbade the use of torture to extract confessions or information; it described the rights and duties of prisoners of war and of capturing forces.

The Lieber Code is formulated as a series of articles, really statements of principle. Section III, comprising Articles 48–80, covers the principles involving prisoners of war. The Code directly addresses the question of black soldiers: all soldiers fighting under a recognized government are lawful combatants (Arts. 57–60), and no distinction may be made on "color, descent, or condition" once they are uniformed combatants. Retaliation is justified if the enemy mistreats prisoners on racial grounds (Arts. 27–29). These provisions were absolutely a response to Davis's proclamation of December 23, 1862 (declaring Black Union soldiers slaves "invading the South"), and the Confederate Congress's subsequent approval of that policy. Lieber himself acknowledged this: his correspondence with Halleck in early 1863 shows that the issue of Black POW protection was explicitly in mind as the code was being finalized.

 

What the Lieber Code Said About POWs

Key Principles:

Humane Treatment

Article 56: Prisoners of war are “public enemies” and not criminals. They are to be treated with humanity.

Article 75: Prisoners must not be “subjected to any revenge or other ill treatment.”

 

No Torture or Cruelty

Article 16: “Military necessity does not admit of cruelty…nor of torture to extort confessions.”

 

No Retaliation Against POWs

Article 59: Reprisals must not include harming POWs unless it is a direct retaliation for mistreatment of one’s own POWs—and even then, only under strict necessity.

 

Rights and Respect

Officers were to be treated in accordance with their rank.

Prisoners were protected from violence, pillage, or abuse.

 

Labor

Prisoners could be made to work (Article 76), but only within humane bounds and consistent with their rank.

 

Article 60 states the "no quarter" rule directly: "It is against the usage of modern war to resolve, in hatred and revenge, to give no quarter. No body of troops has the right to declare that it will not give, and therefore will not expect, quarter; but a commander is permitted to direct his troops to give no quarter, in great straits, when his own salvation makes it impossible to cumber himself with prisoners."

Article 70 forbade the use of poisons, stating that their use puts any force that employs them entirely outside the pale of civilized nations and peoples; the Code likewise forbade the use of torture to extract confessions or information, and described the rights and duties of prisoners of war and of capturing forces.

Article 58 directly addresses the use of Black soldiers: “The law of nations knows of no distinction of color, and if an enemy of the United States should enslave and sell any captured persons of their army, it would be a case for the severest retaliation, if not redressed upon complaint. The United States cannot retaliate by enslavement; therefore death must be the retaliation for this crime against the law of nations.”

 

Application to the Prisoner Exchanges

Originally Edwin Stanton wanted to suspend the exchanges because he felt that Southern soldiers weren't following the rules of the parole, and he saw a potential for Union soldiers to abuse the system as well; the Confederates had unilaterally paroled some two thousand prisoners taken at Shiloh in April 1862, and the violations continued with the parolees from Vicksburg and Port Hudson. It was Lincoln, the astute politician, who realized it would be unpopular to suspend exchanges for that reason alone, but that a suspension tied to the treatment of the USCT would be better accepted. The Code was therefore also a strategic countermove. The Union needed a public, intellectually credible "laws of war" document to show why exchanges were suspended, why retaliation policies were lawful, why Black soldiers had to be protected, and why the Confederacy was violating international norms. The Lieber Code gave the Lincoln administration a fully articulated legal and moral position, something European observers were watching closely.

https://www.nps.gov/ande/learn/historyculture/grant-and-the-prisoner-exchange.htm

 

The Lieber Code was issued unilaterally by the United States, and no other nation was bound by its formalities at the time. However, when the Confederates breached its principles, the US government needed to respond.

At Fort Wagner, several black prisoners from the 54th Massachusetts were not exchanged with the rest of the white soldiers who participated in the assault on Fort Wagner in July 1863. This is the infamous attack where Colonel Robert Gould Shaw was killed leading his men in a charge. When a Union officer asked the Confederates at Battery Wagner for the return of Shaw's body, he was informed by the Confederate commander, Brigadier General Johnson Hagood, "We buried him with his _____."

On July 30, 1863, President Abraham Lincoln issued General Order 252, which effectively suspended the Dix-Hill Cartel until the Confederate forces agreed to treat black prisoners the same as white prisoners. Large scale prisoner exchanges ceased by August 1863, resulting in a dramatic increase in the prison populations on both sides. Neither side was prepared for this sudden responsibility. The inhumane consequences on both sides are well known.


Other Alleged Examples

The fact is that it was official CSA policy to kill all black POWs. Confederate Secretary of War James A. Seddon, responding to General P. G. T. Beauregard's request for guidance on handling black POWs, outlined in a November 30, 1862, letter a policy of executing captured black soldiers as criminals guilty of breaking slave insurrection laws. You can find this brief letter here: http://historymaking.org/textbook/items/show/97

The Fort Pillow massacre occurred on April 12, 1864. The Congressional investigation into the battle concluded that the massacre was consistent with official CSA policy. The next month, in May 1864, the Confederacy passed a law stating that black U.S. soldiers captured while fighting against the Confederacy would be turned over to the states, where they would be tried according to state law.

The exchange system had collapsed in late 1863 because of the failure of Confederate prisoners (and their government) to observe paroles, most notably those issued to the surrendered garrison of Vicksburg. When Union soldiers captured some of those unexchanged soldiers at Chattanooga, Stanton decided that something had to be done. Making matters worse, the Confederacy refused to exchange black Union soldiers. Stories that Confederate soldiers murdered black captives carried more impact after Nathan Bedford Forrest's men stormed Fort Pillow on April 12, 1864, and killed black soldiers who were attempting to surrender.

 

Actual Treatment of POWs

The Union treatment of Confederate POWs generally aligned with the Lieber Code, especially early in the war. Large prison camps like Camp Douglas (IL) and Point Lookout (MD) had harsh conditions—exposure, poor sanitation, and disease. Sherman reportedly used POWs to clear land mines outside of Savannah. That wasn’t expressly against the Lieber Code, but it would be forbidden today. As the war progressed and prisoner exchanges collapsed (due in part to Confederate refusal to exchange Black Union soldiers equally), conditions worsened, with overcrowding and high death rates.

 

The Confederate treatment of Union POWs was notably worse, especially at Andersonville (Camp Sumter) in Georgia. It was an outdoor prison built for 10,000 that held over 30,000 at its peak.

It offered minimal shelter, contaminated water, and inadequate food. Nearly 13,000 of 45,000 prisoners died, a mortality rate of roughly 29%. Commandant Henry Wirz was tried and executed after the war for war crimes, one of the few such examples.

 

Did Grant End the Exchanges?

It is often erroneously claimed that General Grant ordered the suspension of the Dix-Hill Cartel; in fact, he was not yet general-in-chief at the time and had nothing to do with it. https://www.nps.gov/ande/learn/historyculture/grant-and-the-prisoner-exchange.htm

 

It is taught in most history books that the exchange system ended during the Overland Campaign. This quote is usually presented as proof that General Grant ended the system:

"It is hard on our men held in Southern prisons not to exchange them, but it is humanity to those left in the ranks to fight our battles. Every man we hold, when released on parole or otherwise, becomes an active soldier against us at once either directly or indirectly. If we commence a system of exchange which liberates all prisoners taken, we will have to fight on until the whole South is exterminated. If we hold those caught they amount to no more than dead men. At this particular time to release all rebel prisoners North would insure Sherman's defeat and would compromise our safety here." – General Ulysses S. Grant, August 18, 1864.

The myth is that Grant eschewed the exchanges to prevent the Southern armies from regaining their captured men, thus favoring the Union side. Supposedly he did it because of the callous arithmetic of the war, calculating that by stopping exchanges the Union armies could simply outlast the Confederates. In fact, President Abraham Lincoln suspended the Dix-Hill Cartel in retaliation for the Confederacy's refusal to exchange black soldiers captured in the summer of 1863.

During the summer of 1864 Grant pointed out that the refusal to exchange prisoners, however harsh it might seem, drained the Confederacy of much needed manpower; exchanged Confederates would return to the ranks to kill more Yankees, complicating calculations based on the supposed humanity of exchanges. As you can see, Grant wrote this almost a year after the exchanges had stopped. It is fascinating that this is the quote that appears on the Wirz monument, in an attempt to shift blame for Andersonville onto Grant.

In the late summer of 1864, a year after the Dix-Hill Cartel was suspended, Confederate officials approached Union General Benjamin Butler about resuming the cartel and exchanges, including black prisoners. Butler, the Union Commissioner of Exchange, contacted Grant for guidance on the issue. Grant responded on August 18, 1864 with this statement. In their conversation, Grant informed Butler that he approved an equal exchange of soldier for soldier, but did not approve a full resumption of the Dix-Hill Cartel. His issue was with the cartel's stipulation that the balance after equal exchanges was to be paroled and sent home to await formal exchange. By August 1864, Confederate prisoners far outnumbered Union prisoners, so a resumption of the cartel would release thousands more Confederates. Grant also felt that once released, Confederate prisoners would likely violate their paroles and rejoin their units. Many of the Union prisoners, on the other hand, had already fulfilled their enlistments and would likely go home.

An agreement for resuming prisoner exchanges would not be reached until the winter of 1864-1865. Had Confederate authorities agreed to exchange black soldiers, however, the exchanges would have been resumed; and in January 1865 Confederate authorities agreed it was best to exchange "all" prisoners, regardless of color. The reality is that Grant did approve a prisoner for prisoner exchange that did in fact occur.

 

The Purpose of Rules of War

Creating rules or laws to govern war, an inherently unethical human behavior, embodies one of history's most painful and persistent tensions. The desire for moral restraint must be balanced against the brutal realities of war, and clearly, winning the war is the foremost goal. Are the "rules" or "laws" of war phony? No, they're not phony, but they are imperfect and often inconsistently applied.

The laws of war, such as those codified in the Lieber Code (1863), the Hague Conventions (1899, 1907), and the Geneva Conventions (especially after WWII), are real legal instruments. They’re backed by treaties, military doctrine, and in some cases, courts (like the International Criminal Court).

These laws serve several purposes. They limit unnecessary suffering, especially of civilians and prisoners. They maintain some moral legitimacy—for both domestic and international audiences. The rules prevent escalation into unbounded barbarism (e.g., genocide, torture as routine policy). And, they set standards for holding individuals accountable (think of the Nuremberg Trials or modern war crimes prosecutions).

But while the laws also protect the combatant, a major purpose is to protect the military and political leaders who order destruction and death. By following an international code of rules, they gain a built-in defense at war trials and in criminal prosecutions.

In conclusion, the Lieber Code was not conceived as a response to the collapse of the Dix-Hill Cartel or to Davis's policy on black soldiers, but those events decisively shaped its final content, the timing of its issuance, and its strategic purpose. It offered not only an ethical response to the problem but also a politically savvy public stance.

 

The site has been offering a wide variety of high-quality, free history content since 2012. If you’d like to say ‘thank you’ and help us with site running costs, please consider donating here.

Frustrated by the C grade he had received on a paper for a political science class, a University of Texas, Austin, student set out to change the Constitution. Here, Blaine Kaltman explains how Gregory Watson led the fight to ratify the 27th Amendment.

The United States’ Twenty-seventh Amendment. This is from the hand-written copy of the proposed Bill of Rights from 1789.

In the wake of ICE riots and controversial comments delivered by Billie Eilish and other celebrities, it is important to remember that policy in the United States is perpetuated through legislation. Congress writes the laws that dictate the nation's future. Working to create, enact, or change such laws is almost always the most productive way to ensure lasting change, far more productive than performative activism or violent confrontation.

And yet Congress is failing in its most basic task. Case in point: for decades, members of Congress have been promising comprehensive immigration reform but have never delivered. This is why many Americans question the wisdom of paying our Congressional representatives such high salaries, especially when it is up to the members of Congress themselves to decide what that salary shall be, a power authorized by Article I, Section 6 of the U.S. Constitution.

Currently the average salary for members of the House and Senate is $174,000, but should they want to raise their salary, all they have to do is vote to do so. The good news is that they would not be eligible to receive that pay increase until after the next election cycle. In other words, they can vote to raise their salary, but they won't get that raise unless their constituents reelect them. That clever check on our arguably overpaid, certainly unproductive Congress was an amendment to the Constitution drafted by James Madison. It reads: "No law, varying the compensation for the services of the Senators and Representatives, shall take effect, until an election of representatives shall have intervened." On September 25, 1789, Congress approved this amendment and sent it, along with the other amendments Madison had written, to the states for ratification. Ten of the amendments were quickly ratified, the ones we refer to today as the Bill of Rights. But the amendment dealing with Congressional salaries languished in limbo until a college student decided to effect real and permanent change. He set about doing this not by marching in the streets, damaging property, or profanely criticizing law-enforcement officers, but through an understanding of the Constitution and the hard work of letter writing.

Enter Gregory Watson, a 19-year-old college sophomore who in 1982 wrote a paper about Congressional salaries for a political science course he was taking at the University of Texas, Austin. In his paper Watson argued that the amendment Madison had proposed so long ago, intended to make members of Congress think twice before voting themselves a raise, was still "live" and could be ratified. He got a grade of C. But his paper wasn't graded by the professor; it was graded by the teaching assistant. Watson appealed the grade to the professor, who agreed with the teaching assistant, and thus the grade stood. Apparently, the professor teaching the course thought that Watson's contention that an amendment submitted to the states in 1789 could still be ratified was absurd. Watson disagreed. To prove his point, and for the betterment of the country, he started a letter-writing campaign asking state legislatures to ratify Madison's Congressional salaries amendment. He used what he wrote in his paper as ammunition, including a 1939 Supreme Court case, Coleman v. Miller, in which the Court ruled it was up to Congress to determine whether an amendment with no time limit for ratification was still viable.

It was. After much letter writing, lobbying, and even some states having to check historical records to determine whether they had ratified the amendment 200 years earlier, in May 1992 Michigan became the 38th state to ratify the 27th Amendment, making it a part of our Constitution. As a result, should members of Congress choose to increase their salary, they must answer to the voting public before they receive a raise.

In 2017 the University of Texas changed the grade on Watson's paper to a clearly deserved A. That same year, protests in Washington, DC on Inauguration Day resulted in property damage and 214 felony riot indictments. No positive change came from that incident, or from the many we have seen follow, certainly not when compared with what Watson peacefully accomplished by literally changing the Supreme Law of the land. The Founding Fathers understood that the pen is often mightier than the sword. They included the right to petition in the First Amendment, and it was words, debated without violence, agreed upon, and written down, that ensure the right to petition will remain a pillar of American society in perpetuity.

Dr. Martin Luther King, Jr., in his 1967 speech "The Other America," famously said, "A riot is the language of the unheard." But in America, as Gregory Watson demonstrated, we all have the opportunity to be heard and, moreover, to profoundly influence our collective future. Indeed, a disgruntled college student successfully added an amendment to the Constitution, which is far more significant than wearing a button at an awards ceremony or, worse, risking lives and damaging property.

What differentiates those who criticize America from those who actually do something to improve its condition is how they exercise that vital right of petition guaranteed by the First Amendment. Exercised productively and effectively, it allows every American citizen to make impactful change.

 

Blaine’s new book is Perfecting the U.S. Constitution: 27 and Counting, The Amendments that Shaped America’s Future (Amazon US | Amazon UK).


Few soldiers in modern military history have embodied quiet courage and relentless determination as completely as Charles Upham. A modest farmer from rural New Zealand, Upham remains one of the very few combatants ever to be awarded the Victoria Cross twice, earning the distinction of a Bar to his original decoration during the Second World War. His dual awards were not the result of a single dramatic flourish, but of sustained, repeated acts of conspicuous gallantry under devastating fire, carried out with a composure that astonished his comrades and embarrassed the man himself, who consistently downplayed his heroism.

Terry Bailey explains.

Charles Upham in 1941.

Charles Hazlitt Upham was born in 1908 in Christchurch, New Zealand, and grew up in a rural environment that instilled in him endurance, physical toughness, and self-reliance. He was educated at Christ's College in Christchurch and later studied agriculture at Lincoln College, eventually becoming a sheep farmer. Those who knew him before the war described him as intelligent, reserved, and possessed of a dry wit, but there was little outward sign that he would become one of the most decorated soldiers of the war. When New Zealand committed forces to the Allied cause in 1939, Upham volunteered for service, joining the 20th Battalion of the 2nd New Zealand Division, a formation that would see extensive action in the Mediterranean theatre.

Upham's first Victoria Cross was earned during the desperate fighting on the island of Crete in May 1941. The German invasion of Crete, launched under Operation Mercury, marked the first large-scale airborne assault in history. German Fallschirmjäger descended by parachute and glider, seeking to overwhelm British, Australian, New Zealand, and Greek defenders before reinforcements could arrive. The campaign quickly became chaotic and brutal, with isolated Allied units fighting determined delaying actions against better-coordinated German attacks supported by air superiority.

During the fighting around Maleme and Galatas, Upham displayed extraordinary courage over several consecutive days. Acting as a platoon commander, he repeatedly exposed himself to enemy fire in order to lead attacks, rescue wounded men, and reorganize defensive positions. On one occasion he advanced alone to silence a German machine-gun post, killing the crew with grenades and rifle fire. On another, though wounded in the shoulder by mortar fragments and later shot through the foot, he refused evacuation and continued to move among his men, encouraging them and directing fire. He personally carried wounded soldiers to safety under intense fire and launched counterattacks at critical moments when German forces threatened to break through. His conduct was described as "outstanding bravery and leadership," and in 1941 he was awarded the Victoria Cross for his actions during the Crete campaign. Although the Allies ultimately withdrew from the island, the stubborn resistance of units like Upham's delayed German consolidation and imposed significant casualties.

Upham's second Victoria Cross was earned the following year during the First Battle of El Alamein in Egypt in July 1942, one of the pivotal confrontations of the North African campaign. The desert war between Axis forces under Field Marshal Erwin Rommel and the British Eighth Army had see-sawed across Libya and Egypt, with both sides seeking control of the Suez Canal and the Middle Eastern oil routes. By mid-1942, Axis forces had pushed deep into Egypt, and the line near El Alamein became the last defensible position before Alexandria and Cairo.

Now a company commander, Upham again demonstrated conspicuous gallantry during fierce fighting against entrenched German and Italian forces. Over several days he led aggressive patrols and attacks against enemy strongpoints, often advancing across open desert under artillery and machine-gun fire. Despite being wounded multiple times, including by shell fragments and small-arms fire, he continued to lead from the front. In one action he crawled forward alone to destroy a truckload of German soldiers with grenades, and in another, he moved under direct fire to bring up ammunition and reposition anti-tank guns at a critical moment. Even after being severely wounded in the elbow, he refused medical treatment until he was physically incapable of continuing. Eventually, while attempting to break through encircling German forces, he was captured. For these actions, marked by sustained courage and disregard for his own safety, he was awarded a Bar to his Victoria Cross, making him one of only three men in history to receive the decoration twice.

His ordeal did not end with capture. As a prisoner of war, Upham proved as defiant as he had been in combat. He made repeated escape attempts from German camps, demonstrating ingenuity and sheer determination. His persistent efforts eventually led German authorities to classify him as particularly troublesome, and he was transferred to the notorious Colditz Castle, reserved for high-risk Allied prisoners. Even there, he continued to resist passivity, maintaining morale among fellow prisoners until the camp's liberation in 1945.

After the war, Upham returned to New Zealand and resumed farming in relative obscurity. He avoided publicity, declined opportunities for self-promotion, and rarely spoke of his wartime experiences. Those who met him in later life often remarked on his humility and discomfort with praise. He served on local boards and remained active in his community, embodying the same quiet sense of duty that had characterized his military service. When asked about his decorations, he consistently deflected attention toward the men who had served beside him, insisting that he had simply done his job.

Charles Upham died in 1994, but his legacy endures as a testament to steadfast courage under fire. His two Victoria Crosses were not the result of a single dramatic episode, but of repeated acts of leadership, endurance, and self-sacrifice in two of the most intense campaigns of the Second World War. In an era defined by mechanized slaughter and vast armies, Upham's story stands as a reminder that individual resolve and moral courage could still shape events on the battlefield. His life, from rural New Zealand farmer to double recipient of the Commonwealth's highest award for valor, remains one of the most compelling narratives of the war.

In assessing the life of Charles Upham, it becomes clear that his distinction lies not merely in the rarity of his two awards of the Victoria Cross, but in the character that underpinned them. The decoration itself is reserved for the most conspicuous bravery in the presence of the enemy; to receive it twice is almost without parallel. Yet Upham's greatness did not reside in medals, citations, or ceremony. It resided in an unwavering sense of responsibility to the men around him and in a refusal to accept limits imposed by fear, pain, or circumstance. Time and again, whether amid the chaos of airborne assault on Crete or the furnace-like conditions of the North African desert, he placed himself in harm's way not for glory, but because leadership demanded it.

What makes his story enduring is the consistency of that conduct. His gallantry was not an isolated flash of heroism under extraordinary pressure; it was sustained, deliberate, and repeated across campaigns separated by geography, time, and tactical conditions. Even in captivity, deprived of command and confined behind barbed wire, the same indomitable will manifested itself in escape attempts and quiet resistance. The qualities that defined him in battle—resilience, initiative, and moral courage—proved inseparable from the man himself.

Equally significant is the life he chose after 1945. In returning to farming and community life in New Zealand, he demonstrated that true courage requires no audience. He neither traded on his reputation nor sought to shape his own legend. Instead, he reaffirmed by example that service is an obligation fulfilled, not a platform for acclaim. In doing so, he reinforced the essential truth that heroism is often most authentic when it is least advertised.

Charles Upham's legacy therefore transcends military history. It speaks to the enduring value of integrity under pressure and humility in triumph. In a conflict that consumed millions and was fought on a scale previously unimaginable, his life reminds us that the course of events can still hinge upon the resolve of an individual. His story remains not only a chapter in the annals of the Second World War, but a benchmark by which courage itself may be measured.

 


 

 

Notes:

A well-known anecdote surrounds Charles Upham and his second award of the Victoria Cross. When the recommendation for a Bar to his existing VC reached George VI, the King is said to have asked, with some astonishment, whether Upham truly deserved a second such decoration. An officer familiar with the actions and fighting reportedly replied that, if anything, the recommendation understated the case — and that if strict justice were applied, Upham's repeated acts of gallantry might well have merited several more.

Prohibition is most often associated with the criminal activities of the infamous Al Capone, his nemesis Eliot Ness, and the numerous illegal speakeasies of the USA between 1920 and 1933, made memorable by decades of gangster films. Yet prohibition, after a fashion at least, was not confined to North America. It reached Britain and established itself in a quieter way in some remote towns in Scotland. Kirkintilloch was one of those towns and stands out among many for having prohibition laws that continued late into the twentieth century.

Steve Prout explains.

"L'Alcool est un Poison" from Belgium, 1910. This depiction contrasts "those who live from it" (those selling alcohol) with "those who die from it" (showing alcoholic and his family).

Kirkintilloch can be found eight miles from central Glasgow. The town started life as a Roman fort and led a largely quiet existence until the industrial revolution brought the benefit of the textile industry to its inhabitants. Further expansion followed with the building of the Forth and Clyde Canal in 1773 and later, in 1836, with the railways. The town became an important transport center for iron, coal, and other industrial needs.

Kirkintilloch might have remained just another industrial town, but it earned notoriety by becoming what was known as a "dry town," forbidding the sale of alcohol on public premises from 1923 until 1967. Kirkintilloch was not alone: it was one of many towns in Scotland that embraced prohibition. A ban on the sale of alcohol had long been demanded by both the Liberal Party and the Temperance movement, both of which had a strong influence in Scottish local politics in the early part of the 20th century. It was the combination of the Temperance movement and the outbreak of the First World War that created "dry towns" which long outlasted the infamous prohibition period of the United States.

 

The Temperance Movement and the Origins of “Dry Towns”

The origin of Scotland’s former dry towns lies with the Temperance movement, which led a moral crusade against alcohol consumption in the USA in the early 1800s. Its ideas soon reached Britain. The movement readily found supporters all over Britain but found its strongest and most lasting support in Scotland in the mid-1800s. The movement focused on the morally degrading effect that alcohol consumption had on society. One writer on the subject, Jack S. Blocker, in his book Alcohol and Temperance in Modern History, pays particular attention to the situation in Scotland. The book mentions a parliamentary report that contained alarming statistics concerning drink-related arrests between 1831 and 1851 in Scotland, and it painted Glasgow in a poor light. The report concluded that the situation was not helped by the ratio of drinking establishments to inhabitants: in Glasgow there was one licensed premises for every 150 inhabitants. As far as the Temperance movement was concerned, this was all the proof needed to convince potential followers of the case for assertive action.

The report boldly claimed that “Glasgow was three times more drunken than Edinburgh and five times more drunk than London.” Blocker also noted that among Scots, “like the Irish and unlike the English and Welsh, ordinary people drank a great deal of whisky.” We cannot confirm the accuracy of the data, which no doubt had its flaws, nor do the unhelpful and outdated stereotypes substantiate these claims. However, whether true, false, or exaggerated, the report caused enough alarm to fuel the Temperance movement’s crusade, which grew and gathered momentum.

Kirkintilloch was not the only dry town in Scotland; others soon joined the moral cause after the passing of the 1913 Temperance Act. At first the movement’s demands were limited to banning the sale of “strong and ardent spirits,” but soon those demands extended to the banning of all alcoholic drinks. Variants of the movement spread to other towns and regions in Scotland such as Paisley, Kilsyth, Wick, Lerwick, Greenock, Ayrshire, and Lanarkshire. In 1844 in Falkirk, the Scots Temperance League established itself in the community and promoted “the long pledge” of total abstinence among its members.

 

The Growth of Temperance

The first challenge the Temperance representatives had to overcome was encouraging the drinking population to surrender one of their few leisure pastimes, especially those who grafted through the long hours and harsh working conditions of the time. It was no easy task, but the members found innovative ways. Various movements offered extremely attractive terms in return for leading a teetotal life and being part of the movement.

In return for a serious commitment to a clear oath or pledge, the member would receive support in various forms from this new community. In some variants of the movement, benefits were offered such as entitlement to an early form of social-welfare insurance to draw on in times of need, an early example of a co-operative or a micro social-security system. This was certainly true of the Sons of the Temperance Society, which formed in the 1850s. Each faction had its own wording for the oath, but all carried one clear and unified meaning. The Hope of Coatbridge Section of the Cadets of Temperance (1878-1925), for example, had its members recite the following vow:

“We the undersigned promise to abstain from all intoxicating drinks and discountenance the causes and practices of intemperance and to abstain from tobacco in all its forms.”

 

Breaking these vows resulted in public condemnation, shaming, and exclusion.

In the towns where Temperance ideas took hold, the old-fashioned public house was replaced by other commercial ventures, which for a time flourished. Alternative establishments such as Temperance hotels, coffee houses, and tea rooms took the place of licensed premises, and the social scene was changing in some areas. Over twenty such establishments replaced public houses in Glasgow in 1840, and the movement was gaining political approval.

Meanwhile these societies continued to lobby and win the approval of political influencers. One peer commented: “Without these societies we should be involved in such an ocean of intoxication, violence and sin as would make this country quite uninhabitable.” The lobbying proved fruitful, and the best example of the movement’s success came from its work with Forbes Mackenzie, a Conservative MP. Mackenzie also happened to be a temperance reformer himself, and he introduced a number of changes to support the movement. These became law in the Public Houses (Scotland) Act of 1853, which forced pubs in Scotland to close at 10pm on weekdays and to remain closed on Sundays. Slowly but surely the consumption and supply of alcohol was being restricted, and, in some towns, further restriction was to come. Alcohol was not completely removed from people’s lives, however, and momentum remained slow until the early part of the twentieth century.

In 1906 the Liberal government passed legislation allowing communities to veto alcohol sales, and this was followed by the Temperance (Scotland) Act of 1913. Whether this alone would have been enough we will never know, because the political and social landscape changed further as Britain entered the First World War.

The Temperance movement was not wholly responsible for the creation of Scotland’s dry towns. The First World War and government measures brought further tight restrictions on the sale of alcohol with the Defence of the Realm Act 1914. The purpose of these restrictions was to ensure a productive pool of industrial labor to service the war effort. Stricter controls on public house opening hours were enforced, the strength of beer brewed was reduced, and an additional tax of a penny was charged on each pint of beer. Naturally, this changed attitudes and habits. Helped by the patriotic fervor for the war effort, many publicans chose to support the armaments industry and keep workers sober. They of course had little choice, as the incentives of their trade had been curtailed. One example of these measures was the “No Treating” rule, enforced between 1916 and 1919, which forbade the buying of rounds of drinks. Slowly but surely alcohol availability was diminishing, but the Temperance movement had not gone away and was far from finished. It in fact seized upon these gains at the end of the war.

The Temperance movement took advantage of the 1913 act to continue and expand its cause once the war ended. The result was that vast numbers of voters in the 1920 local polls opted for the abolition of alcohol sales, and towns such as Kirkintilloch went dry as a result. The cause was given further backing when supporter Edwin Scrymgeour was elected as a Scottish Prohibitionist Party MP for the Dundee constituency. He remained an MP until 1931.

 

Other Variants of Temperance

There were many variants of these Temperance societies. One, founded in the USA in 1851, was the Independent Order of Good Templars (born from a previous organization called the Sons of Temperance). After reaching Scotland, the order established its first Scottish lodge in Glasgow in 1869. One interesting facet of this variant was that it promoted equality of rights for women: an early forerunner of universal suffrage.

The movement demonstrated its sincerity by admitting women into its societies, thus placing them on an equal footing with men and encouraging them to be active board members. This was certainly the intention of Provost James Knox, who was also the manager of Airdrie Savings Bank from 1848 to 1861 (he also held the position of Chief Templar in the early 1900s).

 

The End of the Dry Period

The dry towns slowly relaxed their rules and departed from the ways of Temperance, though some chapters and their establishments lasted until late in the twentieth century. Moods had changed, and so had the world. A number of factors brought about that change. By the time of the Second World War attitudes had become more relaxed, and the Temperance ways were seen as outdated and irrelevant. Even the government did not impose the same restrictions on alcohol consumption between 1939 and 1945 as it had during the First World War; it may have been too much to impose such restrictions on the population a second time. That said, some areas of Scotland maintained their discipline until well after the end of the war. Kirkintilloch, for instance, finally abandoned its dryness in 1967.

In some cases, the remnants of the Temperance acts held on tenaciously a little longer. It took until 1976 for parliament to dismantle the legislation set out by Mackenzie in 1853, with some parts of Scotland, such as Kilmacolm, taking longer to embrace the new liberties. Kilmacolm finally acquired its own pub as late as 1998, when an old waiting room at the train station was converted into The Pullman. The grand opening was a memorable event, well attended by a thirsty crowd after seventy dry years.

 

Did you find that piece interesting? If so, join us for free by clicking here.


The American Civil War was one of the defining conflicts in American history. Not only did it threaten to divide the nation, but it also challenged the very foundation of American institutions, and it would go on to define the morals by which future generations would judge the United States of America. Between 1861 and 1865, the Union and Confederate states engaged in crucial battles that would determine the outcome of the war. From the First Battle of Bull Run (1861) to the Battle of Antietam (1862) and the Battle of Gettysburg (1863), each has its place in American history for shaping the Civil War's military, political, and moral course.

Caleb Brown explains.

Battle of Antietam by Thure de Thulstrup.

First Battle of Bull Run

On the morning of July 21, 1861, Union forces led by General McDowell met Confederate troops led by Generals Johnston and Beauregard in what would be the first major battle of the Civil War, the First Battle of Bull Run.[1] The Union, with high hopes for a quick victory, saw those hopes fade as its soldiers, lacking proper military training, grew weary and began to retreat.[2] Many civilians who had come to spectate in hopes of a crushing win were caught up in the confusion as they, along with the Union soldiers, retreated toward Washington.[3] The Confederate victory at the First Battle of Bull Run shattered hopes for a short war and boosted the morale of the South. In the wake of the Northern defeat, General George B. McClellan rose to command and wrote in a letter to his wife, "I am here in a terrible place, the enemy have from 3 to 4 times my force the President is an idiot, the old General in his dotage they cannot or will not see the true state of affairs. Most of my troops are demoralized by the defeat at Bull Run, and some regiments are even mutinous. I have probably stopped that, but you see my position is not pleasant."[4] After Bull Run, the Union had to concede that the war would not be quick and that more preparation was needed.

 

Battle of Antietam

On September 17, 1862, America would bear witness to the single bloodiest day in American history. By day's end, 22,717 Northern and Southern troops were dead, wounded, or missing as a result of the Battle of Antietam, fought in the Union state of Maryland.[5] The Battle of Antietam was the result of General Robert E. Lee's plan to invade the North for the first time in the war. Lee, however, fell victim to the “Lost Dispatch,” a copy of his military plans that found its way into the hands of Union soldiers. The resulting battle was a tactical draw between North and South; Lee, however, retreated, handing the Union a strategic victory. The battle effectively stopped the Confederates’ momentum in the eastern theater of the war and gave President Abraham Lincoln the victory he needed to announce his plans for the Emancipation Proclamation. The Confederates also lost their prospects of much-needed foreign recognition from Britain and France.[6] So, although the battle may have been a tactical draw, the South suffered a significant defeat that it would not be able to overcome.

 

Battle of Gettysburg

The most famous battle of the American Civil War, at least in popular culture today, is the Battle of Gettysburg, which took place in Adams County, Pennsylvania, in July 1863, with Lee's army facing General George G. Meade.[7] The Battle of Gettysburg would be a turning point in the Civil War: between July 1 and July 3, some 50,000 men were killed, wounded, or reported missing.[8] Lee had pushed north into Union territory in hopes of a victory that would force an end to the conflict. The battle unfolded over three fierce days of fighting on ground known as Little Round Top, Culp’s Hill, and the Wheatfield. General George E. Pickett led what became known to history as “Pickett’s Charge,” a failed attack that cost the Confederates a 60% casualty rate.[9] This was the final push for Lee at Gettysburg. Facing staggering losses, Lee retreated to Virginia, taking the hopes of a Confederate States of America with him.

 

Conclusion

In conclusion, every battle fought throughout the Civil War has its place in history and contributed to shaping the war's outcome in one way or another. The First Battle of Bull Run served as a wake-up call for the North, and as a result of the defeat the Union made changes to its army going forward: many more troops were requested, and training improved. The Battle of Antietam provided a political victory rather than a military one for the Union; as a result of the bloodiest day in American history, President Lincoln had cause to reveal his plans for the Emancipation Proclamation. Finally, the Battle of Gettysburg, although not the final battle of the Civil War, saw Lee’s Army of Northern Virginia suffer a massive defeat, effectively dashing the hopes of a successful invasion of Northern territory. Seeing every battle for its military, political, and moral implications helps provide a broader picture of the American Civil War.

 

Did you find that piece interesting? If so, join us for free by clicking here.

Bibliography

“The Battle of Antietam.” May 28, 2019. https://www.proquest.com/docview/2230470087?pq-origsite=summon&accountid=12085&sourcetype=Newspapers.

“First Bull Run.” American Heritage, 2011. https://go.gale.com/ps/i.do?p=BIC&u=vic_liberty&id=GALE%7CA271594560&v=2.1&it=r&sid=summon&aty=shibboleth.

“Gettysburg.” American Battlefield Trust. Accessed November 13, 2025. https://www.battlefields.org/learn/civil-war/battles/gettysburg.

Woodworth, Steven E. This Great Struggle: America’s Civil War. Lanham, MD: Rowman & Littlefield Publishers, 2012.


[1] Steven E. Woodworth, This Great Struggle: America’s Civil War (Lanham, MD: Rowman & Littlefield Publishers, 2012), 47.

[2] Ibid., 49.

[3] Ibid.

[4] “First Bull Run, American Heritage,” 2011, https://go.gale.com/ps/i.do?p=BIC&u=vic_liberty&id=GALE%7CA271594560&v=2.1&it=r&sid=summon&aty=shibboleth.

[5] “The Battle of Antietam,” May 28, 2019, https://www.proquest.com/docview/2230470087?pq-origsite=summon&accountid=12085&sourcetype=Newspapers.

[6] Ibid.

[7] “Gettysburg,” American Battlefield Trust, accessed November 13, 2025, https://www.battlefields.org/learn/civil-war/battles/gettysburg.

[8] Ibid.

[9] Ibid.

Corporal Tibor Rubin stands as one of the most compelling figures of the Korean War, a man whose life traced a harrowing path from Nazi concentration camps to the frozen hills of Korea and, ultimately, to the highest decoration the United States can bestow for valor. His story is not merely one of battlefield gallantry. It is a narrative shaped by genocide, survival, gratitude, and a long-delayed reckoning with prejudice inside the very institution he served with unwavering devotion.

Terry Bailey explains.

Tibor Rubin.

Rubin was born in 1929 in Pásztó, Hungary, into a Jewish family during a period when Europe was sliding toward catastrophe. His childhood was cut short by the rise of fascism and the spread of antisemitic laws that increasingly isolated and endangered Hungarian Jews.

Following the German occupation of Hungary, Rubin was deported to the infamous Mauthausen concentration camp in Austria. Still a teenager, he endured starvation, forced labor, brutality, and the ever-present specter of death. Thousands perished in the camp's granite quarries and barracks; Rubin survived through a combination of resilience, resourcefulness, and sheer will. When American forces liberated Mauthausen in 1945, Rubin later recalled being profoundly moved by the sight of U.S. soldiers—healthy, confident, and free. One American serviceman, he said, treated him with kindness and humanity at a moment when such gestures seemed almost unimaginable. That encounter left an indelible mark. Rubin resolved that if he ever made it to the United States, he would repay the nation that had rescued him from annihilation.

In 1948, he fulfilled that ambition. Arriving in America as an immigrant with limited English and little money, Rubin settled in New York and embraced his adopted homeland with fervor. When war broke out in Korea in June 1950, he saw an opportunity to honor his promise. He enlisted in the U.S. Army that same year, determined to serve the country he regarded as his liberator.

The Korean War erupted on the 25th of June 1950, when North Korean forces stormed across the 38th parallel in a surprise invasion of South Korea. The United States, acting under a United Nations mandate, rushed troops to defend the South. Early engagements were chaotic and costly. American and allied forces were driven into a shrinking defensive enclave known as the Pusan Perimeter. Only after General Douglas MacArthur launched the daring amphibious landing at Inchon did the tide temporarily turn. Yet by late 1950, the war shifted again as Chinese forces entered the conflict in massive numbers, launching brutal offensives that sent UN troops reeling southward through mountainous terrain and bitter winter cold.

It was during these desperate months that Rubin distinguished himself. In July 1950, near the Pusan Perimeter, his regiment came under intense North Korean assault. According to eyewitness accounts later included in his Medal of Honor citation, Rubin single-handedly manned a machine-gun position on a hill for twenty-four hours. Wave after wave of enemy soldiers attacked, but Rubin held his ground, inflicting heavy casualties and slowing the advance long enough for his unit to regroup and withdraw. His stand was not a dramatic flourish; it was a grim, grinding act of endurance, reminiscent of the tenacity that had sustained him in the camps of Europe.

Later that year, as Chinese forces surged into the war, Rubin again volunteered for a perilous task. During a chaotic withdrawal, he remained behind to cover his unit's retreat, engaging the enemy alone and allowing fellow soldiers to escape encirclement. His actions exemplified a pattern: whenever danger intensified, Rubin stepped forward rather than back. In November 1950, during fierce fighting, Rubin was captured by Chinese troops. What followed was more than two and a half years of imprisonment under appalling conditions. Food was scarce, sanitation was almost nonexistent, medical care was minimal, and disease spread rapidly. Prisoners endured relentless indoctrination efforts and the psychological strain of uncertainty. Many perished from malnutrition and exposure.

For Rubin, however, captivity was tragically familiar terrain. Drawing on the survival instincts forged in Nazi camps, he refused to surrender to despair. He slipped out of the prison compound at night, risking execution if caught, to scavenge for food. He stole rice and other provisions from enemy supplies and distributed them among weaker prisoners. He nursed the sick, carried the infirm, and offered comfort to those on the brink of death. Fellow prisoners later testified that his efforts saved numerous lives. To them, Rubin was not simply a comrade but a lifeline.

When the armistice was signed in 1953 and prisoners were exchanged, Rubin returned home, gaunt but unbroken. Many of his fellow soldiers believed he would soon receive the Medal of Honor. Recommendations had been submitted during the war for his extraordinary actions in combat. Yet the award never came. Over time, it emerged that a superior noncommissioned officer, motivated by antisemitism, had obstructed the recommendations or failed to process the necessary paperwork. In the climate of the early 1950s, such discrimination could quietly derail recognition without scrutiny. Decades later, a congressionally mandated review examined cases in which Jewish and Hispanic service members might have been denied awards due to prejudice. Rubin's case resurfaced as one of the most striking examples. Investigators confirmed that his heroism had been documented and recommended, but administrative bias had prevented proper consideration.

In 2005, more than half a century after his acts of valor, Tibor Rubin finally stood in the White House as President George W. Bush placed the Medal of Honor around his neck. The ceremony was both a personal triumph and a national acknowledgment of past injustice. The citation recognized not only his single-handed stand in combat but also his selfless courage as a prisoner of war. Rubin accepted the medal with characteristic humility. He often insisted that he had simply kept a promise—to repay America for his liberation.

In interviews, he deflected praise toward his fellow soldiers and reflected on the freedoms he cherished as an immigrant citizen. For him, the medal symbolized gratitude rather than vindication. In his postwar life, Rubin settled in California, married, and raised a family. He remained active in veterans' circles and frequently addressed schools and community groups. He spoke about resilience, about the value of liberty, and about the responsibility of memory. Having witnessed both the depths of totalitarian cruelty and the capacity for democratic self-correction, he embodied a bridge between two defining conflicts of the twentieth century.

When Tibor Rubin died in 2015, he left behind more than a record of battlefield heroism. His life formed a moral arc that stretched from the barbed wire of a concentration camp to the ceremonial dignity of the Medal of Honor. It is a story that links the Holocaust to the Korean War, illustrating how individual courage can shine even amid institutional failure. His long-delayed recognition serves as a reminder that while injustice may obscure valor for decades, truth has a stubborn endurance of its own—and, in time, can prevail.

In conclusion, Corporal Tibor Rubin represents something larger than a single act of heroism on a distant battlefield. His life is a study in moral continuity. The same resolve that sustained him in the shadow of Mauthausen concentration camp sustained him on the hills of Korea; the same gratitude he felt toward the soldiers who liberated him shaped the courage with which he defended their flag. He did not compartmentalize his past and present. Instead, he fused them into a singular commitment: to stand firm when others faltered, to give when others could not, and to endure when surrender might have seemed understandable.

His story also illuminates the complex character of the nation he chose as his own. The United States that freed him from Nazi tyranny later failed, through prejudice and neglect, to honor him promptly. Yet it was also a nation capable of confronting that failure, reopening old records, and correcting an injustice decades later. In that arc, from liberation to oversight to eventual recognition, lies a testament not only to Rubin's perseverance but also to the imperfect, evolving promise of American democracy itself.

The medal placed around his neck in 2005 did more than acknowledge battlefield gallantry. It affirmed the lives he saved in frozen foxholes and prison compounds. It validated the testimony of fellow prisoners who survived because he shared stolen rice, shouldered burdens not his own, and refused to let despair claim another man. And it restored to the historical record the full measure of a soldier whose faith in his adopted country never wavered, even when its institutions faltered.

Ultimately, Rubin's journey from persecuted Hungarian Jewish youth to American war hero binds together two of the twentieth century's defining struggles: the fight against genocidal totalitarianism and the defense of fragile democratic allies during the Cold War. His life reminds us that courage is not born in a single moment of crisis but forged through repeated trials. It is a reminder that gratitude can be a powerful engine of service, and that character, once tempered by suffering, can become an enduring force for good.

Remembering Tibor Rubin compels us to see beyond medals and citations to the man himself: one who transformed unimaginable trauma into steadfast loyalty, who answered cruelty with compassion, and who met injustice not with bitterness but with continued devotion. His legacy endures not only in military archives and presidential ceremonies, but in the example he leaves behind: that even in the harshest landscapes of history, individual courage and conscience can prevail and, in time, be recognized for what they truly are.

 

 

The site has been offering a wide variety of high-quality, free history content since 2012. If you’d like to say ‘thank you’ and help us with site running costs, please consider donating here.