Issue 38 Language and Culture


ISSUE 38 - JUNE 2021


CONTENTS
3. HOW DOES LANGUAGE WORK AS A COLONIAL TOOL?
4. MICHEL FOUCAULT - RECONCEPTUALISING POWER IN MODERN SOCIETY
5. AL-FARABI: THE SECOND TEACHER, FORGOTTEN IN MODERN HISTORIES OF PHILOSOPHY
6. PALMYRA AND ISIS: WHY ARE CULTURAL SITES TARGETED BY JIHADIST GROUPS?
6. ARMINIUS OR HERMANN? THE FOUNDER OF GERMANY
7. HOW THE AMBITIONS OF ONE EMPEROR LED TO ANTISEMITISM THROUGHOUT THE ROMAN EMPIRE
8. DECODING HIEROGLYPHICS
8. LANGUAGE AS A POLITICAL TOOL IN THE FORMER YUGOSLAV STATES
9. ALEXANDER THE GREAT: LGBT ICON?
10. MISSIONARIES: COLONIALISM'S 'AGENT, SCRIBE, AND MORAL ALIBI'?
11. HOW THE BIBLE HAS BEEN USED TO OPPRESS WOMEN SINCE THE GARDEN OF EDEN
12. ST CUTHBERT'S COFFIN: DEVOTION IN RUNIC AND ROMAN LETTERING
13. HOW HAS THE PUBLIC MEMORY OF THE SECOND SINO-JAPANESE WAR INFLUENCED CHINESE CULTURE?
14. EXPLORING WHY FORENSIC FINGERPRINTING DEVELOPED IN COLONIAL INDIA, AND ITS SUBSEQUENT TRANSFER TO VICTORIAN BRITAIN
15. FROM THE KAMA SUTRA TO NOW: THE IMPACT OF COLONIAL RULE ON SOUTH ASIAN QUEER CULTURE
16. THE MEXICAN REVOLUTION THROUGH PICTURES
17. ON THE RISE OF INTOLERANCE TOWARDS MALE HOMOSEXUALITY IN MEIJI JAPAN
18. HOW HAS THE MYTHOLOGY OF THE WILD WEST IMPACTED U.S. CULTURE AND IDENTITY?
19. EPONYM ETHICS: NAMING INHUMANE MEDICINE
20. THE EVOLUTION OF DIALECTS WITHIN THE ENGLISH LANGUAGE
21. WHAT LED TO THE 19TH CENTURY GAELIC REVIVAL?
22. SOCIAL ANXIETIES SURROUNDING THE MODERN WOMAN IN INTERWAR BRITAIN
23. DID NAZI GERMANY RELY MORE ON COERCION OR CONSENT?
24. HOW 90S CINEMA REVOLTED AGAINST "HIGH CULTURE" SHAKESPEARE
24. THE REPEALING OF THE CONTAGIOUS DISEASES ACT
25. IN SEARCH OF ATHENS: ERNEST SIMON'S CAMPAIGN TO BOLSTER BRITAIN'S DEMOCRATIC CULTURE
26. GRAFFITI: HAS THE ART OF RESISTANCE NOW BEEN GENTRIFIED?
26. REPLACING THE QUOTA SYSTEM WITH 'BLIND AUDITIONING' IN EDUCATION AND EMPLOYMENT
27. RECLAIMING AUSTRALIA DAY
28. PUNK: A MUSIC REVOLUTION
29. ARPILLERAS AGAINST AUGUSTO




LANGUAGE AND CULTURE SPECIAL

HOW DOES LANGUAGE WORK AS A COLONIAL TOOL? The concept of power is often understood in a physical sense between two groups. A broader (and perhaps more effective) way of defining power is as simply the ability to control and coerce others. In Black Skin, White Mask, the psychologist Frantz Fanon examines language and how it has been used to oppress people within the context of colonialism. Frantz Fanon was born in 1925, in the French colony of Martinique. His family were relatively prosperous, enabling him to attend a prestigious school where he was taught by the noted poet Aimé Césaire. Fanon remembered being taught to speak in pure French rather than in creole. As a result, his French resembled that of a native French speaker and he strongly identified with French culture. In 1943, he left Martinique to fight with Gaullist forces against the Vichy regime, which he perceived as being illegitimate and overtly racist. After the war, Fanon attended the University of Lyon to qualify as a psychiatrist, where he experienced much discrimination. Upon completing his residency, he published the work he is most famous for today:Black Skin, White Mask, in which he discusses the negative psychological impacts of colonialism as well as his own personal experiences of racism throughout his life. He defines colonialism as: ‘A systematic negation of the other person and a furious determination to deny the other person all attributes of humanity, colonialism forces the people it dominates to ask themselves the question constantly: In reality, who am I?’. Following this, Fanon worked in Algeria, where he treated French soldiers who were dealing with trauma after torturing Algerian freedom fighters. Already a staunch anti-colonialist, Fanon’s views grew stronger during his time in Algeria, resulting in him resigning from his job and joining the National Liberation Front (FLN) in Algeria to fight for self-rule. His 1961 book, The Wretched of the Earth, sparked controversy by justifying the use of violence by colonised people to gain independence. Fanon’s work on language, power and oppression have inspired many notable revolutionaries such as Che Guevara, Steve Biko, and Malcolm X. He is often regarded as a seminal author and thinker in postcolonial circles for being an early proponent of the idea that colonialism in Africa was negative. Fanon addresses language and its link to colonialism in the opening chapters of Black Skin, White Mask. To Fanon, speaking a foreign language is to assimilate into another culture completely and adopt the civilisation of another place at the expense of your own native culture. Within the context of colonisation, this would mean that a colonised subject speaking the language of the coloniser is an attempt of the colonised to become more like the coloniser and enter his world. From this, Fanon goes on to argue that a colonised person speaking the language of their oppressors is oppressing themselves. Fanon used the example of Afro-Caribbeans speaking French or a creole language as people attempting to assimilate into the culture of the coloniser,

furthering the notion that their culture is irrelevant and inferior compared to that of the coloniser. Furthermore, this attempt to be 'White' alienates the Black person from other Black people. The impact of colonialism is much more profound than initially thought – the colonised are taught to see themselves as inherently inferior because of the colour of their skin. Fanon's analysis of language in Black Skin, White Mask does not end with the notion of colonised people furthering their own oppression by attempting to speak the language. The argument continues after this, pointing out that many people speaking the language of their oppressors also believe that this will enable them to be seen as an equal in their world. However, Fanon points out that this is impossible due to what he describes as the 'epidermal character of race'. This is to say that a Black person, irrespective of their education, diction or mannerisms, would still be Black and therefore inferior in the eyes of the coloniser. To prove this, Fanon uses anecdotal evidence of local French people being surprised (and occasionally threatened) by his articulateness. According to Fanon, their surprise is an indication that they see Black people as inherently inferior compared to them and unworthy of treatment as an equal and fellow human. To Fanon, the colonised will always be objectified in this way as a result of being colonised subjects. This idea that race cannot be transcended by language or mannerisms is a perspective that is both prevalent and challenged even today. The hip-hop artist Jay Z expresses this in his 2017 song The Story of O.J. with the line 'O.J. like, "I'm not black, I'm O.J."...okay'. In the song, Jay Z refers to Fanon's view that the colour of your skin automatically establishes a supposed inferiority compared to White people. Race cannot be overcome by wealth, money or language, but is an eternal aspect of someone irrespective of how hard they try to overcome it. Frantz Fanon uses his experience as a psychiatrist to show the impacts of colonialism. Colonialism dehumanises the colonised and reduces their culture. Language plays a key role in this displacement of identity and establishes a hierarchy of superiority of the coloniser over the colonised. Irrespective of what you might think of Fanon's claims, they have remained popular and played a key role in the argument that colonialism is innately negative.

ASHWIN VENKATAKRISHNAN



LANGUAGE AND CULTURE SPECIAL

MICHEL FOUCAULT – RECONCEPTUALISING POWER IN MODERN SOCIETY

To assess Foucault's theory of power in 50 pages would be challenging; to do so in one, an insult. For this I can only apologise. There are few 20th century thinkers whose influence has been as vast as that of Michel Foucault. Perhaps one may consider those sitting at the table across from him in the smoke-ridden intellectual confines of the Café de Flore: Jean-Paul Sartre and Simone de Beauvoir. In this article, however, I seek to demonstrate that Foucault provides a radical view of how power functions and permeates into our everyday reality. The actuality of human existence and knowledge is actively shaped by such a force. However, in the ink which follows I aim to demonstrate that this is not a critique which undermines human agency but one which facilitates a higher understanding of modern society. Foucault felt that work in the field of psychoanalysis, principally Freud and Jacques Lacan, had done much 'to posit a new understanding of human relations'. Nevertheless, Foucault felt that power could not be understood solely from within; forces external to oneself need to be taken into consideration. Foucault was an admirer of Marx's work, but the power relations emerging from relations of production again seemed too simplistic for Foucault. Foucault certainly would not deny that there is a power relation between the bourgeois manager and worker. However, he would suggest that we look further at the wider forces governing such a dynamic. Foucault separates power into two distinct areas: 'normalising' and 'repressive' power. The latter is what one would commonly assume power to be. For example, a police officer arresting a citizen, a judge sentencing a defendant or the military invading the state in which you live. This is a form of power which is visible and common to our ontology. Such power acts to make an individual do what they otherwise would not do. An individual would not voluntarily place themselves in a cell. Rather, a relation of power has made such a thing occur. Let us turn to 'normalising power'. Foucault argues across his work that most power is not visible but rather operates in a more subversive way: 'society is an archipelago of different powers'. Take the relation between a parent and child. The child, largely, does as the parent says; their behaviour is regulated in this manner. Further, take a university, prison, or society as a whole; what Foucault states is that what we know, understand and do is regulated and informed by our relation to institutions and the societal forces we engage with. That is to say, we think what we do because we are conditioned to do so by our place in the world. In a sense we become occupied by a norm. Our desire to 'fit in' or 'be normal' is part of a wider internalisation of what is expected from us. A greater understanding of this internalisation can be seen through Foucault's assessment of the Panopticon. Jeremy Bentham proposed the idea of a Panopticon in prisons: a tower which can


see outwards, but no prisoner can see in. Thus, one would never know if they were being watched, or if anyone had ever entered the building. Under such a system, the subject would behave as if they were being watched. The fear and threat of the tower watching you is enough for the internalisation of the norms which you are asked to follow. Thus, direct power is expanded into every aspect of the prisoner’s life, through a pervasive symbol of power. This carries over into ideas of surveillance and ‘Stop and Search’. The mass tracking of individuals online or policies of ‘Stop and Search’, which seldom result in convictions, seem to exist to remind us of the state’s presence. Foucault argues that this is a form of control. Such power is never politically neutral. If those who design the Panopticon or carry out surveillance are racist, sexist or homophobic, the behaviour these modes of power induce will express these prejudices. Foucault, through the term Episteme, also states that we do not understand things because we have gradually become more intelligent, instead we know what we do because the societal conditions which we find ourselves in allow us to. We commonly assume that we have become more humane in our approach to justice. Yet, Foucault rejects this. Foucault indicates that when punishments, around the 18th century, slowly developed from public hangings to private executions and incarceration, this was not out of developing notions of humanity. Rather, this represents a more insidious operation of power. By hiding the exercise of power from direct view, the means of power, which were often protested against at executions, become concealed within physical confines. Moreover, Foucault stresses the neo-liberal transformation of the self. Modern society induces the individual to conceive themselves in the image of the economy, remaking our experience of the world by using the market as the base from which all human relations are understood. One becomes ‘Homo Economicus’. That is, more than a worker, but an entrepreneur of the self. For Foucault, we are all induced to become producers of human capital, subjects for life within a capitalist society. It is through this relation that we are constituted as subjects. This may seem to be a pessimistic denial of human agency. However, it is only through observing and understanding the structures of power and forces subjugating human existence that one can comprehend their position in the world and be emancipated from the normalising forces of power.


OSCAR TAPPER


AFRICAN, ISLAMIC AND ARABIC

AL-FARABI: THE SECOND TEACHER, FORGOTTEN IN MODERN HISTORIES OF PHILOSOPHY? It goes without saying that the roots of modern European/Western civilisation can be traced back to Ancient Greece and Rome. Imagine how our culture would appear had the likes of Socrates, Plato, and Aristotle not existed. Quite impoverished, it can readily be assumed. But then again, where would European culture be today had Islam not existed? This is an intellectual and cultural debt that is yet to be sufficiently acknowledged. Bursting out of the Arabian Peninsula shortly after the death of the Prophet Muhammad (d. 632 CE), the first Muslims rapidly conquered pretty much everywhere they went, eventually establishing themselves from Spain in the West all the way to China in the East. It was during the Abbasid Caliphate (750-1258) that the sciences and the arts flourished abundantly. The imperial redrawing of the world brought Muslims into contact with an extensive mix of peoples, trade and, more significantly, ideas. Mass translations into Arabic and Persian of Greek philosophy, Zoroastrian theology, and Indian sciences were consequently carried out. Backed by the patronage of wealthy Caliphs and Sultans, it is no coincidence that this era, dubbed the 'Golden Age of Islam' by some, witnessed the lives of extraordinary, enduring thinkers who continue to impact and inspire people today. Al-Kindi, Ibn Sina (Avicenna), Ibn Rushd (Averroes), Ibn Tufayl, Al-Tusi, and Ibn Khaldun are just some of the heavyweight polymaths who made immense contributions to human civilisation. But there is one thinker who should not go amiss: Abu Nasr Al-Farabi (872-950). Also known as Alpharabius in the West, Al-Farabi, speculated to be either a Turk or a Persian, like so many of his contemporaries took full advantage of the new opportunities and educational facilities available in his day, becoming a fully rounded polymath whose impact, though often overlooked, reverberates in our times in many ways. In Islamicate societies, he is still referred to as the Second Teacher, the First being Aristotle. Yet, is he a 'lost' thinker? Most histories of philosophy will either mention him sparsely alongside the more famous thinkers or, more daringly, ignore him outright. A brief examination of his life, career and legacy will reveal that this grave injustice needs to be redressed. He spent most of his career in Baghdad, the political centre of the Caliphate and the scene of much innovation. There, he studied the entire gamut of Greek philosophy alongside the Qur'an and other Islamic disciplines. Al-Farabi wrote on a plethora of subjects: logic, physics, psychology, and even alchemy rank among the many areas that concerned this ferocious intellect. He even wrote a book on music theory, noticing how music can powerfully influence the mood of a listener and control their behaviour. His greatest achievements, however, lie in the fields of politics, philosophy, and ethics. In one of the most highly regarded works of political philosophy ever written in the Arabic language, The Virtuous City (al-Madina al-Fadila), Al-Farabi ambitiously combined Greek philosophy with Islamic thought to set out a political vision. In his view, the ideal society was one that led to happiness. He argued that its inhabitants should aspire to be and act virtuously, purely by possessing knowledge of the Divine and of Nature. Reminiscent of Plato's The Republic, it is the Enlightened Philosopher

who is charged with establishing this happiness and the ceaseless pursuit of justice. Other regimes were ‘vile’ if they sought to pursue other goals detrimental to happiness e.g. pursuit of wealth and prosperity for their own sake. Regimes of tyranny will naturally seek to assert their hegemony over others. Quite fascinatingly, Al-Farabi elaborates upon the regime of ‘corporate associations’ in which the inhabitants are free to do as they please. This is not far from a democratic order, and as the Virtuous City was nearly impossible to establish, Al-Farabi favoured it ahead of other societies. True, excess luxury and hedonism may corrupt the inhabitants, but such a social order would nonetheless enable the pursuits of the sciences and arts as well as the essential freedoms and values that may one day culminate in the utopian ideal. It is not possible to celebrate Ibn Sina or Ibn Rushd without celebrating Al-Farabi, who greatly influenced those exalted intellectuals. It is these thinkers whom the West also learned from as much as the ‘rediscovery’ of Classical thought during the Renaissance (c.1400-1600), and that have shaped much of European discourse since. They helped to shape Europe as we know it today. Acknowledging this intellectual/cultural debt will go a long way in

helping to restore the rightful place of Al-Farabi as a uniquely preeminent thinker who has contributed immensely to human thought. Al-Farabi's reception in modern Western cultures might be underwhelming, but do not let that lead you to underestimate his impact elsewhere. Ayatollah Ruhollah Khomeini (1900-1989) – the Shi'ite jurist who led the Iranian Revolution of 1979 and set up an Islamic Republic that still endures today – was heavily influenced by Al-Farabi's Virtuous City. Modern Iran owes as much to the philosophies of Al-Farabi and Aristotle as it does to Shi'ite traditions of Islam. The Second Teacher is certainly a 'lost' thinker who needs to be rediscovered. If anything, we must take the philosophical ideas of Al-Farabi more seriously. They are clearly as relevant today as they were in his era, whether we agree with them or not. The passage of time is no limit on the power of ideas in influencing human affairs and societies.

ZEESHAN MAHMOOD




AFRICAN, ISLAMIC AND ARABIC

PALMYRA AND ISIS: WHY ARE CULTURAL SITES TARGETED BY JIHADIST GROUPS? In 2015, the Islamic State (ISIS) destroyed the 2,000-year-old Arch of Triumph in the ancient city of Palmyra, Syria. This raised serious questions about what jihadist groups like ISIS were truly hoping to gain. It is important to keep in mind that the territory ISIS controlled spanned parts of both Iraq and Syria, a region that is among the most culturally rich in the world, with well-preserved monuments of ancient history. One of the most notable of these heritage sites is the ancient city of Palmyra. As a result, the jihadist group was able to profit from the destruction and looting of these sites, selling artefacts from the region on the European and North American black market. By targeting these cultural sites, ISIS was able to benefit financially which, in turn, aided their wars and battles. The group was well aware of the cultural importance of its location and utilised this to its own advantage. This targeting was neither random nor coincidental; it was deliberate and strategic.

In addition to their location, this strategy of targeting cultural sites held a more vital significance. The more important driving force behind these acts of cultural destruction and looting was the ideologies that underpinned the efforts of jihadist groups like ISIS. These ideologies are dominated by thoughts of monotheism, Salafism (a movement that calls upon Muslims to return to original and traditional forms of Islamic practices) and the condemning of Shirk (the notion of polytheism and idolatry towards any deity other than Allah). These beliefs are hugely significant to discuss in order to understand why cultural sites are targeted by jihadist groups. This is because cultural sites like Palmyra exhibit archaeological evidence of a variety of religions. Religious buildings and iconography belonging to the Greeks, Roman, Persians and even Christians, were visible on the site; the space was viewed as one of the most religiously diverse in history.

As a result of its archaeological significance to all these religions, the destruction of this site – and others like it – is an example of a strategic attack against beliefs that go against Salafism, representing acts of Shirk. The symbols of various gods from different historical societies being looted, destroyed and often sold to aid war, is an ideological assault on other religions. Thus, the ideologies of ISIS appear dominant and a threat. Overall, these sites are a source of income for the group. More importantly, however, cultural sites like Palmyra are viewed as symbols of Western and pagan religions and beliefs by jihadist groups like ISIS. These cultural sites being a target of destruction is a deliberate act of defiance, threat and ideological control in their region as well as the rest of the world.

MYMONA BIBI

ANCIENT AND PRE-HISTORIC

ARMINIUS OR HERMANN? THE FOUNDER OF GERMANY The Battle of the Teutoburg Forest in 9 AD was a turning point in history as Arminius led the Germanic barbarians to victory against the Romans. Arminius was raised and educated in Rome and served on campaigns with the Roman army. Although raised in Rome, Arminius was a noble member of the Cherusci tribe, one of many barbarian and often warring tribes in the Germanic provinces. The Roman governor in the region, Varus, was disillusioned and provocative, and carelessly entrusted Arminius as his source of knowledge in the province. Arminius betrayed this trust and led Varus to believe that he needed assistance in suppressing an uprising deep in the Teutoburg Forest. Varus's legions were stretched, as well as outnumbered, in territory that favoured Arminius's barbarian fighters. Whilst the torrential rains and winds contributed to Arminius's victory, Arminius certainly used his understanding of Roman weaknesses in warfare to fight effectively. After three days of fighting, the Romans

were defeated and Varus and his commanders committed suicide rather than admit defeat to Emperor Augustus. The barbarian people tied the heads of the fallen Romans to trees, stole their coveted legionary symbols, and sent governor Varus’s head to Rome. These developments inspired the Romans to return six years later in 15 AD, where the barbarian tribes were thoroughly destroyed and Arminius was eventually assassinated by his own people. The story of Arminius is compelling, yet the effects of his intervention are complicated. Several prominent scholars have argued that without his interference, the Roman Empire would have stretched far further to the north and east. Arguably, European languages may not have developed as the use of Latin would have prevented the formation of modern European languages. Moreover, religious proliferation may too have been culled, which could explain why Germany is still divided by Catholic and Protestant differences. Arminius, or Hermann (in Ger-


man), has been glorified and used for certain political purposes throughout the modern era. Fictions and children’s stories have used Hermann as a figure of inspiration, whilst Martin Luther and the Nazi Party have also glorified his ability to outwit and outpower a stronger force. Arminius can be remembered as a hero for his role in defeating the Romans in one of the greatest ambushes in the history of warfare. Yet, whether his legacy marks the birth of the German nation is contestable and the use of his story must be approached with caution in an age where nationalistic heroes have been manipulated with devastating effects.

OSCAR RIHLL


ANCIENT AND PRE-HISTORIC

HOW THE AMBITIONS OF ONE EMPEROR LED TO ANTISEMITISM THROUGHOUT THE ROMAN EMPIRE The Roman Emperor Vespasian came to power in 69CE after a year-long and bloody Roman civil war, which had seen no fewer than four emperors. Having little legitimacy to his newfound and tenuously held position other than the strength of the legions which had proclaimed him emperor, Vespasian needed to validate his rule quickly. As everyone knew, the best way to garner public support in Rome was to decimate a foreign enemy, and Vespasian found the perfect target: Judea, a plucky province that had recently shaken off Roman rule with a rebellion in 66CE. Vespasian sent his son Titus to the region to get the job done swiftly, which he did with brutal efficiency: the city of Jerusalem was left in rubble, and the Jews’ most holy site, the Jerusalem temple, was burned to the ground. The Jewish ancient historian, Josephus, reports a figure of 1,100,000 Jews killed in the siege and afterwards 97,000 enslaved. Back in Rome, Vespasian was going to milk this victory for all it was worth. In a triumphal procession through the streets of Rome to celebrate the conquest, the spoils of the Jerusalem Temple, including the Jew’s most Holy objects: a large golden candelabrum and an ancient scroll of the Torah, were proudly paraded to symbolise not just the defeat of the Jewish Rebels, but a Roman victory over the Jewish religion. However, the propaganda for the victory did not stop here; Vespasian reshaped the city of Rome with an ambitious building programme: The Colosseum, the Forum of Vespasian, and the Arch of Titus, were all paid for with the booty of the Jewish Temple and all celebrated the defeat of Judea. What made

this foreign conquest different from any other was that Vespasian punished not just the rebellious city of Jerusalem, but all Jews across the empire. A “Jewish Tax” was placed on every Jew, ostensibly to replace the tithe to the Jewish Temple. Such a policy that extended across an entire people was unheard of—the tax singled out Jewish communities from their gentile neighbours and isolated them as outcasts. What made this doubly insulting was that the tax was taken for the reconstruction of the temple of Jupiter Maximus in Rome. Thus, not only had the Jews lost their own temple, they were now forced to pay for a pagan one, which was highly sacrilegious. After Vespasian’s death in 79CE, his sons, Titus and Domitian, continued his legacy of propagandising the Jewish War. When Domitian came to power in 81CE, he had no real military victories of his own; thus, he continued to propagandise the victory over Judea as if he had participated himself. He was still minting coins until 85CE inscribed: “JUDEA CAPTA”

and according to the ancient writer Suetonius, the collection of the Jewish Tax was continued and carried out “very fiercely”. Suetonius goes on to say “I recall being present in my youth when the person of a man ninety years old was examined before the procurator and a very crowded court, to see whether he was circumcised.”. The man was being examined to determine whether he was Jewish and should pay the Jewish tax. To modern readers, the scene described will be reminiscent of the horrors of the twentieth century. Within this atmosphere of empire-wide hostility towards Jews, early Christianity was developing, and as a result, this sentiment found its way into some Christian viewpoints. The main Christian conviction was that the destruction of the Jerusalem Temple was a realisation of Jesus’ prophecy: Jesus said: “As for what you see here, the time will come when not one stone will be left on another; every one of them will be thrown down.” (Luke, 21.5). The prophecy had come true, which Christians saw as God’s just punishment of the Jews. Justin, son of Priscus, an early Christian martyr (c100-165CE), displays in his writing the Christian vitriol towards Jews: “For the circumcision according to the flesh, which is from Abraham, was given for a sign, that you may be separated from other nations, and from us, and that you alone may suffer that which you now justly suffer; and that your lands may be desolate and your cities burned with fire, and that strangers may eat your fruit in your presence, and that no one of you may go up to Jerusalem”. Alongside this, Christians may

have felt they needed to establish a clearer separation between themselves and Jews to avoid similar persecution. Likewise, the Gospel of John, completed sometime around 90-110CE (some scholars have argued a gentile Christian wrote it), was the first text to describe the Jews as Jesus’ adversaries, and is believed to have been influenced by the empire-wide hostility to Jews. The author lumped together various Jewish groups into one, and in his writing, “the Jews” are the people who convince Pontius Pilate not to release Jesus. This “anti-Jewish” attitude that embedded itself within certain aspects of Christianity and throughout the Roman Empire would continue to be prevalent into the medieval period and beyond. As we have seen, the path that led to this pervasive anti-Semitism began with the Emperor Vespasian’s desire to curry favour with the Roman people.

NOAH GRAHAM



ANCIENT AND PRE-HISTORIC

DECODING HIEROGLYPHICS Many readers will already know of the Rosetta Stone and its monumental impact on history. Discovered during the Napoleonic campaigns in Egypt around the turn of the 19th century, the Stone was taken to England and now rests in the British Museum. It is an immense thing, needing a new gallery built in the museum to handle the weight. But what does it actually say? Written in three scripts, Ancient Greek, Demotic, and Egyptian hieroglyphs, the stone slab announces the cult of Ptolemy V and his rise to the throne. It was allegedly one of many to be put up around Egypt during a period of internal instability and revolt. Whether or not this was commonly done by the kings of Egypt, the brilliance of the plan must be stated. Written in three scripts, it allowed all peoples, upper and lower class Egyptians as well as Greeks, to have their new leader immortalised and quite literally set in stone. With the blessings of the priests who wrote the tablet, it gave no room to question who was in charge.

To later archaeologists it was a gift; they were able to understand a whole civilisation and eventually elevate the study of Egyptology to an academic level. While the rest of the Ancient Egyptian population used Demotic as their primary writing system, hieroglyphs were used within the realm of authority, with literature and matters of the king immortalised on monuments. Like many ancient scripts, accessibility to lower classes was rare - only priests and the upper classes would get sufficient training in reading and writing in hieroglyphs, as they were quite impractical for anything other than ornamental status symbols. This form of “picture-writing” was the oldest form of writing in ancient Egypt, with Phoenician-derived scripts such as Ancient Greek and Demotic coming in at a later date. The translation of the Rosetta Stone has allowed historians glimpses into the most ancient parts of Egyptian society, and has allowed for a much greater insight into

Egyptian literature, mythology, and religious matters. Ancient Egypt tends to have a stereotype of being mysterious, which is often a result of an incredibly old civilisation and very little remnants of the oldest parts of that civilisation. With the Rosetta stone, we have been able to make a little more sense of the earliest parts of Egypt, and therefore the earliest parts of humanity. In comparison, Ancient Greece, began around 800BC following the fall of the Mycenaean civilisation, two thousand years after the Early Dynastic period of 3000BC. Dates aside, the Rosetta Stone and its translation opened a whole new field of study, and academics and enthusiasts everywhere can now understand the words and worlds of a civilisation so iconic it has been immortalised again and again through pop culture and modern recreations.

ANNE DE REYNIER

MODERN WESTERN

LANGUAGE AS A POLITICAL TOOL IN THE FORMER YUGOSLAV STATES In 2017, the Declaration on the Common Language was signed. It marks a culmination of attempts to counter nationalistic factions in the Western Balkans and a move towards a discussion of language, independent of nationalist tendencies. Language politics has been an important factor in the creation of the new national identities which have emerged out of the fall of the Former Republic of Yugoslavia. The significance of language in such identities is profound. Cigarette packets in Bosnia and Herzegovina still feature the health warning in Bosnian, Serbian and Croatian. The Declaration’s statement that Croats, Bosniaks, Serbs and Montenegrins have a standard language may make sense linguistically, but the impact of such a suggestion may have served to conceal a tension rather than provide resolution to the problem. Recent attempts to create a standardised language in the region are not a new phenomena. Language planning was a prominent feature in early nation-building during the nineteenth century. It is the deliberate effort to influence the function, structure or acquisition of lan-

guages within a community. In 1850, the Vienna Literary Agreement initiated the linguistic standardisation of Serbo-Croatian. It made efforts to acknowledge the similarities between Croatian and Serbian literature. Just over a century later, the Novi Sad Agreement revived the operation of implementing a unified linguistic standard; a reflection of Tito’s ‘brotherhood and unity’ policy. Yet both agreements overlooked incentives to foster linguistic support for a Bosnian identity. Moreover, neither agreement involved the active participation of Muslim Slavs from Bosnia and Herzegovina. With the expansion of the European Union (EU), the strife to standardise identity has increased. In recent times, language and the planning of it has evolved into a political strategy used in appealing to European ideals. Moving away from past identities associated with the former socialist republics and towards a euro-centric characterisation of themselves is one motivation behind such reforms. Yet this has caused further divide within South Eastern Europe. There now exists a disunion between those who want to acquire


accession into the EU and those who harbour traits perceived as nostalgic or a hindrance to obtaining EU membership. Language politics has fuelled this division while also contributing to the subsistence of nesting orientalisms in the region. The utility of language as a marker of national and social identity makes de-politicisation unintelligible. Previous agreements and attempts at reform have served as a tool to ostracise certain ethnicities or subgroups. Questions must therefore be asked over how these issues can be resolved moving forward. With the emergence of the field of language economics, there is some optimism for the future. However, whether we should expect the development of a European identity in the future remains an open question.

SIOBHAN COLEMAN


ANCIENT AND PRE-HISTORIC

ALEXANDER THE GREAT: LGBT ICON? Alexander III of Macedon (356 BC – 323 BC) is immortalised as one of history's greatest generals for having never lost a battle and establishing a massive empire from the Balkans to the Indus River. His impact on history is immense: Alexander introduced the Persian idea of absolute monarchy to the Greco-Roman world, forever changing global governance. For such an influential figure, it should come as no surprise that historians have been interested in his personal life, notably his sex life…

Modern terminology that describes sexual identity is not appropriate for a historical figure when such ideas did not exist. To avoid anachronism, one must consider how the Greeks of the 300s BC viewed sex. Homosexual relations were ordinary amongst the upper classes, especially in the form of pederasty. Sexual orientation was not the defining factor in sex, rather the role that each participant played: the dominant, higher-class, older partner took an active role, and the younger, lower-class partner took a passive one. Nonetheless, homosexual men of the same class experienced social stigma as the passive role was seen as more effeminate. As Macedonian king, Alexander could partake in sexual relations with anyone, so long as he maintained the dominant role.

One must examine the sparse remaining evidence about Alexander's life to understand his sexuality. In his early years, he showed little interest in sexual relationships. Quintus Curtius stated that Alexander's parents purchased a courtesan fearing that the young prince was γύννις, meaning 'womanish'. As the two did not engage in sex, ancient authors praised Alexander's self-control, whereas modern advocates, like James Davidson, use it as confirmation of homosexuality. However, it is more reasonable to assume that teenage Alexander had not yet experienced sexual attraction. The most contested individual for the title of Alexander's lover is Hephaestion. Whilst never explicitly stated to be Alexander's lover – always referred to by his epithet Φιλαλέξανδρος (friend of Alexander) – several modern historians interpret the relationship as amorous, rather than platonic. Hephaestion secured high-ranking positions in the army and, in Troy, the two made sacrifices at the shrines of Achilles and Patroclus: two other likely lovers who suffered history's homosexual erasure. Moreover, Alexander was mocked for abandoning imperial administration due to lusting after handsome Hephaestion's thighs. These interactions strongly suggest the two were not merely friends. After Hephaestion's sudden death in 324 BC, the grief-stricken Alexander mourned over the body and refused to depart until dragged away by his companions. The king ordered that the whole empire be plunged into a state of mourning; all music was banned and the statue of Asclepios, god of medicine, was demolished. Since mortals were supposed to respect the gods, Alexander's subjects were deeply offended. This mark of poor governance demonstrated Alexander's hysteria after the death of the one he loved the most.

Alexander did engage in heterosexual relations and fathered at least one son. He married at least twice, first to Roxana in 327 BC and later to Stateira in 324 BC. Sources describe how Alexander instantly fell in love with Roxana's beauty and wit, immediately desiring marriage. In 324 BC, Alexander organised the Susa weddings – a mass wedding of Macedonian and Persian nobles to symbolically combine both cultures and consolidate Macedonian rule in the region. Alexander and Hephaestion married Stateira and Drypetis, Darius III's daughters. Advocates for Alexander's homosexuality argue that he yearned for children with his lover, Hephaestion, and utilised the Persian princesses as surrogates given that the men's offspring would be cousins. Adversaries counter that the marriage was merely a political gesture applauding Hephaestion for his services. It is unclear whether the Susa weddings were a political ceremony or a secret gay lovers' union.

The idea of an LGBT or gay icon was created in the 20th century: a person revered by the LGBT community for being a champion of LGBT rights and/or being a notorious LGBT person themselves. Employing a broad definition, Alexander could be considered an LGBT icon for being a remarkable person who likely engaged in homosexual relationships. Yet, if all the necessary modern criteria are applied, this may not be the case; Alexander did not further acceptance of homosexual relations between social classes. Furthermore, an LGBT icon should be a role model. By 21st century standards, Alexander was a war criminal who broke the Geneva Convention several times, including civilian murder, torture, and unnecessary civic destruction (although these military decisions were mild compared to his contemporaries).

Nevertheless, our modern morality should not be imposed on Alexander, as ethics change over time. It is destructive to view history through this moralistic lens because it is not an objective analysis and the mindset of the people of the past cannot be understood. Since the sexual revolution over half a century ago, Alexander's sexual identity has been reclaimed by liberal Western communities as a gay hero. Whether he deserves this title or not, it is already bestowed upon him. Using contemporary terminology, it is reasonable to assume that Alexander was bisexual. Historical evidence suggests that he experienced lust, and potentially love, towards both men and women. Is this enough to honour him with the title 'LGBT Icon'? Though by our modern standards Alexander would be a tyrant, in the context of his time he is an icon, and thus, if he is analysed relative to his era, Alexander thoroughly deserves our homage. This leaves us with the question, almost 2,500 years later: would Alexander want to be our LGBT icon?

ALEXANDRA BIRCH


EARLY MODERN

MISSIONARIES: COLONIALISM'S 'AGENT, SCRIBE, AND MORAL ALIBI'? Colonialism is often defended as a moral mission, a mission to educate and civilise the non-western world, which often used Christian missionaries to convey its message. However, this perspective is open to much debate, as over the years the empires involved have repeatedly been questioned on what the true intentions behind colonialism were. Were they purely moral? Or were they based on profit, and on extracting the best resources from foreign lands?

Colette Harris, a political writer on Gender and Development, stated that a Ugandan tribal chief had asked a British officer to bring Christianity to Uganda as they wanted to "worship the white man's God." This encouraged the British Empire to send missionaries from the Anglican Church of England to Uganda and to start setting up a church along the river Nile. The missionaries also went to a lot of trouble to communicate their message and translate the Bible. Across the ocean, you had Rajah Brooke in the Malay Peninsula, who was seen as the ideal British imperialist, implementing the ideology of the Civilising Mission and spreading Christianity to the Sarawak tribe while defending them from pirates. Similar stories can be told about Thomas Macaulay's intentions while writing the Indian Penal Code, which is still in use today. This makes the civilising mission and its accompanying missionaries sound very positive. They helped establish a penal code, defended local tribes from foreign aggressors and only established Christianity when the "natives" asked for it.

However, all these cases can be further expanded on. If we look at Uganda's missionary implementation, Harris points out a lot of problems that this caused for the locals. "Natives" had to change their clothing, as the missionaries found it to be very "revealing" and not "European" enough. This clothing change was brought in without building an understanding with the "natives" and as a result the women were now seen as "sex objects". Harris believes that this is directly related to an increase in molestation. Furthermore, before the Empire, the women were given equal status and were respected by Ugandan men. However, this contradicted Victorian ideology, and the missionaries made changes to Ugandan society. Afterwards, women had to ask for a man's approval before leaving home, and simultaneously the church taught a man that he was superior. To Harris, the missionaries in Uganda institutionalised male superiority. Turning to the Malay Peninsula, Alex Middleton, a historian focused on British and European imperialism, argues that Rajah Brooke was nowhere close to how he was painted by imperialists in Britain. Middleton believes that Brooke's intentions were to expand the Empire by any means necessary. Though Brooke opened up free trade, and was seen as benevolent and humanitarian, he often used aggressive means which were defended by baseless causes. For example, he claimed that the Sarawak tribe had been attacked by the Dayaks and hence forced the British Parliament to send him reinforcements to attack the Dayaks. However, there is no proof that the Dayaks had attacked the Sarawak. Instead, Middleton believes that this was all a ploy by Brooke to increase his naval support and consolidate his rule in Malaya by force.

In India, the missionaries created a communal divide between Hindus and Muslims, legally establishing them as different communities. To further the divide, the imperialists partitioned Bengal in two based on religion, and disallowed anyone from converting to Islam. Christianity was also used as the basis of education in India, as schools and universities were based on Christian morals, with hymns and prayers being a part of the daily routine. The Indian politician and former UN Under-Secretary-General Shashi Tharoor believed the idea that the Empire was set up to help Indians is preposterous, showing that India's economy was severely hurt by the Empire, and that after the Empire left only 16% of Indians were literate, which does not seem close to civilising the "natives."

Finally, we must take into consideration that originally the Empire was based on profit. When Robert Clive came to India and set up the foundation for an Empire, he did so with the purpose of getting rich as fast as possible. When the British East India Company began to take land in India it forced Indian farmers to grow tea and poppy, the latter made into the opium used to wage the Opium Wars and obtain more tea. In India, the railways were set up to be used only by the British to get minerals from mines to ports. In East Africa, Robert Williams, the Royal African Society's Vice-President and the man in charge of the Cape to Cairo Railway, explicitly stated multiple times that the railways were there to connect mines to ports and that the tribes in the interior were left unhindered. His speeches stated that he wanted to "exploit" copper and gold, and said "Give him [Africa] railways, he loves them, but don't give him drink," which is a clear indication that the minerals collected weren't for the use of the Africans but just for the Empire. Along with the aforementioned, there are many other instances where the "natives" were exploited and suppressed, such as curbing their voices with acts restricting freedom of the press and assembly. This shows that the idea of missionaries being sent to civilise and educate the "natives" was nothing more than an alibi for establishing the world's biggest Empire.

SHIKHAR TALWAR


EARLY MODERN

HOW THE BIBLE HAS BEEN USED TO OPPRESS WOMEN SINCE THE GARDEN OF EDEN The Bible, the true representation of God’s word and will, a book with unparalleled influence on world history that shows no sign of abating, has proved to be one of the largest sources of ethnic, racial, and gendered conflict. Ever since Eve deceived in the Garden of Eden, the Bible has been the chief literary vehicle in the constant oppression of women. The Bible has shaped literature, history, entertainment, and culture more than any book ever written, and it will be shown that through various interpretations and contexts it has been used to oppress women and support patriarchal regimes. These interpretations include the feminist rejection, the Gnostic scripture, the context of time and space, and the normalisation of the male experience. But first, in case you missed the last two millennia of women’s status as described by the Bible as a second-rate citizen, here’s a brief catch up. The Bible treats women badly: “Wives obey your husbands,” “Let women learn silence in full submission,” and be “Second in nature, first in sin.” A woman’s role is one of subordination, who appears in the Bible out of necessity to the scripture; as a man’s wife, or property, or the bearer of a son. It’s clear from the get-go that Adam was created first and was deceived while Eve fell into sin. The Bible has inspired some of the greatest moments of humanity but equally has fuelled some of the worst excesses of human savagery, self-interest, and narrow-mindedness, providing ideological material for societies to enslave and abject fellow humans to poverty. The Bible provides religious and moral norms preventing women from playing a full and equal role. Furthermore, religion and the associated scripture can be manipulated to subjugate women. It justifies violations against women and has ever since Eve. Obviously, this is totally unacceptable and frankly not accepted in modern Western societies. Therefore, why is this holy text still one of the most popular and most sold books in the world? The answer is in its interpretation. If you were to ask your great-grandparents their views on immigration, gender norms, racism etc., you would find a stark change from the liberal viewpoints associated with modern Western society. Why should a book written over two centuries ago be any less shocking? The key is in interpreting the Bible in the context of author instead of the lens of the interpreter. The Bible’s readers understand the text differently due to their own particular circumstances, i.e., gender, location, and race. The birth of Christianity featured vastly different cultures across much of the globe (compared to now) and the Bible was shaped in these contexts. Whoever wrote the scriptures had to reflect societal dynamics and values of the time resulting in the context of who, when, and where being imperative. Context matters and times change. The Bible is not necessarily an instruction manual on oppression but rather evidence of

previous societies. Certain cultures have embraced egalitarianism, and some interpret the Bible in the most abhorrent way possible. Therefore, 21st century readers (generally) interpret the treatment and oppression of women with much disdain. This is in contrast to those of the middle ages, who did not necessarily use the Bible as a tool for control. The Bible mirrored a society that, like your great-grandparents, have aged terribly. The plain meaning of scripture is very androcentric and oppressive of women. Feminists found evidence that women served and participated in all levels of communal life and attempted to reject it as authoritative. They chose to interpret the Bible knowing the scripture was produced in a patriarchal culture and shaped by the narrator’s perspective. This supports the argument that it is the interpretation that determines what message is taken and used from the Bible. This feminine resistance is one attempt to reclaim the meaning behind God’s message. The Bible oppresses women through the male experience being the only experience. For centuries the only experiences that have been used to interpret the Bible have been those of men. Because only a man’s experiences are heard, they become the norm, resulting in the elimination of women from biblical studies. The Christian God was a male God who sent a male son, leaving little room for women. This normalised the male experience and shaped the interpretation for millennia, weaving the fabric of the Bible with patriarchy in mind. Lastly, the Gnostic scriptures present Eve in a different light than that of the Bible. Discovered in the Nag Hammadi Library in 1945, these ancient books depict Eve as superior to Adam, who taught him “the word of knowledge of the internal God.” Eve’s superiority is nowhere more evident than in her role as Adam’s awakener. These Gnostic texts made a valiant attempt to remove the shadow of the Judeo-Christian oppressive version of the creation myth. This Gnostic interpretation fuels the argument that the Bible was simply reflecting patriarchy rather than seeking to oppress. Gnostic texts provide evidence of an interpretation which was lost to time, along with any hope of emancipation, until the 20th century. For millennia, women have suffered as second-rate citizens. The Bible has been one factor in this and used as an excuse (or as evidence) as to why certain actions are allowed, or to why certain sectors of society can be treated in a certain way. The Bible can be interpreted in several ways that can both oppress and emancipate women. Women have sadly been at the Bible’s mercy for too long in an attempt to sustain an agenda of days past. This scripture, of primarily male experiences, should be read with a grain of salt and more time should be devoted to alternative texts and interpretations.

SAM PILLING



MEDIEVAL

ST CUTHBERT'S COFFIN: DEVOTION IN RUNIC AND ROMAN LETTERING At what point does a coffin become a reliquary?

In 698 A.D., eleven years after his death, St Cuthbert’s body was exhumed and found to be incorrupt. Removed from its subterranean stone sarcophagus in St Peter’s Church, on the monastery island of Lindisfarne, the body was then transferred into a new oak coffin and placed above ground next to the altar. Almost 200 years later, in 875 A.D., this coffin was evacuated from Lindisfarne and for seven years carried across Northumbria by monks evading Danish armies. Both while on Lindisfarne and during its seven-year pilgrimage, Cuthbert’s coffin functioned not just as a casket, but also as a reliquary: a container for relics, designed to be a focal point for religious devotion. Usually, reliquaries only contain a small object associated with a saint, such as partial physical remains or personal effects. These reliquaries are often small, sometimes portable, and in Christian practices can be richly decorated. But coffins are rarely considered to be reliquaries – likely because they are not usually displayed or worshipped in the same manner. Cuthbert’s oak coffin, however, was publicly venerated, associated with miracles, taken on an (albeit involuntary) pilgrimage, and decorated with religious iconography. It was intended to be an object for worship – to perform the same function as a reliquary. According to Bede’s prose Life of St Cuthbert, the monks had ‘a wish to remove his bones, which they expected to find dry and free from his decayed flesh, and to put them in a small coffer, on the same spot, above the ground, as objects of veneration to the people.’ (Chapter XLII) Even before they discovered the miracle of his incorruption, the community therefore intended to salvage Cuthbert’s remains and to revere them as holy relics. Bede’s Life also shows that the new coffin was constructed for this purpose before the exhumation, as Bishop Eadberht ordered the monks to place his body “in the chest which you have prepared” (Chapter XLII). This detail is not present in the earlier anonymous Life, but it suggests that the coffin itself was intentionally designed to be viewed and worshipped. If this is the case, then it fulfils the same

purpose as other medieval reliquaries. Each surface of the coffin's exterior is inscribed with images of evangelists, angels, apostles, and the Virgin and Child. It may not be richly decorated (it is instead limited to simple, unpainted carvings) but this is not a surprise for a man such as Cuthbert, who led an ascetic life as a monk and hermit. But crucially, these images are accompanied by text in both runic and roman alphabets, which label the images. The different types of text can potentially illuminate different forms of interaction with the coffin – from its initial construction to subsequent veneration. All the archangels and apostles are labelled in Roman script, as are the Virgin Mary and the evangelist Luke. The remaining three evangelists (Matthew, Mark, and John) are labelled with runes, as is Christ, written as the Chi Rho (IHS XPS). The key question is – why the variation in script? There are several possible theories, none of which can be confirmed with any certainty but all of which are worth considering.

The use of runes may well be due to construction practicalities: runes are far easier to carve than their curved roman equivalents, and they are surprisingly common in early medieval inscriptions, particularly in Northumbria. The carver clearly struggles with curved lettering - the letter <O> is often nearly hexagonal! - yet curves are possible, as the inscriber of the figures manages this feat in their halos, eyes, and wings. It is entirely feasible that the coffin was simply carved by multiple workmen who used different scripts, and the variation has no symbolic function or devotional purpose. However, regardless of intention, the different scripts have different resultant effects. If we consider potential symbolic meanings, then the fact that it is only three evangelists and Christ who are labelled in runes may indicate their particularly important or spiritual status. The exception of Luke, the only evangelist who is labelled in roman script, could also reflect the way in which he was known for his scholarliness and literacy, as he was a Greek physician and is the patron saint of students and surgeons. Variations in text – whether it's script, size, or position – all direct a viewer's focus towards specific areas of the coffin and prompt contemplation of their religious significance.

Finally, it is worth noting that, for an artefact designed to be revered by monks and pilgrims alike, the text on the coffin is surprisingly faint and indiscernible from a distance. There are suggestions that the labels were lightly incised before then being carved with a heavier hand, which may suggest that they could have initially been a construction aid, with no intention to keep them in the finished work. The coffin survives today and is now on permanent display at Durham Cathedral. Despite no longer holding Cuthbert's remains – which are instead buried nearby, behind the cathedral's altar – it is still a rare example of a medieval portable reliquary. As a focaliser for devotional practice, it is certainly worth considering St Cuthbert's coffin in the context of other reliquaries from early medieval Europe.


CATRIN HABERFIELD


ASIAN AND OCEANIC

HOW HAS THE PUBLIC MEMORY OF THE SECOND SINO-JAPANESE WAR INFLUENCED CHINESE CULTURE? Public memory in the simplest terms can be defined as the common understanding of history within a culture. This memory is reinforced by remembrance days, schoolbooks, media, literature and film, and other cultural and institutional factors. Public memory continues to dominate the ways in which a nation or culture interprets not only its own history but also its place within the international community. Chinese public memory concerning the Second Sino-Japanese War has been one of the most controversial topics of recent history and present-day international relations. Nevertheless, it remains a worthy topic of discussion, as 1949 gave the world two case studies of Chinese culture, one in mainland China and the other in Taiwan. Mainland China after 1949 became a single-party communist state established by Mao Zedong, whereas Taiwan after 1949 became home to a military dictatorship under Chiang Kai-shek until his death in 1975, when it began a transition to democracy. Both Communist China and the island of Taiwan present a remarkable difference in how the memory of the Second Sino-Japanese War has influenced Chinese culture and international relations. Since martial law ended in 1987, the Republic of China in Taiwan has enjoyed a relatively open democratic society. Taiwan, despite being burdened by the history of Chiang Kai-shek’s military despotism and obsessive hatred of communism, has made instrumental strides in fostering reconciliation between itself, the People’s Republic of China and Japan. President Ma Ying-jeou of the Republic of China in 2008 made an official apology to the victims of Chiang’s 1927 White Terror, which had involved the widespread murder and torture of suspected communists. President Ma Ying-jeou’s apology was an attempt to reconcile and restore good relations between the Republic of China, the CCP and the Chinese people. No comparably meaningful apology has ever been issued, either by Japan for its wartime atrocities (at least according to Beijing) or by the CCP itself for any of the human rights abuses recognised by the international community. Yinan He, in a 2011 article for Europe-Asia Studies, theorised that the harmonisation of national memories facilitates genuine reconciliation, while the divergence of memory between nations prevents such reconciliation. Therefore, according to He’s reasoning, national memory can serve both as a bridge and as a divide between nations. The Republic of China in Taiwan, at least from a diplomatic position, has done far more than both the CCP and Japan to form a bridge of reconciliation by openly admitting its guilt. According to Rana Mitter, under Maoist China many aspects of the war remained undiscussed until the economic reforms of Deng Xiaoping and the Tiananmen demonstrations of 1989 forced both the CCP and the public to revisit their memories of the war. This reflection on the past, however, was not part of an attempt to mobilise public sentiment, as the CCP had learnt the dangers of provoking mass movements from the devastation of the Cultural Revolution. Rather, this revision of the past was something that everyone felt was necessary but dared not speak of.
Beijing in the twenty-first century, as a result of this revision, has been forced to confront a type of tragic symmetry in which its racial policies, directed against the Uyghur Muslims of Xinjiang over the last few years, show considerable similarity to the racially motivated conduct of wartime Japan. Maoist China, however, had already delayed the kind of reckoning with the past that Beijing continues to keep at arm’s length, largely due to Mao’s desire for international recognition of the People’s Republic of China. When this international recognition was achieved in 1971, with China’s UN seat being reallocated to Beijing, Mao became less neglectful of Japan’s wartime activities but still suppressed any public recollection that might have endangered China’s trade with Japan and the thawing of tensions with the U.S. after Nixon’s 1972 visit to Beijing. The Nanjing Massacre of 1937 has been perhaps the most traumatic scar of the war for China, memorialised today by public monuments, textbooks and demonstrations that continue to cause diplomatic rifts between Tokyo and Beijing. I remember covering the Nanjing Massacre in my A-level class in 2019 and taking away from that sobering 10am lesson the sense that this single event in modern Chinese history must have tortured the minds of every person who held it in living memory. I was later to learn, while looking at CCP propaganda posters from the Chinese Civil War, that certain aspects of the Nanjing Massacre had been remembered more than others in Maoist China. These posters depicted a cowering detachment of Chiang’s troops quickly swapping their uniforms for civilian clothing in order to escape the advancing Japanese military. Under Mao, the Nanjing Massacre was a reference to the incompetence of Chiang’s nationalists rather than to Japan’s wartime atrocities. It was not until the 1980s that public memorials of the Nanjing Massacre even began to be constructed in Nanjing itself. In 1985, the Nanjing Massacre Memorial Hall was built, inscribed with the names of the victims; the hall remains located near what is referred to as the “pit of ten thousand corpses”. It is certain, for both mainland China and Taiwan, that the memory of the Second Sino-Japanese War has had a far greater influence on international relations than on Chinese culture itself. Memory by nature is temporary, and what is remembered and forgotten is so often swayed by current events and political regimes while not having enough time to become fully embedded in the cultural psyche. Therefore, in studying historical memory we are always left with the unsatisfactory conclusion that it is always changing.

ISAAC FEAVER



ASIAN AND OCEANIC

EXPLORING WHY FORENSIC FINGERPRINTING DEVELOPED IN COLONIAL INDIA, AND ITS SUBSEQUENT TRANSFER TO VICTORIAN BRITAIN Modern detective novels and television dramas have captured public imagination for over a century. Forensic fingerprinting features in nearly every single one. Whilst the practice is one many are familiar with, few know of its modern history of development in colonial India, and the story of how it reached Victorian Britain to develop further into the technique widely used today across the globe. It is the result of an exchange of forensic knowledge born in two hugely diverse national and social contexts. The late nineteenth century was a period of significant forensic development. It was a period of rapidly increasing urban anonymity, making forensic investigations challenging in many respects. Centralised scientific management and formalised anthropometry were in vogue across Europe, with many countries investing in forensic laboratories and improved techniques for criminal identification. For this reason, India seems a surprising seat for a vital modern forensic development. Professor of History Chandak Sengoopta (2003) observes the absence of any mention of forensic laboratories in Indian literature, suggesting major technoscientific inequalities existed between Britain and India in this period. However, these inequalities arguably formed the foundation for British implementation of formalised forensic fingerprinting in India and its subsequent success. I would first like to establish the national context in which forensic fingerprinting came into existence. Whilst in Britain criminals were a small subclass in society, having actively broken a law, in India the British Raj designated multiple, often peripatetic, groups as ‘criminal tribes’ (Sengoopta, 2003). The result of these differing classifications? A criminal class far exceeding the size of that which existed in Britain, and one the Raj sought to use to mobilise India’s immense economic potential after the country entered the global market in the 1840s. This motivation is important to recognise, as it influenced the methods used by policing authorities to trace and identify ‘criminals’ in this period. British authorities refused to implement body chipping or amputation as a means of punishment to control ‘criminal’ classes in India, as bodily mutilation might impede an individual’s ability to work – and therefore harm the Empire’s economy. Of further importance were India’s low literacy levels and inadequate investigative infrastructure. These rendered the anthropometric methods widely used in Europe in this period inefficient in this context. The solution, fingerprinting, was a method of forensics an illiterate local police constable could use without significant challenges. Furthermore, the use of hand-printing as a form of signature in colonial India had shown prior success. Thus fingerprinting was a natural progression from this practice into the forensic field. Whilst the use of ‘marks’ for identification has a ‘murky’ prehistory in Asia (Cole, 2004), forensic fingerprinting’s modern history began and developed in colonial India.

In later years, India too began to adopt other European scientific developments for forensics, for example wireless communication. From this we can observe an exchange of forensic and scientific knowledge, bringing modern wireless communication from Britain to India, and ‘marks’ and formalised fingerprinting from India to Britain. However, despite the initial success of and preference for forensic fingerprinting in India, the British government and public did not respond favourably towards fingerprinting for formal identification when it was first suggested by Magistrate William Herschel in the 1850s. Critics implied that mark-making practices were only suitable for more ‘primitive’ races rather than the British populace (Sengoopta, 2003), and the public was concerned about the use of the technique for increased government surveillance and the classification of non-criminal classes. The association between criminals, ‘primitive’ races, and mark-making was exceedingly pervasive in British society and a central reason for the initial refusal of the practice. Consequently, Herschel brought and successfully implemented the practice in India, where it developed into the policing technique widely used today, crucially aided by Azizul Haque, Hem Chandra Bose, and Sir Edward Henry of the Kolkata Fingerprint Bureau. From this, Henry helped reintroduce forensic fingerprinting to the British government and public, finding a greater degree of success. The practice was investigated and overseen by the Belper Committee, chaired by Lord Belper and established by the Secretary of State for the Home Department. The origin story of forensic fingerprinting teaches us about Britain and India’s relationship as coloniser and colonised. Singha (2000) suggests fingerprinting was a reaction to Britain’s belief in its supposed ‘superiority’ and its resulting racism. This is reinforced by Herschel’s perception of Indian cultures as ones of ‘deceit’ in which ‘natives’ were unable to distinguish truth from falsehoods, a driving force behind his decision to bring forensic fingerprinting to India after its initial rejection in Britain. The hegemonic power the British government held over its colonial Indian subjects, and its eagerness to control this large, diverse populace, stood in stark contrast to its treatment of British subjects. The trajectory from colonial subject to criminal is indicative of this double standard. It reflects the association between fingerprinting and criminality - how the use of this practice was both indicative of the British perception that large swathes of the Indian population were deceitful criminals, and a reflection of the transfer of this technique from wider Indian colonial subjects to British criminals at the turn of the 20th century.


HANNAH TEEGER


ASIAN AND OCEANIC

FROM THE KAMA SUTRA TO NOW: THE IMPACT OF COLONIAL RULE ON SOUTH ASIAN QUEER CULTURE Despite only having overturned Section 377 of the Indian Penal Code, which criminalised homosexuality, in September 2018, India has a long queer history, which the BJP (the incumbent Hindu nationalist party) completely disregard. Both ancient Indian culture and mythological texts directly refute the attitude that “…traditionally, India’s society does not recognise [homosexual] relations”, as asserted by RSS (a Hindu nationalist volunteer organisation) leader Arun Kumar, following the Supreme Court’s landmark judgement against Section 377. Within both of what became known as Hinduism and Islam, the division between sacred and sexual was extremely fluid. There are various devotional traditions in which male disciples would effeminise themselves in order to worship gods such as Krishna, Shiva and Vishnu. Similarly, in the court of Awadh, the Nawabs would dress as women on the day of their pirs (similar to saints). This was something that clearly shocked the British, who were horrified that the rulers of Awadh were men who would dress as women. The British similarly disapproved of the thriving traditions of hijras and aravanis, transgender women who for centuries have organised themselves into formal communities, and still today often live in hijra-communities led by a guru. Kama, roughly translating to desire, is one of the four Purusartha (the vital aims of human life), within Hinduism, along with Dharma, Artha, and Moksha. The 12th century Khajuraho temples in central India are decorated with an abundance of sculptures, many of which depict erotic scenes. Of these, several show homosexual relations and sexual fluidity both between men and women, as well as depicting heterosexual interactions. Further, the chapter Purushayita in the 2nd century text on love and eroticism, the Kama Sutra, recognises lesbians, or swarinis. Gay men are also frequently referred to in the Kama Sutra as klibas, in the chapter Auparishtaka. The sexual aspect of the Kama Sutra is only one portion, other aspects discuss how to attract partners and keep them, as well as how and when to commit adultery. The text was likely written by the philosopher Vatsyayana around the 2nd century, and references kama as one of the aforementioned aims of life, thus how sexual activity can also be a spiritual act (something Victorian colonists were not on board with). The Kama Sutra recognises homosexual marriage, classified under the gandharva or ‘celestial’ variety – “a union of love and cohabitation, without the need for parental approval”. This isn’t to say that queerness was the ‘mainstream’, but as Madhavi Menon has described, for thousands of years Indians lived with “indifference to difference … difference was not treated as unworthy of existence”. Clearly, queerness existed in both historic India and Hinduism, whether the RSS and Bharatiya Janata Party (BJP) admit it or not. Many Hindu epics and ancient texts themselves are dotted with queer characters. Varuna and Mitra, two male deities, are depicted as a homosexual couple in the Rig Veda, and are always shown side by side, representing the two half-moons – according to the Bhagavata Purana, the couple had children. The Krittivasi Ramayan, a 15th century Bengali telling of the Ramayana, states that King Bhagirathi was born of two queens, the wives of his late ‘father’, who made love to each other. Further, there is mention of lesbian Rakshasas

in the Ramayana, the story that the BJP MP Mahesh Sharma was referring to when he stated: “I think it is a historical document. People who think it is fiction are absolutely wrong” – a position many Hindu nationalists take. The clear acknowledgment of same-sex relationships in ancient Indian tradition and literature, coupled with the alleged historicity of these texts, evidently contradicts the view that queerness has never been recognised or accepted in Indian society. What happened then? Why do the self-appointed guardians of Indian (explicitly Hindu) culture completely disregard India’s queer history? The custodians of so-called Indian tradition have in fact been favouring British law. As previously stated, the British were shocked to see people dressed in ways that didn’t match their alleged ‘predetermined biological code’. The Victorian colonial mindset found ‘morbid passion between members of the same sex’ to be ‘unnatural’ and imposed severe laws punishing it. Colonisers tended to have no issue with imposing their ‘superior morality’ on the ‘inferiors’ they ruled; indeed, they believed it to be their duty. The Criminal Tribes Act (1871) criminalised hijras, while the Hindu and Muslim personal laws essentially told people what they could or couldn’t do sexually. Section 377 of the IPC, with penalties as high as a life sentence, was drafted by Lord Macaulay in 1839 and enacted by the British Government in 1861. The laws imposed by the British acted in concert with the caste system, which the British were able to exploit, and through which their puritanical beliefs became ascendant within a hierarchical structure. This Victorian moral code has persevered into modern India, with Hindu nationalists claiming that defending these conservative attitudes is preserving traditional Indian values. However, prejudice against queerness is an ugly colonial relic, which has been internalised through a system of education in which Indians were filled with revulsion for their own traditions. Indian author Gurcharan Das has stated, “tragically, the colonial brainwashing was so deep that this un-Indian imposition [Section 377] remained on our statute books for 71 years after the colonisers had left”. India has a long queer history which was undeniably and severely impacted by colonialism. Any claims that homosexuality has never been part of Indian tradition are, quite simply, factually inaccurate.

NICOLE BROWN



NON-WESTERN AMERICAN

THE MEXICAN REVOLUTION THROUGH PICTURES The Mexican Revolution was a hugely significant moment in modern Mexican history. By 1911, the 34-year dictatorial rule of the Porfirian regime had come to an end, overthrown at the Battle of Ciudad Juárez by a group of revolutionaries and thrusting the country into a decade of social unrest, uprising and uncertainty. Despite the significance of this defining moment in Mexican history, it is often hard to reduce the revolution to a singular driver. Political leaders like Francisco Madero represent bourgeois sentiment, yet populist figures like Francisco ‘Pancho’ Villa and Emiliano Zapata played a significant role in mobilising the agrarian classes. What can we learn from the photography of the revolution? Despite its eventual downfall, the Porfirian regime can be split into two contrasting eras: the years of economic modernization and infrastructure development, between 1876 and 1905; and the years of social unrest, between 1905 and 1910, leading up to the revolution. The first era can be considered a relative success for the Porfiriato. Porfirio Díaz moved Mexico away from the centrifugal tendencies of the Maximilian Dynasty towards an economically and politically centralized state. With that came a development of infrastructure, extending the Mexican railways from 400 miles to over 19,000 (picture 1), as well as an economic integration into the USA’s market.

Díaz’s biggest success, however, was arguably his biggest downfall. With modernization came political understanding, and with political understanding came democratic values and Porfirian opposition. Francisco I. Madero, leader of the Anti-Reelectionist Party, was one figure to emerge from this. Madero challenged Díaz for the presidency in 1910 before being falsely imprisoned so that Díaz could claim a near-unanimous victory. In response, Madero called for an insurrection against Díaz, resulting in the defeat of the Porfirian regime and the establishment of Madero as President of Mexico. But who was Francisco Madero?

Pictured (picture 2) delivering a speech in Ciudad Juárez following the defeat of the Porfirian regime, Madero is captured at a moment that offers a valuable insight into his political support. Madero initially had support from across the political spectrum. He had the support of the bourgeoisie due to his hacienda background; the middle classes viewed his democratic values as an intention to strengthen the Mexican political process; and the lower classes viewed his call to return lands to villages and Indian communities, and his criticism of the wage inequality in the Cananean mines, as a chance to make the economic system fairer. At face value, the photograph promotes this narrative. Madero was initially viewed as a populist figure, one who could unite Mexico following an era of social disunity, and his delivering a speech to a crowd drawn from all spectrums of revolutionary sentiment epitomises that. Despite Madero’s initial populist support, it is hard to reduce his political actions once in power to populism. His intention of uniting the nation ultimately failed because his idea of reform was largely bourgeois. Madero’s immediate creation of a new political party, the Constitutionalist Progressive Party, alienated many of his supporters, and his attempt to demobilize revolutionary forces resulted in his term starting with opposition. Madero was initially so successful due to the populist figures Pancho Villa and Emiliano Zapata’s ability to mobilise the agrarian classes, yet his failure to implement any lower-class demands, like the prevalent call for land reform, inevitably caused conflict. How significant were Pancho Villa and Emiliano Zapata, the figureheads of the agrarian classes and arguably the true populist leaders, in shaping the Revolution? Pictured (picture 3) in Madero’s camp, Villa gave his support to Madero in 1911 to help overthrow the Porfirian regime. The picture can tell us a lot. As previously covered, it shows that Madero initially had populist support. But it also goes a lot further.

Despite Villa and Zapata’s agrarian roots, they both held significant influence in shaping political sentiment in the north, Villa’s territory, and the south, Zapata’s territory. Villa was recruited by Madero because of the growing influence of the Mexican Liberal Party, who anointed themselves with nationalist support, and the view that Madero did not match the views of revolutionary industrialists. Though Villa gave Madero many key victories in the months preceding Madero’s rise to power, it was always clear that Madero’s failure to implement agrarian demands would witness a withdrawal of support from Villa and Zapata. Madero’s reign in power lasted only two years before he was ousted by Victoriano Huerta, yet Villa and Zapata remained influential figures throughout the revolutionary decade.

Given Villa and Zapata’s considerable influence, is it fair to reduce the Mexican Revolution to one of populism? Madero’s support garnered much of its success from the agraristas, and his swift downfall once Villa and Zapata led rebellions against the President further suggests populism underpinned the Revolution. But what was popular sentiment during the Revolutionary years? Land reform dominated much of the opposition to the Díaz dictatorship, and the wage inequality in the gold mines of the north of Mexico perpetuated concerns of a growing gap between the rich and the poor. But neither was reformed until after the Revolution. Fears over the future of democracy and the influence of foreign investment played a bigger role in mobilising the classes with the most power to enact change – the classes once associated with support for the Porfiriato and conservatism.

LIAM YORK

ASIAN AND OCEANIC

ON THE RISE OF INTOLERANCE TOWARDS MALE HOMOSEXUALITY IN MEIJI JAPAN The Meiji period (1868-1912) was a defining moment in Japanese history for reforming the nation’s society and economy to compete with those of Western countries. The Meiji Restoration was a movement spearheaded by government officials, which prioritised the “modernisation” of Japan by abolishing the previous feudal Tokugawa shogunate and changing existing social, economic and political structures. However, it also served to suppress and erase cultural themes such as male homosexuality that were popular throughout traditional history, especially during the height of the preceding Edo period. In striving for modernity, such themes were often deemed “unspeakable” and did not coincide with the clean and respectable image that officials were aiming to create. As such, male homosexuality became largely invisible and any public representation of it was highly scrutinised or subsequently criminalised. Tolerance towards male homosexuality was directly influenced on a political level by the state during the Meiji period. Economic reform included the adoption of the 1868 Civil Code, which complemented the growing demand for family law. It allowed a clear line of succession to be established for inheriting private property and companies. Similar to capitalist models in the United States and

other Western countries, a conjugal heteronormative family structure was considered the most suitable for this economic climate. The family unit became linked to the national identity of Japan, and as such, sexuality became a strictly defined parameter. This highlighted a decreased tolerance towards male homosexuality in Japan as it contrasted with what the Meiji government defined as “modern” or beneficial for the nation’s reform. Furthermore, the government’s regulation of popular print media and entertainment actively censored depictions of male homosexuality. For example, more dynamic and erotic representations of male sexuality often thrived in genres of kabuki theatre, where young men acted in female roles, following the shogunate ban on female kabuki actresses from 1629. However, many of these pre-Meiji texts and plays were either banned or re-written to conform to the constraints

of fūzoku (public morals) which deemed what was publicly acceptable within state discourse. This was enforced by the 1869 Publication Ordinance (Shuppan jōrei) which allowed works to be vetted for inappropriate content and censored before being published. Similarly, pre-existing artwork that featured undesirable themes such as homosexuality or erotica was also socially stigmatised. During the Meiji period, the transfer or public display of such illustrations was deemed a violation of the 1880 Criminal Law and merited legal punishment. Therefore, although male homosexuality still persisted throughout the Meiji period, it could only exist in more private spaces outside of public display or consumption; a stark contrast to previous eras in which such sexuality was normalised and culturally visible.

ISABEL FOUNTAIN



MODERN WESTERN

HOW HAS THE MYTHOLOGY OF THE WILD WEST IMPACTED U.S. CULTURE AND IDENTITY When mentioned, the Wild West conjures up a series of universally recognised images: the white American lone ranger on horseback, face etched with lines of dirt, sunburn, and an ill-temper; harsh, barren terracotta-coloured land, Monument Valley looming in the distance; lassos, gunfights, saloon bars, maybe even a poor caricature of a Native American. The historical myth constitutes one of the United States’ few foundational narratives, and its story of single heroes battling foes and the harsh elements has vividly endured in the nation’s imagination. Historically speaking, the story of the Wild West began with the 17th-century expansion from the colonies’ eastern parcel of land, across the Midwest, all the way to Arizona and New Mexico in 1912, as the new-born US government encouraged its citizenry to take advantage of its violent land-grabbing. The admission of states into the union took place through diplomatic arrangement (the purchase of Louisiana from France) and war (the taking of Texas from Mexico), but above all, and in every case, through the stealing of land from native populations. Across these few centuries of upheaval, large swaths of land became a playground for drifters, farmers, and state employees in a westward movement that saw lone men and families traverse thousands of miles to establish themselves on “free land”. When gold was discovered in California in 1848, another army of hopefuls was propelled across the plains. What sets the Wild West apart from other foundational histories are the narratives that so blatantly pit the white man against marginalised communities in American society, in a bid to cement the supremacy and credibility of the former over the latter. No story better encompasses this narrative than Cowboys versus Indians, a cruel caricature of the real-life indigenous communities who fought to defend their land from encroaching invaders. The 1956 film The Searchers, one of the most renowned Westerns, follows the quest of Ethan Edwards to avenge the brutal murder of family members by the Comanche Native American tribe, and retrieve his kidnapped nieces Debbie and Lucy. The protagonist stalks across the plains, murdering and scalping Indians as he goes, hell-bent on exacting revenge for the implied rape of Lucy. The film’s tagline, ‘He had to find her, he had to find her…’, summarises well the age-old myth of the white man saving the white woman from a rape at

the hands of the Native American, the Mexican Immigrant or the African American. It is no coincidence that the rise of the Western film in the early to mid-20th century coincided with a new wave of racist segregation and anti-immigrant rhetoric. The romanticisation of the rugged white male saviour was a narrative seized upon by white supremacist organisations such as the Ku Klux Klan, who were violently revived in the 1920s. An equally prevalent facet of the cowboy is his status as the lone wolf. Conquering the bountiful landscape of the Wild West, unbeholden to anyone and permitted to chase money or glory, this character depicts the ideal modern American man, whose rugged individualism in every aspect of his political and economic life makes him the perfect capitalist citizen. Eric Hobsbawm argues that the cowboy is a “myth of an ultra-individualist society”, the textbook vehicle for the American Dream. He perfectly depicts the paradox of American capitalist anarchism: unconstrained by the state and representative of both types of the American man. Individualist anarchism permits the rich man to operate free from the powers of the law and the state, driven by profit, and the poor man to possess, above all other rights, the right to improve his standing, as a loner unburdened by social responsibility. It makes sense, therefore, that the tradition of the cowboy was revived under the ultra-free market Reagan administration in a whole host of advertisement schemes selling products from cigarettes to cologne, because “the west has always been a place where a man went to prove himself.” Reagan himself enjoyed playing into the image of the cowboy, but he wasn’t the first President to do so: Teddy Roosevelt is considered the pioneer. Roosevelt himself was a member of the elite club coined the “Frontier Club” by Christine Bold, which pushed the Wild West legend in literature and art to romanticise and distort its image, with the end goal being to shift public opinion in favour of legalised hunting. Simultaneously, the Frontier Club used its status and money to suppress outspoken working class, black, Native American and immigrant men. The Wild West is not only, or even primarily, concerned with its history. The term encompasses its geography, culture and folklore handed down through generations, diluted until it became an easily digestible fable. A legend upon which hundreds of novels, movies, advertisements, and even political speeches have been built rarely acknowledges the less exciting truth: that between 1870 and 1885, there were only 45 deaths by gunshot in all the major cattle towns combined, that the cowboy demographic was comprised not only of white people, but of Mexicans, of Chinese immigrants, of African Americans, or that most Californian gold-searchers didn’t discover so much as a nugget. But this mythic landscape and its characters benefit, above all, the most elite in American society, and leaning back on the stereotypes of the west being America’s mythical home has often, successfully, painted a glorious picture of heroes, easily identifiable villains, and the promise that if you are willing to fight for personal greatness above all else, you’ll achieve your wildest dreams.


ZOE RABBANI


MODERN WESTERN

EPONYM ETHICS: NAMING INHUMANE MEDICINE Nazi Germany is one of the most studied, most popularly represented, and most morally contemptible regimes of modern times; to publicly adopt any aspect of Nazi or fascist language in contemporary society would quickly draw widespread revulsion and reprehension. Yet the legacies of Nazi experimentation and medicinal breakthroughs found in the nomenclature of science and medicine still produce uncomfortable spectres of inhuman barbarism in a field based upon ethics and human good. Set against a period in which COVID-19 has renewed public attention to the centrality of medicine and science, and in a UK where the toppling of slave trader Edward Colston’s statue in Bristol in June 2020 still generates political discord, becoming conscious of and confronting these legacies has become increasingly salient. Whilst hardly renowned names of the Nazi state, two medicinal eponyms allow a perspective into the utility and consequences of names as commemoration: Asperger’s Syndrome, a type of autism increasingly defined as Autism Spectrum Disorder (ASD); and Wegener’s Granulomatosis, a type of the rare but chronic autoimmune disease vasculitis, now named Granulomatosis with Polyangiitis (GPA). However, despite these alternative names, the eponyms often coexist with them or remain in use despite their fascist origins. Whilst their survival was initially explained by their novelty, such eponyms prove extremely resilient: whether through the ambiguity or only recent discovery of biographical information, or through proponents of medical eponyms (much like statues and building names) pointing to them as preserving the past, and thus language, as it happened. Friedrich Wegener, for example, despite his membership of the Sturmabteilung (SA) paramilitary group, offers no definitive proof of participation in unethical experimentation - only his proximity to the Lodz Ghetto in Poland, and the implication that he would have known of human research on Holocaust victims there. Meanwhile, Hans Asperger’s name was only given to the syndrome in the 1980s, and it took until 2018 for evidence to emerge that he sent disabled children to be experimented on and ultimately euthanised under the Austrian arm of the ‘Aktion T4’ programme. Neither of these men was ever brought to trial for their actions, and both lived comfortable post-war lives. Such biographies of terror and inhumanity - vestiges of some of the most morally base and contemptible acts to have been carried out in human history, never mind the last century - thus remain present, through eponyms, in the landscape of medicinal language. Especially within the linguistic technicalities of science, eponymic terminology provides a mnemonic for layperson and scientist alike not only to memorise these diseases but to distinguish them. Remembering ‘Wegener’s’ as a decontextualised term is better suited to memory and conversation - especially for an already rare disease - than ‘granulomatosis with polyangiitis’. Meanwhile, the specificity of the Asperger’s eponym separates its signs and diagnostics from the wider range of behavioural and neurological conditions of general ASD, which quotidian discussion may be averse to distinguishing. Sometimes occluding history in language can simply be efficient. Further to these issues is the placement of these histories within medicine, a scientific discipline which prides itself on ethics and on studies to save and improve human life. Understanding the inhuman actions of these doctors can expose uncomfortable contradictions in a discipline predicated on its humanity - particularly in the epistemology of scientific breakthroughs achieved via inherently abhorrent means, facilitating the ends of contemporary medicinal diagnostics or treatment predicated on improved patient quality of life. Continuing to name diseases after these individuals, as commemoration or reward, both recognises their immoral means of achieving discovery and presents a dissonance between the proposed morality of medicine in the present and past doctors who explicitly disregarded those considerations. It can also threaten to obfuscate the timeline of scientific knowledge and the collective efforts to realise such developments, reducing medicinal advancement to the acclaim of one person and diminishing the historical precedent and palimpsest of knowledge that many people formulated in their own contributions. Lastly, and often neglected in considerations of eponymic diseases, is how the patients of these diseases have to confront the history and names of their illness. Especially upon diagnosis of ASD or GPA, utilising the eponym or even having to come across popular knowledge of it (especially the case for Asperger’s) can be a conflicting experience. Aside from having to face the realities of their malady, patients may find that, from diagnosis onwards, the eponym requires an engagement with, association with, or even acceptance of that name - and thus of past actions over which they have no agency. Whilst arguments may be raised about the distinction between naming diseases and actual commemoration, or about the use of such terminology mainly by doctors or specialists being trivial, patients often have no choice but to traverse or re-engage with these awful histories in communicating their medical experiences. Continued eponym use threatens to undermine this tact towards present patients, whilst perpetuating the remembrance and commemoration of evil doctors instead of the horrific experiences of their past victims. Unlike Colston, these eponyms cannot be pulled down overnight. However, language and names traverse similar issues to scandals surrounding memorial commemoration: of remembrance good and bad, of cultural and social significance, and of specious accusations of “rewriting history”. Yet contesting and confronting these names and people is not revisionist, overly sensitive, sanitised, or even ahistorical; it becomes part of the historical process of memory, empathy, and the diachronic progression of language and ethical human history. Avoiding this change and confrontation with uncomfortable histories threatens to mould both language and history into monoliths. Rejecting inhumane eponyms to respect victims becomes history in and of itself.

PHILIP BRADY



MODERN WESTERN

THE EVOLUTION OF DIALECTS WITHIN THE ENGLISH LANGUAGE After the end of Roman rule in Britain in the fifth century AD, three tribes settled in England: the Angles, the Saxons, and the Jutes. These three tribes came from Germany and Denmark, crossing the North Sea to settle in England. The Angles and the Saxons settled all along the northern coast of England whilst the Jutes arrived in the south and west. These tribes brought with them a miscellany of languages and dialects spanning West Germanic (Dutch), North Germanic (Swedish, Norwegian), and East Germanic (Burgundian). Thus, from the fifth century, England, or ‘Angle-land’, had an eclectic range of dialects emerging. Eventually, the Angles and the Saxons united to become the Anglo-Saxons, with the separate dialects combining into Old English. This was the earliest recorded form of the English language. As the Anglo-Saxons predominantly resided in the north-east, traces of Old English are still recognisable today in the modern Geordie dialect, making it the oldest dialect in England. Three centuries later, another tribe crossed the North Sea. The Vikings were warriors from Scandinavia, bringing another form of Germanic speech known as Old Norse. Settling mainly in the north of England, they began to dominate areas previously held by the Anglo-Saxons, establishing their capital at York, or “Jorvik”. Thus, Yorkshire became Viking territory, and Old English blended with Old Norse, creating an amalgamation of the two. Therefore, the Yorkshire dialect dates back to the Vikings’ Old Norse, making it the second oldest dialect in England. The Vikings spread into Lancashire and eventually across the Irish Sea to Ireland. Thus, the Lancashire and Yorkshire dialects have similarities, but the main difference is that the Lancashire dialect is rhotic (where you can hear the “r” after a vowel sound) whereas Yorkshire is non-rhotic. This links the Lancashire dialect to Ireland, as most major Irish dialects are rhotic. Given the proximity of Ireland and Lancashire across the Irish Sea, this is a logical result. When William the Conqueror led the Norman Conquest in 1066, another language was introduced: French. The Normans established their stronghold in the south-east of England, coming across the English Channel from France. As they introduced law schools and universities, French became the spoken language of the higher social classes and the more desirable form of speech. The introduction of the feudal system further correlated language with social class; those who were educated learned French and Latin and were therefore of higher status. However, to receive this education you had to be in the south, not the north. Simultaneously, the East Midlands dialect began to develop around London, influenced by the French taught in schools. Chaucer (heralded as the “Father of English Literature”) introduced words that shaped this dialect. A significant vowel shift happened between the 1400s and the 1700s, in which pronunciations became shorter: ‘swich’ became ‘such’, ‘meese’ became ‘mice’; words that are familiar to 21st-century readers. During the 15th century, the Chancery English standard was established in the south-east of England in all major institutions, including the church, government, and education. Chaucer’s variety of English was deemed acceptable alongside the French and Latin taught in schools. However, as the north did not have access to schools, its dialects remained unchanged for longer.

The English language changed considerably between the 15th and 18th centuries but eventually found relative stability. It was during the Industrial Revolution in the 19th century that another significant shift in dialect would happen. The growth of factories and technology meant mass migration into large cities in search of employment; with this came a coalescence of minor regional dialects into more prominent ones. Birmingham is a good example of this transformation, with the city’s metal factories being a major source of employment, drawing workers from the local area and from across the country. Thus, the West Midlands accent formed as northern and southern dialects converged. The Industrial Revolution, coupled with the Great Potato Famine of 1845, also saw the mass migration of Irish people to England in search of better lives. The closest port across from Ireland was Liverpool which, during this time, saw 1.5 million Irish men, women, and children move to the city. Before this, most people in Liverpool spoke with a Lancashire dialect. However, with the introduction of the Irish accent, the Lancashire dialect was modified into Scouse, making Scouse a relatively young dialect. Received Pronunciation (RP) also dates back to the 19th century, used in boarding schools such as Eton and Harrow. Their speech pattern evolved from the East Midlands dialect and became associated with upper-class establishments. Later, in 1922, the BBC adopted it, believing it to be the most understandable accent. However, this alienated regional accents, as RP belonged to an elitist minority group. The Second World War would radically alter accents again, this time with a global influence from the West Indies, America, India and many other countries that aided Britain in the war. Thus, dialects continued to evolve and shift. In the 21st century, there is a vast array, with Estuary English, General Northern English, and even Mancunian, which is becoming more distinct from the Lancashire dialect. Thus, dialects in England are mutable, constantly influenced by other places and linked to the political, social and economic circumstances of the time. The way we hear dialects is changing again, with more localised and regional dialects being celebrated rather than ironed out. Therefore, dialects are crucial in understanding our world and our origins to help shape our future.


AMELIA HOPE


MODERN WESTERN

WHAT LED TO THE 19TH CENTURY GAELIC REVIVAL?

The Gaelic revival refers to the renewal of interest in the Irish language and Irish Gaelic culture. While this broad movement emerged as early as the 1840s, it rapidly gained traction in the late nineteenth century. A variety of organisations espoused this revival, for example by promoting Gaelic literature or traditional sports. The Gaelic League, in many ways the vanguard of the revival, was formed in 1893, arising from an address given by its first president, Douglas Hyde, entitled ‘The Necessity for De-Anglicising Ireland’. This gives insight into perhaps the primary motivation behind the Gaelic revival: the assertion of Irish exclusiveness from English imperial influence, in the face of the decline of the Gaelic language. Before the Famine, it had been spoken by half of the population, but rapid social change had seen its usage plummet. The revival is of particular interest given the context of the rising tide of Irish nationalism in the years preceding the establishment of the Irish Free State. Although the Gaelic League initially claimed to be apolitical, there is an inherent link between Irish nationalism and an organisation aimed at reviving Gaelic tradition at a time of pervasive British imperial oppression. This was evident in many of the League’s members being involved with nationalist organisations - it was links formed through the League that laid the foundation for groups like the Irish Volunteers. The Gaelic League arose at the same time as the birth of Sinn Féin and the growth of the Irish Republican Brotherhood, and most of the signatories of the 1916 Proclamation were League members. Thus, a post-colonial approach is key to the study of the Gaelic revival, and raises many questions - firstly, whether the movement was synonymous with cultural nationalism and the Home Rule movement, or a separate celebration of culture. Douglas Hyde himself rejected his contemporaries’ attempts to pit Ireland against England, claiming neutrality. For him, the Irish had wilfully “broken the continuity of Irish life” by losing their language, and his patriotism was not reactionary nationalism but concerned with preserving what he saw as a distinctly Irish culture. For others, the Gaelic revival movement symbolised a conscious effort to consolidate indigenous culture in the face of colonial domination, and was entwined with nationalist insurgency. The growth of a national Irish consciousness could not have been lost on the elites at the forefront of the revival movement, many of whom were embroiled in separatist discourse. It seems almost impossible to separate the Gaelic revival entirely from its historical context of anti-colonial dissent, in spite of Hyde’s own views.

Secondly, questions of nationhood are pertinent. If the Gaelic revival is viewed as a form of cultural nationalism at a time of nationalist insurrection, the work of revisionist historians becomes relevant. Anderson’s modernist idea of the nation as a “cultural artefact” envisions nationalism as a modern, imagined phenomenon, which calls the causes of the Gaelic revival into question: is the culture it revives in some ways imagined? Revisionist historians have criticised Irish nationalists for presenting themselves as descendants of the ancient Gaelic nation to legitimise their struggle against the English. Yet revisionists have themselves been criticised for a failure to acknowledge that nationalism cannot emerge from nowhere, without cultural and symbolic elements which predate the modern nation. Surely then, the Irish language is an example of this kind of ancient, unifying element. The revival could not purely reflect cynical nationalist interests; idealists like Douglas Hyde demonstrated a genuine enthusiasm for the revival of traditional Irish language and culture in the face of Anglicisation. Elements of shared culture must have existed, to have been harnessed by the movement. Yet, perhaps cultural nationalism was in some ways a device used for the purpose of developing a strong national identity, as “different” from the English colonists. Shakir Mustafa argues that this cultural decolonisation was a considerable achievement in that it helped form a spiritual centre for a national unity, mobilising resistance, but also provided a dominated consciousness with creative opportunities to move beyond the feeling of inferiority created by the material supremacy of the English. Further controversy arises in this notion of the Gaelic revival as an aspect of cultural decolonisation. Some have viewed it as an elitist project, and although it was a small group of intellectuals who accelerated this process of a move towards native culture, they did not work in a vacuum. Despite the revisionist depiction of this elite structure unrelated to a grassroots base, evidence suggests there was significant popular support for the Gaelic League’s brand of cultural decolonisation. Seamus Deane has argued that the Gaelic Revival reflected the vulnerable political position of the Irish as colonised, describing it as “a strategic retreat from political to cultural supremacy”. Building or even inventing cultural tradition perhaps became important as more was lost materially to colonisation. Other historians have celebrated the Revival for empowering Ireland as a site of cultural creativity. What engendered the Gaelic revival is clear: it was an attempt to revive the declining aspects of Irish tradition, language and heritage. Its growth reflected a growing national interest in Ireland’s past - its language, sport and folklore. Yet asking why reveals a myriad of further issues, questions of Irish nationalism, nationhood and historiography. The picture remains complex, and questions of patriotism and cultural nationalism are extremely pertinent when studying colonial resistance across the world.

ERIN KILKER



MODERN WESTERN

SOCIAL ANXIETIES SURROUNDING THE MODERN WOMAN IN INTERWAR BRITAIN

When we think of the Flapper we think of Gatsby, champagne, cropped hair and extravagant parties. Behind this image was in fact a figure deeply symbolic of the cultural and social changes that took place in 1920s and 30s Britain. A major product of this disordered era, Flapper culture was one which transgressed notions of gender and sexuality. Public anxieties surrounding maternity, lesbianism and race all became exacerbated in relation to this new model of femininity. Consequently, by assessing the emergence of Flapper culture in interwar Britain, we can gather a more conclusive picture of the social atmosphere in Britain than one might initially think. Surprisingly, in interwar Britain the Flapper was most commonly associated with ‘boyishness’ and a rejection of femininity. After experiencing life in the public sphere whilst men were at war, women began distancing themselves from domesticity, desiring increased social freedoms. This was very much reflected in the Flapper’s clothing choices. In an attempt to subvert the ‘ideal’ body type of a maternal, curvaceous figure, the Flapper wore loose-fitting clothing in order to make her figure look as masculine as possible. Corsets were abandoned, looser garments popularised and, for many, dressing ‘like a man’ was preferred. Punch cartoons of the time indicate that this model of femininity was seen as problematic, with the Flapper frequently being depicted as incompatible with maternity.

The blurred boundary between men’s and women’s clothing was not an entirely new prospect in interwar Britain; however, it was a style the Flapper adopted. For the first time in this period, though, links were made between the ‘mannishness’ of the Flapper and lesbianism. The absence of male presence during WWI generated anxiety that women were becoming lesbians. Arguably, though, it was the Radclyffe Hall trial of 1928 that first prompted the public to associate mannish clothing with lesbianism.

This trial concerned a novel entitled The Well of Loneliness by Radclyffe Hall. Seemingly controversial for the interwar period, this story followed two female ambulance drivers during WWI who fell in love. The author of this novel was a lesbian who often went by the name of John, and was known for dressing in the ‘mannish’ way that encapsulated Flapper culture. The trial saw The Well of Loneliness’ contents being labelled ‘obscene’ for presenting lesbian relationships as natural and romantic. Press coverage of the case caused panic, insinuating that women who read the novel would end up rejecting men themselves. For some, the trial of Radclyffe Hall signified a turning point at which ‘mannish’ dress became associated with immorality. In turn, the Flapper began to be viewed as a problematic model of femininity. Aside from the supposedly problematic elements of her character, the behaviour and appearance of the Flapper were also representative of female autonomy. The Charleston became a hugely popular style of dance during this period, recalling the popular portrayal of the Flapper that we see in film and television series. Beneath this perception, though, the Charleston required women to wear loose clothing, have high energy levels and move their bodies in ways that may not have typically been viewed as ‘proper’ for women. This type of culture was the epitome of modernity, and a Flapper partaking in such activities symbolised women’s freedom and sense of adventure. More specifically, the clothing worn by the Flapper when doing dances like the Charleston was quite literally ‘freeing’, allowing her to move without restrictions – just as the modern woman desired to in post-war society.


Perhaps the most sinister response to Flapper culture was society’s tendency to target foreign men, particularly Chinese men, as the perpetrators of the drug habits which became commonly associated with the ‘modern woman’. Migrant sailors suffered severe racial prejudice in interwar Britain, partly because of a narrative that they were responsible for the high levels of unemployment, which caused definitions of ‘Britishness’ to be violently reasserted in society. Despite being tolerated before WWI, drugs like cocaine and opium soon became associated with Chinese migrants, marking them as a dangerous product of immigration. The Flapper was often found in environments involving drugs, sadly leading to the demonisation of Chinese men, who were blamed when these women overdosed. Additionally, young women who became involved with men who sold opium suffered backlash for exemplifying the dangers of interracial relationships. Drug use had long been related to middle- and upper-class men and soldiers; the British government had actually overlooked the issue because India’s economy (part of the Empire) was heavily reliant upon the opium trade with China. As soon as women crossed the boundary into environments involving drug use, though, society had a problem. Perhaps less concerned with the health of these young women, commentators presented the Flapper’s links to drugs as an example of Western superiority over ‘Oriental’ culture. By portraying these women as victims of exploitation, the Flapper’s independence and her personal choice to transgress into new social avenues were forgotten, in an attempt to reassert social norms. Ultimately, Flapper culture in interwar Britain signified much more than dressing up and flamboyant parties. Through transgressive dress and behaviour, the Flapper encapsulated the freedoms and adventures that the ‘modern woman’ of the post-1918 era longed for in a new model of femininity. Sadly, though, society’s desire to return to the ‘social peace’ of pre-war times resulted in the Flapper becoming a problematic antithesis of social order, often leading to the demonisation of both herself and those who associated with her.

KERRY MCCALL


MODERN WESTERN

DID NAZI GERMANY RELY MORE ON COERCION OR CONSENT? Hitler’s coming to power in January 1933 is steeped in ambiguity. On one hand, he was the leader of Germany’s largest political party, and was thus selected by Weimar conservatives to run the country and provide unity. On the other hand, the chancellorship was not decided by popular mandate, and the Nazis won 33.09% of the votes in November 1932, hardly a majority.

Immediately following the war, historians sought to explain Nazi Germany as a totalitarian state with a people firmly controlled from above. This was challenged in the 1960s by the notion of ‘cumulative radicalisation’, whereby Nazism was also driven by initiatives from below, though historians maintained that the working class retained the ability not to engage with Nazism’s ideological message. By the 1980s, however, historians considered Nazi racial ideology as capable of penetrating popular mindsets. Within this historiographical shift, Michael Burleigh articulated the notion of a Nazi political religion. This ideological framework essentially suggests that the ‘sacralisation of politics’ turned people towards Nazism as a political movement based on faith, abandoning tolerance and uniting a disparate society which had supposedly undergone moral collapse in the years of Weimar. This had a totalising effect, creating a Volksgemeinschaft (national community) based on notions of inclusion and exclusion.

The creation of consensus was bound up in an ideological vision of uniting the country. This was particularly appealing in the context of Germany’s significant unemployment problem at the end of the 1920s, which the ‘party bickering’ of Weimar democracy had failed to resolve. Bourgeois liberal democratic parties were losing face by the early 1930s. People began seeking more radical solutions than democracy seemed able to provide. Nazism was disruptive and powerful as a political contender because, according to Peter Fritzsche, ‘it threatened to overturn the privileged position of social elites while co-opting the gains made by the working-class movement.’ Robert Gellately has devoted himself to the argument that, from 1933, ‘consensus in favour of Hitler and increasingly also Nazism, was virtually never in doubt.’ To Gellately, consent and coercion were interlinked, though coercion was only used on selective minority groups. He emphasises that the initial targets of Nazi coercion were ‘social outsiders’; that is, communists and common criminals. Indeed, in 1933, as many as 2,000 people were detained on political grounds. For Gellately, the fact that people were consistently aware of the repression of social outsiders and remained, at least, passively supportive of the Nazi regime is evidence of consensus. Part of Nazi popularity lay in creating an inclusive Volksgemeinschaft, where the exclusion of outsiders would increase the sense of unity. This dichotomy of inclusion and exclusion further extended to the treatment of Jews. Michael Wildt emphasises that terror was key to the formation of the Volksgemeinschaft. The day after the March 1933 elections, people were singing songs about ‘Jewish blood squirting under knives,’ and a nationwide boycott of

Jewish stores was organised by the SA. Wildt articulates that the aim of the Nazis was to create social distance between Jews and non-Jews, stigmatising any solidarity and sympathy and creating a racist national unity.

Gellately’s suggestion that there was consensus behind the Nazis from 1933 would imply that the German population was behind the Holocaust. The extent to which there was consensus is questionable, to say the least. According to Geoff Eley, Michael Burleigh’s writings on ‘political religion’ are firmly influenced by his own contemporary notions of the ‘politics of decency’, revealing his liberal views in the heroic portrayal of the aristocrats of the 1944 July Plot, who had put Hitler into power in the first place. Burleigh essentially reiterates the notion of ‘mass society’, which does not account for Germany’s deep political diversities. Richard Evans fundamentally questions Gellately’s belief in a consensus. Gellately emphasises that the Nazis only targeted ‘social outsiders’, but the largest group of people imprisoned in the early camps were communists. Far from social outsiders, communists were strongly integrated into working-class communities across Germany. The Communist Party of Germany (KPD) gained 18.86% of the vote in November 1932. Gellately furthermore does not give due significance to Nazi violence against the Social Democrats (SPD), who made up 20.43% of the November 1932 vote. 3,000 leading members of the SPD were beaten up, tortured and in some cases killed in June 1933. Many were imprisoned in the early camps. A combined 13.1 million votes went to these workers’ parties, compared with the 11.7 million won by the Nazis. Violent coercion was thus used against extremely popular political parties which many, particularly working-class, Germans had voted for.

Coercion did not just involve outright terror. The Civil Service Law of 7th April 1933 coerced members of Catholic, liberal and conservative political parties to join the Nazi Party by the direct threat of losing their jobs in state employment. The civil service included not only traditional civil servants but also schoolteachers, university staff, prosecutors, policemen, social administrators, post office and public transport officials, and more. The National Socialist vision was attractive to some, particularly the middle classes and peasants, after the chaos of the 1920s. The Nazis were able to forge an exclusively inclusive movement steeped in German pride and nationalism. Nevertheless, I am inclined to agree with Richard Evans’ deconstruction of consent. Consent is based on freedom and capacity of choice. Violence was used from the regime’s inception. In this climate, ‘passive consent’ cannot be used to exemplify Nazi consensus. As Evans emphatically writes, ‘a threat of violence is not consent.’

ELLIOTT COUSINS



FILM REVIEW

HOW 90S CINEMA REVOLTED AGAINST “HIGH CULTURE” SHAKESPEARE As countless scholars have pointed out, the culture surrounding Shakespearean audiences throughout the ages has varied significantly. In Shakespeare’s day, theatre was intended for all realms of society; the upper and lower classes experienced the same masterpiece, albeit through financially segregated seating zones. Strangely, with the birth of cinema and the inevitable arrival of Shakespearean cinematic depictions, this intentional accessibility vanished - Shakespeare became a product of high culture, intended for a demographic of well-cultured thespians and critics. Perhaps due to presumptions about the capability of the uneducated population to understand Elizabethan theatre, individuals outside of these parameters were no longer expected to enjoy Shakespeare. However, 90s cinema sought to challenge this. Baz Luhrmann’s 1996 Romeo + Juliet is the earliest mainstream example. This depiction completely subverts our aesthetic expectations of a Shakespearean film, bringing an Elizabethan plot into the 20th century by infusing garish pop music, elements of drag culture, comedic, extravagant,

campy acting, and high fashion, complete with Leonardo DiCaprio’s iconic Prada Hawaiian shirt. The film is set in the 90s yet retains dialogue almost identical to the 1595 text. This was a bold step towards making Shakespeare accessible to all, whilst still maintaining some of the original’s integrity. Although slated by critics and the “Bardolatrous” alike for its alleged irreverence, this rendition captivated young audiences and opened the floodgates for an abundance of Shakespearean adaptations which rejected high-culture perceptions of Shakespeare and strayed ever further from the original texts.

10 Things I Hate About You (1999), based on The Taming of the Shrew; Never Been Kissed (1999), based on As You Like It; and She’s The Man (2006), based on Twelfth Night, represent the three highest-grossing of these adaptations. None draws attention to its Shakespearean origins; instead they are presented as standard coming-of-age romantic comedies, unequivocally removing Shakespeare from the pomposity of high culture and making it digestible and accessible


to all. Lovers of Shakespeare may find it unsettling that viewers may not even recognise they are watching the Bard; however, these films should remind lovers of literature of the enduring nature of Shakespearean themes: rest assured, each of these films was born from a filmmaker’s adoration of Shakespeare.

Shakespeare is now fully immersed in pop culture, perhaps thanks to these digestible turn-of-the-century renditions. This prevalence encourages acknowledgement of more subtle Shakespearean influences in film, such as The Lion King (1994) and My Own Private Idaho (1991). Ultimately, film as a medium is intended for mass consumption, so it is no surprise that Shakespearean cinema became more consumable for a modern audience. The experiences of Shakespeare’s characters, notably Romeo and Juliet’s naivety or Viola’s attempts to evade sexism, remain extremely relatable some 400 years later, and it is little wonder that modern viewers still connect with the Shakespearean foundations of these films. The enduring popularity of Shakespeare’s narratives is a testament to their potency, a testament we should welcome.

LUCY AGATE

THE REPEALING OF THE CONTAGIOUS DISEASES ACT

In 1864, the first Contagious Diseases Act was introduced, aiming to tackle the rapid spread of venereal disease (VD) in garrison towns and ports in Britain. However, the legislation policed only women suspected of being prostitutes, forcing them to undergo sexual examination; if their tests came back positive, they could face legal consequences such as imprisonment. The reforms of 1866 and 1869 extended the Acts’ sphere of influence, enlarging their jurisdiction and lengthening the period of incarceration. Opposition to the Acts grew: not only had the legislation been imposed quietly, suggesting that the British government was aware of its potential for controversy, but men were neither held accountable nor examined, placing the blame squarely on women. The Acts placed any woman under potential suspicion, since the police had the legal authority to detain a woman on the pretext that she could be a prostitute. This outright abuse of power against women, and the portrayal of


them as the passive carriers of VD, with no consideration of a male role in the issue, further represented the sexual double standard present in 19th-century Britain. In 1869, enraged by the sexual injustice women were facing, the mid-Victorian feminists Josephine Butler and Elizabeth Wolstenholme founded the Ladies’ National Association (LNA), which campaigned against these controversial Acts. The organisation voiced its concerns about the sexual double standard and argued, in an article published in the Daily News, that women were the victims and did not deserve the violation and humiliation caused by the medical examination. Its role in the repeal of the Acts proved indispensable, as the lobbying of women against an oppressive ruling meant that women were able to actively voice their opinions and gain agency in the battle for women’s rights. Despite this, later


suffragists feared being associated with the work of the LNA because they worried that the discussion of sex would hinder their campaign for women’s right to vote. As well as from the LNA, the controversial legislation received criticism from within Parliament, including from MPs who believed the Acts had been brought in secretly, without public knowledge, and who acknowledged that the Acts would need to apply to both men and women to be effective against venereal disease. The question of sexual morality and double standards is still prevalent in today’s society, despite immense advances in feminist thinking. Even though it took 22 years, the repeal of the Contagious Diseases Acts was a small victory for mid-Victorian feminists, but only a minor battle in the struggle against the oppressive patriarchal values of a 19th-century Britain that was reluctant to give women agency, authority, and, most significantly, the vote.

CHARLIE TIMSON


MODERN WESTERN

IN SEARCH OF ATHENS: ERNEST SIMON’S CAMPAIGN TO BOLSTER BRITAIN’S DEMOCRATIC CULTURE (1932-1939) “We are today in the midst of one of the greatest crises of civilisation. A wave of barbarism is sweeping over the world… threatening to destroy everything that is best in human society”. Given the rise of demagogic authoritarian leaders, such a statement appears to be a tragically apt description of the state of international affairs today. Furthermore, this rise has been accompanied in recent years by exponential levels of automation which, instead of freeing people from the toil of labour, has failed to eradicate poverty and made work more insecure. Whilst automation should allow us to lead “leisured lives, free to pursue the true, the good, and the beautiful”, there has actually been a “frightful failure in our civilisation” given that “poverty and distress exist in the midst of a glut”. Whilst these words may suitably illustrate contemporary society, they were actually spoken in the 1930s by an industrialist from Manchester called Ernest Simon (1879-1960) in the context of economic crisis and the spread of totalitarianism in Europe. Simon, however, was not a prophet of doom, but was actively engaged in cultivating Britain’s democratic culture. In our current epoch, marked by similar crises, we can learn much from exploring how Simon believed citizenship education could strengthen democracy and foster prosperity.

Simon was the son of German immigrants who had settled in Manchester. Together with his wife Shena (née Potter), he was a leading social reformer. Involved in politics at the national and local level, Simon dreamed of a flourishing democracy. As Lord Mayor of Manchester in 1921, Simon called on the city’s inhabitants to emulate the civic virtue of the Athenian citizens of the fifth century BCE so that they could replicate Athens’ great feats. The following decade, worried that not enough was being done to safeguard democracy, Simon co-founded the Association for Education in Citizenship with Eva Hubback in 1934. The association garnered the interest of prominent figures from across the political spectrum; a conference it hosted in 1937 saw speeches from Clement Attlee, William Beveridge, and Viscount Halifax.

Simon had come to found the association to address the shortcomings of democratic culture in Britain. In Simon’s eyes, people received no education that prepared them to become members of a democratic community. This paucity of education imperilled the future of democracy, as it left even the most well-educated individuals prone to being swayed by fascist demagogues and their pernicious propaganda machines. Moreover, it left voters apathetic and uninformed about politics and thus unable to select the competent representatives who were desperately needed to address the increasingly complex issues facing the world. Despite major advances in production, levels of terrible destitution demonstrated to Simon that current politicians were simply not up to the task of managing the economy. The economic wellbeing of millions would be greatly improved if citizens were educated and able to elect the “expert” instead of the “quack”. Simon hoped that through the efforts of the association British democratic culture could be transformed. The introduction

of civics education would lead individuals to respect the liberal democratic values of tolerance and deliberation and to recognise their role as citizens, taking an informed and engaged interest in politics. Such an educated citizenry would hold the key to overcoming the threat of fascism and the economic crisis. Instead of electing incompetent figures or despotic demagogues, citizens would elect virtuous and capable representatives who could tackle the pressing issues of unemployment and poverty.

In 1938 Simon hoped to further strengthen British democracy by searching for a democratic culture which Britons could learn from. He toured Switzerland and the Nordic countries, publicising his findings in his book The Smaller Democracies the following year. Impressed by all of the countries he visited, especially by Switzerland’s local government, which compared remarkably well with Manchester’s own, characterised as it was by self-interest and pitiful levels of participation, Simon ultimately championed Sweden’s exemplary democratic culture. Swedish citizens, like those of Denmark and Norway, were not only freedom-loving and interested in politics, but had also elected particularly able representatives; as a result, Sweden had led the way in addressing the economic crisis. Through the innovative ideas of its talented leaders, who had employed deficit spending to resolve the economic slump, Sweden had successfully tackled unemployment and thus staved off the threat of political extremism. Simon argued that such competent leadership owed much to Sweden’s educational culture. Alongside a strong respect for the social sciences by those in government, citizenship education held pride of place in Sweden, where compulsory post-elementary civics classes were combined with an impressive national literature on citizenship. Moreover, the availability of adult education had enabled many members of the Social Democrat cabinet who had come from humble backgrounds to become well-educated leaders. Awestruck by Sweden’s well-educated voters and representatives, Simon concluded that the country was heading towards the abolition of unemployment and becoming “a perfect democracy”.

In sum, whilst we should never seek to directly copy the past, given today’s eerie resemblance to the world of the 1930s, Simon’s ideas on how to fortify the bastion of popular rule in the face of economic turmoil and growing authoritarianism can guide us towards establishing a healthy and prosperous democracy. We should especially heed Simon’s warning, made on the eve of the Second World War, that education was now in a contest against disaster, a contest in which disaster was gaining on education.

JOHN AYSHFORD



MODERN WESTERN

GRAFFITI: HAS THE ART OF RESISTANCE NOW BEEN GENTRIFIED? Graffiti art has taken on a new life in recent years, with vast murals spray-painted on the sides of buildings in every major city you visit. Commissioned murals and their detailed aesthetics are now deemed a worthy presence in public spaces, referred to as “street art” rather than graffiti. This distinction is worth highlighting, as it is often the latter that is stigmatised, resulting in criminal convictions. It is no coincidence that this spontaneous, less detailed form of artistic protest, largely deriving from deprived communities, has been treated less favourably than professionally commissioned “street art”. Historically, however, this selective favour has always existed.

The Berlin Wall provides an interesting example of this. One striking image is the socialist “Fraternal Kiss” between Leonid Brezhnev and Erich Honecker, painted by Dmitri Vrubel in 1990. Vrubel had been banned from painting on the east side of the Wall, and it was only the collapse of communism in 1989 that enabled him to go ahead with the mural. This emphasises the new freedoms he possessed; the creation process was itself a critique of the German Democratic Republic (GDR). However, in 2009 Vrubel was asked to restore the painting, detracting from the original protest element. Instead, the mural was transformed into a site of memory that the state regarded as worthy of preserving. The Wall has, therefore, become commercialised: attractive to visitors to Berlin, and advertised as a place of interest on travel websites. It has increased in value, not only as an example of resistance, but also because of its contribution to German culture and economy.

Vrubel’s street art was emulated in Bristol in 2016, in a mural depicting the “Fraternal Kiss”, this time between Donald Trump and Boris Johnson, in opposition to the Brexit campaign. On the surface it displays the art of resistance, but


the reference to Vrubel’s original work suggests the aestheticisation of street art: that in order for it to be engaging, it has to emulate established forms of protest. Spontaneous graffiti writing, by contrast, is read as a signal that something is wrong in a community, creating fear of urban decline among elites and thus prompting legal intervention. This fuels stereotypes about those who write graffiti, suggesting they engage in other crimes, so that certain communities come to be imagined as a criminal class. Even if the message of resistance is powerful, the process of deciding what is and is not allowed in public spaces shows that only certain members of society are permitted to express their discontent through graffiti. This is true now, as it was in the past, making it clear that the art of resistance has always been gentrified to some extent.

SHEONA MOUNTFORD

REPLACING THE QUOTA SYSTEM WITH ‘BLIND AUDITIONING’ IN EDUCATION AND EMPLOYMENT In 2017-2018 the UK government introduced its commitment to an ‘Equality, Diversity and Inclusion’ initiative in the employment structure for the HS2 project. This commitment incorporated an employment system known as ‘Blind Auditioning’, an Affirmative Action method of screening job applicants based strictly on the candidate’s skills and qualifications: employers have no access to information about their age, gender, race or socio-economic status. More significantly, Blind Auditioning means employers and admissions teams have no access to a candidate’s name until the recruitment process is finished. This counters well-documented discrimination based on names. In 2019, Dr Valentina Di Stasio (Utrecht University, The Netherlands) undertook a study which concluded: ‘regardless of the occupation considered or the information included in the application, employers may simply read no further as soon as they see a Middle Eastern or African-sounding name.’ A Blind Auditioning system counters this problem completely


through the omission of the candidate’s name. Furthermore, it is used to reduce levels of discrimination towards under-represented groups, such as disabled or female candidates.

In turn, when HS2 used Blind Auditioning, the success rate for women applicants increased from 17% to 47%, and for BAME applicants from 14% to 50%. The improvement in gender equality is prominent: 36% of HS2’s core staff are women, compared with the infrastructure-sector average of 21%. Previous widespread Affirmative Action methods have included the Quota System. For example, in Norway the law requires a mandatory quota of 40% female representation on company boards, enforced through strict sanctions for companies that do not comply. Quota Systems have also been effective in higher education institutions in America, where some universities have reserved up to 15% of admissions for ethnic minorities. Although this method goes a long way towards tackling inequalities in the workplace and higher education, the Quota System has


been strongly criticised by opponents, who note that applicants may be evaluated on their race or gender instead of their competence. Hence, quotas have been surrounded by accusations of tokenism rather than being seen to promote a genuine increase in equality. In addition, quotas face problems of long-term success. The Chartered Institute of Personnel and Development found a direct correlation between the ending of Quota Systems and an immediate decline in the representation of minority groups in the workplace. Therefore, following movements such as last year’s Black Lives Matter campaigns, systems such as Blind Auditioning are examples of pragmatic methods to further diversity and equality within the workplace, education and wider society. It is the responsibility of the UK government to advocate for such systems to be adopted on a wider scale, to combat discrimination and the diminished employment and admission chances of minority groups.

HOLLY GARDINER


MODERN WESTERN

RECLAIMING AUSTRALIA DAY Australia Day has been celebrated as an official holiday in Australia since 1818, with its proponents proclaiming it to be a day of national unity and remembrance. Each year on January 26th, the holiday is commemorated with community festivals, concerts, and political addresses, and is seen as symbolic of national identity. However, in recent years, a growing number of Aboriginal and Torres Strait Islander activists have been campaigning for a change to Australia’s national day. Activists and their supporters argue that the 26th of January should be a day of national mourning, not celebration, and that the holiday excludes indigenous history. Despite conservative pushback, an increasing number of Australians are becoming aware of the holiday’s bloody history, generating hope that in future years, celebrations will be more inclusive of Australia’s diverse past and present.

The 26th of January marks the day, in 1788, when Admiral Arthur Phillip first landed at Sydney Cove with a fleet of eleven convict ships, beginning the British colonisation of Australia. Although British colonisers treated the land as ‘undiscovered’, and therefore free to be taken, Australia had been inhabited by Aboriginal civilisations for more than 60,000 years. What followed was a brutal and lengthy invasion process wherein indigenous people sought to defend the land which they had inhabited for millennia. During more than a century of the frontier wars or, more aptly, ‘the killing times’, countless atrocities were committed against indigenous people, forcing them to submit to colonial authority. From Admiral Phillip’s arrival until the late 1920s, massacres were a common occurrence, with colonial authorities and civilians rounding up and executing indigenous people. One such massacre occurred on Australia Day in 1838 at Waterloo Creek, when up to 300 Aboriginal people were killed by a group of mounted police. An official inquiry at the time cleared those involved of any wrongdoing. By 1934, at least 40,000 indigenous Australians had been killed by invaders, although much of the evidence surrounding this mass murder was concealed or destroyed by the colonial government.

This conflict did not mark the end of the oppression of indigenous Australians. From 1910 until the 1970s, Aboriginal and Torres Strait Islander children were forcibly removed from their homes by the government as part of the policy of assimilation. The government at the time believed that Aboriginal peoples should, for their own good, be forced to conform with white mainstream Australian culture. Children were removed from their families and adopted by white people, or placed in institutions which were rife with abuse and neglect. They were taught to reject their own languages and culture, and many had their names changed in line with European norms. The impact was devastating; children who were forcibly removed suffered from high rates of depression, anxiety, and post-traumatic stress disorder, and many cultural practices were lost. In 2008, the Australian Prime Minister formally apologised to this Stolen Generation, but campaigners have argued that this is not enough. In order to help heal the wounds inflicted by assimilation policies, Australia should acknowledge the suffering of indigenous people on a wider scale, including by changing the date of Australia Day.

Dating ‘Australia Day’ to the day on which the country was colonised by Europeans is inherently Eurocentric, discounting the thousands of years of Aboriginal history which preceded invasion. For indigenous people, this date does not mark the beginning of their nation, but the beginning of over 200 years of violence, discrimination and systemic abuse. For this reason, many within indigenous communities have long objected to the celebration of January 26th. In recent years, the protest of Australia Day has become more mainstream, with many people now referring to the date as Invasion Day and calling for a different date to be selected for national celebration. In 2020, tens of thousands of protestors gathered in every major Australian city, with placards reading “Change the Nation” and “No pride in genocide.” At the Melbourne protests, Aboriginal activist Uncle Robbie Thorpe called it a “day of mourning” for his people, and expressed hope that this year would be one of the last Invasion Days to be celebrated in Australia. Despite the growing popularity of these campaigns, a significant number of Australians still believe that Australia Day is something to celebrate: a 2017 poll found that only 26% support changing the date. The sitting government has also challenged the campaign; Prime Minister Scott Morrison has argued that Australia Day should be seen as a symbol of national unity despite the objections of indigenous groups. As the campaign has gained more prominence in recent years, it has in turn become more politicised, with conservative politicians using the issue to stir a sense of nationalism in voters. This will certainly present a challenge for indigenous protesters as they continue to fight for recognition of Australia Day’s dark history. However, in 2020, the Black Lives Matter movement generated a wider awareness of racism and discrimination around the world. Perhaps this will prompt non-indigenous Australians to examine the terrible history of their national holiday, leading to an Australia more aware of the suffering of its first people.

JENNA HELMS




MODERN WESTERN

PUNK: A MUSIC REVOLUTION Punk’s Do-It-Yourself ethic was transformative. It would be hard to align Punk with any specific political persuasion, as affiliations with the sound run as far and wide as the political spectrum itself. Therefore, one must evaluate Punk broadly, locating its revolutionary dimension in the Do-It-Yourself ethos. As a result, Punk inspired political engagement, musical creativity, and a myriad of sub-genres, whilst maintaining an ethos that transcended stylistic and musical boundaries. Examples also show how Punk, as a revolutionary culture, kick-started an internationalist musical culture that quickly began to create links of political solidarity across borders. Early signs of Punk’s ability to look past its own stylistic and musical affiliations, demonstrating its revolutionary capabilities as an ethic, are shown in the 1980s with the soul-tinged punk music of Vic Godard and the Subway Sect, as well as the Clash’s musically expansive and politically minded triple-vinyl ‘Sandinista!’. These aptly illustrate the way in which Punk was revolutionary beyond genres and borders, as well as showing defiance within the confines of an inherently capitalistic music industry. We often, rightly, associate Punk with a Do-It-Yourself ethos that still exists today. Cassettes, 7-inch singles and fanzines were independently created and produced as modes of expression by thousands of young people: bored, creative and keen to express an anger born out of the cultural comedown of the 1960s. What has not been explored, however, is the way in which such an ethos developed into an ingrained

understanding among the punk generation that you could (and indeed, you can!) Do-It-Yourself. In Britain, Northern Soul was filling ears and dance floors; New Wave was confusingly and creatively turning pop music on its head; and jazz was slowly being re-popularised. These musical currents all slammed into one another, providing a pretty good soundtrack for the political turmoil of the late 1970s. The result was a heartfelt reproduction of newfound influences, inspired by the message that if you really loved a sound, a look, or an approach, you could reproduce it in a way you saw fit. This process is illustrated in the way in which, according to Helen King, when handed ‘a bunch of northern soul 45s… [Vic] Godard immediately felt a kinship with the tone and tenor of the dancehall Motown-inflected soul music’ and so got

writing ‘in this vein.’ If artists such as Godard were inspired by messages such as the legendary fanzine Sniffin’ Glue’s claim ‘This is a chord, this is another, this is a third, now form a band’ – then the mass mobilisation (if you will…) of Punk had infinite possibilities when the floodgates of musical inspiration opened up. Another remarkable example of this is the Clash’s 1980 release Sandinista! Around this time, Mick Jones said in an interview with NPR – ‘whichever station you’re with, you gotta play more rap music, because we never hear any of it in England, and we want to’ – which shows the way in which the band were looking beyond their guitar-music basis, inspired by whole other genres. Bands such as the Clash were often lampooned for signing to a major label (CBS in 1977); however, I argue that this shouldn’t strip them of their revolutionary zeal – they reduced their royalties on Sandinista! so it could be sold for less, and made it a triple album to poke fun at a record label that didn’t want them to release a double album. This shows signs of revolutionary resistance within the confines of a capitalistic music industry.

Coinciding with their musical development was the development of the Clash’s political focus, sharpening into a Leftist Internationalist band – and they weren’t alone. In the very same year, a Punk explosion was happening in South Africa. Bands began to spring up across the nation, inspired by the revolutionary capabilities of Punk music and its developing internationalist ethos. By sneaking live tapes and 7-inches across borders, music was able to travel and be heard in ways previously impossible. An important component of this is the way in which, generally speaking, a lot of Punk was able to represent a sense of solidarity against all regimes. Therefore, in identifying as a Punk, one would broadly commit to an anti-establishment ethic that was more or less based on freedom and anarchy. A number of moments coincided to show the way in which Punk grew as an internationalist scene. Bands like the Clash associated themselves with Leftist Internationalist movements, voicing a strong anti-US message in songs such as ‘The Call Up’. Musicians in South Africa were self-professedly inspired by the music of groups such as the Clash, and went on to form bands such as National Wake and Wild Youth, which carried a strong anti-apartheid message and were consistently at loggerheads with the apartheid regime. Although their music was deeply about the South African experience of apartheid, they still demonstrated their own internationalism by playing fundraisers for other movements of people living under oppressive governments, notably a fundraiser for the Solidarity movement in Poland. It was around this time that Punk was becoming explosive in Poland too as, according to Aneta Panek, ‘[p]osters, t-shirts and artzines soon abounded… Polish punk was born as an act of rebellion against the oppression of the Communist regime’. This shows the way in which Punk spread through acts of international solidarity, which is no doubt emblematic of a revolutionary culture.

In conclusion, Punk’s Do-It-Yourself ethos was its most revolutionary dimension, inspiring a generation to truly express themselves. From this developed more political aspects, and a sense of internationalism. From groups performing three-chord, two-minute, one-line anarchist anthems in 1976, to avant-garde modes of expression blending genres in acts of international solidarity, Punk changed, but remained ever rooted in an ethos you can Do-It-Yourself.


ARTHUR ARNOLD


NON-WESTERN AMERICAN

ARPILLERAS AGAINST AUGUSTO Amidst empty streets in a fearful nation, Chilean women met at churches and in neighbours’ houses to stitch compassionately into fabric their accounts of an uncompassionate truth. These pieces documented the realities of life under Pinochet’s military dictatorship and provided the women who made them with a voice, a community, and a means of receiving economic solidarity from abroad. The 1973 US-backed coup d’état carried out by the Chilean military against the democratic socialist government of Salvador Allende was characterised by widespread human rights violations and economic turmoil. In the immediate aftermath of the coup, thousands of activists and trade unionists faced torture and execution. The number of desaparecidos (the disappeared) grew to roughly three thousand. Meanwhile, the implementation of a neoliberal economic “shock treatment” brought rising unemployment and food shortages, and the privatisation of utilities left thousands without water or electricity. Women, many of whom had family among the disappeared and who were amongst the worst hit by the economic and political situation, met in studios organised by the Catholic Church to give a voice to themselves and to the disappeared through scenes sewn with fabric scraps onto pieces of burlap. Termed arpilleras, many of these pieces depicted missing loved ones, or women and communities holding signs asking ‘¿dónde están?’ (‘where are they?’). By stitching their loved ones’ clothes into their arpilleras, lives and art became inseparable; the bright blue of children’s school uniforms provided the sky under which scenes of social, political and economic oppression took place. Others rendered scenes of solidarity, such as the cooperatives called ‘Comprando Juntos’ (‘Buying Together’), where communities would band together in the face of economic hardship. The church’s ‘Vicariate of Solidarity’ facilitated similar solidarity from those in other countries by selling arpilleras, providing a vital source of income for the arpilleristas. Though the production and distribution of arpilleras were outlawed during the regime,

the subversive resistance art continued in secret. Through each piece a voice was aired, and a story was told which resonated with people across the world, and fuelled resistance in Chile. Women’s voices were not just confined to these anonymous and secretive pieces. Women were also at the forefront of the street movement and the ‘Vote No!’ movement that ended the Pinochet regime in 1988. Arpilleras have since inspired similar works in other countries facing systematic oppression, from across South America to Africa and even Northern Ireland.

SARAH CUNDY


Editors Heads of Online Sarah Cundy Hannah Speller Kirsten MacDonald Harry Eddy Heads of Design Megan Hannaford Heads of Copyediting Marton Jasz Philippa Terry Erin Taylor Design Team Freya Jackson-Duffy Copyediting Team Freya Kennedy Harper Rebecca Boulton Eleanor Maher Syana Wibowo Georgia McGee Elysia Heitmar Heads of Marketing Natasha Parsons Eleanor Thompson Tille Quattrone Pallav Roy Abraham Armstrong

