SlamDunk! Studios

"creative and analytical writing"

Home

WELCOME to SlamDunk! Studios. This is a portfolio of creative and analytical writing I've produced over the years. The articles focus on literature, cinema, gaming, history, and sociology.

You can browse through all of the site content in the blog feed below, or search for specific pieces in the Navigation Bar above. Comments and feedback are very welcome. If you'd like to follow my more recent writing and creative projects, then please check out Valkyrist.wordpress.com. Thank you!


The Cinema of Attractions (film essay)

Posted on June 30, 2013 at 7:55 AM | Comments (0)

During the early stages of film production, audiences demonstrated a fascination with moving pictures, a period now referred to as the “cinema of attractions”. This period emphasised visual spectacle and unique imagery over narrative structure (Gunning). The first film-makers were keenly aware of their audience, and created images of fantasy and exoticism specifically for them to see. After 1910, however, film theorists observed a decisive shift towards theatrical storytelling. Film-makers began to emulate the more established narrative modes of theatre and literature. Tom Gunning expressed frustration over this transition, viewing it as a retreat towards safe, conventional forms of entertainment, and away from cinema’s artistic potential.


The primary difference between the cinema of attractions and narrative cinema is their relationship to the audience. Modern film-makers work towards the creation of a story with a realistic setting (even in the context of fantasy and science fiction). The screen is a window through which we can experience this world, but the characters in it are unaware of us, and our gaze is voyeuristic and unrequited (Smith). Early film-makers, however, directly interacted with the viewer, and their images were meant to stimulate us on a purely exhibitionist level. In the cinema of attractions, the visuals are the subject of curiosity, and even if there is a story, its importance is ancillary, serving only to generate further aesthetic stimuli for the audience. For narrative films, the opposite is true. Certainly a contemporary film can produce spectacular images, but the immediate reality of the script must always take precedence. A character in a narrative film (with the possible exception of satire) will never address the audience, or break the “realistic illusion” that the movie screen provides (Strauven).


Conversely, characters in the cinema of attractions often acknowledge the audience, thereby breaking the fourth wall. This instils in us the notion that what we are watching is also watching us, that these moving pictures are in an active dialogue with us, and that we are not passive but active observers (Gunning). Gunning described this kind of cinema as a fictional world willing to rupture itself in the solicitation of attention. The spectacle of these films is shared by the characters and the audience. The screen is not a window, but a doorway. Films during this period had “the ability to produce exhibitionist confrontation rather than diegetic absorption” (Strauven). To Gunning’s annoyance, however, film-makers after 1910 began to move away from this style of cinema, instead working on narrative-driven features which he regarded as mere imitations of theatre (Gunning).


The cinema of attractions presented several types of spectacle to elicit audience attention. Perhaps the most prevalent images were of the fantastical, the exotic, and the erotic. The latter two are fairly self-explanatory, and continue to exist in the form of documentary and pornography. Exotic films allowed people to view foreign animals and cultures, and experience worlds that they would otherwise never be aware of (Strauven). They were able to transport the poor and working class into a state of wonder and excitement. Erotic films were slightly less ambitious, seeking to elicit arousal, and possibly laughter, from audience members. Nevertheless, they were a popular attraction for people in the early twentieth century, and as stated, continue to exist today, especially online. Modern documentaries and pornography are pure exhibitionism, and labour under the aesthetic power of their subjects, rather than any pre-constructed narrative (Rizzo).


Fantasy spectacle faded quickly after the 1910s. Film-makers like Georges Méliès and Edwin Porter have been studied primarily for their contributions to cinema as a storytelling medium. However, they were far more fascinated with the visual power of film (Gunning). Méliès’ 1902 feature A Trip to the Moon depicts the first imagery of man’s flight into outer space. Yet there is no scientific process to the film; rather, it is pure mysticism. The rocket is built by men dressed as wizards, and the moon is depicted as a living human face. While Méliès’ brilliant set design might have been achieved on a theatre stage, his use of editing is something uniquely cinematic, and would have left audience members in a state of delight and awe. With the proliferation and dissection of the movie industry in the succeeding century, one wonders whether movies could ever again inspire such amazement. Gunning explains that the cinema of attractions has not so much been extinguished by narrative as driven underground. It occasionally presents itself as a component of narrative film. For example, in musical films such as The Wizard of Oz or The Sound of Music, the narrative reality will suddenly halt, and characters will burst into spontaneous song and dance, for the direct involvement and enjoyment of the audience. Furthermore, avant-garde film-makers like David Lynch have continued to privilege imagery over plot (Smith).


YouTube has been described as the new “cinema of attractions”. Like early cinema, online videos are created to elicit a reaction from the viewer, whether it be shock, surprise, laughter, or excitement (Rizzo). As online videos are so short (usually five to ten minutes), they are rarely narrative-driven, and instead focus on provocative imagery designed to engage the viewer. People are expected to comment on YouTube videos, and to engage with the poster or other viewers. Another connection to the cinema of attractions is the way people discover popular videos through links, emails, and friend recommendations. It is a communal process that compares with the advertising techniques of vaudeville and sideshows, and contrasts with the rigidly corporate advertising of narrative cinema. YouTube, and other online media networks, may represent a return to an atmosphere of spectacle over narrative.


Finally, video games might also be described as a medium that emphasises spectacle over narrative. Like the cinema of attractions, they engage directly with their audience, and focus on imagery designed to excite, threaten, and bamboozle. The dialogue between game and player may be even more intense than in film, since the player’s virtual self resides inside the screen, and is therefore experiencing the joys and dangers of the imaginary world in real time (Gurevitch). The player is able to change and interact with the fantasy world, but is in turn altered by it. Ironically, video games are undergoing their own crisis of identity, one which echoes early cinema. Game designers are increasingly emulating the narrative style of Hollywood films, employing cinematic cut-scenes and celebrity voice actors to appeal to a wider demographic. Little by little, control is being taken away from the player, and reconstituted into a more linear narrative trajectory. Perhaps each new medium faces a period where it has to choose between emulating the conventions of older art forms, or developing its own, unique style.



Interactive Texts (VG article)

Posted on June 2, 2013 at 8:15 AM | Comments (0)

Gamers are positioned as active “participants” in the narratology and ludology of video games, as well as “consumers” of gaming media. This dynamic signifies how fluid the cultural understanding of video games is, with some gamers interpreting them as interactive artforms (or texts), while others view them in purely mechanical, gameplay terms. And like all creative modes (cinema, music, design), video games are part of a highly competitive industry that, at the end of the day, seeks to turn a profit.

 

The balance between art and business is important to maintain, especially when dealing with a product so expensive to produce and purchase. Pushing too far in either direction can end in disaster. The financial crash of the mid-1980s is one such example, wherein game companies flooded the market with low quality titles, and saw diminishing returns. Conversely, developers invest a significant amount of time and money into their products, and cannot subsist on critical acclaim or artistic appreciation alone (Flew 131). They need a solid revenue stream and a sizeable market share in order to survive. Franchises are very valuable in this respect, creating a financially stable framework through which game developers are able to create new and exciting content; experiment with different ideas and mechanics; and craft interesting, complex narratives (Bodman 11).

 

The risk, of course, is that developers rely too heavily on the franchise format, and continue to release the same basic model over and over. Arguably, the Madden football series has fallen into this trap. Its developers release a new instalment each year, and while the player line-ups are incrementally updated (to coincide with actual NFL drafts), there are only minor alterations to the game’s visuals, audio, online architecture, and gameplay mechanics (Taylor 12). Conversely, The Legend of Zelda series has resisted the franchise impulse, with each new game being vastly different from its predecessors, featuring unique graphical and gameplay styles. For example, Wind Waker uses cartoonish cel-shaded visuals, and is framed as an epic, sea-faring adventure, while Majora’s Mask is much darker in texture and tone, more limited in scope, and focused on concepts of time travel and death (14). Images of the two games are seen below, illustrating their divergent styles. Unlike Madden, which recycles the same engine each year with minimal alterations, each title in The Legend of Zelda series has been built from the ground up, and contains a self-enclosed narrative, while also abiding by, and adding to, a much greater Zelda mythology (both in terms of ongoing storylines and characters, as well as new gameplay mechanics and items). It is an example of a franchise building upon its past successes without ever becoming stale in its ideas, a trait which tends to turn gamers off.



[Images: Wind Waker (2003) and Majora’s Mask (2001)]


Gamers are arguably the savviest of all media consumers, and are likely to conduct extensive research on a specific video game or console—monitoring production and development, watching gameplay videos, and reading reviews and articles—before they make their purchase. This may speak to the meticulous (perhaps obsessive) nature of gamers, but it is also due to the high cost of mainstream games, which can be upwards of $60, and usually $80 to $100 in Australia (Squire). Compared to a movie or music download, it is a significant investment on the part of the consumer, both in terms of money and time, with the average game taking over thirty hours to complete. This presented a problem in the mid-1980s, when media companies began flooding the market with poorly developed, low quality video games. At the time, the medium was being treated by companies as a fad, to be mined quickly and efficiently before it passed (like so many other trends of the 70s and 80s). However, gamers cottoned on to the cynical marketing push, and sales plummeted. This crash almost single-handedly stymied the development of video games, and was seen as proof of their fleeting interest among consumers. However, a few years later, the Nintendo Entertainment System was released, and with its emphasis on the quality of games, rather than the quantity, sales ascended once more. From a corporate perspective, “games were once again seen as a profitable commodity”, and “gamers” were recognised as a demographic worth pursuing.

 

Besides being marketed as products in their own right, video games have also converged with other industries and technologies. For example, various film studios have released game adaptations as a means of advertising current cinematic titles, as well as expanding upon their narratives (Jenkins). The 1997 adaptation of GoldenEye, for instance, included levels and sequences not seen in the original film (though partially present in early screenplays), and led to a renewed interest in the film through home video and rental agencies. This signifies a cross-pollination arrangement between various entertainment industries, in which separate mediums (cinema and games) build off each other’s brand, and advertise in their respective markets for the benefit of both. Regarding technological convergence, in the early 2000s Sony actually used their gaming console (the PlayStation 2) as a means of introducing their most recent video technology (DVD) into people’s homes. The success of the DVD format led to non-gamers buying the console, while gamers who already planned to buy a PS2 now had a DVD player to buy movies for. Consumers find this combination of home entertainment devices (games, movies, music, internet) very appealing, and a desire for each format cross-pollinates the consumption of the others. During the succeeding console cycle, Sony again tried to merge their new home video technology (Blu-ray) with their new gaming console (PlayStation 3). This time, however, they had competition, with Microsoft backing the rival HD DVD format via an add-on drive for the Xbox 360.

 

The practice of “modding” represents a junction between gamers as consumers of content and gamers as creators of content. It refers to the act of modifying old video game engines in order to create new games. It can involve re-arranging the textures, mapping, and models of the original game, or simply taking the shell of the game and filling it with new content (Flew). Modding is employed by game developers and gamers alike. For example, the 1998 first-person shooter Half-Life was created from a modified version of the engine behind the 1996 game Quake. Half-Life was then modified by gamers in 1999 to create the team-based action game Counter-Strike, which set the mould for online shooters like Call of Duty and Battlefield. Game developers spend a lot of effort building these game engines, so they want to get their money’s worth out of them. For example, contemporary games like Grand Theft Auto IV, Red Dead Redemption and L.A. Noire are all modelled from a single engine (RAGE), as are games from the developer Crytek. The point is that video games are built on top of each other. New games are modelled on the style and structure of old games, and contain the programming DNA of games from the past; old digital art is broken down and re-assembled into new art (Taylor).

 

As well as being involved in the marketing and development process, users are also actively involved in the content of video games. Unlike someone watching a film or listening to music, gamers act as both spectator and participant. They experience the game as a visual narrative, whilst interacting with its virtual world and affecting its narrative (or ludic) outcomes. The storytelling, aesthetic, and conceptual elements of games, combined with the influence of the user, constitute them as “interactive texts” (Bodman). However, there are divergent theories over whether video games should be treated as narrative or mechanical forms of entertainment. The theory of “narratology” argues that games are a storytelling medium, in which the gamer takes on the role of a character, and plays out a “cyberdrama” within the context of a virtual world. The complexity of the narrative, and the degree to which the player can alter or influence it, may vary greatly, but the gameplay is still structured around a narrative arc (in which the gamer embodies a central role). Conversely, the theory of “ludology” argues that games should be understood primarily as a system of rules, interfaces and objectives, and that representational elements are purely incidental. It argues that narrative components, such as cutscenes and character dialogue, are irrelevant, and that gameplay is determined by the underlying set of rules and expectations imposed on the player (Flew).

 

The Uncharted series can be used as an argument for ludic theory. The narrative of these games is communicated to the viewer through pre-rendered videos, called “cutscenes”, which play out like scenes from a movie (in this case, a movie about a treasure hunter). These cutscenes are scripted, and cannot be altered, influenced or controlled by the player. They exist primarily as a way to contextualise the gameplay, and outline player objectives (Bryce). For example, a cutscene may provide exposition on where a valuable item is, and why the player needs to acquire it to continue. The actual gameplay, however, focuses on the player’s attempts to get from point A to point B, whether that entails defeating enemies or solving puzzles. The point is, while the narrative structure of the game may be entertaining and helpful in providing direction, it does not alter the actual gameplay. Likewise, the player cannot influence the narrative, and can focus only on achieving an objective, which in turn activates the next cutscene, which indicates the next objective (and on and on it goes until the “final boss”). The narrative structure is, as a ludologist would argue, incidental to the mechanics of gameplay; that is, the story cannot alter (or be altered by) the decisions of the player. This fact is made most obvious by the player’s option to “skip” cutscenes (so called because they splice up the actual game), and immediately return to gameplay.

 

Conversely, the Fallout series can be used as an argument for the narrativist theory. The narrative of these games is communicated primarily through conversations, in which other characters will ask you questions, and your responses will directly affect the course of the game (an example is pictured below). Rather than entering pre-scripted cutscenes, players must seek out these one-on-one dialogue sequences. The player can also customise their abilities, weapons, armour, personality, gender, wealth, and morality, all of which affect how the world and characters of the game relate to them. Most importantly, control and perspective never leave your character, giving the player a greater sense of “self” within the environment (Bryce). Unlike Uncharted, which leads players down a set order of objectives and obstacles, the world of Fallout is completely open-ended. It is a vast expanse of land, filled with dozens of quests and locations (cities, townships, caravans, caves, sewers, harbours, vaults) for the player to pursue at their leisure. That is, it is entirely up to the player to seek out activities, and to forge their own unique story within the context of the game. The world itself may be bound by an established narrative structure (a post-apocalyptic wasteland, overrun with mutants and militia), but the individual story of the player is endlessly customisable, and can be different with every play-through. This supports the narrativist theory, allowing each player to realise their own unique storyline, based on conversational questions and answers; how a player reacts (and how quickly) to various scenarios (many of which are randomly generated); and what order the player chooses to visit locations, or pursue objectives (if at all). Far from incidental, narrative is intertwined with gameplay, simultaneously informing, and being informed by, the player’s decisions and actions (Morris).

 

The concept of a “gamer” is as fluid as the cultural understanding of video games. Gamers embody multiple roles within video game culture, including consumers of gaming media, viewers and players of interactive texts, and creators and modifiers of content. The term “console wars” has come to describe the intense competition between console manufacturers for market share, with the “winners” of each console generation (for example, Wii v PS3 v 360) being determined by such factors as financial success and fan loyalty.

The Russo-Japanese War (hist essay)

Posted on May 18, 2013 at 8:50 AM | Comments (0)

The political and military strategies of the Russo-Japanese War (1904–1905) demonstrate an adherence to Carl von Clausewitz’s principles of war, particularly his arguments regarding the influence of policy. Clausewitz writes that war is merely the “continuation of policy by other means”, a political instrument wielded by the state to fulfil its objectives. Those objectives may include the acquisition of land or resources, the expansion of its economic or political sphere, or the protection of its citizens at home and abroad. Clausewitz disagrees with fellow theorist Sun Tzu, arguing that war is closer to competitive business than art. It is a conflict of human interests “settled by bloodshed”, rather than diplomacy or commerce. And yet, war is more than just a tool of the state. “It develops within the womb of state policy,” and its influence extends to every aspect of a society, especially notions of national identity and patriotism (which ironically serve as motivators for citizens to join the service).1

 

The war between Russia and Japan developed over rival ambitions regarding China, with each nation determined to further its influence over the Far East. Russia was expanding eastward across Asia, and desired a warm water port for its Pacific navy. Japan had recently seized Port Arthur from the crumbling Chinese Empire, and after pressure from several European powers, was forced to lease it to the Russians. The Russians promptly fortified the port, and began building their Trans-Siberian railway to its location. Japan was not happy with how it was being treated by the Europeans. It had fought a war with China to acquire that port, only for the Europeans to swoop in and snatch it away. More importantly, however, the Meiji government had just concluded a three-decade restoration, assimilating Western customs, technology and ideas, and emerging from the 1890s as a modernised state. The Japanese wanted to be recognised as equal in industry and culture to Western states. Japan had imperial ambitions in China and Korea, and Russia was threatening to supplant those ambitions by extending its realm into the Pacific. However, though the Russians held Port Arthur and a considerable naval presence, their empire in the East would not be consolidated until the railway was complete. If Japan was going to make a move against Russia, it would have to be before then.

 

So, we see the political motivations of two sovereign states—imperial expansion and political influence—coming into conflict with one another. The fact that one of these powers is European and the other Asian is also important, as modern Japan feels as though it has something to prove, while Russia comports itself with an air of superiority. And indeed, Russia’s military strength is far greater than Japan’s. At this time, it is perhaps the largest in the world, and yet most of it is stationed in Western Russia. Japan is placed in an intriguing position. Given Russia’s strength, and its own losses during the war with China, its first means of policy will be negotiation. However, there is a ticking clock tied to this course, because once the Trans-Siberian railway is complete, Russia will be able to transport its soldiers to Port Arthur and the East with speed and volume. Russia is also aware of this possibility, though it continues to disregard Japan as a genuine threat to its interests. For Japan, foreseeable military action would not just be a fight over imperial territories. It would be a pre-emptive strike on what it saw as a threat to its sovereignty.4 Russia’s empire was enormous, stretching from central Europe all the way to Mongolia and the Pacific Ocean. With every local state it absorbed it became stronger (perhaps hungrier), and with every fresh mile of railway track, it became more secure. Soon it would have ports along the Pacific Ocean and territories in China, imposing itself on Japan’s sphere of influence and trade. In an act of diplomacy, Japan offered to support Russia’s annexation of Manchuria, so long as it could maintain Korea as a buffer between the two empires. Unfortunately, Russia rejected these terms, confident the Asian nation would never dare attack them. In a seemingly unrelated matter, large numbers of Russian soldiers had begun to enter Manchuria, to quell the Boxer Rebellion and to protect the construction of the railway. However, even after the fighting had ended, 177,000 Russian troops remained stationed in Manchuria. They were an unspoken message to the Japanese as negotiations continued, and a ready military presence should war break out. It was a clever manoeuvre by the Russian high command, enacting one policy (that is, the preservation of peace in Manchuria) to enforce another (their imperial operations at Port Arthur).

 

However, there was one Russian figure eager for a war with Japan—Tsar Nicholas II—who sought to use the conflict as a means of reviving his people’s patriotism, and quelling any rebellions that might be brewing amongst them. This demonstrates the instrument of war being wielded in two political landscapes – domestic and foreign. The Tsar’s advisors foresaw the difficulties of mobilising troops without a completed railway, and endeavoured to draw out negotiations with the Japanese until it was finished. The Meiji government saw through this façade and, besides regarding these delays as an insult to Japan’s sovereignty, also realised that once the railway was complete, the Russians would have far greater leverage in future negotiations. They decided that war was the only option, and recognised that their window for victory would close once the Trans-Siberian railway was completed; otherwise, the Russians would overwhelm them. That is, they believed they could triumph in a short war, but not a long one. On 8 February 1904 the Japanese Imperial Navy attacked Russia’s Port Arthur fleet, besieging the garrison on sea and land. The assault was intended to be a surprise, as Japan did not issue a formal declaration of war until three hours after the first torpedo was fired. This shocked Tsar Nicholas, who had been advised that Japan would avoid war, and had not even begun to mobilise his soldiers in Manchuria. However, the Meiji government defended its position, citing Russia’s undeclared attack on Sweden in 1808. The lack of Russian preparation, perhaps stemming from a European feeling of superiority, gave Japan a strong advantage in the early stages of the war. They managed to sink several battleships of the Russian Far East Fleet, forcing it into a defensive position behind the shore batteries. Japanese troops also began landing in Manchuria, and gained significant ground before meeting Russian resistance. The strategy of the Russians at this point was to delay fighting until reinforcements could arrive. The Russian navy held its position at Port Arthur, while the Manchurian soldiers did not engage the Japanese until the Battle of Yalu River, the first major land battle of the war. It was a brutal engagement, and dispelled the Russian perception that their Asian adversary would submit easily.

 

The battles between Japan and Russia in 1904 and 1905 demonstrate several of Clausewitz’s tactics of combat and general principles of war. These include the notion that concentric attack is more effective than parallel attack, and that it is best to cut off the enemy’s lines of supply and retreat. The latter tactic is expressed through Japan’s decision to attack before the Trans-Siberian railway was complete, making it difficult and time consuming for supplies and reinforcements to be delivered from Western Russia, as well as for Manchurian troops to retreat on land. Japan also set up naval blockades in the Yellow Sea, forcing Baltic support fleets to take a longer route, or risk being attacked. The former tactic can be observed in the various land battles, in which both forces entrenched themselves and, backed by heavy artillery, engaged the enemy with massed infantry. These concentrated assaults led to high casualties, compounded by the efficiency of modern weapons, such as the machine gun. The use of trenches, and of direct infantry assaults against automatic weaponry, can be viewed as a dark precursor to the bloody warfare of the Western Front a decade later. Clausewitz’s political strategies of war can be boiled down to three objectives: conquer and destroy the enemy; take possession of his resources, material and sources of strength; and gain public opinion. The capture of Port Arthur and the disruption of the Trans-Siberian Railway robbed Russia of territorial advantage and strategic manoeuvrability, forcing them to continually delay, fall back and entrench themselves as they waited for reinforcements. This also served to decrease the morale of Russian soldiers, and bolster the Japanese, who were gaining momentum with each bloody victory. The apparent “underdog” status of the Japanese also gained them international public support. Americans largely sided with Japan, believing that they were fighting a “just war” by defending themselves against Russian aggression.10 Britain, seeking to restrict Russia’s naval ambitions, aided the Japanese through intelligence gathering. Britain’s Indian Army often intercepted Russian telegraph cables regarding the war, and was happy to pass that information along to the Japanese. In return, Japan shared its own intelligence data, in particular, evidence that Germany was aiding Russia’s eastward expansion. The Japanese revealed to Britain that Germany had a desire to “disturb the balance of power in Europe”, once again foreshadowing the bloody clash that would occur a decade later. Through this sympathy, Japan was able to secure financing for the war from the United States and the United Kingdom. One loan in particular ($200 million) came from a Jewish-American banker, who was sympathetic to Japan’s cause, and intensely critical of Russia’s anti-Semitic policies. This combination of concentrated assaults, the seizure of resources and material, the prevention of enemy supplies and reinforcements, and the mounting support of the international community, all contributed to a decisive Japanese victory.

 

Their defeat at the hands of the Japanese shook Russian confidence, and rather than unifying the people, as Tsar Nicholas had hoped, the war provoked anger and rebellion. While he was capable of sending more troops, disapproval of the war was so strong, and the importance of the disputed land so low, that the Tsar decided to negotiate for peace so that he could concentrate on Russia’s domestic problems. This demonstrates the priority of political interests switching from external to internal, with war no longer serving as the correct instrument for the desired objective. Peace was reached between the two nations on 5 September 1905, with the Treaty of Portsmouth. Japan’s international prestige rose greatly after this decision, having become the first Asian power to defeat a European power, and now being regarded as a modern nation. Conversely, Russia lost territory, alliances, and the respect of other European powers, especially Germany and Austria-Hungary (which may have influenced their decisions a decade later to attack France and Serbia respectively). While Tsar Nicholas was able to quell the 1905 revolution, public discontent eventually grew so great that he was overthrown in 1917. The Russo-Japanese War can thus be viewed as a significant factor in the cause of the Russian Revolution. Japan’s decision to attack Port Arthur prior to an official declaration also directly influenced international law, as “the requirement to declare war before commencing hostilities” was outlined at the third Hague Convention in 1907, and put into effect in 1910.

 

The Russo-Japanese War cannot be defined as a wholly “just war”, though both combatants fulfilled particular criteria of a just war. Obviously it was waged by legitimate authorities, with Russia and Japan recognised as sovereign states, and the war carried out by their respective governments. That said, the fact that Japan attacked prior to a formal declaration does make the first three hours of bloodshed an undeclared (perhaps illegitimate, in the eyes of international law) act of violence by the Meiji government. Moreover, the Russians were justified in defending themselves from a direct surprise attack on their port and navy, though it can be disputed whether they had the right to be there in the first place, given that Port Arthur was seized from the Japanese, and Japan was becoming increasingly unnerved by Russian activities and ambitions in its “neighbourhood”. In truth, neither nation had any real right to be there at all. Both were encroaching on the sovereign statehood of China and Korea, so any cries of self-defence seem a little false. Still, Russia’s imperial ambitions could not be ignored. It had moved swiftly and decisively across Asia, consolidating local states as it approached the Pacific. Japan had reason to feel threatened by this encroachment, fearing that it too would be absorbed by a mighty European power, or at least that its sphere of influence would be significantly neutered. It also felt pressured to act quickly, before the Trans-Siberian railway was complete, and Russia’s dominion consolidated. Japan also believed its campaign would have “a reasonable chance of success”; that is, a short-term war with Russia was “winnable”, and not waged in vain, or out of anger.14 It is difficult to determine how well Japan treated Manchuria’s civilian population, but the Russians treated them very poorly, with instances of mass executions and forced labour recorded. The Manchurians had little love for the Japanese, but they hated the Russians far more, and aided the Japanese troops where they could (particularly with guerrilla raids).

 

The Russo-Japanese War was the first great war of the twentieth century, and is significant for being the first instance in modern military history in which a European power was defeated by a non-European power. Political and military policy during and after the war foreshadowed the bloody strategies of attrition and trench warfare of World War I, as well as cultivating the ambition of the Japanese Empire, which reared its head again during World War II.

 

The Battle of Vienna (hist essay)

Posted on April 17, 2013 at 8:35 AM | Comments (1)

By the morning of September 12, 1683, 150,000 Turks had laid siege to the city of Vienna. They had captured the outer fortifications, and were tunnelling beneath the inner walls. Deep trenches had been built to shield the Turkish lines from Viennese archers, and the invading soldiers were now assaulting the city gates with cannon fire. But there was also a separate, subterranean battle being fought. Turkish sappers were attempting to burrow beneath the city walls and detonate explosive mines, chipping away at Vienna’s defensive infrastructure. In response, Viennese soldiers were feverishly digging their own tunnels, in order to block the sneak attacks and disarm active mines. Eventually the city walls were cracked wide enough for the Turkish army to charge through. They entered the city twice, but were repelled both times by the defending Viennese, who by now were exhausted from lack of food and sleep.


Here is a map of Vienna in 1683. Notice how the outer walls were built as jagged edges, like the points of a star. This makes the city much harder for an offensive army to assault, as attacking one wall exposes the attackers’ flank to another. It also allows many more defending archers to line the ramparts. Notice also that the cartographer has included markings for several Turkish barracks, and even cannon balls in flight, indicating that the map was drawn during the heat of the siege.

With the Holy Roman Emperor having fled the city (along with several other wealthy citizens), it seemed as though Vienna would soon be taken, and Europe would be left wide open to Ottoman expansion. Pope Innocent XI saw the siege as a military and philosophical assault on Western Christendom, and urged the French and Polish armies to aid the crumbling city. The French King, Louis XIV, regarded Austria as his territorial rival, and promptly refused, but the Polish King, Jan III, agreed to help. A relieving army of 80,000 Polish, Bavarian, Saxon, and German troops was raised in the hills around Vienna. Their alliance was held together by loyalty to the Pope, and a mutual fear of the invading Ottoman Empire.

 

This painting, by Juliusz Kossak, depicts the Polish cavalry arriving at the battle. The Turkish troops are clambering up the hill to meet them. Behind them, we can see the Turkish tents. And in the far distance, we can make out the walls of Vienna, and the steeple of St Stephen’s Cathedral. King Jan led a fierce cavalry charge against the Turkish left flank, descending from the hills with 3,000 Winged Hussars. Imagine being a Turkish infantryman, waking up one morning to find these things galloping towards you. The Polish Hussars were an elite military unit, famed for their skill at lancing. As you can see, their armour was adorned with wings of ostrich feathers, to signify their status. Besides looking intimidating, these wings also made a loud clattering sound, which made the cavalry seem much larger than it really was.

 

Witnessing the charge, the Viennese abandoned their defensive positions, and led a ground assault on the centre of the Turkish lines. The Turks maintained their position and counterattacked, but their general held back significant portions of his strength in anticipation of entering the city. The field was quickly consumed by fighting. Though superior in numbers, the Turkish soldiers had endured months of stagnant siege, and were now being attacked from multiple angles. After three hours of fighting, the Turkish lines became scattered. King Jan seized upon this break in rank to lead another powerful cavalry charge, now regarded as the largest in military history. Demoralised by their failure to breach the wall, and shaken by this sudden explosion of resistance, the Turkish forces abandoned their tents and weapons, and fled eastward. The Polish Hussars continued to harass the rear of their host, until the retreat was absolute. Upon securing the field, King Jan was heard to remark: “We came. We saw. God conquered.” He was declared by the Viennese the “saviour of the city”, and by the Pope, the “defender of Western Christendom”.

 

Youth Subcultures (soc essay)

Posted on March 24, 2013 at 9:00 AM | Comments (6)

According to Nilan, Julian, and Germov (2007), a subculture is a group of people who represent lifestyles and “modes of meaning” alternative or subordinate to the dominant culture. Members of a subculture will express themselves in opposition, or as outsiders, to the established societal structure. Indeed, they may define themselves in defiance of positions of authority, and work actively to undermine the state. More often, however, a subculture defines itself as a group of peers who abide by a distinctly alternative system of values and beliefs. Those values may be expressed in such modes as music, clothing, dance, language, lifestyle, and graffiti. Sometimes, a subculture that was initially passive in its resistance to established social norms can develop into something openly hostile and organisationally defiant of authority. One example of this is the British skinhead movement of the 1980s. Skinhead culture originated among working class youths, and was influenced by the West Indies music scene. While rebellious, its positions on politics and race were largely neutral. Gradually, however, it was co-opted by radical right-wing organisations, such as the National Front, and transformed into a subculture hostile towards foreigners and Black Britons (Kayleen 1998).

 

Sociologist David Riesman was able to distinguish between the social majority and social minorities as early as 1950. It is important to note that this does not refer to ethnic minorities, as while their cultural beliefs may be at odds with those of their adopted country, they are still part of the dominant culture of the place from which they originated. That said, ethnicity can certainly influence subcultures. As mentioned above, early skinhead culture was moulded by Jamaican immigration, particularly “rude boy” fashion and music (Kayleen 1998). Riesman wrote that the social majority “passively accepted commercially provided styles and meanings”, while a subculture “actively sought a minority style… and interpreted it in accordance with subversive values”. This poses an interesting dilemma. Can subcultures exist without an establishment to rebel against, or is their association formed purely from a compulsion of opposition? Subcultures are generally strongest when they have something to galvanise against. But what if the state became more diverse in its views? What if it had legalised marijuana and advocated “free love”? Would the hippy movement have suddenly disintegrated?

 

In Subculture: The Meaning of Style (1979), Dick Hebdige wrote that subcultures “bring together like-minded people” who feel neglected or ostracised by the state. Hebdige observed that the rise in youth subcultures correlated with frustration over the dominant ideology and hegemony of post-war Britain. He also observed that the styles (which he defines as a combination of music, dance, clothing, and inebriants) of various subcultures (mods, rockers, punks, skinheads) evolved into symbolic forms of resistance. For example, the shaved heads and working class attire of the skinhead subculture are symbolic of their exclusion from the socio-economic power structure. Similarly, their taste in reggae music represents a kinship with West Indies youth, whom they regarded as brothers in opposition to British authority. In later decades, however, the meaning of these symbols changed. While the shaved head became a badge of skinhead pride, antagonism shifted from the government to racial minorities. Socio-economic kinship transformed into hatred, and Black Britons no longer regard skinheads as brothers, but enemies (Kayleen 1998).

 

Hebdige observes that all subcultures are formed through two factors: common resistance and common values. They define themselves in opposition to established norms, and practice a style that is distinctly alternative. The dominant social group often regards the style and attitudes of these subcultures as something deviant, radical, or to be feared. However, the more a power structure attempts to quash or suppress a subculture, the more resilient that subculture becomes. In fact, the ire of authority figures has the unintended effect of legitimising the subculture, and galvanising its members into action. Ironically, the way a subculture disintegrates is by being accepted, or more specifically mainstreamed, by the dominant culture. Hebdige argues that the moment a subculture is recognised, and its style commoditised for mass consumption (that is, its music and fashion tastes), that subculture dies, or is severely neutered. Businesses are continually seeking to capitalise on the “cool” aesthetic, and will often appropriate elements of subculture style into their brands (Khan 2003). However, since capitalism is a pillar of the establishment, this has the effect of corrupting the subculture, and making it part of the dominant culture. This forces members to continually adopt new, alien styles, in order to escape consumerist absorption. It is a slightly humorous process, the social majority and the social minority each chasing a particular aesthetic, and then immediately abandoning it once it has been appropriated. The relatively recent hipster subculture is a prime example of this process, in which bands or fashions that have entered mainstream consciousness are deemed “uncool”, and style is valued almost entirely for its obscurity. The hipster mantra is often implied to be “…you’ve probably never heard of it”.

 

Following the 1960s, there was a countercultural rejection of established gender and sexual norms. The succeeding decades fostered a more permissive environment when it came to sex, especially with regard to “gay culture” (Jaime 2007). Besides the obviously subversive displays of same-sex affection, homosexuals during this period adopted certain styles of fashion and gesture that were intended to distinguish them from the mainstream. Gay culture is considered “the largest sexual subculture of the 20th century”, and met considerable resistance from the dominant culture. But gay men and women stood their ground, establishing their presence in society as a distinct and cohesive subculture, with chants such as “We’re here! We’re queer! Get used to it!” and “Out of the closets and into the streets!” However, as societal antagonism towards homosexuals faded during the 1980s and 90s, gay culture became more passive and integrated into mainstream society. There is now a movement towards the normalisation of homosexuality through gay marriage (marriage being an established conservative value). Gay culture represents a transition from subculture to mainstream dominance, so much so that new subcultures have begun to spring up in opposition to mainstream homosexuality, including “leathermen”, “bears”, and “drag kings”.

 

The Australian Bureau of Statistics conducted a study into how culture and leisure fit into Australian society (2001). It agreed with Hebdige’s argument that cultural expressions, such as art, music and dance, work as symbols of various subcultures. Its contention was that the artistic contributions of subcultures “help define and interpret our broader culture” (2001). That is, subcultures offer a reflection, and a more diverse understanding, of our society than the opinions of the dominant culture alone. The risk, as always, is that by accepting subcultures as part of society, rather than outside it, we run the danger of dissolving them. The Bureau pointed to graffiti as a symbol of the kind of “urban tribalism” some subcultures adopt, in which their environment acts as territory that they possess, but also as a canvas upon which they can express themselves. Graffiti can also act as an olive branch between subcultures and the dominant culture. If aesthetically pleasing, and not removed by law enforcement, it represents a sort of mutual respect between the parties, as well as an acknowledgement of difference. For example, dark city laneways might be regarded by average citizens as the domain of youth gangs, and therefore dangerous. But if those concrete laneways are painted with colourful art, it gives the citizen permission to walk down them, and admire the cultural expressions of a subculture that they will never be a part of. In fact, when graffiti is removed by law enforcement, there is often an outcry from both the mainstream and subcultures (2001).

 

Patrick Williams examines subcultures as a relatively new field of sociology (2009). He observes that the majority of sociological research has focused on the “exertion of power”, while research into subcultures has revealed a “resistance to power”. Scholars of the 1960s argued that this resistance was a force for “good”, and a hopeful check against authoritarianism. Recent scholars, however, view this resistance as “a trite concept that legitimizes the consumptive practices of would-be rebels” (2009). Williams points out that while the “spirit” of subcultures is resistant to mainstream culture, most of their acts of resistance are relatively passive. For example, the lyrics of punk music rail against the authority and hypocrisies of the establishment, but merely buying and listening to that music is not in itself resistance. However, those lyrics may inspire the listener to actively resist the establishment, not just in attitude and fashion, but through acts of protest, such as rallies, marches or publishing.

 

Caleb & Sweeney Todd (lit/film essay)

Posted on February 25, 2013 at 7:45 AM | Comments (0)

The gothic sublime defines itself in contrast with romanticism, particularly in its exploration of human psychology, criminality and madness. While the romantics associated the sublime experience with external forces of nature and divinity, gothic artists turned their gaze inwards, to the “tangled labyrinth of dreams”. Vijay Mishra writes that Gothicism represents a regression from the “soaring grandness” of Burkean sublimity into the deep chasm of fear and feeling within. This introversion creates an intensely schizophrenic outlook, within both the characters and the observer, which is why gothic texts deal so often with madness and melancholy.

 

Immanuel Kant, a romantic, sees the sublime as a representation of the “colossal”, the absolutely great, which occurs in the moment that reason gives way to imagination, before regaining its power (Morris). A person experiences the sublime within this gap, this mental lapse between grounded reason and soaring creativity. Sigmund Freud, a psychologist, wrote that the sublime is “the dissolution of self in death” (308). He believed that sublimity was a psychological compulsion towards our own end. This nihilistic mood is reflected in gothic literature, which so often contrasts the romance of nature with the horror of man. The children’s book Caleb (Gary Crew, 1996) and the film Sweeney Todd (Tim Burton, 2007) each explore notions of the sublime experience through their gothic visuals. The texts mix beauty, terror, humour, grotesquery, and farce, allowing the viewer to drift torturously and euphorically between each mood.

 

The front cover of Caleb depicts a haunting image: a sinister face formed by slithering insects. At first glance the figure seems human, or at least humanoid, and appears very threatening. It gazes back at us with cold, empty eyes, more predatory than intellectual, as though it does not see us at all. Maybe it is looking right through us, at something we cannot sense. Soon, it does not seem human at all, and more like an androgynous alien. The figure is something we do not know or understand, and its leering presence resides in a grey area between man and woman, between human and insect. Eventually the face dissolves altogether, and we see the crawling insects that formed it in the first place, white grubs against a black canvas. Cockroaches form the figure’s eyes, while a dragonfly makes its nose and brow. The insects are scuttling around, sliding against one another in chaos, and yet for that one fleeting second they formed a human face. The idea that they were there all along is very unsettling, as though they were crawling across our skin, invisibly violating us. Now the face seems to have been an illusion altogether, and yet its eyes still gaze at us from that black abyss. The deception of the image is equally unnerving, and forces us to question how we view the world. “What else has been an illusion?” the reader begins to ponder. What other details have we missed in the world, too blinded by the forest to notice the hideous trees? The image represents several aspects of the sublime, particularly the liminal. Our whole notion of the figure oscillates between contrasting ideas, never being able to settle on any one truth. The figure is neither human nor alien, neither male nor female, neither person nor insect. Even the colours of the image are in a state of flux. Is the figure the absence or the presence? Is the image a black face with white features, or a black hole covered in white insects? Does the face even exist at all, or can it only be seen from this exact angle, at this exact moment, when the insects are in those exact positions? Such transient ideas force us to question our own mortality, and whether our existence is fleeting, and perceivable only through a very specific set of stimuli. Does God look at us and see a person, with dreams and fears and a soul, or does he see only a collection of cells and blood and molecules?

 

The imagery of Caleb is intensely gothic, portraying a world of shadowed buildings, faceless people, and twisted nature. The black and white sketches form the visual narrative of Caleb, and are bound by white frames beside the text. Faded red drawings are also present, residing behind or around the text, taking the form of insects or leaves. The fact that the black and white sketches are framed creates a sense of order, as they contain the illustrations of Caleb’s spooky transformation, and are therefore the most potentially threatening aspect of the book. The measured red and white frames create a sense that, even when the story is at its most chaotic, its images are imprisoned within the geometry of the book. The visual anomaly, however, is the owl, which is sketched in black and white, but is able to fly back and forth between picture and text, transcending the frame that contains the rest of the story. This seems to imply that the pictures are not entirely shackled to the page, and may leap out at us if things become too intense. The owl is another example of the liminal, able to reside between the rational spaces of picture and text, and to transcend the rational order of the picture frame. It is possible that the owl represents the story’s narrator, Quill, whose name is linked to the bird’s feather, and who is also placed as a passive observer (or transcriber) of Caleb’s story.

 

Many of the pictures are seen from a distorted perspective. Some are slanted, such as the mansion on page 1, forming a sense of instability or chaos through the diagonal lines. Drawings of nature, such as trees and branches, are curled and twisted. The black forest on page 22 seems to thrash around Caleb, while he himself stands deathly still, as grasping branches explode from each of his shoulders like dragonfly wings. Nothing is settled or straight. Everything is clashing this way and that, from the swaying grass to Caleb’s crossed arms. Other images are viewed from very low or very high vantage points, creating the impression of a tiny creature, like an ant looking up, or a fly looking down. The image on page 1 is particularly enormous, filling almost the entire frame, and towering over us like a mountain. The aesthetic is sublime, as it overwhelms us with its structure, and makes us feel as small and as insignificant as an ant. An insect point of view plays into the thematic arc of the novel, which deals with biological transformation. The insects drawn in faded red also echo the developments of the text, notably towards the end, when Caleb finds a symbolic mate, and the critters begin coupling with one another. Like the front cover, the imagery is very unsettling. You can practically hear the bugs skittering across the pages, and crawling up onto your hands. Christian imagery is also present throughout the novel. On page 22, the moon frames Caleb’s face like a halo, but his expression is cold and sinister. This posture (with the black dragonfly wings) is an allusion to the fallen angel Lucifer, whose feathered wings were blackened and hardened by the fires of chaos (and the corruption of his mind). On page 14, there is a shadow behind Caleb in the shape of a demon. Conversely, above Quill is the shadow of an angel, formed by his stuffed owl. This frames the characters as spiritual enemies, one transgressing nature, the other abiding by it. Or it could refer to the duality of man, and the contrary natures housed in a single shell. This second reading would fit well with gothic sublimity, which locates human beings in the liminal morality between heaven and hell.

 

Sweeney Todd opens with a soaring view of Victorian London, before plunging into the tortured psyche of its titular character. Burton has created a rather surreal piece, melding together elements of the horror and musical genres, and giving the film an extravagantly gothic look and feel. Modern musicals tend to centre on romantic and comedic stories. The very notion of setting a horror film, or at least a serial killer film, to music is a contradiction (Mollin). The movements from one jaunty song to the next are interspersed with scenes of intense violence and gore, and this wavering between fun and terror contributes to a sublime experience in the mind of the viewer. Laughter and smiles are quickly choked off in the wake of bloody torture, and visceral disgust is dissolved as another brilliant musical number begins. Burton keeps the viewer swaying back and forth like a pendulum, and in the fleeting moments between mummery and violence resides the sublime.

 

Like Caleb, the film uses lines and angles to illustrate mood. The most obvious distortion is Todd’s barber shop. The room’s window faces out at a sharp diagonal angle, symbolising Todd’s unstable psyche and dangerous mood. As large as it is, the window offers very little light, and there is a melancholic gloom to the shop. Outside the glass, we see row after row of blackened chimneys, raised upwards in a dark, imposing manner. The sky is polluted with a thick haze that bathes the filthy lanes in an eerie shadow. The streets are an open sewer of poverty and corruption, represented by the narrow alley-ways, the mud-soaked cobblestones, the oppressive smog, and the hustle and bustle of desperate, world-weary souls (Mollin). This bleak imagery contributes to the gothic tone of the film, and lives up to Todd’s grim assessment of London as a “great black pit”. But of course Todd is also projecting his own melancholy onto the city, formed by the loss of his wife and daughter decades prior. The glass also represents the liminal space between Todd’s internal thoughts and the world outside. The grimy panes are that thin, fleeting gap between mind and environment. As the film progresses, Todd withdraws further and further inside himself (and the safety of his shop), shunning the cruelty of the external world, and isolating his soul from those who might care about him.

 

Another chief difference between romantic and gothic experiences of the sublime is the exploration of terror. Edmund Burke wrote that the sensation of terror came from “an apprehension of pain or death” (Paulson). This statement may be true, but it tends to oversimplify the nature of fear, and to ignore complex human psychology. Certainly objects of danger can inspire fear. Raging storms, dark nights, and ferocious beasts will all terrify people on a primal level, as will a murderer. But what’s more frightening than the murderer’s knife is the mind that works it. Being stabbed may be physically painful, but trying to comprehend the nature of the man who stabs you is psychologically painful. We look at a murderer, and we see ourselves. We see our own rage and sorrow reflected back, and think: how thick is the line between him and me? We can sympathise with Todd, and the injustice he has suffered. We want him to have his revenge against the wicked Judge Turpin. We can imagine his pain and his anger, and egg him on along his trail of blood. But when we see where the trail leads, and realise how much it has cost him, we recoil. We become frightened by our own bloodlust, and realise that the line is thinner than we thought. The terror does not reside in Todd’s ability or desire to harm us, but in the ruin of his own soul. One thing that the romantics and gothics do agree on is that terror is the “ruling principle” of the sublime, and only those who know true fear can have a sublime experience (Morris).


The Tell-Tale Heart (lit essay)

Posted on February 24, 2013 at 9:40 AM

The Tell-Tale Heart (Edgar Allan Poe, 1843) is a short story with elements of horror and suspense. While the tale depicts a grisly murder, the horror comes from the twisted psychology of the killer, and the cold, meticulous process of his deed. Poe’s story represents a new kind of genre that emerged during the Victorian era – the detective story. While very similar to a Gothic tale, this piece focuses on a crime (a murder), the criminal’s inner psychology, the process of his deed, and his eventual capture. Unlike most detective stories, however, the killer in The Tell-Tale Heart is not discovered by the two visiting policemen. He is foiled by his own guilty conscience, symbolised by the frantic beating of a heart (which he believes to be his victim’s, but which is actually his own terrified pulse).

 

The tone of the narrator is a mixture of nervousness and excitement. He plots the murder of an old man, with no clear motive other than his fear of the man’s “blue eye”. This fear may be projected guilt, the notion that the old man senses the narrator for what he truly is – a killer. The narrator sets about burying this fear, both figuratively and literally, by killing the old man. The murder may also be a Freudian attempt to overthrow the father figure, and usurp his role as patriarch. Throughout the short story, the narrator constantly insists to us that he is not a madman, but a thoughtful, meticulous murderer. The implication is that madmen are messy and wild, while he is disciplined and clean – a more sophisticated killer, for a more sophisticated age. As the story progresses, the narrator’s tone becomes more and more frantic. He jumps wildly from one thought to the next, seizing on proof of his sanity. This may illustrate a form of schizophrenia, or may even be Poe’s critique of the formless, frantic style of modern writers.


Watchmen & The Tyger (lit essay)

Posted on February 12, 2013 at 8:25 AM

Watchmen (1987) is a twelve-issue comic series, written by Alan Moore and illustrated by Dave Gibbons. It depicts an alternate history of the world in which superheroes began emerging in the 1940s and 50s, and aided the United States government during the Cold War. The series itself is set during the 1980s, after the Watchmen have been outlawed and disbanded by a totalitarian American government, and the nation readies itself for nuclear war with the Soviet Union. Moore contrasts the fantasy of the superhero mythos with the vicious reality of urban crime and the nuclear age. The sublime manifests itself in Moore’s characters, each of whom aspires to uphold a childlike notion of good and evil, while having to face the infinite shades of grey that make up postmodern society. Throughout the comics, Moore makes numerous literary allusions, particularly to William Blake, and his 1794 poem The Tyger.

 

The fifth book in the series is titled “Fearful Symmetry”, and centres on the character Rorschach, an effective but deeply nihilistic crime-fighter. Though superheroes have been outlawed by the state, Rorschach continues to stalk the streets of Manhattan, guarding its citizens, and punishing its criminals. By quoting Blake’s poem, Moore is making a clear link between Rorschach and the Tyger. Both figures inspire a mixture of awe and fear in their surrounding environment, and often in the reader. While guided by unwavering moral principle, Rorschach is a deeply violent and disturbed individual. His hatred of criminals sometimes erupts into a blind, murderous rage. Like the Tyger, Rorschach’s appearance provides a stark contrast with his environment. Slight in stature, garbed in a fedora and trench coat, Rorschach would usually blend into the monotonous drone of urban decay, were it not for the strange mask he wears. Wrapped around his head is a sheet of white material, with a black ink stain printed across the face. The ink stain seems to change shape from panel to panel, depending on who is viewing it, yet constantly maintains a symmetrical pattern. It is a clear nod to the Rorschach test practised by psychologists, and echoes the black and yellow symmetry of the tiger.

 

Like Blake’s Tyger, Rorschach is beyond all description and understanding. His mental and spiritual landscapes are a mystery. It isn’t even clear whether he is sane or not. He may only seek to punish criminality in order to expel his own bloodlust, rather than in pursuit of any moral code. Or perhaps he is aware of his own spiritual violence, and has crafted a superhero identity in order to harness it against those who deserve his wrath. Neither his allies, his enemies, nor his therapist is able to capture who he truly is, and in this ambiguous, enigmatic nature lies Rorschach’s sublime presence. Moore has written the character as though he is a predator, and rather than being his home, Manhattan serves as his hunting grounds. He stalks criminals from the shadows, and when his mask is off, he walks beside them on the street. He is everywhere and he is nowhere. The boundaries of his physical presence are blurred to the point where criminals see him even when he isn’t there. They see him in the architecture of the city, and in the shifting night air, ready to pounce on them like a carnivorous beast.

 

Rorschach’s mask is inextricably tied to his crime-fighting aura, and to his own fractured psyche. The symmetry of the ink blots symbolises his two conflicting natures – life-saver and killer. It also symbolises his dual identities – the masked Rorschach, and the man behind the mask, Walter Kovacs. Unlike other superheroes, however, Rorschach considers his mask to be his true face, to the point where removing it is like having his skin flayed off.

 

Because the mask contains a Rorschach inkblot, its image changes depending on who is looking at it. It inspires terror in criminals, and hope in their victims. The mask, and Rorschach by extension, is a blank canvas, devoid of human feature or emotion. He is the abyss, and people project their innermost thoughts and fears into him. This indistinct form and constantly transforming and dissolving identity are emblematic of the sublime experience. Rorschach has crafted this persona specifically because it inspires awe and irrational fear in his enemies. Like the character of Batman, Rorschach alters and embellishes his presence through deception and theatricality, instilling that fear in his prey. He represents one of the rare instances in which we as readers are exposed to the inner monologue of the predator, the abyss.

 

Rorschach’s psychology is deeply fractured. Part of him is hopeful, determined to uphold moral order. But part of him is deeply cynical about human nature, having observed all of the atrocities of urban crime, and worse still, the apathy of everyday people in the face of this moral decay. Rorschach is a violent, brutal and mentally unstable person. Yet the identity he crafts is a symbol of justice, and inspires a measure of hope amongst Manhattan’s civilian populace. His mind resides somewhere between Walter Kovacs (the man he was born as, but rejects) and Rorschach (the man he created, and regards as his true self). This liminal, grey area in which his psyche rests is emblematic of the “fearful symmetry” Blake alluded to in his poem.

 

Rorschach’s horror is unmistakable, especially when he descends into blind rage. In one scene, he discovers the dead body of a little girl, having promised her parents he would find her safe. When the killer admits his crime, showing no remorse whatsoever, Rorschach explodes with anger, and butchers the man with an axe. Whatever morsel of satisfaction we may feel at seeing a child-killer punished is quickly evaporated in the midst of Rorschach’s psychotic violence. The euphoria of Rorschach’s presence is more difficult to locate, but it is certainly tied to his moral code. Rorschach is the only character in the series who refuses to compromise. He seeks truth and justice above all, even at the cost of his own soul.

 

Sweeney Todd (film review)

Posted on February 10, 2013 at 9:20 AM

Set in the filthy streetscapes of Victorian London, Sweeney Todd (Tim Burton, 2007) might be described as a “horror musical”, revolving around themes of revenge, cannibalism and hairdressing. The film, like most of Burton’s work, is black humoured and extravagantly gothic. All of the characters, from Depp’s blood-thirsty barber to Rickman’s sadistic Judge Turpin, are deathly pale, with black rings around their eyes. The city of London is a nightmarish fantasy of industrialism. Grey chimneys creep over the horizon, polluting the sky with a thick haze that bathes the landscape in an eerie shadow. The streets are an open sewer of poverty and corruption, represented by the narrow alley-ways, the mud-soaked cobblestones, the oppressive smog, and the hustle and bustle of desperate, world-weary souls. This imagery contributes to the gothic tone of the film, and lives up to Todd’s grim assessment of London as a “great black pit”. But of course Todd is also projecting his own melancholy onto the city, formed by the loss of his wife and daughter decades prior.


The music in the film is gloriously entertaining. Special mention goes to Depp’s barber-ous showdown with a flamboyant Italian, played by Sacha Baron Cohen, as well as Depp’s duet with Carter, in which the pair discovers a new pie filling that will make your skin crawl. While none of the top-billed actors are particularly gifted singers, their acting skills bring so much emotion and charisma to the performances that the sequences work splendidly.


Characters conform to gothic archetypes. Bright-eyed sailor Anthony plays the passive hero, who rescues the psychologically tortured maiden, Johanna, from the clutches of the sinisterly powerful Judge Turpin. Todd is really the only character who doesn’t quite fit the Gothic mould, embodying more of a Greek tragic hero. Todd’s desire is vengeance, and his soul is gradually consumed by it. Even when his daughter Johanna is in danger, Todd’s only motive for rescuing her is to lure Judge Turpin into a trap. By the end, Todd’s love for his family has been completely poisoned by his lust for blood. So much so that he fails even to recognise his wife and daughter.

The Lamb was Bleating Softly (lit essay)

Posted on February 9, 2013 at 9:10 AM

The Lamb was Bleating Softly (Juan Ramón Jiménez) conveys the arrival of an extraordinary (perhaps divine) presence into a rural landscape. The poem opens with the farm animals being stirred awake by “His” appearance. They are excited, and this anticipation is communicated aurally, as the narrator him/herself is roused by the commotion. The fact that the animals sense His coming first may suggest a communion with nature that is simply weaker in humans. The opening lines may also be part of a dream, since it is not until line six that the narrator’s consciousness is even established. Then again, just because it is dreamed doesn’t make it any less powerful.


Who He is, is never made explicit, but His greatness is transformed into sensory perception, through the stirring of the animals, the colours of the sky, and the intoxication of the narrator. The anticipation of the animals and the narrator, combined with the adoring descriptions of the sunrise, illustrate how this divine presence has transformed the flora and fauna into something bright and beautiful, and brimming with excitement. The world falls into a rare harmony that only something sublime or transcendent could inspire.


One possible explanation is that the poem is a retelling of the birth of Christ. The moon is described as “a full and divine womb” (Mary, perhaps) descending. The fact that it sets in the west may even symbolise the expansion of Christianity into Europe. The rural setting, the stable, and the farm animals all summon up the imagery of Jesus’s birth, and of course, the lamb itself, which is synonymous with Christ, and the children of God. Another explanation is that the arriving figure is the narrator’s husband, having returned home from war, or some murky fate. The animals all recognise his scent, and grow excited, and the wife, sensing his imminent return, becomes drunk with joy, and rushes to greet him. In either case, the tone of the poem is one of glorious transformation of nature and feeling; the night has passed, and a new dawn has broken.


Dawn After the Wreck (lit essay)

Posted on February 7, 2013 at 9:10 AM

This painting (by Joseph Mallord William Turner) portrays the elements of land, sea and sky, and a lone dog, gazing upwards. The colours seem to bleed into each other, making it difficult to determine where the water ends and the sand begins. It is also unclear whether the sun is rising or setting, or whether the tide is ebbing or flowing, leaving the entire scene in a state of flux. The twilit sky is reflected by the wet sands, giving the picture a mirrored feel, and drawing the viewer’s gaze into the dark blue ocean that splits the image. While most of the image seems calm and still, the water on the right side of the painting seems to be churning like waves against the shore. All of these elements represent a liminal grey area, and contribute to the image’s indistinct quality.

 

The dog is the only living thing within the frame, and serves to orientate the viewer. Were the dog not there, we might be able to turn the picture upside down, and still perceive the same image – a blue skyline, with a red hue on either side of it. While the dry, empty shading of the sand and sky suggests emptiness, the sea suggests depth and life, with its dark blue visage and churning tide. Perhaps an entire ecosystem of life simmers beneath its surface, hidden from our gaze.

 

The vast emptiness of the scene is both frightening and poignant. The beach is devoid of humanity, untouched and unmarred, and almost entirely devoid of life. It surrounds and engulfs the tiny dog, and suggests the amoral emptiness of the world in general. The dog is gazing up intently, perhaps at something out of frame, like the moon, or perhaps at something no human can see, implying a spiritual or subliminal world that we have lost touch with. A divine world, perhaps, or one of nature, which technology has severed us from. Or maybe the dog is simply contemplating the vastness of the universe, as we contemplate him/her.


Paradise Lost (lit essay)

Posted on January 31, 2013 at 9:10 AM

The object of Paradise Lost (1667), as John Milton declares in the opening passages, is to learn why Adam and Eve tasted the Fruit of Knowledge, and who or what seduced them into disobedience against God. He also wants to understand the reason for God’s plan, and to justify His decision to expel the fallen angels from Heaven, and human beings from Paradise.

 

The character Satan is initially in despair over how far he has fallen, not just the physical fall from Heaven to the darkest depths of Chaos, but his utter defeat at the hands of God. Yet even after tasting God’s indomitable strength, Satan is still as defiant and determined as ever. His military pride has been wounded, and now his grief has been transformed into rage. Even if victory is impossible, he swears to continue his rebellion, and to mar the future works of God, out of spite and vengeance. His chief servant Beelzebub, though intensely loyal to Satan, is also filled with despair. Outwardly, he is boastful, and agrees with his commander that they should continue the fight, but inside, he is greatly disheartened by their dire situation.

 

Milton describes the fallen angels as being immensely powerful, and enormous in size. The narrator compares them to the Titans of Greek mythology, great giants who were also cast down into Chaos by the Gods of Mount Olympus. He also compares them to the Leviathan, the largest creature God ever made, even claiming that sailors would mistake it for an island, and drop anchor beside it. These descriptions illustrate to the reader that, even in defeat, Satan and his followers are still incredibly formidable. They also generate deception and confusion, since the reader has no idea what a Titan or a Leviathan looks like, and learns later that the fallen angels can actually change their size and shape.

 

Satan and Beelzebub gather up the hundreds of thousands of fallen angels, all of whom are still armed and armoured. The size and strength of their army, even in defeat, bolsters their confidence, and they fly towards Heaven and offer a defiant war-cry towards God. After that, they build Pandemonium, deep beneath the crusts of Chaos, and make it the capital fortress of Satan’s new kingdom.

 

Satan is a tragic figure, and his desire to keep fighting, even in the face of imminent defeat, is classically heroic. From his perspective, his struggle is justified. He believes God’s rule is oppressive, and argues that since angels are “self-begot and self-raised”, they ought to be self-governed. He even goes so far as declaring he’d rather be free in Hell than a slave in Heaven, which is admirable, as is the council of demons he assembles in opposition to God’s totalitarian rule. However, as the story progresses, he turns more and more to evil means. He no longer aspires to defeat God (since this is deemed impossible), seeking only vengeance, and to pervert the course of good. However tragic an anti-hero he began as, he becomes too corrupt and bitter to root for.


The Prelude (lit essay)

Posted on January 30, 2013 at 8:45 AM

Lines 356–400 of The Prelude (Wordsworth, 1850) explore the notion of guilt in a young boy. Wordsworth recounts the finding of a little boat. He argues that it was Nature herself that led him to the vessel. The environment has taken on a voice of its own, and the boy is letting himself be guided and swayed by the forest and the river. On the other hand, perhaps the boy is very much in control, and is using the still canvas of nature to pour out his own whims and desires. A nervous excitement takes the boy as he unchains the vessel, and his awareness of the forest becomes heightened as his mischief echoes against the rocky hills. His “troubled pleasure” is magnified by the stillness of nature. The boy is no longer a part of his environment. He is changing it, and this guilt is given form in the clasping shadows of the river.

 

As the boy paddles the boat along the river, he becomes intensely aware of the moon and stars reflected in the water. He is shrouded by the vivid night sky, as though sailing upon air. This vast open space may indicate the mixture of excitement and fear he bears from giving himself to nature, or straying from social morality. He drifts with his eyes planted firmly on the shore he is departing, his gaze blind to the path ahead. He is uncertain where his actions will lead him, and he has no conscience to anchor his passage into the dark future. He has defied society by stealing the boat, but there is no one around to judge him but himself. Yet he does feel guilt, and those feelings are made whole (at least in his own mind) by the grim black shapes that emerge out of the forest. Perhaps it is nature condemning him for disturbing order, or perhaps it is the boy’s own feelings (cultivated by society) giving weight to the shadows, and twisting them into figures of dread.

 

Book of Thel (lit essay)

Posted on January 17, 2013 at 8:20 AM

The Book of Thel (Blake, 1789) suggests that you cannot know life until you experience it for yourself. The maiden Thel is fascinated by what the future holds for her. She asks four different individuals, at different stages of maturity, for answers. However, none of their advice prepares her for the reality of life, with all of its torment and heartbreak. The sublime experience of the “hollow pit” cannot be understood at an intellectual or academic level. It has to be experienced first-hand, and when it is, Thel rejects it, and retreats back to the innocence of Har.

 

Her retreat from the “pit” is a rejection of adult life and the sublime. She clings to her innocence, symbolised by the serene and peaceful Valley of Har. That said, once she has experienced adulthood and sexual maturity, much of that innocence will be lost. The poem can also be read as the story of an unborn child, learning about the world she will soon enter – our earthly world, full of mortality and encompassing misery, in contrast to Thel’s flowery, pleasant existence in Har. Her retreat can be seen as a child refusing to enter a world as wretched and corrupted as ours, and a refusal to surrender her innocence to birth.

 

The opening lines of the poem illustrate two divergent creatures – the eagle and the mole. One flies above the world, the other burrows beneath it. Yet neither can offer Thel a taste of the sublime experience of the pit. She must go there herself. The individuals Thel meets are also examples of duality. The flower is an innocent maiden, associated with feminine grace, while the cloud (which waters her) is a lover, associated with masculine power (i.e. surging storms, thunder, the symbolic seat of God). The worm is infancy, completely dependent and incapable of caring for itself, while the clay is motherhood (mother earth?), who nurtures the worm, and is strengthened by its dependence on the soil. God can be seen as a patriarchal figure throughout the poem, but he exerts very little moral authority over Thel. Rather, she investigates her sexuality freely, and is not bound by marriage or custom to surrender her virginity.

 

If the Vale of Har is beauty, then the Hollow Pits beyond the Northern Bar are the sublime. The first three segments of the poem flow with a serene, flowery, syrupy language. Thel’s world is a world of glistening dew and milky garments, humble grass and golden honey. It is safe, neat, and non-threatening. But the world beyond the Northern Bar is chaos, carnage, and perpetual sorrow. It overwhelms Thel, and instead of embracing it, she recoils from the sublime, and flees back to the safety of Har, the purity of the womb.


Moral Panic and the War on Drugs (soc essay)

Posted on January 12, 2013 at 7:05 AM

Throughout the twentieth century, drug use has undergone cycles of intense public scrutiny and concern. From the 1960s onwards, the media and legislators have focused on specific drugs as representations of the wider cultural and moral decay of youth demographics. These intense, often exaggerated, reactions to drug culture mark a moral panic that was drawn to boiling point during the 1980s and has sustained itself through negative public opinion well into the 2000s. America’s “War on Drugs” campaign is the most dramatic example of moral panic informing national policy, and represents the establishment’s struggle against the counter-culture movement, perceived as a threat to social order and values.

 

The term “moral panic” has been widely used, both as an academic and an everyday expression, since the work of Stanley Cohen in the early 1970s. It is generally used to describe the public’s perception of youth subcultures, as well as anti-social or criminal behaviour. Essentially, it is an exaggerated reaction, on the part of the media, police, or wider public, to the activities and attitudes of a particular social group. These activities may well be trivial or harmless, but their documentation by the media has been sensationalised to such a degree that a general anxiety is formed. Conversely, the activities of a subculture may indeed be harmful or dangerous, but the response of the social establishment has been so heavy-handed and unyielding that it has actually inflamed the problem. In both cases, the perceived perpetrators of the social unrest are vilified and ostracised by the media and public, often referred to as “folk devils”. This antagonism is later reflected in legislation, in which the perceived “troublemakers” are profiled and targeted by law enforcement agencies. The drug war embodies many of the principles of a moral panic, and has been exploited by politicians and legislators to curry favour with their older and more socially conservative constituents.

 

The “War on Drugs” is a campaign of prohibition undertaken by the United States and several other Western governments, including Australia. It is aimed at deterring the use of illicit substances, and reducing the illegal drug trade. The so-called “war” is both a political ideology and a set of federal and international policies designed to discourage the production, distribution and consumption of psychoactive drugs, including marijuana, cocaine, heroin, crystal meth, and many other forms of illegal narcotics. For members of the subculture, drug use is a recreational or social activity, akin to the consumption of alcohol; however, drugs may also be employed for medical, athletic and spiritual purposes. The illegal drug trade refers to a global black market that cultivates, distributes and sells these prohibited substances. In 2003, the United Nations estimated that the drug trade generated $321.6 billion a year, representing almost 1% of total global commerce. In this sense, drug prohibition represents both a cultural and an economic structure, one that houses and harms members of the socio-economic underclass. The term “War on Drugs” was coined by President Richard Nixon in 1971, but the campaign is actually a continuation of drug prohibition laws dating back to 1914, which saw the outlawing of opium and coca leaves. Preceding measures include the 1919 prohibition of alcohol, repealed in 1933 after intense public disapproval, and the 1951 Boggs Act, which drastically increased penalties for the use of marijuana, declaring it a “gateway drug” to more dangerous substances. The prohibition was intensified in the 1980s during the deeply conservative Reagan and Thatcher administrations, which further demonised drug users, particularly ethnic minorities. Unlike, perhaps, musical or artistic movements, drug culture does in fact pose a legitimate risk to the health and wellbeing of its members. However, moral panic has served to inflame the problem more than help it, and it is often the people most in need of public assistance, namely addicts, who are so coldly maligned by public disapproval.

 

In response to Cohen’s research into moral panics, criminologist Jock Young explored the drug use that developed around the “hippy” culture of the 1960s, in particular the use of marijuana. Young observed that the negative social reactions to hippy drug use actually had a reinforcing effect on its members. As the hippy movement defined itself in opposition to the established norm, drug use snowballed in response to public criticism. Young described the process as “deviance amplification”, in which the accumulation of moral panic serves to proportionally increase the deviant or criminal behaviour of a subculture. Young observed that “our knowledge of deviants is not only stereotypical because of the distortions of the mass media but is also one-dimensional”. Many so-called “deviant groups” are grossly misperceived, due to stereotypical and sensationalised reporting, deliberately designed to inflame readers. One British newspaper in the late 1960s described hippies as a violent, filthy mob that walked the streets with iron bars, fornicated in public, and resided in a fortress in Piccadilly Circus. This false perception leads to a social reaction based on a perceived threat to morality, and not an accurate representation of the subculture in question. Young also found that when the police employed stereotypes to engage drug users, the users would gradually come to embody those stereotypes, emphasising the more criminal elements of their subculture, and expressing themselves in rebellion against mainstream values and opinion. This escalation between police and drug-users served to criminalise groups for whom illegal activities were initially not the focus of their association. Young argued that, over time, the police action against marijuana users led to the intensification of their deviant behaviour, so that “certain facets of the stereotype became the actuality”.

 

In 1930, the attitude of the American public towards marijuana was apathetic and indifferent. Very few Americans smoked marijuana or knew someone who did. Only 16 states had laws making possession and distribution a crime, and even in those states enforcement was “relatively lax”. But by 1937, every one of the United States had outlawed the possession of marijuana. Hundreds of magazine and newspaper articles were being published, proclaiming the horrors of marijuana, and denouncing the drug as “a sex-crazed drug menace”, “a weed of madness” and “the burning weed of hell”. This dramatic shift was a direct result of moral panic, cultivated by the Federal Bureau of Narcotics, which “perceived an area of wrongdoing that properly belonged in their jurisdiction and moved to put it there”. The FBN attacked marijuana use from two positions – it provided the media with “facts and figures” which portrayed marijuana as a harmful and addictive substance; and it worked with legislators to outlaw marijuana one state at a time, until the prohibition was federal. By sowing the seeds of public disapproval, the bureau was able to raise support for the legislation, and by criminalising marijuana, it was able to further solidify the deviancy of the drug in the perception of the mass media and the public. Marijuana, a relatively unknown and little-used drug, had been transformed into a “national menace” over the course of a few years. The FBN had successfully created a crisis where no basis for one existed, and expertly positioned itself as a moral and legal crusader against the “killer weed”. In the process, American marijuana-users, a largely passive and socially responsible demographic, were suddenly the outsiders and enemies of a moral crusade. The 1937 Marihuana Tax Act is a prime example of an artificially constructed moral panic. Some parties have even argued that the demonisation of marijuana was merely a business endeavour initiated by timber merchants (such as newspaper owner William Randolph Hearst) to limit the production of hemp, which had emerged as a very cheap substitute for pulp paper.

 

While public fear of marijuana faded during the 1960s and 70s, the drug panic exploded once again in the 1980s, amidst the “crack epidemic” that plagued America’s inner-city neighbourhoods. If marijuana was the drug of the 60s, then crack cocaine was the drug of the 80s, and in no other decade has the issue of drugs occupied such a huge and troubling space in the public consciousness. Drugs were no longer regarded as the symptom of moral and social decay; they were the cause, not just within poor areas such as Harlem, South Chicago and South Central, but within the wealthier areas of Beverly Hills and downtown Manhattan, where cocaine became a mark of extravagance and success. This shift in public perception was dramatic to say the least, especially when one considers that during the 1970s eleven states had decriminalised small quantities of marijuana, and only a third of American high-schoolers thought of marijuana as harmful. However, as soon as the 1980s commenced, public tolerance of drugs began to decline, and they were once again perceived as harmful. The resurgence of moral panic over drugs during this decade is interesting in how unexpected it was. Even more baffling is that the panic declined almost immediately after the 1980s concluded, and public awareness turned to international matters, such as the Gulf War and the fall of the Berlin Wall. For the preceding decade, however, drug use and abuse were forefront in the minds of everyday Americans, culminating in 1989, when 64% of voters in a New York Times/CBS poll “named drug use the nation’s number one problem”.

 

The reason the drug panic reached such a fever pitch during these years could be the result of a number of factors, including the concoctions of the press, politicians and moral entrepreneurs wishing to serve their own ambitious agendas. Media fear-mongering and political scapegoating have also been blamed, particularly with a focus on ethnic minorities, who served as the primary victims of the crack epidemic. The use of crack within inner-city neighbourhoods would have largely gone unnoticed by most Americans, but when the media suddenly alerted people to its widespread use, along with the overdoses of several famous athletes, fresh anxiety was formed. From a political perspective, the issue of illegal drug use as a social rallying cry focuses attention away from “structural ills like economic inequality, injustice, and lack of meaningful roles for young people”. It allows politicians, particularly conservatives, to appear strong-handed and socially conscious without actually having to do anything. Condemning drugs and drug users costs a politician nothing, but attracts easy votes from law-and-order minded individuals, and actually removes blame from the establishment, placing it onto the users themselves, many of whom are poor, desperate and powerless to defend themselves. The 1980s also provided a period of reflection for the United States, following the sullen horrors of Vietnam, while still lingering under the dormant shadow of the Soviet Union; the decade marked a point of stasis and introspection for the country, in which drug use became the dominating anxiety, because it eroded society from within.

 

The public finds widespread drug use both “riveting and terrifying” because it taps into a very primal fear concerning control over one’s body. Misplaced as these fears may be, the public perceives the entry of foreign substances into the body as a form of “silent genetic catastrophe”, which renders the victim psychotic and completely dependent on the poisonous chemicals. This fear has been exploited numerous times over the twentieth century, by American and British politicians, and exacerbated by sensationalist media coverage and documentation. Moral panic is stirred up around drug use and abuse at seemingly random points in social development, and these moral crises give us a glimpse into the fears and anxieties of the society in which they take root.


The point is, moral panics do not solve a problem. They inflame it. Whether drugs are a danger to Australian youth is a debate worth having. However, the social anxiety formed around this debate functions only to demonise drug users, and further marginalise them from mainstream society. While there are subcultures that wilfully use drugs as a symbol of their rebellion, there are also kids who have become addicted to illicit substances and who continue to harm themselves. In both cases, moral panic only worsens the problem. It can even be argued that this moral panic has been engineered by media sources and politicians, who have framed the issue as a criminal matter, rather than a social health matter. The war on drugs, as it has come to be called, has raged for over four decades, and victory is nowhere in sight. Yet it isn’t really a war being fought against drugs, but against drug users, the people addicted to these substances. And rather than treat them as victims of social neglect, we lock them in prisons, where their problems are increased tenfold.

 

Hebdige observed that once a subculture has been accepted by the mainstream, it loses its power and attraction. What if we applied that same principle to the drug war? What if we legalised drugs, and instead of locking up users, we treated them? This may sound like a radical and reckless proposition, but it could bring several positive changes to our society. Firstly, we would have to admit that the drug war cannot be won; that, like alcohol prohibition, society will never suppress people’s desire for intoxicants. However, if those substances could be sold legally, in specialised chemists, we may be able to control the problem, and monitor it more effectively. Moreover, the legal sale of drugs would allow their production to be monitored with the same rigid health standards applied to food, alcohol, tobacco, and medicines (as opposed to the less than sanitary, and possibly poisonous, conditions of most drug labs). All the tax revenue generated from drug sales, and there would be a lot of it, could go directly to funding free rehabilitation clinics, clean needle exchanges, and safe injecting rooms. Such clinics could also serve the function of bringing addicts to a central location, where they could be offered medical care, or at the very least clean needles (which would, in turn, decrease the rates of HIV and hepatitis). The other benefit of this proposal is that it would effectively cripple the illegal drug trade, robbing dealers, traffickers, enforcers, and suppliers of the economy in which they operate. As with the end of prohibition in the United States, legalisation could cripple drug empires in a single stroke, and the money saved on police enforcement could be diverted to education and health services, as well as freeing up police resources to pursue other areas of crime.


The Exorcist & The Silence of the Lambs (film essay)

Posted on November 2, 2012 at 8:20 AM

Horror films often portray images of intense violence and gore. This filmic technique is not unique to the genre of course, as action films, thrillers, and especially war dramas also depict scenes of violence. However, in those cases, the violence is contextual. The threat applies to the characters in the film, and works more as a plot device than a subject of examination. In horror films, the violence is universal. It reaches past the screen, and affects the audience members viscerally, rather than aesthetically. It gets under their skin, and crawls into their thoughts long after the credits have rolled. The best horror films speak to something old and primal inside people. It may be the fear of physical and psychological abuse, or the fear of possession, of one’s person or otherwise. It may be the fear of watching a loved one turn against you, of being corrupted or defiled before your eyes. All of these fears are expressed through The Exorcist (Friedkin 1973) and The Silence of the Lambs (Demme 1991), two films concerned with the horror of abduction and possession. Both films focus on female victims, and explore the imagery of masks and the wearing of another’s flesh. The editing and mise-en-scene of both of these films contribute to the horror of the narrative, and evoke a sense of dread in both the characters and the audience.

 

The Exorcist and The Silence of the Lambs each address the audience’s fear of possession. In The Exorcist, the possessor is a demon, a supernatural entity, which seduces a young girl, enters her body, and makes her an instrument of his treacherous will. In The Silence of the Lambs, possession takes on a physical and psychological component. A young girl is kidnapped and held hostage in a serial killer’s basement. Parallel to that, a young FBI agent is manipulated and ensnared psychologically by the brilliant, but insane, Dr Hannibal Lecter. In both films the motif of female skin is used as a symbol of possession. In The Exorcist, the demon inhabits Regan’s body, slowly poisoning and flaying her from the inside out, while Regan herself cowers within the prison of her own flesh. And in The Silence of the Lambs, Buffalo Bill seeks to transcend his hideous soul by replacing his “ugly” masculine body with a “beautiful” female body, fashioned from the skins of his victims. While the mere description of these plots is enough to induce nausea, it is interesting to note that the violence in these two films is less pronounced than one might presume, even after viewing them. While The Exorcist certainly portrays sequences of gore and bodily fluid, the source of danger, Regan, is usually tied up, so as not to harm herself or others. The violence she inflicts on others is emotional violence, taunting Father Karras’ guilt, and exposing her mother Chris to an intensely sexualised aggression. In The Silence of the Lambs, not a single murder occurs within the frame of the film. Certainly we are exposed to the grisly aftermath of killings, but the closest we get to seeing one is the shot of Hannibal Lecter swinging a police baton back and forth across the lens, and spotting his shirt with flecks of blood. Instead, the horror lies in the violent and damaged psyches of these characters, and the arduous dissections of their crimes.

 

The imagery of horror films, particularly modern horror, favours the depiction of graphic violence and gore. The intensity of the imagery is designed to disgust viewers as much as frighten them. When The Exorcist was first released in 1973, newspapers reported incidents of public vomiting and mass panic. The shot of Regan, a young girl, assaulting Father Karras with a jet-stream of green sick remains, for better or worse, one of the most iconic moments in cinema history. While it is tempting to dismiss such filmic techniques as cheap and literally (rather than thematically) repulsive, there is an aesthetic strategy to The Exorcist’s “gore moments”, especially when contrasted with the handsome and measured cinematography that embodies the rest of the film. Likewise, The Silence of the Lambs, while omitting certain acts of violence from the audience, offers fleeting images of intense horror, such as the water-bloated corpse of one of the female victims, and a shot of green, peeling human flesh, sewn together over a mannequin. In their exposure to these “dark sensations”, the audience is no longer experiencing imagery on an aesthetic level, distanced by the fictionality of what they are viewing; rather, they are viscerally frightened or nauseated by it, thus blurring the boundaries between what is “real” and “imaginary”. Though the act of vomiting is generally regarded as undesirable, audiences seem to derive a morbid fascination (even pleasure) from frightening themselves to the point of somatic release. The stimulus of fear, like lust, seems to be sought out by people in an effort to overwhelm themselves with emotion, to be literally “possessed” by the visual experience. Along with the shots of projectile vomiting, The Exorcist also sees Regan scratching open her face and wrists, cursing her mother, twisting her neck and limbs at bone-snapping angles, and stabbing at her crotch with a crucifix. These explosions of physical and emotional abuse are made all the more shocking in that such a sweet, loving girl is made to commit them. Perhaps Regan’s violent possession echoes the involuntary fear and disgust we are made to feel as an audience. Beyond our primal bodily responses, however, the horror of the film also raises our curiosity, through its transgression of established cultural norms. Specifically, the film violates the sanctity of religion, and the safety of the family home. The demonic entity invades and corrupts both of these institutions, transforming them from sanctuaries into cages of torment. The resulting terror is the price we pay for gazing upon such monstrosity.

 

The reason the horror in The Exorcist is so effective is that it takes place within an otherwise calm and thoughtful family drama. The audience spends time with Regan and her mother, and the film develops them into real and sympathetic characters, observing their caring and loving relationship. Similarly, Father Karras spends the early part of the film dealing with his aging mother, and struggling with his faith. The supernatural elements emerge gradually, and are treated with scepticism by the characters. As Regan’s behaviour becomes more unstable, such as wetting herself at a party, her mother turns to science and medicine for help. This grounds the film in a reality that the audience recognises, and behaviour that is believable. It illustrates a new-world response to an old-world power. These establishing scenes contrast sharply with the horrific imagery of Regan’s possession, and the spiritual and emotional violence that it necessitates. We are forced to watch this sweet-natured girl become corrupted and defiled by a demonic entity, and her mother reduced to tears, as the one she loves most in the world is turned cruelly against her.

 

This juxtaposition of order and chaos is exemplified in the film’s prologue. Set on the other side of the world, in the deserts of Northern Iraq, archaeologist Father Merrin unearths an ancient holy relic. The ruins of the dig site are grey, craggy and dusty, closer to rocks than bricks. The sun blazes enormous in a glowering yellow sky, baking the sands an arid red. The heat is dry and oppressive, and the environment portrays a stillness that contrasts with the busy desperation of the diggers. Windburnt men hack and hammer at the rocks, probing deeper and deeper into the ancient world. Even when Father Merrin returns to town at night, there is a feeling of age, not just in his laboured movements, but in the land and people who surround him, with their long beards and weary eyes. The prologue ends with Merrin happening across a statue in the dig-site, a winged monster with a snarling face, some ancient people’s grasp at capturing monstrosity. As soon as he lays eyes on it, the tone of the scene shifts drastically; the stillness gives way to frenzy. The sun shines angrily through the statue, making it appear black, and then a screeching sound begins to rise as Merrin approaches. The camera, previously steady, begins to shake back and forth, and the frame seems to speed up and pivot, as a second man appears behind him. Two dogs begin fighting viciously nearby, and the screeching becomes louder, mixing with shrieking violin chords. The camera zooms in on Merrin and then the statue, implying a profound spiritual connection between the two. Time seems to shift, and they continue to stare each other down, their figures silhouetted against the swelling desert sun. The screeching grows even louder, sharp and unnerving, and braids with the growls of the dogs. Chaos and violence seem to have erupted out of nowhere, and as quickly as they appeared, the scene fades and dissolves into an image of a modern American city – Georgetown. Friedkin has transported us from the old world to the new world, giving us a very frightening sense that a violent and evil presence has passed between the two. Subliminal images of the demon statue, as well as a white-faced androgynous creature, are intercut throughout the course of the film. Their ambiguous placement, while not explicit, creates a sense of uneasiness within the viewer, and fosters the idea that the evil portrayed in the film is reaching out, and toying with their mind.

 

Decay and corruption are prevailing motifs throughout The Exorcist. As Regan’s possession progresses, her skin becomes green and flayed with cuts, her hair grows long and tangled, and her eyes glow a misty yellow. These external changes echo the internal decay of her soul, as the demon takes over and forces her to do and say awful things to the ones she loves. The demon exploits the disintegration of her innocence and virtue to attack Father Karras’ faith in God, and to taunt her mother’s love. The horror of this transformation lies in the suggestion that institutions of safety and moral order can be infested from within, and changed into places of torment and pain. This is likely a very real dread in many people’s lives, especially those who have experienced family breakdown or institutionalised abuse. There is also a disturbing sexual element to the possession. Regan exposes her genitals to doctors, demanding sex, attempting to castrate them when denied. She masturbates violently with a crucifix, and even tries to force a sexual encounter with her mother. This sexualised violence imposed by the demon may be a mockery of Mary’s Virgin birth. It may also imply a sense of sexual vulnerability that the pre-pubescent Regan felt, which was quickly seized upon and exploited by the domineering demon. These sequences would likely have deeply disturbed Catholics and parents alike.

 

As mentioned above, the horror in The Silence of the Lambs comes less from acts of violence and “jump-scares”, and more from psychological dissection, and the building of tension. By not showing the audience certain things, such as the nurse Dr Lecter attacked, or the front of Buffalo Bill’s rotting victim, the film allows the audience to build up terror in their own imaginations. The violence is alluded to, recounted, dissected, and re-enacted throughout the film, but the acts themselves, the serial killings of Lecter and Bill, occur off-screen and out of frame. They remain mysterious and enigmatic, festering like a cancer in the back of the audience’s mind, hanging like a shadow over the characters of the film. Demme knows that nothing he could show could compare to the monstrosity the viewer is building up in their own mind. In a sense, “jump-scares” and acts of violence are antithetical to this kind of horror film, as they provide a release of tension. Demme is far more interested in drip-feeding fear to the audience, providing them with dribs and drabs of the horrible truth, article headlines such as “Bill Skins Fifth” and “New Horrors in Cannibal Trial”, and allowing the audience to piece together and assemble their own dread. The horror also comes from our association with the victims, particularly in the shots of one victim’s empty bedroom. Their normality is our normality, and we cannot ignore the fact that it could easily have been us. This fear may be even more pronounced in female viewers, since the film deals specifically with misogynistic violence, and the depiction of women as prey for serial killers; lambs, so to speak, to the wolves of Lecter and Buffalo Bill. Demme knows that people have a morbid fascination with true crime, and the grislier the better. Once again, it plays into the audience’s desire to gaze into the darkness of the soul, and observe the transgressors of cultural order. However, Demme takes it a step further, by actually locating us in the perspective of the killer, at the climax of the film. As Clarice Starling fumbles around in the darkness, terrified and vulnerable, we see perfectly, through the predatory night-vision goggles of Buffalo Bill. In that moment, we slip into his skin, and feel the power of stalking a helpless woman. It is both intoxicating and horrifying.

 

One of the most iconic scenes of the film is the first meeting of Starling and Lecter, a prime illustration of how tension and dread are built in the audience’s mind. Throughout the film, Starling is constantly exposed to a form of leering masculinity. Her male FBI colleagues gaze at her with a mixture of desire and contempt. As she first enters Baltimore’s prison for the criminally insane, she is immediately objectified by the warden, Dr Chilton, who regards her as little more than a pretty airhead. But as she descends into the bowels of the prison, her feelings of isolation and otherness become far more pronounced. Red lights flash as she reaches Lecter’s floor, signifying her arrival into Hell. The camera at this point takes on a point-of-view perspective, as we look around the cells from Starling’s perspective. The prison guards, with the exception of the courteous Barney, look down on her with expressions of disdain. The prison door slams shut behind her, further cementing her isolation, and she makes her way slowly down the row of cells. The lighting here is dark, and the brickwork more akin to a medieval dungeon than a state-of-the-art prison. The third cell holds Miggs, perhaps the embodiment of all masculine sexual violence. His misogyny cannot be contained, and literally bursts out through the bars to latch itself onto Starling. However, the final cell holds Lecter. In stark contrast to the other rooms, Lecter is imprisoned by glass, rather than bars, signifying a higher, perhaps more elegant, form of sociopathy. His walls are decorated with beautiful sketches, and Lecter himself is clean, groomed, and well-dressed. He greets Starling courteously, and offers her a seat. Slowly, however, his true monstrosity begins to spill out. He reaches into her mind, probing her upbringing and manipulating her psyche, before casually recounting how he once ate a man’s liver. Lecter is everything Buffalo Bill wants to be. He is a hideous soul, contained within an elegant shell. What makes Lecter most frightening of all is that his mask is so perfect that it raises questions in the audience’s mind. How many others like him are out there, hiding behind this mask of sanity? Finally, this mask is made literal, when Lecter slices off the face of a police officer, and places it upon himself, to curry the empathy of actual human beings.


Cultural Cringe in Australia (soc essay)

Posted on October 28, 2012 at 7:10 AM

The term “cultural cringe” was first used in 1950, by the Melbourne critic Arthur Phillips, and refers to the ingrained feelings of inferiority felt by local intellectuals, writers and musicians. Phillips pointed out that the public widely assumed that anything produced by Australian artists was inherently deficient, when compared to British and European works. He argued that the only way local art professionals could gain public esteem was by travelling overseas and receiving acclaim from European critics (52-53). While Phillips coined the term, this phenomenon was already present in Australian society, as far back as the 19th century, as evidenced by Henry Lawson’s preface to the 1894 Short Stories in Prose and Verse. Lawson writes that the Australian writer, until recognised by London critics, “is only accepted as an imitator of some recognized English or American author”; the “Australian Burns” or the “Australian Kipling”, so to speak. Thus, “no matter how original he may be, he is branded, at the very start, as a plagiarist”, and while his countrymen no doubt believe they are complimenting and encouraging him, they are “really doing him a cruel and an almost irreparable injury” (Rodrick 108).

 

Ironically, the term “cultural cringe” was coined some two centuries after the word “culture” rose to prominence in Europe during the 18th century. Originally, it referred to the process of “bettering” and “refining” an individual through education and the fulfilment of national ideals. Today, “culture” signifies the “distinct ways that people living in different parts of the world classify and represent their experiences”, through art and intellectual achievement (Salt 288). A culture can encompass a nation, a state, or an ethnic group, but above all it is the creative and academic manifestation of a collective. The great fear of Australians is that they do not have a cultural identity of their own, or that, if they do, it is inherently inferior to those of other countries, especially Britain and America. This “cultural cringe” has many contributing factors, including Australia’s relatively short and seemingly uneventful history as a British colony; its relationship with the British Empire, especially when compared to the United States; a sense of shame over being descended from criminals; and a natural disdain for self-important behaviour and “tall poppies”. The phenomenon is further aggravated by a feeling that Australian culture is being absorbed by the world’s mass media, and is regarded with meagre standing in the global village (Pickering 47).

 

When addressing cultural cringe in Australia, and the effects of globalisation, it is important to establish just what Australian culture is. The difficulty of this task further emphasises the feeling of cultural vacuum felt by Australian academics. Australia is a nation located in the Asia-Pacific, yet wholly isolated from the cultural values and customs of its neighbours. Rather, it is the product of British principles, particularly with regard to law, religion, language and literature. In the past few decades, however, Australia has been saturated with American media, film and television, and has been criticised for its attempts to emulate American cultural forms (Brown 138).

 

If there is a uniquely Australian culture, it is symbolised by the country’s unofficial mantra: “a fair go for all”. Australians are a proudly democratic people. While one of the youngest nations in the world, Australia was among the earliest to institute democratic government. Australians value mateship, equality and hard work, and like to champion the “underdog” (Curan). They dislike self-important behaviour, and distrust those whom they regard as “tall poppies”: common people raised into the spotlight by the influence of government or celebrity. Even after the demonisation of socialism during the Cold War, this working man’s, egalitarian spirit has endured. Indeed, Australia was federated under the dream of being a “working man’s paradise”, a dream reflected in the progressive labour laws of its constitution (Curan). Compared to European nations, it is also rather progressive and practical in its approach to law and judicial process. This attitude may stem from the relatively short period since federation. Australians are not part of a thousand-year-old nation; rather than being content with simply analysing a problem, they are open to reform and can embrace change when it yields practical results (Curan). Social mobility is also far more attainable in Australia than in Britain or Europe. Any Australian citizen, regardless of who they are or where they were born, is able to ascend to the highest rungs of the political and economic ladder on merit. In England, by contrast, a person is likely to be categorised, or even discriminated against, based on their regional accent (Salt 291). The European class system, if not entirely absent, is vastly diminished in Australian society. Finally, while Australia was certainly born into a policy of racial discrimination, it came to embrace multiculturalism during the 1970s, and today stands beside America as one of the great “melting pots” of the world (Salt 301).

 

Australia’s early distrust of authority (informed by its convict heritage, mining rebellions, and bushranger legends) evolved into a “distinctly Australian aloofness”. It was not disillusioned, but “illusion free” (Curan). The country’s propensity for mythmaking was thin, and it valued the useful over the beautiful: function over fashion. High culture, the world of art and literature, had become associated with social authority, and was consequently spurned by the Australian working man. Passion was regarded as suspect, since “it could enslave you”, and frustrated by this lack of passion, many Australian artists and creators of illusion travelled overseas. They felt unappreciated, and “accused Australia of lacking a complete identity”, or of being too “immature” to appreciate them (Curan).

 

One theory regarding Australia’s cultural identity crisis holds that tragedy and cultural cringe are “inversely proportional”; that only through war and suffering can a people learn who they really are, what their strengths and weaknesses are, and what meaning they attach to life. Identity is best forged in opposition to an oppressor, and whatever else countries like Poland, Russia and Germany have suffered, it has never been an identity crisis. Compared to the American Revolution, Australia’s Federation had all the conflict, drama and anxiety of a game of cricket. Australia’s self-consciousness, and the insecurity of its cultural cringe, is really a great self-indulgence, reserved for people with not a lot to complain about (Wierzbicka 1188).

 

It wasn’t until 1914 that Australia’s first great test arrived, and the fledgling nation emerged onto the world stage. It was during World War I that the first collective Australian story unfolded, in the form of the ANZACs. During the Gallipoli campaign, journalist Keith Murdoch described the Australian soldiers as “noble men… who have endured… Oh, if you could picture ANZAC as I have seen it, you would find that to be an Australian is the greatest privilege the world has to offer” (Curan). It was not through victory that the Australian soldier was lionised, but through his endurance and bloodshed on the beaches of Gallipoli. The suffering of so many men during World War I played a large part in honing Australia’s identity, and brought its citizens together in pride and patriotism. At the same time, the incompetence and callousness of the British officers further hardened Australia’s mistrust of authority.

 

In the 1970s, however, this ANZAC tradition was pushed aside by the “cringe community”, whose cultural high point came with the anti-Vietnam protests. They believed that glorifying the ANZACs glorified war. It wasn’t that Australia didn’t have an identity; “it just wasn’t one that they liked” (Wierzbicka 1186). The “cringe community” disapproved of the patriotism of the new generation, particularly the increased interest in the ANZAC legend cultivated by the Howard government. When the National Museum of Australia was opened, for example, the ANZAC tradition was trivialised and mocked, while reverence and passion were reserved for the Aboriginal gallery, “with its holocaustic imagery” (Curan). Thus, the cultural cringe was not instigated by the “little Aussie battler”, but rather by the cultural elite; people in political and academic positions. Australia had become the victim of “an internal version of class discrimination” (Pickering 57).

 

One final manifestation of the cultural cringe comes in the form of the “convict stain”. This refers to a sense of shame many Australians feel about being descended from British convicts (Adams). It has led to very few Australians researching their family history, for fear that their ancestors committed some heinous crime, and the phenomenon has made it very difficult for historians to trace the lineages of early settlers. The convict stain affected people personally in the late 1800s, with known convict descendants being banned from institutions such as sports clubs. The Melbourne Cricket Club maintained a well-known convict stain policy, making exceptions in only a few cases, such as Tom Wills, a founding figure of Australian rules football. The “stain” is perpetuated by people of other countries, especially Britain, who often mock Australian tourists as “returning to the scene of the crime” (Adams). In recent decades, however, the “stain” has become far less pronounced, with more and more people investigating their family’s past.

 

Australia now encompasses two competing strands of cultural engagement: the identity of mateship, hard work, patriotism and national pride exemplified by the common man; and the identity of Australian art, literature, film and historical re-evaluation exemplified by the intellectual community. The two sides are still bonded, however, by an enduring spirit of egalitarianism.

 

 

Annotated Bibliography

 

Brown, Ruth. “English Heritage and Australian Culture: The Church and Literature of England in ‘Oscar and Lucinda’.” Australian Literary Studies 17.2 (1995): 135–140.

 

This article is an exploration of the relationship between England and Australia, as realised through the literature of the Australian author Peter Carey. It speaks specifically to the cultural relationship between the two nations: how Australia has evolved and matured from a British colony into a (nearly) independent federation. One point Carey makes is that, if white Australia has a culture, it is tied directly to Christianity: a cultural and spiritual history which destroyed 40,000 years of Aboriginal tradition to establish itself. However, as Christianity fades in influence and importance in Australia, Carey (though himself an atheist) sees a fading of whatever cultural link we still have with Britain. In its place is a distinct lack of any unique Australian identity. The article argues that much of Australian culture is spawned out of opposition to British colonialism and society, and an almost contrarian approach to its social structure and values. Class divides are far less pronounced in Australia than they are in Britain; indeed, the only thing that seems to separate the rich from the poor is the size of their income. Pretension and pomposity are also derided in Australian society. Brown argues that Carey’s work signals a need to move away from English literature and the English church, since they are designed as instruments of oppression. Other critics, however, such as David Callahan, see the novel not as a call to abandon old heritage, but as a call for the creation of a new, easy relationship between father-and-child nations; a merging of British and Australian culture.

 

 

Kidd, Evan, Nenagh Kemp and Sara Quinn. “Did you have a choccie bickie this arvo? A quantitative look at Australian Hypocoristics.” Language Sciences 33.1 (2011): 359–368.

 

This article examines the use and representation of Australian expressions and terminology. Drawing on a survey of 150 Australian-born, English-speaking citizens, it considers the age, region and socio-economic backgrounds of Australians who use Australian hypocoristics (e.g. choccie, arvo, bickie). The study concluded that Australian hypocoristics are the product of a linguistic process that captures “inflectional morphology”, that their use is far more pronounced in rural areas than in urban and suburban regions, and that, owing to a mixture of globalisation and multiculturalism (non-English-speaking migrants learning the language), it is declining among younger Australians. The article also discusses the Australian lexicon itself, arguing that the peculiarities of Australian English (or AusE) cannot be attributed solely to accent, since there are thousands of lexical terms which differ vastly from their standard English forms. Kidd also points out the considerable difficulty faced by foreigners forced to learn AusE, noting that there are dictionaries designed specifically to help them navigate its stranger aberrations.

 

 

Phillips, Arthur. “The Cultural Cringe.” Meanjin 69.4 (2010): 52–55. Accessed April 14, 2012. http://search.informit.com.au/documentSummary;dn=501786897597346;res=IELLCC.

 

In this article, Phillips discusses the idea of “cultural cringe” in Australia. The term refers to an internalised inferiority complex which causes people to dismiss their own culture as inferior to those of other countries. The phenomenon is particularly pronounced in Australian society, and has led to compulsive, unfavourable comparisons between Australian art and literature and English, French and American works. It has led to the demolition of many world-class, pre-war buildings in Melbourne and Sydney, including some of the world’s greatest examples of Victorian architecture. It also cultivates an anti-intellectual attitude, which has isolated many Australian intellectuals. Phillips argues that Australian writers and artists need to overcome their “cringe” and communicate their culture and way of life in a factual manner, rather than an inferior or melancholic one. He further examines the tendency of ordinary Australians to express an almost obsessive curiosity about what foreigners (particularly Americans) think of them and their culture. The over-saturation of imported shows on Australian television (mostly from the U.S.) has been seen as one cause, particularly with regard to Australia’s own attempts to emulate those shows. Another manifestation of cultural cringe is the “convict stain”: a sense of shame many Australians feel over their convict heritage, which has led to citizens refusing to research their family history for fear of being descended from a criminal.

 

 

Pickering, Jonathon. “Globalisation: A Threat to Australian Culture?” Journal of Australian Political Economy 48.1 (2001): 46–59.

 

This article examines the influence of globalisation on Australian culture. Pickering addresses the fears of Lévi-Strauss, who wrote of a global village that was absorbing the western nations’ history and heritage, and producing a universal monoculture in their place. Pickering argues instead that globalisation has had a mixed influence on Australia: the nation’s popular and political cultures have been transformed both from within and from without, while retaining old rites and rituals. Far from Lévi-Strauss’s vision of a monocultural “vegetable garden”, Pickering maintains that Australian culture is stronger than it has ever been, and that, while globalisation may burden us with the awareness of new problems, it also broadens the pool of resources we have at our disposal to deal with them. Globalisation is a two-way street, capable of creating homogeneity and hybridity, as well as communication and egalitarianism.

 

 

Robinson, Shirleene. “Inventing Australia for Americans: The Rise of the ‘Outback Steakhouse’ Restaurant Chain in the USA.” The Journal of Popular Culture 44.1 (2011): 545–562.

 

This article analyses the Australia-themed, American-owned restaurant chain “Outback Steakhouse”. With more than 1200 locations in the U.S., Robinson asks how these restaurants present and depict Australian culture for Americans; and, more importantly, who decides, controls and maintains this depiction. The article goes on to explain the creation of the chain in the wake of the “Crocodile Dundee” film series and its enormous success in America. The film itself is a vastly exaggerated and comical depiction of the rugged Australian outback, and the restaurant plays off this depiction to an even greater degree. While the restaurant owners are likely in on the “joke”, that doesn’t stop these two depictions from creating a very real image of Australia in the minds of Americans. Robinson discusses the gradual construction of the Australian image in America, and how closely it links with the American idea of the “wild west”. The international success of the chain demonstrates the exportation of a fragment of Australian culture (absurd as it is), as well as the inescapable appeal of the “Australian legend”, even if that ideal fails to accurately represent contemporary Australian society.

 

 

Salt, Bernard. The Big Shift: Welcome to the Third Australian Culture. South Yarra: Hardie Grant Publishing, 2001.

 

This book touches on a wide range of topics relating to modern Australian culture and growth. Specifically, it looks at population growth, urban changes (such as suburban migration), the attraction of central city living, interstate migration, and rich and poor regions. Salt charts the course of Australian society since European settlement, correlating shifts in population and demographic distribution with cultural changes, and making observations about the likely future of the country. He points out that Australian beach culture is nearly as old as bush culture, and that the shift to the beach has been going on for a long time. Salt also discusses the effect of an ageing population on institutions such as emergency management. Many of the people shifting to rural areas, or to the coast, bring with them an urban mentality. They have little or no experience with sustainability or natural hazards, and expect such services to be provided for them. Salt argues that this is a serious generational problem.

 

 

Waterhouse, Richard. “The Minstrel Show and Australian Culture.” Journal of Popular Culture 24.3 (1990): 147–166.

 

This article discusses the phenomenon of minstrel shows in Australia from 1838. Though originating in America, “Jim Crow” minstrelsy became popular in Australia for a period after shows were performed at the Royal Victoria. Australian colonists were greatly amused and entertained by the minstrel sketches, jokes and songs. Waterhouse further reveals that not only did British and American troupes enjoy extended and profitable tours, but the influence of “Jim Crow” shows extended far beyond the professional stage. In the wake of tours by international companies, local amateur minstrel bands flourished. One of the earliest such companies, organised in Hobart in 1856, operated on so limited a budget that it was forced to provide its patrons with handwritten programmes. The tradition of amateur minstrelsy became so entrenched in Tasmania that, on the eve of World War I, the Hobart Amateur Minstrels were still offering weekly concerts in the King’s Theatre. The article reveals a hidden vein of Australian theatre of which very few people are likely aware.

 

 

Wierzbicka, Anna. “Australian Cultural Scripts – Bloody Revisited.” Journal of Pragmatics 34.1 (2002): 1167–1209.


This article is also an exploration of the Australian lexicon, but it focuses primarily on the use of the word “bloody” in everyday language. Wierzbicka argues that the adjective is far from meaningless, and that by unpacking the use and history of the word we can throw a good deal of light on traditional Australian attitudes and values. She further states that the word offers a vantage point from which to investigate a whole network of information regarding the changes and continuity of Australian culture, speech and society. Wierzbicka demonstrates that frequently used “discourse markers” do in fact have their own precise meanings, and that these meanings can be revealed by means of the “Natural Semantic Metalanguage”.

 

Bronson (film essay)

Posted on October 18, 2012 at 8:50 AM

Voyeurism in the “True Crime” Narrative

Charlie Bronson, “Britain’s most violent prisoner”, has spent 38 years behind bars, most of them in solitary confinement. Nicolas Winding Refn’s 2008 film is a dramatic recounting of Bronson’s life, and an example of the “true crime” narrative. True crime texts are generally inspired by media headlines, which reflect press and public obsessions with violent events and criminal personalities. They often reveal as much about the audience’s fascination with deviant and depraved acts as they do about the deviant subjects themselves. In the case of Charlie Bronson, neither the film nor the man himself attempts to explain his condition. In fact, the film goes out of its way to depict his peaceful home life and loving parents. Bronson is depicted as a force of nature. He doesn’t have anger management issues; on the contrary, Bronson micro-manages his anger, and deploys it with maximum effect against the prison officers he encounters. He seems intent on committing violence for its own sake, and inciting chaos for its own ends. When asked what his demands are after taking a hostage, he cannot come up with an answer, replying simply, “What have you got?” In other hostage situations Bronson has asked for items as trivial as a cup of tea, and has even released hostages because they irritated him. However, Bronson often makes the point that, while he is dangerous and harmful, he has never killed anyone. This alludes to an understanding that murder transgresses a moral code that he is unwilling to breach, and that we as an audience are unwilling to forgive. It is also interesting to note that the one character in the film he does attempt to kill is a paedophile, alluding to a second cultural and moral transgression that even a criminal like Charlie Bronson is unable to abide.

 

Interspersed with the scenes of brutal prison life are a series of surreal moments of Charlie standing alone on a darkened stage, recounting his life story to an audience of captivated onlookers. He sees himself as a performer, complete with black suit and white face-paint. These sequences may indicate Bronson’s growing insanity, as if he is playing to an audience in his own head. At the same time, we are watching him, both his character in the movie and his real-life counterpart, in the form of news coverage and prison artwork. And as we gaze at him, he gazes right back at us. In this sense, the film is as much a critique of our motivations as of Bronson’s. The question isn’t why he is doing these awful things, but why we are watching them. Why do we entertain ourselves with media depictions of violence, whether in the form of news reports, movies, sports or video games? For the 69 days that Bronson is set free, he supports himself by taking part in illegal bare-knuckle boxing competitions. These scenes of organised violence clearly contrast with the scenes of chaotic violence enacted by Bronson during incarceration. Is one more civilised than the other? Is watching two men fight one another in a boxing ring all that different from watching them fight in a prison cell? Why is the video game Grand Theft Auto, which gives players the option to beat, rob and kill whomever they please, one of the most popular games ever released? One possibility is that these media expressions of violence give us (the audience) the chance to act out perverse fantasies of deviance, without the threat of punishment or repercussion. Likewise, the character of Bronson acts as a conduit through which the viewer is able to express some subconscious desire to commit violence, and to have violence committed against them; a kind of sado-masochistic avatar.

 

The depiction of Bronson also provides a comparison between the mythologised hyper-masculine prisoner and the stunted, juvenile “inner child” archetype. With his shaven head and penchant for nudity, “Charlie often resembles a giant baby venting its fury”. His bizarre street clothing is a mixture of army officer and Victorian strongman, giving us the impression that this is how a young Charlie viewed masculinity, and that his early imprisonment denied him any genuine emotional progression into adulthood. This arrested development is again emphasised during his brief stint as a free man, in which he is saddened to find that his mother has thrown away all of his childhood toys, including his old bed. While he has plenty to say for himself during his imaginary onstage monologues, off stage Charlie is inarticulate and oddly defenceless. While visiting his Uncle Jack on the outside, Charlie sits silently and stirs his drink, unable to interact with people socially and visibly intimidated by the sexual advances of his uncle’s female friends.

 

Charlie has been incarcerated for so long that he feels uncomfortable outside of a prison cell, and is unable to communicate with people in any form other than violence. The scenes of violence are depicted with a sort of dreamy romanticism, often set to pieces of classical music, and slowed down to a point where the carnage takes on an almost balletic elegance. Bronson appears wilfully ignorant of his actions, treating the brawls as if they were sequences in a comic book or an action film, rather than moments of pain and suffering for his victims. Like Bronson himself, the film invites us to revel in the chaos on display. This may speak to a sort of pop-culture desensitisation to violence that has infected all of us, including Bronson, who emphasises the point by taking his very name from an action movie star.


The Maralinga Nuclear Tests (hist essay)

Posted on October 1, 2012 at 11:50 PM

Maralinga is a remote desert area of western South Australia. It is the home and hunting ground of the Tjarutja people, who have lived in the region for thousands of years. In the 1950s, however, the Australian government granted the United Kingdom access to the area so that British scientists could test nuclear devices and atomic weaponry. Fatefully, the name Maralinga translates as “Fields of Thunder”, which is exactly what the Tjarutja people witnessed on 27 September 1956, when the first nuclear device of the Operation Buffalo test series was detonated on their ancestral homeland (Anderson). The atomic blasts released a radioactive cloud that rose beyond 11.4 kilometres (well above what was predicted), with unforeseen wind patterns spreading radiation across the Anangu Indigenous lands, as far as the Northern Territory and Queensland (Anderson).

 

The Fallout

It is impossible to determine how many Indigenous lives were lost in the immediate detonations, but those outside the blast-zone described a searing white light, followed by a horrible black mist that rolled across the land and blotted out the sun. The Tjarutja people were totally exposed to the blast and the subsequent hard rain that fell on their lands. Many of them became sick with vomiting and diarrhoea, and older members of the group died from terrible illnesses. A young boy named Yami Lester witnessed the mushroom cloud from afar and was immediately blinded in one eye; he lost sight in the other shortly after. In a number of cases, such as that of the Milpuddie family, pregnant mothers gave birth to stillborn children. Some of the Tjarutja elders referred to the black smoke as “Mumoi”, fearing that an evil spirit had descended on their land. They were not far wrong (Michel).

 

While the major test series, Operation Buffalo and Operation Antler, were marginally publicised at the time, hundreds of minor tests were carried out by the British in secret. These operations were conducted over a number of months, and tested the response of nuclear devices to elements such as fire. One of these secret tests, Operation Vixen, hurled molten uranium and plutonium almost a kilometre into the air, forming pools of concentrated radiation all over the Maralinga flatlands (Anderson).

 

Very little effort was made to protect the Tjarutja people from the fallout of the testing. The federal government took steps to curtail Aboriginal movement into Maralinga, including signs and fences warning about the area. Unfortunately, the signs were printed in English, making them of little use to the local Aboriginal people, who did not read English. Boots were distributed among the community to avoid contact with the contaminated soil, but most did not fit, leading as many as 100 Aborigines to walk barefoot across the radioactive Maralinga plains. British scientists even found families of Tjarutja sleeping in craters formed by the initial detonations (Michel).

 

What the white authorities failed to grasp was that the land itself held great spiritual significance for the Tjarutja people, and contained mythological sites that were important to their daily lives. Cut off from their homeland and culture, many of the Indigenous refugees travelled considerable distances (as far as Cundalee, Western Australia) to attend ceremonial functions with other Aboriginal groups. Some died of thirst along the way, because they were unfamiliar with the river-systems so far from home. These efforts emphasise the intrinsic link between Indigenous culture, identity and land (Lynch).

 

The Paul Kelly song “Maralinga” captures some of the horror and fear felt by the Tjarutja during the testing: “First we heard two big bangs/we thought it was the great snake digging holes/Then we saw the big cloud/then the big black mist began to roll/this is a rainy land/A strangeness on our skin/a soreness in our eyes like weeping fire/a pox upon our skin/a boulder on our backs all our lives/this is a rainy land.”

 

The McClelland Commission

In the decades that followed, victims of the Maralinga testing (both Indigenous people and military personnel) continued to suffer from the effects of radiation poisoning, including blindness, sores, and illnesses such as cancer. In the words of the atomic veterans’ associations, “they started to piece things together, linking their afflictions with their exposure to nuclear testing” (Parkinson). The treatment of Aborigines during the testing attracted particularly strong condemnation. In 1984, the Hawke government announced a public inquiry into the British testing program and its aftermath, known as the McClelland Commission after its chairman. The resulting report was a scathing indictment of the British and Australian governments of the time, for placing their military and scientific pursuits above the safety and wellbeing of their own people. McClelland concluded that the organisation, management and resources allocated to ensuring the safety of Aborigines were completely inadequate, and denounced officials as “ignorant, incompetent and cynical” (Michel).

 

Attitudes towards Aborigines

The plight of the Aborigines in the vicinity of the blast-zone was in many respects a reflection of their status in Australia at the time. In a revealing statement to the Royal Commission, Sir Ernest Titterton was quoted as having said that “If Aboriginal people objected to the tests they could [simply] vote the government out”. Yet in 1956, Aborigines were not even counted in the census, let alone given the right to vote (Michel). Such a small and disadvantaged minority had no chance to wield electoral influence in the protection of their land and customs. There is no shortage of evidence illustrating the low regard in which Aborigines were held at this time. The chief scientist of the Department of Supply, a British expatriate, criticised an officer whom he regarded as overly concerned with Aboriginal welfare for “placing the affairs of a handful of natives above those of the British Commonwealth of Nations”. Occasionally, when Aborigines were sighted in restricted areas, reports of these sightings were disbelieved, or less than subtly discouraged. One officer who reported sighting Aborigines in the prohibited zone was asked if he realised “what sort of damage [he] would be doing by finding Aboriginals where Aboriginals should not be” (Parkinson).

 

The Clean-Up

In the late 1990s, the Australian government carried out a clean-up of the Maralinga test site. Nuclear engineer Alan Parkinson, however, denounced the project as “a cheap and nasty solution that wouldn't be adopted on white-fellas land”. The majority of the residual plutonium was buried underground and covered with cement, but it will remain toxic for the next 24,000 years. Despite having been granted Maralinga back, the Tjarutja people now regard it as mostly poisoned; not just for the pockets of radiation still present in the soil, but for the history of pain, sorrow and betrayal they now associate with it (Lynch).

 

The Anangu Story

Several years ago, a group of Tjarutja women came together and created a book of traditional paintings detailing the struggles of their people. The picture book captures the Anangu way of life before and after white settlement. It illustrates the Tjarutja people’s exile from Maralinga; the physical and spiritual effects of the nuclear testing; and the feelings of homesickness that persist in the current generation. The book serves as an expression of their grief and frustration over the destruction of their homeland, but also as a special record of their experiences for white Australians and younger Aborigines to appreciate (McCartney).

 

The Mabo Decision (hist essay)

Posted on September 10, 2012 at 10:30 PM

In 1992, the High Court of Australia rejected the notion of terra nullius, and legally recognised the occupation of Indigenous peoples before and during the process of British colonisation. It was the first time, in the eyes of the law, that Aboriginal people had been acknowledged as the traditional custodians of the land. The ruling introduced the concept of native title: the recognition that “some Indigenous people have rights and interests to their land that come from their traditional laws and customs” (Tehan).

 

The Mabo case was a landmark decision for the High Court. It was regarded as a victory for Indigenous people across the country, and a progressive step towards reconciliation between black and white Australia. However, the ruling also drew backlash from politically conservative groups, and the panicky, reactionary media coverage of the time tended to foster division rather than unity. For their part, Aboriginal activists criticised the phrasing of the law as too vague, and as more a symbolic victory than a practical one. While some Indigenous groups have been granted native title, dozens have been denied on the basis that their cultural link to the land has been broken, and to this day hundreds more are still waiting to plead their case.

 

In 1996, the High Court delivered a second landmark decision: that pastoral leases did not necessarily extinguish native title. In other words, Indigenous people may have access to leased rural property if they can establish an ancestral and customary connection. This was called the Wik decision, and if Mabo did not inflame conservative Australia, this ruling certainly did. While the decision furthered the rights of Indigenous custodians, the media backlash widened the gap between Aborigines and white land-owners (Stevenson).

 

Terra nullius, the notion that Australia was an empty land belonging to no one, was the primary basis and rationale for British colonisation. In 1992, the High Court found that Australia was indeed occupied by an Indigenous populace at the time of settlement, with each group of people owning a different portion of the land and maintaining cultural and spiritual connections with that land. With terra nullius rejected, the legitimacy of colonisation was challenged (Kildea). The High Court also acknowledged that Britain did not attain absolute sovereignty over Australia the moment it set foot on the continent, and that Indigenous peoples continued to own parts of the land well into the process of European colonisation.

 

In the case of Mabo v Queensland (No 2), the High Court legally recognised that the Meriam People of Murray Island were the native title holders of their traditional lands. The court ruled that native title would be granted to the Indigenous inhabitants of a territory where traditional laws were still acknowledged and customs still observed. However, the court also stated that “when the tide of history has washed away any real acknowledgment of traditional law and any real observance of traditional customs, the foundation of native title has disappeared”. This meant that native title could be recognised in large areas of the country’s interior, where colonisation had been less oppressive (Tehan).

 

Native title has benefitted many Indigenous communities. Since 1992, 126 native title determinations have been put to the courts, with over 90 ruling that native title did exist. Communities in the Pilbara, the Kimberley, the Torres Strait, the Northern Territory, and south-west Victoria have all been recognised as original custodians, with most, if not all, of their traditional country recognised. Unfortunately, hundreds more have yet to go to trial, with many claimants being forced to wait years to pursue their titles; by which time elder members of the community may have passed on. In such cases, Indigenous land use agreements can be entered into, which serve to circumvent long and costly trials, and to foster more cordial relationships between Indigenous and non-Indigenous occupants (Tehan).

 

In some cases, Indigenous people have been granted native title over areas rich in minerals, and have granted mining and exploration companies access to the land. In return, the companies have given the community financial remuneration and employment. Such cases exemplify Indigenous self-determination: communities have maintained the cultural dignity of their homeland, while creating economic stability and prosperity for themselves. It is an example of Indigenous tradition and industrialisation working in mutual esteem (Nettheim). Of course, not every community would want to exploit their homeland in such a fashion, but the important thing is that it is their decision. In other cases, native title owners have worked with state governments to create national parks. This also generates employment opportunities for Aborigines, as well as allowing members of white Australia to enter their country in a respectful and informed manner.

 

Unfortunately, not all Indigenous communities have directly benefitted from native title law. This is often due to their proximity to white settlement, as well as the duration and financial burden of such complex legal processes. One example of a failed native title claim is the case of Yorta Yorta v Victoria. In that instance, Justice Olney deemed native title extinguished, ruling that the “tide of history” had “washed away” any real acknowledgement of traditional laws and any real observance of traditional customs by the applicants (Jagger).

 

This judgement generated intense anger, disappointment and disillusionment on the part of the Yorta Yorta people, who continue to maintain that they are the rightful owners and occupants of their land. Much of the uproar stems from the court’s interpretation of the word “traditional”. While the Yorta Yorta have passed down stories and rituals from generation to generation, the court deemed that these customs were not present in the culture prior to British settlement, thus excluding them from being “traditional”. One of the problems Yorta Yorta v Victoria unearths about native title law is that many Indigenous peoples were dislocated from their lands by British colonists, thus dissolving, in the eyes of the High Court, the very foundation of their native title.

 

In 1996, the High Court of Australia found that statutory pastoral leases did not bestow exclusive possession rights on the leaseholder. As a result, native title rights could co-exist with the rights of rural leaseholders, depending on the terms and nature of the particular pastoral contract. The decision “provoked significant debate in Australian politics”, and led to “intense discussions on the validity of land holdings in Australia” (Due). Many conservative politicians accused the High Court of being out of touch with history, and with the “Australian way of life”. The High Court was also criticised for introducing uncertainty into the Australian judiciary, and panic into the populace, though in truth most of this uproar was fed by sensationalist journalism.

 

After the Wik decision, many Australians, particularly graziers, feared that the High Court had “pushed the pendulum back too far in the Aboriginal direction”, granting Indigenous people greater rights than ordinary Australians (Kildea). Coalition Prime Minister John Howard held up a map of Australia on public television and declared that up to 16% of the country was threatened with annexation. This inflamed public perceptions of the Wik decision and led to intense criticism of the High Court. Sensationalist media coverage even suggested that native title claimants were trying to steal people’s backyards (Due). Legal commentator Philip Hunter criticised the backlash as “totally unjustified”, arguing that the High Court had clearly ruled that in cases where native title and pastoral leases clashed, native title would be extinguished. Prime Minister Howard responded to the ruling with a ten-point plan that reaffirmed the rights of pastoralists; the resulting legislation produced the longest debate in the history of the Senate (Kildea). Unfortunately, the uninformed coverage tended to inflame public opinion against Indigenous activism, deepening the cultural void between Aboriginal and non-Aboriginal Australians. Hunter also argued that the issue was exploited by conservative politicians in an effort to build support among their own base (Due).

 

Aboriginal community spokesman Gary Foley has called the Mabo decision “the greatest single act of dispossession since 1788”, and regards the Native Title Act as a fraud that perpetuates European colonialism and reverses the efforts of Indigenous activists. Foley argues that by rejecting terra nullius, the High Court completely delegitimised the process of British colonisation (which in 1788 was legally justified only if the land was empty), as well as its own judicial authority (which stems from British common law). Despite this, however, the fruits of British invasion and dispossession remain firmly in place, and their untouchability has been fortified by law. Foley further argues that native title is purely symbolic, and completely separate from land rights, which is what the Indigenous civil rights movement had been campaigning for over the previous few decades. In a sense, Foley argues that native title offers only a tiny [and symbolic] olive branch to Indigenous Australia, while completely legitimising the dispossession of land now under white control. Had the High Court simply recognised that the lands were taken through right of conquest, the Australian government would have been forced to negotiate for all lands alienated; by introducing the concept of native title instead, it forces the dispossessed Indigenous parties into proving some High Court-defined version of “cultural connection”.

 

Another point of criticism from Aboriginal activists is that native title tends to divide Indigenous people, rather than unite them. The land rights movement of the 60s and 70s sought compensation for all Aboriginal people for the dispossession of their land. Native title, however, divides Aborigines into their specific peoples or communities, and asks each to provide proof of an “ongoing connection” with their specific land (Due). With land rights, everyone had a stake; native title serves to fragment the movement, even creating resentment between communities who are granted native title and those who are not (Foley). Moreover, the act of establishing an unbroken, customary connection to land in a hostile courtroom is a long, gruelling process; one is forced to put one’s identity, and the identity of one’s people, on trial. This can cause bitter divisions within communities. As Aboriginal leader Muriel Bamblett recounts: “Siblings, cousins, Uncles, Aunties—families began to be driven apart from each other. In some cases they would not even talk to each other” (Stevenson).

 

The Native Title decision of 1992 was a significant moment in Indigenous and non-Indigenous relations. While it has certainly benefitted many communities across Australia, it has also contributed to a fragmenting of the Aboriginal rights movement, and the backlash against the 1996 Wik decision served to further antagonise that movement, deepening cultural division. That said, the Indigenous communities that have successfully achieved native title now have a real and lasting opportunity to reconnect with the culture and way of life that was taken from them over 200 years ago, and that is a victory for them, however small.

 

