Masterclass and Guest Lecture: The Languages of Comics
Image: Neil Cohn, Visual Language Lab
With Dr Neil Cohn & Dr Charles Forceville
presented in conjunction with Amsterdam Comics
Date: 2 June 2017
Time: 10:00-15:00
Venue: University Library – Potgieterzaal, Singel 425, Amsterdam
Open to: PhD Candidates and RMa Students
Credits: 1 ECTS
Coordination: Amsterdam Comics, RMES, Dr Erin La Cour and Rik Spanjers MA
Registration
“The Languages of Comics” Masterclass & Lectures
Amsterdam Comics is pleased to announce the third installment of the Masterclass and Guest Lecture series with “The Languages of Comics,” led by Dr Charles Forceville and Dr Neil Cohn. The workshop will engage students in both the mechanics of visual language theory and its practice. The program consists of two lectures and a masterclass. The lectures will familiarize participants with the research of Dr Charles Forceville and Dr Neil Cohn. In the masterclass, students will then carry out analyses of their own, based on material provided by the lecturers.
Lectures
“Representation and metarepresentation of thoughts, speech, and sensory perception in comics”
Dr Charles Forceville
University of Amsterdam
Comics draw on both the visual and the verbal modality, making them a thoroughly multimodal medium. A central strand of comics research is partly or wholly inspired by cognitive linguistics and relevance theory (e.g. Yus 2008, Kukkonen 2013, Cohn 2013, Forceville 2005, 2011, 2013, Forceville and Clark 2014).
As in monomodal written and spoken language, the representation of speech and thoughts in comics is a central issue. There are substantial differences between the following utterances:
1. Lisa: The apple tree is to the right of the barn.
2. Lisa: John says the apple tree is to the right of the barn.
3. Lisa: John thinks the apple tree is to the right of the barn.
Utterances such as (2) and (3) show the speaker’s “’metarepresentational’ ability, i.e. the ability to represent the representations of others” (Clark 2013: 345). Here is another type of metarepresentation:
4. Lisa: John sees/hears/smells/feels that the apple tree is to the right of the barn.
While the addressee of (1) can be fairly confident that, indeed, the apple tree is to the right of the barn, this confidence diminishes in (2) and even further in (3) and (4), as in these utterances the responsibility for stating the correct location of the apple tree increasingly involves Lisa’s interpretation of John’s perspective on its location.
In the medium of comics this issue is further complicated because salient information about “saying/thinking/perceiving that …” can be conveyed verbally, visually, or in a combination of verbal and visual information. At the highest level, the comics reader will of course postulate an agency that is responsible (as “Lisa” is in (1)) for the information conveyed in the two modes – namely that of the creator of the comics, or that agency’s persona – what in classic narratology is called the “implied author.” That is, there is always a “narrating agency” that either ‘says’ verbally and visually: “the apple tree is to the right of the barn” in its own voice, or does so by delegating this ‘saying’ to embedded narrators (often characters).
In this paper I will analyse panels from various comics sources to inventory which visual resources play a role in metarepresentations, and the degree to which these depend on interaction with the verbal mode. These resources include “point of view” shots and body postures as well as non-verbal information in characters’ text balloons. The findings will show that, and how, there are multimodal and purely visual equivalents for “thinking/perceiving that …” and even for “saying that …”.
The broader interest of the paper is that considering “metarepresentations” in visual and multimodal modes helps expand our understanding of phenomena that have traditionally been seen as belonging exclusively to the domain of the verbal. This will both benefit the theorization of such discourses and help develop these hitherto mainly language-oriented models.
References
- Abbott, Michael, and Charles Forceville (2011). “Visual representation of emotion in manga: loss of control is loss of hands in Azumanga Daioh volume 4.” Language and Literature 20(2): 91-112.
- Clark, Billy (2013). Relevance Theory. Cambridge: Cambridge University Press.
- Cohn, Neil (2013). The Visual Language of Comics: Introduction to the Structure and Cognition of Sequential Images. London: Bloomsbury.
- Forceville, Charles (2005). “Visual representations of the Idealized Cognitive Model of anger in the Asterix album La Zizanie.” Journal of Pragmatics 37(1): 69-88.
- Forceville, Charles (2011). “Pictorial runes in Tintin and the Picaros.” Journal of Pragmatics 43(3): 875-890.
- Forceville, Charles (2013). “Creative visual duality in comics balloons.” In: Tony Veale, Kurt Feyaerts, and Charles Forceville (eds), Creativity and the Agile Mind: A Multi-Disciplinary Exploration of a Multi-Faceted Phenomenon (253-273). Berlin: Mouton de Gruyter.
- Forceville, Charles, and Billy Clark (2014). “Can pictures have explicatures?” Linguagem em (Dis)curso 14(3): 451-472.
- Kukkonen, Karin (2013). Contemporary Comics Storytelling. Lincoln: University of Nebraska Press.
- Yus, Francisco (2008). “Inferring from comics: A multi-stage account.” In: Pelegrí Sancho Cremades, Carmen Gregori Signes, and Santiago Renard (eds), El Discurs del Comic (223-249). Valencia: University of Valencia.
“The Visual Language of Comics”
Dr Neil Cohn
Tilburg University
http://www.visuallanguagelab.com
Drawings and sequential images are an integral part of human expression, dating back at least as far as cave paintings, and in contemporary society they appear most prominently in comics. Just how is it that our brains understand this deeply rooted expressive system? I will present a provocative theory: that the structure and cognition of drawings and sequential images are similar to those of language.
Building on contemporary theories from linguistics and cognitive psychology, I will argue that comics are “written in” a visual language of sequential images that combines with text. Like spoken and signed languages, visual narratives use a systematic visual vocabulary, strategies for combining these patterns into meaningful units, and a hierarchic grammar governing coherent sequential images. We will explore how these basic structures work, what cross-cultural research shows us about diverse visual languages of the world, and what the newest neuroscience research reveals about the overlap of how the brain comprehends language, music, and visual narratives. Altogether, this work opens up a new line of research within the linguistic and cognitive sciences, raising intriguing questions about the connections between language and the diversity of humans’ expressive behaviors in the mind and brain.
Programme:
10:00-10:15 – Registration
10:15-11:15 – Lecture by Dr Charles Forceville
11:15-11:30 – Coffee break
11:30-12:30 – Lecture by Dr Neil Cohn
12:30-13:15 – Lunch
13:15-15:00 – Masterclass: The Languages of Comics
Chairs: Dr Erin La Cour & Rik Spanjers MA
Preparation and readings:
- Forceville, Charles, Elisabeth El Refaie, and Gert Meesters (2014). “Stylistics and comics.” Chapter 30 in: Michael Burke (ed.), The Routledge Handbook of Stylistics (485-499). London: Routledge.
- Cohn, Neil (2014). “Building a better ‘comic theory’: Shortcomings of theoretical research on comics and how to overcome them.” Studies in Comics 5(1): 57-75.
- Cohn, Neil (2013). “Navigating comics: An empirical and theoretical approach to strategies of reading comic page layouts.” Frontiers in Cognitive Science 4: 1-15.
- Cohn, Neil (2015). “Narrative conjunction’s junction function: The interface of narrative grammar and semantics in sequential images.” Journal of Pragmatics 88: 105-132.