Posts Tagged ‘lecture’

Distributed Cognition as Artistic Strategy

Tuesday, October 6th, 2009

I had assumed there was only one model of distributed cognition: the one largely associated with the work of Edwin Hutchins, which describes how individual human knowledge can be distributed across a group or network of people, tools or environments. So I was somewhat taken aback when Katherine Hayles, Professor of Literature at Duke University, introduced four more in her lecture “How We Think” at the University of Nottingham earlier this evening. She outlined the embodied, extended, autonomous and appropriated perspectives alongside the embedded model with which I am familiar.

She then went on to explain how these are used to varying degrees by writers, artists and designers working in the digital domain, highlighting the print-based work of authors Mark Z Danielewski (‘House of Leaves’) and Steven Hall (‘The Raw Shark Texts’), the electronic and interactive texts of Deena Larsen and Steve Tomasula, and an algorithmic engine from multimedia artist Talan Memmott. She discussed the roles of narrative and spatiality (of texts, images etc.) and the temporality of embodied reading, and concluded by referring to Lev Manovich’s notion that narrative is in direct conflict with what he terms ‘database’ (i.e. that which is relational, spatial or conceptual).

Manuel Castells @ LSE

Tuesday, May 26th, 2009

Manuel Castells, the pre-eminent social theorist of the Internet, will be discussing his new book Communication Power at the LSE on Thursday 9 July. Having secured a £2 Megatrain return fare from Nottingham, I hope to get down to this one.

Digital Literacies

Friday, March 6th, 2009

I’ve recently had trouble interpreting terms like digital literacies, visual literacies, knowledge literacies, and web literacies. What exactly do they mean?

In an enjoyable lecture – appropriately titled The Great Multimodal Muddle – held last night at the School of Education, freelance writer and researcher Cary Bazalgette talked of the confusion resulting from the extended application of the term ‘literacy’ to non-print-based media. She stressed that literacy should always refer to the knowledge and understanding of texts, and also explained how the concept of multimodality is nothing new, using Danish church interiors and Maori carvings as examples.

Bazalgette challenges the notion that all digital media should be seen collectively, proposing instead that the study of literacies be conceptualised around two basic categories: ‘Page-based Texts’ (which includes digital artifacts like webpages and SMS) and ‘Time-based Texts’ (i.e. TV, film, games, VR, recorded music, podcasts etc.). This seems a useful framework, though I would argue that Web environments which increasingly combine multiple media (i.e. both page- and time-based texts) make this approach problematic.

Bazalgette concluded with some interesting observations on recent research, explaining how very young children learn concepts of narrative, genre and character through viewing TV, whilst developing an understanding of ‘film language’ that is often more sophisticated than that provided by age-appropriate print texts.

Roy Pea @ LSRI

Saturday, November 8th, 2008

I was happy to attend the lecture by Roy Pea at the official opening of the LSRI at the University of Nottingham last night. In the first half, he presented an overview of the learning implications of the paradigm shift – a term I’m happy to use if he is – towards participatory culture through Web 2.0 technologies, largely referencing the recently published report of the NSF Task Force on Cyberlearning, which Pea co-authored. In the second half, he focused on his work on collaborative video discourse and his involvement in the DIVER Project. The lecture was video-recorded and will, no doubt, be available on the LSRI website very soon.