- In the discussion of task A11 (pp. 279–81) the account of the students’ utterances is plausible, but why is transcript data to be preferred to the video data for such a visual task?
The authors used the video data as well as the transcript to perform their analysis (“In this case the reason or warrant is usually to be found outside of the talk in the physical context, i.e. on the problem book page”, p. 283); however, the concordancer can only deal with text data, so the technology used for analysis dictated how the data was re-presented for publication. Also, research is published primarily on paper – although increasingly also in digital format – so it is against the conventions of the genre, and of professional practice, to submit research data in video format, quite apart from the practical logistics and the issues of participants’ privacy and the protection of their anonymity.
A criticism sometimes made of quantitative research is that it uses preconceived categories rather than letting findings ‘emerge’ from the data. The ‘Commentary’ on task A11 (pp. 280–1) is qualitative rather than quantitative, but it could be argued that it also uses preconceived categories. For example, Elaine’s words before the intervention, ‘No, because it will come along like that’, and the fact that the next utterance is by John on the next question are interpreted as, ‘She gives a reason to support her view and this is not challenged.’ Her words after the intervention, ‘Now we’re talking about this bit so it can’t be number 2 it’s that one. It’s that one it’s that one’ are interpreted as, ‘In proposing number 4 Elaine is building on these two earlier failed solutions’ (p. 281). Wegerif and Mercer have prior expectations about ‘exploratory talk’, defined as ‘talk in which reasons are given for assertions and reasoned challenges made and accepted within a co-operative framework orientated towards agreement’ (p. 277). So notions such as ‘reason’, ‘support’, ‘challenge’ and ‘failed solution’ have specific, preconceived meanings. Do you think it would be possible to avoid the use of preconceived categories when analysing this data?
It would be difficult to do so, because presumably these categories have been taken from prior research, and it is important to show how a study builds on what has gone before to add to the “body of knowledge” in a given area. Both authors cite their own previous work on exploratory talk in the References.
In systemic functional linguistics, the focus is more on the different ways participants use language in contexts of communication to achieve communicative goals, and I suspect systemic functional linguists would find the linking of individual words or single utterances to specific functions a little simplistic (see Michael Halliday’s work for more on this).
- Again in relation to task A11, what evidence might support the following claim on p. 281?
‘In the context of John’s vocal objections to previous assertions made by his two partners his silence at this point implies a tacit agreement with their decision.’
See above! John’s silence might have been intended to achieve a number of communicative goals, including dissent. The history of John and his classmates’ interactions and relationships would have to be examined in greater depth to support the authors’ claim.
- On p. 281, the authors claim:
‘It was generally found to be the case that the problems which had not been solved in the pre-intervention task and were then solved in the post-intervention task, leading to the marked increase in group scores, were solved as a result of group interaction strategies associated with exploratory talk and coached in the intervention programme.’
When you read this claim, did you ask yourself if the researchers had looked at whether this was also true of the control group? If time allows, feel free to look at the papers in which fuller accounts of the study appear.
Time does not allow! But yes, the reasons for the control group’s improvement should also have been interrogated.
- In the post-intervention talk around problem A11, John says, ‘No, it’s out, that goes out look’.
This utterance doesn’t use the words ‘cos’, ‘because’, ‘if’, ‘so’ or a question word, but it is plausible that John is giving a reason. How might one deal with such a problem?
The video would show non-verbal modes of communication such as pointing and the classmates’ nodding their understanding and agreement.
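To make the limitation concrete, here is a toy sketch of my own (not Wegerif and Mercer’s actual procedure, and the marker list is only an assumption based on the words quoted in the question): a purely lexical filter for reasoning markers would flag Elaine’s utterance but miss John’s, even though both plausibly give reasons.

```python
# Toy sketch (my own, not the authors' method): a purely lexical filter
# that flags an utterance as 'reasoning' if it contains a target marker.
REASON_MARKERS = {"cos", "because", "if", "so", "why", "what", "how"}

def flags_reasoning(utterance: str) -> bool:
    """Return True if the utterance contains any target marker word."""
    words = utterance.lower().replace(",", " ").replace(".", " ").split()
    return any(w in REASON_MARKERS for w in words)

print(flags_reasoning("No, because it will come along like that"))  # True
print(flags_reasoning("No, it's out, that goes out look"))          # False
```

The second utterance gives a reason multimodally (pointing, gaze), so any purely text-based marker search will under-count reasoning of this kind.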
- Are you convinced that the study effectively demonstrates the authors’ case that:
‘the incorporation of computer-based methods into the study of talk offers a way of combining the strengths of quantitative and qualitative methods of discourse analysis while overcoming some of their main weaknesses’?
What does the computer add to the analysis?
The software (rather than the computer per se) allows the analyst to zoom in on interaction fragments for detailed qualitative interpretation which might enable the identification of salient language features, and then zoom out again to look at their frequency of occurrence in larger corpora (quantitative). I’m not convinced that allowing analysts increased speed and convenience in zipping between methods is the same as overcoming their weaknesses – although it may be useful in revealing where and why data at both levels of abstraction may be unreliable.
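The core of this ‘zoom in / zoom out’ facility is a keyword-in-context (KWIC) concordance. A minimal sketch in Python, purely to illustrate the idea (the sample lines are invented, not the study’s data, and this is not the authors’ software):

```python
# Minimal KWIC concordancer sketch (illustrative only; the sample
# 'transcript' is invented, not Wegerif and Mercer's corpus).

def kwic(lines, keyword, width=3):
    """Return (left context, keyword, right context) tuples for each hit."""
    hits = []
    for line in lines:
        words = line.lower().split()
        for i, w in enumerate(words):
            if w == keyword:
                left = " ".join(words[max(0, i - width):i])
                right = " ".join(words[i + 1:i + 1 + width])
                hits.append((left, w, right))
    return hits

transcript = [
    "no because it will come along like that",
    "it can't be number 2 because of that bit",
    "yeah I agree",
]

# Zoom in: inspect each occurrence in context (qualitative reading).
for left, kw, right in kwic(transcript, "because"):
    print(f"{left:>20} | {kw} | {right}")

# Zoom out: count occurrences across the corpus (quantitative measure).
print("frequency of 'because':", len(kwic(transcript, "because")))
```

The same hit list serves both moves: read each line of the concordance for qualitative interpretation, or take its length as a frequency for quantitative comparison across corpora.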
- What is the status of computer-based text analysis 16 years on? Spend 20 minutes trying to answer this question by searching the web.
Searched “computer-based text analysis” in Google using Search tools – Time – Past year. Found this very blog post there even though I haven’t finished it. Needless to say, it wasn’t much use! I couldn’t access “an intriguing study of computational text analysis” (Reading Machines, Stephen Ramsay, 2011) as the OU doesn’t seem to subscribe to Shibboleth. Google presented me with a loooong list of ads for various text analysis software packages, some targeted at academics but most apparently at the world of marketing. LIWC seems to be a market leader in the burgeoning world of text analysis software, developed by James W Pennebaker, a US social psychologist interested in how language reveals emotional states.
I turned next to Google Scholar and again restricted my search to articles published in the last year. From what was a very superficial search, it would seem that its status in academic research into linguistics and related fields is far from assured, but that its capacity to deal with extremely large-scale text corpora has been seized upon enthusiastically in marketing, management, media and government – in other words, in organizational and business contexts.
- How does this paper compare with Reading 1?
The Wegerif and Mercer paper was concerned with, and critical of, the suitability of quantitative research methods for enquiring into collaborative learning. Reading 1 did not seem to question this.
Reading 1 seemed far more focused on the use of technology for course delivery, whereas Wegerif and Mercer focused more on applying pedagogy and learning theory to activity design with regard to language, and were more concerned with technology as a language research tool than as a means of communicating content to learners.