The process of undertaking stage 1 of the analysis of Mass Observation writers’ responses to the Life Lines directive has led me to reflect on something I often say to participants on my analysis courses – the various ‘moments of contact’ we have with our data during qualitative analysis.
Revisiting data at different times, from different perspectives and for different purposes is characteristic of the iterative nature of qualitative analysis, and must be prized as essential in the development of a valid interpretation. Transcription is a key ‘moment of contact’ that shouldn’t be undervalued.
We all know that transcription is time-consuming. The speed with which we can type, the technology used to facilitate the process and the form of transcript being generated all affect the amount of time it takes, but it’s typically suggested that an hour’s worth of audio recording – say from an open-ended interview – takes between 8 and 10 hours to transcribe. If you’re working with video – perhaps transcribing interactions between participants and the characteristics of the setting, as well as the content of what is being said – then it takes longer. Voice-recognition software is sometimes thought to offer a solution, but this is unlikely to be the case – see here for the reason why!
Several CAQDAS packages, such as ATLAS.ti, MAXQDA, NVivo, QDA Miner, HyperRESEARCH, Qualrus, and Transana (see CAQDAS Networking Project for reviews of these products) provide the ability to analyse audio, video and image data ‘directly’ – i.e. without the need for a written transcript that represents the original data. There are many analytic reasons why this might be useful.
But this technical ability also brings with it the danger of enabling lazy researchers to be lazier. Often students get very excited when they realise that the CAQDAS package they have chosen enables direct analysis of audio-visual material. It’s almost like you can see them thinking “wow, I don’t have to do that boring transcription any more”.
(A note of clarification here, by the way – I’m NOT saying the technology encourages laziness. Technical affordances do not – and cannot – in and of themselves encourage us into certain practices, because we, as humans, as researchers, are always in control of deciding to what purpose to put a technological feature.)
My response to this is: why would you think transcription is boring? Just because something takes time doesn’t mean it’s boring, surely?! You designed and undertook the interviews or focus groups, observed the settings, designed the open-ended survey questions, whatever. So how can you be bored by transcribing, formatting and preparing the data? It’s the basis of how you will go about the analysis – in fact, it’s an integral part of analysis.
I’m currently teaching my daughter to read. She’s four. It’s taking a while. It’s a process that involves a huge amount of repetition. I’ve spent approximately an hour a day for the past several months on this. Just as when I transcribe I’m faced with having to write out the same or similar passages several times – because research participants often say similar things in response to our interview questions – my daughter and I read the same books several times before moving on to a new one. We do that to consolidate her learning. True, after the third time she’s usually had enough of the story (you could perhaps say she’s ‘bored’ with it), but she’s also pretty chuffed with herself because she realises she can read it more easily and with more fluency the third time than she did the first. She’s ready to progress. My daughter has an older brother; he’s 8. I absolutely love the fact that he is now an accomplished reader and that he voluntarily spends time reading a range of fiction (currently he is reading Harry Potter) and non-fiction (historic, geographic and wildlife encyclopaedias are his favourites), and these days he would rather read on his own than to or with me. But his reading skills are in large part a result of the time we spent together repeatedly practising the core elements of reading that he now unconsciously exercises independently.
Why am I telling you anecdotes about my children’s experiences of learning to read, you may be wondering? Because their experiences are a useful analogy for what we need to do as qualitative researchers with our materials. Just as my son and daughter have – and need – repeated ‘moments of contact’ with phonetic letter sounds, words, sentences, paragraphs, chapters and books to consolidate their reading expertise, so we, as qualitative researchers, have – and need – repeated ‘moments of contact’ with our data. We need those moments to achieve the deep level of contact and understanding that leads to an authoritative and valid interpretation. This is true whatever our research objectives, methodologies and analytic strategies.
The number and types of ‘moments of contact’ we have depend on the project’s characteristics – including research questions, type and amount of data, analytic approach, levels of analysis and types of output – and on the way software tools are harnessed. For Defining Mass Observation, the analysts did not have the benefit of transcription as a ‘moment of contact’. For various practical reasons, others were employed to transcribe the hundreds of hand-written narratives. As they were doing so, we realised that the process was providing them not only with an overview of the breadth of content contained within the materials but also with valuable insights that could inform our analysis. We therefore took the opportunity of interviewing them towards the end of their process and have taken their thoughts into account in designing and undertaking the analysis. They had the overview of content that the three qualitative analysts didn’t have, which, as discussed in this blog post, was a key factor in shaping our analytic design.
During the analytic planning stage, we undertook a pilot analysis of a sub-sample of responses to both the “My Life Line” and “Social Divisions” Mass Observation Project (MOP) Directives. This involved several tasks which entailed repeated ‘moments of contact’ with the data, including the following:
– identifying, defining and representing concepts
– familiarising with the data by exploring content at a detailed level
– experimenting with different conceptualisation strategies (open-coding for thematic content, coding for tone of expressions, capturing the chronology of events, etc.)
– interrogating the occurrence of different types of codes in the data and in relation to writers with different characteristics
This pilot work essentially involved undertaking a whole mini-analysis of the sub-sample of data, experimenting with different ways of undertaking analysis and evaluating the extent to which these would enable us to answer our research questions. This resulted in designing an overall analytic plan, which we are now in the process of undertaking.
For the “My Life Line” Directive, we are just completing Stage 1: High-Level Semantic Content Mapping, which has involved seven “Phases of Action” comprising various analytic tasks. I’ll discuss those in a different blog post. The point I want to make now is that the analysis plan was designed to overcome the analysts’ lack of an overview of the content of the extensive material as a whole. We needed to design a process that enabled us to gain this overview quickly and comprehensively. Although we gained a lot from interviewing the transcribers, we couldn’t rely solely on their insights, as they had not been asked to think about the data they were transcribing in relation to our research questions. In addition, because there are three qualitative researchers working on the analysis, we needed to design a process that ensured consistency and equivalence without each of us having to engage to the same level with all the transcripts.
However, as I have been undertaking stage 1, I’ve been thinking about how we would have designed the analysis differently if we had participated in the transcription process. Would undertaking transcription have meant the analytic plan would have been different? Would we have had to go through the extensive pilot planning stage at all? At the very least the plan would have been different, because it would have been more pointedly informed: we would have made notes as we were transcribing, and these would have informed the design. The lack of a comprehensive overview of content was a key factor underlying our design, so it stands to reason that had we had that overview we would have undertaken the analysis somewhat differently. We would still have had to undertake the high-level semantic content mapping process, because in order to answer our research questions we need to consistently map out the topics discussed and the ways in which they are discussed. But certain areas of data conceptualisation (commonly called ‘coding’) would perhaps have become focused more quickly had the analysts been involved in the transcription, pre-empting the dilemmas we encountered about how to code for certain aspects.
All projects are different and researchers have to respond to their characteristics in order to enable systematic and high-quality analysis. It’s always a balance between practical and analytic needs. I’m not saying that our analysis would be better if we had done the transcribing ourselves, and within the parameters of the funding for this project, that wouldn’t have been possible anyway. But the way we approached the analysis would certainly have been different. We have had to build in certain steps to overcome the lack of overview of content that would either not have been required, or would have been different.
So, the point about transcription that the DMO project underlines is that transcription is an analytic act. This is not a new idea, but it is one that is often overlooked or suppressed. What you decide to transcribe, and how you decide to format your transcripts, affects how you can go about analysis. Therefore transcription shouldn’t be undervalued as a process. It’s probably true that as researchers progress through their careers they become less likely to be the ones undertaking transcription. It’s very common for transcription to be contracted out, and there are many professional services for doing so. In funded projects like DMO, contracting out transcription is a practical issue, as it’s often just too expensive for professional researchers to undertake transcription within tight budgets. This doesn’t have to be a problem – as our experience shows, analysis can be designed to overcome what is lost by not transcribing oneself.
However, don’t undervalue transcription. If you’re a student, you have a luxury you may never get again: the chance to engage with your data during this important process. Thinking of transcription as a ‘moment of contact’ with data – during which you can take notes about content and potential avenues for analysis – rather than as a boring task you just want to finish will free you to make the most of your data.