Qualitative Analytic Design #1: Factors underlying our approach

In her recent post, Rose commented on the variety in the responses to the 2008 Your Life Line Directive. This variety has shaped the way we are approaching the qualitative analysis of this and the 1990 Social Divisions Directives. So I thought I’d outline our analytic design and share how we are implementing it within MAXQDA (see here for an explanation of our choice of CAQDAS package). I’m doing that in a series of four posts; this is the first. Check back over the coming weeks and months, as our analysis proceeds, for the next three posts, which detail the way we are going about each analytic phase.

Framed by the project’s overall objectives, research questions and methodology, our analytic design evolved out of a pilot analysis phase in which different approaches were trialled on a sub-sample of narratives from both Directives.

The resulting design involves three phases: i) high-level mapping of semantic content, ii) thematic prioritisation, and iii) in-depth latent thematic analysis. Each phase will be the subject of a separate blog post.

Here, though, I’m briefly discussing four factors that underlie this approach:

1) the nature of the data

2) the need both to keep separate and to integrate the analyses of the two Directives

3) the practicalities of the project

4) the need to develop a transparent and transferable process

 

The nature of writing for the Mass Observation Project (MOP) and the data that is generated

Because the writings of Mass Observation volunteers are only loosely guided by the questions in the Directive, we did not have an overview of the general content of the material at the start. This is characteristic of this type of secondary qualitative data analysis. The narratives were not generated for the purpose of this study, and therefore we have had no influence on the nature or content of the material we are analysing.

This is very different to the type of situation where researchers design and undertake interviews or focus-groups, or observe naturally occurring settings or events. Had we been involved in designing the Directive questions for a specific substantive purpose we might have had some ‘control’ but even then, the very nature of the MOP results in very varied responses to Directives. Some are short, others much longer; some specifically seek to answer Directive questions, others attend only very loosely to the Directive questions; some are written in a quite structured form, for example using bullet points, others are longer more free-flowing, discursive-style narratives; some are written in the first person and reveal detailed insights into personal experiences and opinions, others are more cursory, brief descriptions that upon first reading appear to reveal little about the feelings and opinions of the writers.

This varied nature means we have a very rich set of materials – just what qualitative data analysts love – but when we started out we had no idea of the content of this large body of varied writings. We therefore needed to design an approach that first provides us with an overview, so that we can evaluate the extent to which our research questions are answerable by the data. We could have achieved this by first reading all the Directive responses, but with almost 600 Social Divisions and almost 200 My Life Lines responses, some of which are many pages long, and a short time-frame, we couldn’t do this. We needed to be coding whilst reading. Had the analytic team done the transcribing, we would have gained the broad content view we’re looking for from that process, but again, the project resources didn’t allow for that, and temporary typists were employed to transcribe the responses. We did informally ‘interview’ the transcribers about their impressions of the MOP writers when they had finished, and this informed our thinking, but we could not rely solely on their opinions.

Analysing different sets of responses separately then integrating our analyses

Initially our intention had been to analyse responses to both Directives together, because one of our objectives is to explore the extent to which perceptions and lives have changed between 1990 and 2008 amongst writers who responded to both Directives (whom we call ‘serial responders’). However, the pilot demonstrated that despite the need to analyse the ‘serial responders’, and despite the synergies across the two Directives, starting out with all the data together would be impractical and would affect our ability to maintain focus whilst coding each Directive.

In addition, it became clear in our pilot coding that we cannot know at the outset which areas of our substantive interest offer potential for looking across the two Directives. There are many potential synergies, and the longitudinal element of exploring the identities of writers who have responded to both Directives is an important part of our work. However, the Directives are very different, and the context of the time in which they were responded to is important.

The practicalities of the project – number of coders and time-frame

Any project needs to attend to ensuring that coding is consistent. The involvement of three coders and the short time-scales meant that we needed to design an approach that maximises consistency from the outset. We could have mapped out the content of the Directive responses by undertaking an “open” coding exercise as a means of initial theme generation; this would have been similar to the usual first stage of Grounded Theory-informed projects. Indeed we did this in our pilot work, and it informed the focus of phase one. However, the time available and the uncertainty about content mean it is more systematic to focus first on the descriptive content and to undertake more interpretive work once we have a clear idea of that content.

The need to develop a transparent and transferable process 

One of the objectives of this project is to open up possibilities for using MOP as a source of secondary longitudinal qualitative data. This means that verified and assured processes for analysing MOP data, which other researchers can adopt or adapt, are amongst the project’s outputs. The three-phased approach not only serves our analytic purposes but also offers a method that can be easily documented and illustrated.

 

Like qualitative research design in general, ours is iterative and emergent: we expect to need to refine our initial research questions as we progress, in light of our growing understanding. In future posts I will outline the three phases of the design (high-level mapping of semantic content, thematic prioritisation, and in-depth latent thematic analysis) and how they are being implemented in the CAQDAS package MAXQDA.

 

 
