NVivo is a software program for qualitative data analysis. It is a powerful platform that supports text, video, audio and picture data, PDFs, surveys, bibliographic libraries from EndNote and similar tools, internet data from Facebook, Twitter, LinkedIn, YouTube and SurveyMonkey, and notes taken in Evernote and OneNote. NVivo supports a range of inductive and deductive methods for analysing qualitative data, such as thematic and content analysis, within- and cross-case analysis, discourse, conversational and narrative analysis, grounded theory, analytical induction, and qualitative research synthesis. The objective of this course is to equip participants with the knowledge and skills to use the basic and advanced features of NVivo in their own research projects. The course’s content is spread over four modules and includes setting up a project and organising data in NVivo; preparing text and multimedia sources; managing a literature review; coding and analysing data; seeking patterns and identifying relationships; and presenting findings using graphic displays. More on the four modules is presented below.
Module 1: Data Management
The course opens with guidelines on how to implement one's research design in NVivo: the type(s) of data collected, the unit of analysis, cases and variables (where applicable), the coding approach, and the choice of analytic method. We then create an NVivo project and import and organise interview and focus group transcripts, audio and video recordings, PDFs from the literature review, survey data, tweets and Facebook posts, and bibliographic metadata. When using NVivo for a literature review, we learn to cross-reference sources that support or contradict a given line of argument, and to show connections between sources and authors.
Our attention then turns to the transcription capabilities of NVivo, starting with transcribing audio and video recordings in full or working only with selected sections of sound data. We look at picture data and explore the possibility of commenting on a whole picture or only on regions of it. We move on to create externals, which link an NVivo project to outside information, and memos, which record the analytic process. Module 1 concludes with lexical queries, which search for the most frequent words as well as the occurrence and context of keywords across textual sources. We analyse the outputs as word clouds, dendrograms, and word trees.
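To give a feel for what a word frequency query computes behind the scenes, here is a minimal sketch in plain Python, outside NVivo. The stop-word list and the transcript are made up for illustration; NVivo's own query offers far richer options (stemming, synonym grouping, minimum word length).

```python
from collections import Counter
import re

def word_frequency(text, stop_words=frozenset({"the", "a", "and", "of", "to", "in", "was"}), top_n=5):
    """Count the most frequent words in a transcript, ignoring stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in stop_words)
    return counts.most_common(top_n)

# A toy transcript excerpt (invented for illustration).
transcript = ("The nurses said the workload increased and the workload "
              "affected morale; morale was a recurring concern.")
print(word_frequency(transcript, top_n=3))
# → [('workload', 2), ('morale', 2), ('nurses', 1)]
```

The resulting frequency list is exactly the kind of table that NVivo then renders as a word cloud or word tree.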
Module 2: Data Coding
Module 2 presents the different techniques for coding data in NVivo, both automatically and manually. We start by autocoding questions from structured interviews so that the responses to each question are gathered under one node. Such data sorting, known as broad-brush coding, is very useful when one wants to examine everything that has been said about a question or theme across a dataset. But the analytic task of coding really starts with the creation of categories, which may derive from theory or be created inductively from the data. Accordingly, we build a coding scheme in which different types of codes co-exist, capturing ideas in text data, video transcripts, audio recordings, pictures, and social media data. Relationship nodes are introduced to formalise relationships between codes when working towards hypothesis generation or falsification. Module 2 concludes with mapping the coding process in models and graphs.
Module 3: Data Analysis
Module 3 is concerned with the preparation and conduct of qualitative data analysis: seeking patterns and identifying relationships across the data. We first set up the case and variable structure from Excel, which lists the cases in a case structure with their variables assigned. With the case list in place, we turn to the NVivo search tool, a time-saving feature for quickly locating project items by name, creation date, or attribute values. We search for cases that match specific sociodemographics and group them into sets, so we can later compare what, and how much, was said for specific codes across sets of respondents.
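Conceptually, grouping cases into sets by attribute value is a simple filter over the case list. A hedged sketch of the idea in Python, with entirely hypothetical case names and attributes:

```python
# Hypothetical case list with sociodemographic attributes, as imported from Excel.
cases = [
    {"name": "Nurse01", "gender": "F", "age": 34},
    {"name": "Nurse02", "gender": "M", "age": 51},
    {"name": "Doctor01", "gender": "F", "age": 45},
]

# Collect the cases matching an attribute criterion into a named set,
# as an NVivo attribute search feeding a set would.
over_40 = [c["name"] for c in cases if c["age"] > 40]
print(over_40)  # → ['Nurse02', 'Doctor01']
```

In NVivo the same selection is done through the search tool's attribute criteria rather than in code.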
We then move on to coding-based queries, which retrieve data based on patterns of code co-occurrence, proximity, sequence, or exclusion. We first run coding queries that search for data coded at given nodes, but only when mentioned by respondents of a given profile. For cross-case analysis, we run matrix queries, which cross-tabulate cases with codes, and we interpret the results using different numerical readings: coding density, number of cases, relative percentage, and so on. Our interpretation is recorded in memos and linked back to theory. Module 3 concludes with running a group query to find associations between items across the data.
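A matrix query is, at heart, a cross-tabulation of cases (or their attributes) against codes. The idea can be sketched outside NVivo with pandas; the coding log below is invented for illustration, with one row per coded segment:

```python
import pandas as pd

# Hypothetical coding log: one row per coded segment of data.
coding = pd.DataFrame({
    "case": ["Nurse01", "Nurse01", "Nurse02", "Doctor01", "Doctor01"],
    "role": ["nurse", "nurse", "nurse", "doctor", "doctor"],
    "code": ["workload", "morale", "workload", "workload", "training"],
})

# Cross-tabulate respondent role against codes (counts of coded segments),
# analogous to an NVivo matrix coding query.
matrix = pd.crosstab(coding["role"], coding["code"])
print(matrix)

# Row-relative percentages: one of the alternative numerical readings
# mentioned above.
print(matrix.div(matrix.sum(axis=1), axis=0).round(2))
```

Switching between counts and row percentages mirrors toggling the cell content of an NVivo matrix between different numerical readings.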
Module 4: Data Visualisation
Module 4 proposes different graphic displays for communicating one's research findings effectively. We first discuss the rationale for choosing certain displays over others. We learn how to generate models, charts, graphs, dendrograms, and maps. Moving on to building a solid audit trail to back up results and substantiate one's claims, we learn how to export results out of NVivo so they can be used in Word, Excel, and PowerPoint.
The usefulness of generating node summary reports, which provide a detailed synthesis of the scope of a node within a project, is also covered. For working with colleagues who do not use NVivo, the possibility of exporting project data as mini websites (HTML files) is presented. Module 4 concludes with the ABC of coordinating teamwork, with particular emphasis on the golden rules for successful data management, splitting and merging project files into a master project, and the measurement of intercoder reliability.
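Intercoder reliability is commonly reported as percentage agreement together with Cohen's kappa, which corrects agreement for chance. As a minimal sketch of the kappa calculation, assuming two coders making the same binary coding decision on ten segments (the decisions below are invented for illustration):

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' paired categorical decisions."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    categories = set(coder_a) | set(coder_b)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal proportions.
    expected = sum(
        (coder_a.count(c) / n) * (coder_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Did each of 10 segments get coded "workload" (1) or not (0)?
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Here the coders agree on 8 of 10 segments (80%), but kappa is only 0.58 once chance agreement is discounted, which is why kappa is the more conservative of the two figures NVivo's coding comparison reports.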