As my PhD research slowly gets into gear, this quick post reflects some current joy with Scrivener and potential future fun with NVivo. (The latter is largely notes to myself and might not make for great reading.)
Scrivener
There's a ten-minute introductory video to ease people into the way Scrivener works, which was time enough for me to decide that it's the right piece of writing software. In time I'll start using some of the compilation tools to produce finished documents, though for now I'm creating all my PhD work within its simple structure, very much as a living document. Here's a grab of the binder for my research:
It's a mix of working files (including 'SM': notes from a supervisory meeting), an existing draft pasted in and split up, tables of sources reviewed, and my reading notes from particular texts. At the bottom are some PDFs that have been imported. Each item above can have its own metadata to keep track of keywords and the like. And everything is kept together, which makes for very easy backing up: a quick copy to Dropbox, for example.
I thoroughly recommend a quick look at that video for anyone who hasn't played around with the software.
___
NVivo
I attended a workshop delivered by QSR International, the people behind NVivo. It was pitched as an information and training session, with participants having had different exposure to the software and wanting it to do different things. For me this was the first time I'd used it: a potential way to manage qualitative information, driven by the coding of data. It could be used for a literature review as much as for primary research, but the former doesn't appeal to me: partly because of my current love for Scrivener, partly because NVivo isn't available for the Mac and is therefore less appropriate for long-term use in my Apple world.
The basics of the software focus on creating or importing data, then annotating it, coding, making notes and writing documents. The flexibility available in linking themes across documents (or in fact virtually any kind of media) is impressive, and I can see that greater familiarity would make for a powerful way to handle sources and ideas. With primary data in mind, the way individual sections of audio and video can be coded has a range of advantages, such as potentially not needing to transcribe interviews; this in turn keeps the researcher in direct contact with the source material. Helpfully, video files can be linked rather than embedded, which keeps the overall project file size down.
'Nodes' is the terminology of choice: you can code data by dragging it onto a node. These nodes can be nested, giving some handy parent-child relationships. Viewing the coded themes across sources is attractively presented with stripes and such in the relevant views. This is intended to be part of one's workflow, with plenty of flexibility to do some coding, spot when you've reached a good place to take stock, write up some memos and get back to it.
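For anyone who thinks better in code, here's a toy sketch of that parent-child idea. It's purely illustrative (nothing to do with how NVivo actually stores things, and all the names are invented): each node holds excerpts coded to it, and a parent can roll up everything coded under its children.

```python
# Toy sketch of nested coding nodes (not NVivo's real data model).
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.excerpts = []  # data "dragged onto" this node
        if parent:
            parent.children.append(self)  # parent-child nesting

    def code(self, source, text):
        """Code an excerpt from a source document to this node."""
        self.excerpts.append((source, text))

    def all_excerpts(self):
        """Excerpts coded here and anywhere under this node."""
        found = list(self.excerpts)
        for child in self.children:
            found.extend(child.all_excerpts())
        return found

# A parent theme with a nested child node
experience = Node("Guest experience")
checkin = Node("Check-in", parent=experience)
checkin.code("Interview 1", "The front desk queue was very long...")
print(len(experience.all_excerpts()))  # 1: child excerpts roll up to the parent
```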
Classifications were also introduced, partly based on establishing one's unit of analysis: a node for each person perhaps, or each hotel or geographical region. Data can then be linked to such units as well as thematically, enabling closer analysis of the data from within NVivo.
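Again purely as an illustration (the hotels, attributes and field names below are all made up, not NVivo's internals), the idea is that each excerpt is tied to both a theme and a classified unit, so the data can be sliced by attribute as well as by theme:

```python
# Toy sketch: classified units carry attributes, excerpts reference a unit.
hotels = {
    "Hotel A": {"region": "North", "rooms": 120},
    "Hotel B": {"region": "South", "rooms": 45},
}

excerpts = [
    {"unit": "Hotel A", "theme": "Check-in", "text": "..."},
    {"unit": "Hotel B", "theme": "Check-in", "text": "..."},
]

# e.g. all "Check-in" excerpts from hotels classified in the North region
north_checkins = [
    e for e in excerpts
    if e["theme"] == "Check-in" and hotels[e["unit"]]["region"] == "North"
]
print(north_checkins)  # only the Hotel A excerpt matches
```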
As with so much software, the more you use it the better you'll get and the more you'll get out of it. I don't see myself using it much to begin with, but over time it's clear that there are plenty of tools in place to work with. From what I've seen, and because the university supports it, I'll likely turn to NVivo when the time comes. (At which point I'll have to learn it all over again, but no matter.)