Monday, July 16, 2007

Metrics Redux

I slapped up a whole bunch of links and random posts about metrics yesterday - so I will now attempt to add some value.

Miguel's bumper book of measurements is a good place to start - and it is applicable to online community environments. How applicable it is to the tangle of blogs, wikis, RSS feeds and social networking tools that currently defines social media is a harder question, and one I have yet to find a satisfactory answer to.

Going back to basics, stats fall into 2 bundles:
1. Those you obtain from observation (of which there are plenty in online environments due to the trails people leave).
2. Those you obtain by asking people what they do - i.e. surveys.

As noted earlier, I believe surveys have their limitations. The most serious is that once the answers have been converted into pie charts & scatter graphs, they take on an air of scientific objectivity they may not actually merit. Sometimes these issues can be alleviated with a bloody big population sample. Asking people whilst they are doing something (diary methods) rather than long afterwards (recall methods) also makes a big difference. However, while diary methods have been used by academics & "serious" researchers on KM, I have not seen them widely used in organisations - probably because the data collection is viewed as too invasive / labour-intensive and setting up one of these surveys requires significant forward planning. Maybe technologies such as instant messaging or blogging could make this easier? For all that, surveys are often the only way of obtaining direct effectiveness/value measures from participants.
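To put the "bloody big sample" point in rough numbers, here is a minimal sketch (assuming a simple random sample and the usual normal approximation - the sample sizes are purely illustrative) of how the margin of error on a survey proportion shrinks as the sample grows:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 200, 1000, 5000):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
```

The catch, of course, is that a bigger sample only narrows the error bars on what people say they do - it does nothing about the gap between what they say and what they actually do.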

Which brings us back to observation metrics, which are usually tied to a particular state (being a member) or action (joining, viewing, posting) of a participant. These are the most frequently collected metrics (no doubt because they are the easiest to collect). However, identifying an action and then ascribing meaning to it are two different things, as any ethnographer will tell you.
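For what it's worth, the mechanics of these observation metrics are trivial - a rough sketch, using a made-up event log (the member names and actions are purely illustrative):

```python
from collections import Counter

# Hypothetical event log scraped from the platform: (member, action) pairs.
events = [
    ("alice", "join"), ("alice", "view"), ("alice", "post"),
    ("bob", "join"), ("bob", "view"), ("bob", "view"),
    ("carol", "join"),
]

# Counting is the easy part...
activity_by_member = Counter(member for member, _ in events)
totals_by_action = Counter(action for _, action in events)

print(activity_by_member)  # e.g. Counter({'alice': 3, 'bob': 3, 'carol': 1})
print(totals_by_action)    # e.g. Counter({'join': 3, 'view': 3, 'post': 1})

# ...deciding whether three views means engagement, confusion or boredom is not.
```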

Taking this further, both Miguel and Taule & Timbrell look at conversation metrics - questions posted & answers followed up. This is undoubtedly useful information (and goes some way towards answering my Metrics 3.0 question), but it is difficult to catch on the fly. It also either requires you to assume that the thread structure accurately captures interactions, or else you have to hand-code each post for later analysis (which is laborious work). If the thread structure does capture the shape of conversations well enough, then it might yield some interesting insights. Interestingly, blogs capture this kind of data quite well (either through comments per post or by examining links between blogs) because they are less "conversational" than bulletin boards.
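To make the assumption concrete, here is a small sketch of conversation metrics pulled straight from a thread structure - the post IDs and parent links are invented, and the whole thing only works if replies really were threaded under the question they answer:

```python
# Hypothetical bulletin-board thread structure: each post records its parent
# (None marks an opening question).
posts = {
    "p1": None,    # question
    "p2": "p1",    # reply to p1
    "p3": "p1",    # reply to p1
    "p4": None,    # question that attracted no replies
    "p5": "p3",    # follow-up deeper in the thread
}

questions = [pid for pid, parent in posts.items() if parent is None]
replied_to = {parent for parent in posts.values() if parent is not None}

answered = [q for q in questions if q in replied_to]
print(f"{len(questions)} questions posted, {len(answered)} received at least one reply")
```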

Graham Durant-Law's SNA of ACT-KM takes things to the next level. There has been a tremendous proliferation of network visualisation tools (&, to a lesser extent, analytic tools) in the last few years. Real-time SNA may not be that far away (although I have some doubts), but a key hurdle is that most SNAs tell you whether people are connected in some way while saying much less about the "how" or the "why". And classic SNA metrics (e.g. centrality) remain opaque to non-experts. We need to deploy some form of SNA-style measurement - but I am not sure what that looks like yet.
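As an illustration of both the appeal and the opacity, here is a minimal degree-centrality calculation on an invented reply network (using the networkx library - the names and edges are made up):

```python
import networkx as nx

# Toy reply network: an edge means one member has replied to another at least once.
g = nx.Graph()
g.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("dave", "eve"),
])

# Degree centrality: the fraction of other members each person is connected to.
# It tells you *that* alice is well connected - not how, why, or whether it matters.
for member, score in sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]):
    print(f"{member}: {score:.2f}")
```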

(As an aside, online communities, blogging networks & social media generally tend to follow power-law distributions rather than Gaussian ones. As John Hagel has noted, we don't necessarily understand these environments as well as we should, and our traditional forms of measurement are not there yet.)
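A quick sketch of why that matters for measurement - the post counts below are invented, but the shape (a few heavy contributors and a long tail of near-lurkers) is the typical one, and it is exactly the shape that averages and other Gaussian-style summaries misrepresent:

```python
# Illustrative posts-per-member counts: a handful of heavy posters, a long tail.
post_counts = [250, 120, 60, 30, 15, 8, 4, 2, 1, 1, 1, 1, 1, 1, 1]

total = sum(post_counts)
top_member_share = max(post_counts) / total
mean = total / len(post_counts)
above_mean = sum(1 for c in post_counts if c > mean)

print(f"The single busiest member accounts for {top_member_share:.0%} of all posts")
print(f"Only {above_mean} of {len(post_counts)} members post more than the 'average' of {mean:.0f}")
```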
