I found myself getting quite fired up, not in a good way, about their early work, which looked at how 12 students had worked on a collaborative task, generating 29 messages (this was 2001, folks). They went on to categorise the messages (by hand) like this...
- Joint knowledge building
- Asking questions, dialogue extension prompts
- Supporting with reference or example
- Acknowledging / replying / referring to another message
- Motivation and commitment to task
- Instructions/information - coordination messages
I heard about some interesting projects that look to make "sense" of online discussions. For example, AcademicTalk restricts the opening sentence of every message in a forum, which effectively categorises each message.
See also: Digital Dialogue.
I really like the idea of "constrained conversation"... almost like a parlour game that forces you into somehow being more communicative. This reminded me of work by Simon Buckingham Shum on argumentation, where you sort of construct a discussion from visual building blocks, which I saw in the last millennium - you know, it feels like more than a thousand years ago sometimes.
But I found myself getting really worked up, and not in a good way, about the very idea of analysing discussion at all. If you look at the numbers, the crudest form of measurement, you get crude results. If you choose to tag the discussions, your perspective skews everything... and if you change the environment to something better (than email), then you've changed so much that measurement is pretty pointless anyway. Yes, you might be better able to understand the collaborative processes that are happening, but they are in such an artificial environment that your findings are meaningless.
And how do you even define collaboration, anyway? One person's successful collaborative experience might leave the other participants feeling exploited. Crowdsourcing, anyone?
I was also reminded of Jer Thorp's visualisation work for the New York Times on the life-cycle of a tweet. Here, the crudest measures... tweets and re-tweets... are shown in real time in an infinite animated 3D space. It's the sort of thing we maybe all should have instead of an email in-tray... a collection of funky diagrams that we keep an eye on, jumping in when the ripples get too wobbly or when a diagram "goes quiet". (Excuse me whilst I get worked up in a good way.)
There, for me, is the chicken and the egg. If you "hand categorise" your discussions, any worth is either tainted by the viewer (pretty much like quantum physics) or completely irrelevant, in that any findings can't really be applied elsewhere.
And if you work with the crude numbers, then unless you have a LOT of data you don't have patterns that might be spotted automatically. Imagine a conversation bot popping up mid flame war and saying "I notice that this discussion thread seems to be losing its focus and becoming all about petty point-scoring, please desist!". I could imagine that making all the difference to humanity's ability to discuss things rationally.
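A toy version of such a bot is easy to imagine. The heuristic below is entirely my own invention (the function name, window size, and threshold are all made up for illustration): it flags a thread as possible "petty point-scoring" when the recent messages are dominated by just two people trading replies.

```python
from collections import Counter

def looks_like_point_scoring(authors, window=10, share=0.8):
    """Crude, made-up heuristic: if the last `window` messages are
    dominated by just two people trading replies, flag the thread.

    `authors` is the thread's message authors in posting order."""
    recent = authors[-window:]
    # Share of recent messages written by the two most active posters.
    top_two = sum(n for _, n in Counter(recent).most_common(2))
    return len(recent) >= window and top_two / len(recent) >= share

# A thread that has collapsed into an ann-vs-bob ping-pong match...
flame = ["ann", "bob", "carol", "ann", "bob", "ann",
         "bob", "ann", "bob", "ann", "bob"]
print(looks_like_point_scoring(flame))  # True

# ...versus one with ten different voices.
healthy = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]
print(looks_like_point_scoring(healthy))  # False
```

Of course, a real system would need far richer signals than author counts, which is exactly the "crude numbers need a LOT of data" problem above.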
All Of Which Leads Me To This...
So, if you can't have any analysis that requires a researcher to add it, and you can't rely on the crude numbers, what might you use instead?
Remember, I've already said that you also can't invent a fancy-dancey, bells-and-whistles, parlour-game-style environment to force people to behave differently so that you can now measure them properly.
You could just use email and forums. These forums and messages would have extra meta tools, though (which is only slightly fancy-dancey), that would enable the participants to tag discussions, particularly for negative, anti-collaborative indicators.
Imagine that in the flow of a discussion forum, you could select a certain sentence and, from a pop-up, mark it as "Self-aggrandising" or "Funny, but disrupting the flow of the discussion" or "Rude and disrespectful" or "Missing the point" or "Deluded". Now imagine that these scores were anonymous... but aggregations of them were shown on your profile.
The point of this is that when things are good, most people don't see it. Good interaction design isn't even noticed; it disappears. Also, making negative, perhaps harshly judgmental comments that are attributed to you is quite a challenging thing to do... "passive aggressive"... "bullying!", etc. But it's often easier to see why collaboration DOESN'T happen than to agree on when it does.
I've always longed for an "Unlike" button in Facebook, not because I'm a snarky miserabilist, but just because I want to show disapproval without getting into long pointless arguments about it. And maybe the only person who gets to see this feedback is the person at whom it is directed... maybe only after it's gone beyond a certain threshold.
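The mechanics of that scheme are simple enough to sketch. Everything here is hypothetical - the class, the tag names (lifted from the list above), and the threshold value are mine, not a description of any real system: taggers stay anonymous because only a counter is stored, and a tag only surfaces to its target once enough independent reports agree.

```python
from collections import Counter

# Tag vocabulary taken from the pop-up idea above; purely illustrative.
TAGS = {
    "self-aggrandising",
    "funny-but-disrupting",
    "rude-and-disrespectful",
    "missing-the-point",
    "deluded",
}

# A tag becomes visible to its target only after this many anonymous
# reports agree (made-up value for the "certain threshold").
VISIBILITY_THRESHOLD = 3

class Profile:
    def __init__(self, name):
        self.name = name
        # Aggregated counts only - no record of who did the tagging.
        self.tag_counts = Counter()

    def tag(self, tag_name):
        if tag_name not in TAGS:
            raise ValueError(f"unknown tag: {tag_name}")
        self.tag_counts[tag_name] += 1

    def visible_feedback(self):
        """What the tagged person themselves gets to see: only the
        tags that have crossed the anonymity-preserving threshold."""
        return {t: n for t, n in self.tag_counts.items()
                if n >= VISIBILITY_THRESHOLD}

alice = Profile("alice")
for _ in range(3):
    alice.tag("missing-the-point")
alice.tag("rude-and-disrespectful")
print(alice.visible_feedback())  # {'missing-the-point': 3}
```

Storing only the aggregate is the design point: no single "Unlike" can be traced back or argued with, which is exactly what makes the negative signal cheap enough to give honestly.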
Technically, this would work just the same as regular emails and forums, albeit with a few extra markup tools. It might show us what mixtures of "types" of people produce good and bad collaborative experiences or it might show us who the Naysayers are in an organisation.
Maybe looking for what makes good collaboration based on what is there is a bit like understanding space... you need to look at what's not there, the black stuff, dark matter, to understand how it all works.