Most of the evaluation measures used today for assessing many-to-many conversations, such as meetings or brainstorming sessions, are highly subjective. It is difficult to assess both the output of the discussion and the contributions of the individual participants. During my internship at CCIL-NEC in Japan, I analysed such many-to-many discussions and built tools to summarize, visualize, and navigate them.
I also worked on evaluating how well participants performed certain roles in the discussion; this work will be presented at ASNA '11 as "Evaluation of participants in Group discussion on Twitter".
For more details on each project, visit the respective subsections.