Peer ratings and assessment quality

The article opens with an example: a British organization asked the public to suggest names for its newest ship. One person joked that the ship should be named Boaty McBoatface, and that became the most popular choice. Does this reflect the wisdom of the crowd? The article sets out a plan for research into the factors that affect peer assessment of ideas. Because the research is not complete, there are no conclusions or recommendations, but the article is useful for the way it draws together earlier research on several theories and models.

Wagenknecht, T., Teubner, T., and Weinhardt, C. (2017) ‘Peer Ratings and Assessment Quality in Crowd-Based Innovation Processes’, Proceedings of the 25th European Conference on Information Systems (ECIS). Guimaraes, Portugal, 5-10 June. Research-in-Progress Papers [Online]. Available at


  1. Introduction
  2. Theoretical background and related work
    1. Crowdsourcing and decision-making
    2. Personality and behavior
  3. Study design
  4. Discussion and conclusion


Organizations use social media to crowdsource peer assessment of ideas for purposes such as product innovation and knowledge sharing. The crowd both generates ideas and evaluates ideas put forward by other participants or by the organization. Several factors affect assessment done in this way.

In this article, the authors define crowdsourcing specifically as ‘a means for organisers to motivate a large number of users to propose innovative problem solutions, or to identify problems in the first place’ (Wagenknecht et al., 2017, p. 3146). Crowdsourcing can produce high-quality ideas, but the sheer volume makes it hard to separate the high-quality ideas from the low-quality ones. Organizations therefore also ask participants to rate the ideas — but participants are usually less informed and less willing to spend the time a thorough evaluation requires. Instead, they take shortcuts in their decision-making. The elaboration likelihood model and dual-process theory explain these shortcuts: when participants are motivated, they process information logically and diligently; when they are not paying much attention, they rely on evaluation heuristics (rules of thumb) and cues to process information quickly. One such heuristic is social proof, ‘which describes people behaving in accordance with how they perceive others to behave in their environment’ (Wagenknecht et al., 2017, p. 3147).

In other words, monkey see, monkey do. People assume that others are making good choices, so they adjust their own ratings when they see the ratings others have given. This ‘diminishes the wisdom of the crowd effect and, thus, can lower assessment accuracy of evaluation processes’ (Wagenknecht et al., 2017, p. 3146).
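The herding effect the authors describe can be illustrated with a toy simulation (my own sketch, not taken from the paper): each rater sees a noisy private signal of an idea's true quality, and a "social proof" rater blends that signal with the running mean of the ratings already posted. The blend weight and other parameters below are illustrative assumptions.

```python
import random
from statistics import mean

def crowd_error(n_raters=50, noise_sd=1.0, herd_weight=0.0,
                n_trials=500, seed=42):
    """Mean absolute error of the crowd's average rating for one idea.

    Each rater observes true_quality plus Gaussian noise. With
    herd_weight > 0, a rater submits a blend of their private signal
    and the mean of earlier public ratings (a crude social-proof model).
    """
    rng = random.Random(seed)
    errors = []
    for _ in range(n_trials):
        true_quality = 0.0
        ratings = []
        for _ in range(n_raters):
            signal = true_quality + rng.gauss(0, noise_sd)
            if ratings and herd_weight > 0:
                # Anchor on what earlier raters said (social proof).
                rating = (1 - herd_weight) * signal + herd_weight * mean(ratings)
            else:
                rating = signal
            ratings.append(rating)
        errors.append(abs(mean(ratings) - true_quality))
    return mean(errors)

independent = crowd_error(herd_weight=0.0)
herding = crowd_error(herd_weight=0.7)
print(f"independent MAE: {independent:.3f}, herding MAE: {herding:.3f}")
```

With independent ratings the noise averages out, so the crowd mean lands close to the true quality; with herding, the first few (noisy) ratings anchor everyone who follows, so errors no longer cancel — which is exactly the loss of the wisdom-of-the-crowd effect the quote describes.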

The authors also mention two personality-related issues. First, extroverts are more likely to interact with others and also more likely to form their own judgments. Second, people who believe their actions can influence outcomes (an internal locus of control) take bigger risks and show more leadership than people who believe they have little control over the consequences of their actions (an external locus of control). These personality traits can affect assessment outcomes.

See also

Bhanji, J. P. and Delgado, M. R. (2014) “The social brain and reward: social information processing in the human striatum.” Wiley Interdisciplinary Reviews: Cognitive Science 5 (1), 61–73.

Maier, C., Laumer, S., Eckhardt, A. and Weitzel, T. (2015) “Giving too much social support: social overload on social networking sites.” European Journal of Information Systems 24 (5), 1–18.

Riedl, C., Blohm, I., Leimeister, J. M. and Krcmar, H. (2010) “Rating Scales for Collective Intelligence in Innovation Communities: Why Quick and Easy Decision Making Does Not Get it Right.” In: ICIS 2010 Proceedings, pp. 1–21.
