Ranking is a scaling method for ordinal data. In this context, ordinal data means participants' opinions about a product or service (the entity), focused on a particular property of it (the attribute). An attribute may be, for example, how well a product performs, its weight, or its ease of use.
Opinions are qualitative, so each stands alone and cannot be measured directly against another participant's opinion; the evidence collected is called nonparametric data. A consensus can, however, be reached by gathering many opinions, and there are associated nonparametric statistical methods for checking the reliability of the data collected. If this level of reliability is required, it is best to seek a statistician's advice on which analysis method is most appropriate.
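One such nonparametric reliability check is Kendall's coefficient of concordance (W), which measures how strongly a group of participants agree in their rankings (0 = no agreement, 1 = perfect agreement). The sketch below is illustrative only: the rankings are invented example data, not results from a study, and it omits the correction term needed when participants give tied ranks.

```python
# Kendall's coefficient of concordance (W): a nonparametric measure of
# agreement between participants' rankings. Illustrative data only.

def kendalls_w(rankings):
    """rankings: one rank list per participant (1 = most preferred).
    Returns W in [0, 1]; assumes no tied ranks."""
    m = len(rankings)           # number of participants (raters)
    n = len(rankings[0])        # number of entities ranked
    # Total rank each entity received across all participants.
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean = sum(totals) / n
    s = sum((t - mean) ** 2 for t in totals)   # spread of rank totals
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three participants each rank four products by perceived ease of use.
ranks = [
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
]
print(round(kendalls_w(ranks), 3))  # higher values indicate stronger consensus
```

A W near 1 suggests the consensus order is meaningful; a W near 0 suggests the participants do not share a common preference structure.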
Ranking has been used in the form of 'hall testing' of products, to gather preferences and opinions on a range of products or services. At this more basic level, ranking can be used in conjunction with a semi-structured interview or questionnaire to gain insights into the preferences of a target user group or associated stakeholders; the questions provide some insight into the reasoning behind the priority order.
Points to consider:
- No more than nine entities (products or services) should be compared together. It is suggested that only the first two or three positive rankings and the final two or three negative rankings are reliable. This means that if nine products are used, only six will be reliably ordered.
- The operator should define the attribute they wish the participant to use to review the entities, e.g. handling, perceived weight, usability, perceived safety.
- All of the entities or products should have a common attribute, e.g. all walking sticks. Checking that the attribute applies to the entity becomes more difficult when discussing an abstract service or complex system.
- There are many opportunities for bias in the participant's answer/response to the operator's question. The diagram shows the process a participant goes through to answer a ranking question. Providing clear guidance about the questions asked, and checking that participants understand the terminology/nomenclature, will help avoid bias at every point of the subjective assessment process through to the response.
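The points above can be combined into a simple aggregation: order entities by their summed rank across participants, then treat only the top and bottom two or three positions as reliable. This is a sketch with invented entity labels and rankings, not a prescribed procedure from the text.

```python
# Aggregate individual rankings into a consensus order by summed rank
# (lower total = more preferred). Illustrative data only.

def consensus_order(entities, rankings):
    """Order entities by total rank across participants."""
    totals = {e: sum(r[i] for r in rankings) for i, e in enumerate(entities)}
    return sorted(entities, key=lambda e: totals[e])

sticks = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]
# Each row: one participant's ranks for sticks A..I (1 = most preferred).
ranks = [
    [1, 2, 3, 5, 4, 6, 7, 9, 8],
    [2, 1, 3, 4, 6, 5, 8, 7, 9],
    [1, 3, 2, 6, 5, 4, 7, 8, 9],
]
order = consensus_order(sticks, ranks)
# Per the guidance above, treat only the extremes as reliably ordered.
top3, bottom3 = order[:3], order[-3:]
print(top3, bottom3)
```

The middle positions are reported but should be interpreted with caution, since participants discriminate least reliably there.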