Article
New insights on random regret minimization models
Date
2015
Registered in:
1150590
WOS:000353085200007
Institution
Abstract
This paper develops new methodological insights on Random Regret Minimization (RRM) models. It starts by showing that the classical RRM model is not scale-invariant, and that, as a result, the degree of regret minimization behavior imposed by the classical RRM model depends crucially on the sizes of the estimated taste parameters in combination with the distribution of attribute values in the data. Motivated by this insight, this paper makes three methodological contributions: (1) it clarifies how the estimated taste parameters and the decision rule are related to one another; (2) it introduces the notion of "profundity of regret" and presents a formal measure of this concept; and (3) it proposes two new members of the family of random regret minimization models: the μRRM model and the Pure-RRM model. These new methodological insights are illustrated by re-analyzing 10 datasets that have been used to compare linear-additive RUM and classical RRM models in recently published papers. Our re-analyses reveal that the degree of regret minimizing behavior imposed by the classical RRM model is generally very limited. This insight explains the small differences in model fit previously reported in the literature between the classical RRM model and the linear-additive RUM model. Furthermore, we find that on 4 out of 10 datasets the μRRM model improves model fit very substantially compared with the RUM and the classical RRM model. (C) 2015 Elsevier Ltd. All rights reserved.
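For readers unfamiliar with the functional forms the abstract refers to, the sketch below illustrates the classical RRM regret function and a μ-scaled generalization of the kind described in the paper. This is a minimal illustration under stated assumptions: the regret expressions follow the standard RRM formulation (pairwise attribute comparisons passed through a log-sum term), the exact μRRM specification is defined in the paper itself, and the attribute values, taste parameters, and scale values used here are hypothetical, not the authors' data.

import numpy as np

def classical_rrm_regret(X, beta):
    # Classical RRM regret for each alternative i:
    #   R_i = sum over j != i, sum over attributes m of ln(1 + exp(beta_m * (x_jm - x_im)))
    # X    : (J, M) array of attribute values for J alternatives and M attributes
    # beta : (M,) array of taste parameters
    J, _ = X.shape
    R = np.zeros(J)
    for i in range(J):
        for j in range(J):
            if j != i:
                R[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    return R

def mu_rrm_regret(X, beta, mu):
    # mu-scaled regret (assumed form): mu * ln(1 + exp((beta_m / mu) * (x_jm - x_im))).
    # A small mu makes the attribute-level regret approach max(0, beta_m * difference),
    # i.e. strongly regret-minimizing behavior; a large mu makes it nearly linear in the
    # attribute differences, mimicking a linear-additive RUM specification.
    J, _ = X.shape
    R = np.zeros(J)
    for i in range(J):
        for j in range(J):
            if j != i:
                R[i] += np.sum(mu * np.log1p(np.exp((beta / mu) * (X[j] - X[i]))))
    return R

# Hypothetical example: 3 alternatives described by 2 attributes (e.g. travel time, cost)
X = np.array([[10.0, 2.5],
              [12.0, 2.0],
              [ 9.0, 3.0]])
beta = np.array([-0.3, -0.8])
print(classical_rrm_regret(X, beta))
print(mu_rrm_regret(X, beta, mu=0.1))   # low scale: pronounced regret minimization
print(mu_rrm_regret(X, beta, mu=10.0))  # high scale: close to linear (RUM-like) behavior

In a discrete choice setting, these regrets would typically enter a logit-type choice probability, P_i = exp(-R_i) / sum_j exp(-R_j); the point of the sketch is only to show how the scale of the taste parameters relative to the attribute differences governs how much regret-minimizing curvature the model actually imposes.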