Similarly - one of my biggest complaints about almost every rating system in production is how just absolutely lazy they are. And by that, I mean everyone seems to think "the object's collective rating is an average of all the individual ratings" is good enough. It's not.
Take any given Yelp / Google / Amazon page and you'll see some distribution like this:
User 1: "5 stars. Everything was great!"
User 2: "5 stars. I'd go here again!"
User 3: "1 star. The food was delicious but the waiter was so rude!!!one11!! They forgot it was my cousin's sister's mother's birthday and they didn't kiss my hand when I sat down!! I love the food here but they need to fire that one waiter!!"
Yelp: 3.7 stars average rating.
One thing I always liked about FourSquare was that they did NOT use this lazy method. Their score was actually intelligent - it factored in things like how often someone returned, how much time they spent there, etc., and weighted each review accordingly.
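To make the contrast concrete, here's a rough sketch of what "weight a review by engagement" could look like. This is not Foursquare's actual model - the Review fields and the weighting formula are made up purely for illustration - but it shows how a repeat customer's rating can count for more than a single angry drive-by:

```python
# A minimal sketch (NOT Foursquare's real algorithm) of weighting each
# rating by how much the reviewer actually engaged with the place.
# The fields and the weight formula are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Review:
    stars: float          # 1-5 rating
    return_visits: int    # how many times the reviewer came back
    minutes_spent: float  # total time spent at the venue


def naive_average(reviews: list[Review]) -> float:
    """The 'lazy' score: a plain mean of the star ratings."""
    return sum(r.stars for r in reviews) / len(reviews)


def engagement_weighted(reviews: list[Review]) -> float:
    """Weight each rating by a crude engagement signal: repeat visits
    and time spent count for more than a one-off visit."""
    def weight(r: Review) -> float:
        return 1.0 + r.return_visits + r.minutes_spent / 60.0

    total_weight = sum(weight(r) for r in reviews)
    return sum(r.stars * weight(r) for r in reviews) / total_weight


if __name__ == "__main__":
    reviews = [
        Review(stars=5, return_visits=4, minutes_spent=300),  # regular customer
        Review(stars=5, return_visits=2, minutes_spent=150),  # came back twice
        Review(stars=1, return_visits=0, minutes_spent=45),   # one angry visit
    ]
    print(f"naive average:       {naive_average(reviews):.1f}")        # ~3.7
    print(f"engagement-weighted: {engagement_weighted(reviews):.1f}")  # ~4.6
```

The exact weighting hardly matters; the point is that any signal about whether the reviewer actually keeps coming back tells you more than the star number alone.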