Currently, each user can give one rating for each version of a Program or Rule, on a scale of 1 to 5 stars.

We are considering several alternatives, and since this is an important decision, we would like to hear your thoughts.

These are some of the ideas:

1)

There is one primary star rating for each Rule/Program. In addition, a separate rating is given each time the object is used in a Scenario. These per-use ratings are binary and simply record "this object was useful / not useful this time".

This way, you can express information like: "This Program is usually very good, at 5 stars, and of the last 10 times I used it, 8 uses were helpful and 2 were not."

Elody could then look at the statistical distribution of these per-use ratings and attempt to automatically infer things like: "This Program is rated significantly better/worse than normal when it is used after this other Program."

These automatic inferences could then be used to make suggestions to the developers and even to automatically create additional Rules.
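
As a rough illustration, here is a minimal sketch of how such an inference could work, assuming each use of a Program is logged as a (predecessor, was-useful) pair; the log format and all names are hypothetical. It compares the usefulness rate after one particular predecessor against the rate in all other contexts with a two-proportion z-test:

```python
import math
from collections import defaultdict

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test: do the two rates differ?"""
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0, 1.0
    z = (successes_a / n_a - successes_b / n_b) / se
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical per-use log for ONE Program: (predecessor, was it useful?).
usage_log = [("prog_A", True), ("prog_A", True), ("prog_A", False),
             ("prog_A", True), ("prog_B", True), ("prog_B", False),
             ("prog_B", False), ("prog_B", False)]

counts = defaultdict(lambda: [0, 0])  # predecessor -> [useful, total]
for predecessor, useful in usage_log:
    counts[predecessor][0] += int(useful)
    counts[predecessor][1] += 1

total_useful = sum(u for u, _ in counts.values())
total_runs = sum(n for _, n in counts.values())

for predecessor, (useful, runs) in counts.items():
    # Compare "runs after this predecessor" against all other runs.
    z, p = two_proportion_z(useful, runs,
                            total_useful - useful, total_runs - runs)
    flag = "significant" if p < 0.05 else "not significant"
    print(f"after {predecessor}: {useful}/{runs} useful "
          f"(z={z:+.2f}, p={p:.3f}, {flag})")
```

In practice, a test like this would need a minimum sample size per context, and a correction for multiple comparisons, before its results are surfaced to developers or turned into candidate Rules.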

2)

Instead of a single star rating, each object is rated on several binary criteria:
- good / bad at what it does
- fast / slow
- always useful / sometimes useless
- never harmful / sometimes harmful
- …

This information is much more precise and would be far more useful for Elody's decision making, but it may also be harder to get users to understand these criteria and to actually give this kind of rating.
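
To make this concrete, here is a minimal sketch of what one such multi-criteria rating could look like as a data structure; the field names simply mirror the list above and are purely illustrative:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema: each criterion is a yes/no answer, and any criterion
# may be skipped (None) so users are not forced to rate every axis.
@dataclass
class CriteriaRating:
    good_at_task: Optional[bool] = None   # good / bad at what it does
    fast: Optional[bool] = None           # fast / slow
    always_useful: Optional[bool] = None  # always useful / sometimes useless
    never_harmful: Optional[bool] = None  # never harmful / sometimes harmful

def aggregate(ratings: list) -> dict:
    """Fraction of positive answers per criterion, ignoring skipped answers."""
    result = {}
    for name in ("good_at_task", "fast", "always_useful", "never_harmful"):
        answers = [getattr(r, name) for r in ratings
                   if getattr(r, name) is not None]
        if answers:
            result[name] = sum(answers) / len(answers)
    return result

print(aggregate([CriteriaRating(good_at_task=True, fast=False),
                 CriteriaRating(good_at_task=True, never_harmful=True)]))
# -> {'good_at_task': 1.0, 'fast': 0.0, 'never_harmful': 1.0}
```

Making each criterion optional could soften the usability concern: users answer only the axes they have an opinion on, while Elody still accumulates per-criterion statistics.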

3)

There is only one rating for each Rule/Program, regardless of version. However, newer versions must be approved before Elody starts using them, to ensure that developers cannot sneak harmful code into a Rule after it has gained a high rating.
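
A minimal sketch of that gating logic, assuming a simple approved flag per version (all names here are hypothetical): Elody only ever runs the newest approved version, so a freshly pushed version cannot trade on the shared rating until it has been reviewed.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Version:
    number: int
    approved: bool = False  # new versions start unapproved

@dataclass
class Program:
    name: str
    rating: float = 0.0                # one rating, shared across versions
    versions: list = field(default_factory=list)

    def active_version(self) -> Optional[Version]:
        """The version Elody actually runs: the newest approved one."""
        approved = [v for v in self.versions if v.approved]
        return max(approved, key=lambda v: v.number, default=None)

prog = Program("summarizer", rating=4.7,
               versions=[Version(1, approved=True), Version(2)])  # v2 in review
assert prog.active_version().number == 1  # v2 cannot ride on the 4.7 rating yet
```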

?)

We would appreciate any thoughts on these options, or ideas of your own. Finding an effective and reliable way to rate Rules and Programs is very important to us.