

Scores

Abstract

Compare the model score to the baseline score to understand whether the work you're doing in the pipe is improving your model's accuracy.

It's easy to think of scores in Symon.AI as absolutes, but they're more nuanced than that. When you run a predict tool, you're trying different approaches to get a better score.

Scores in Symon.AI are always relative to the following:

  • The data set

  • The baseline score

  • Each iteration of data preparation and testing

Always compare the model score to the baseline score. This will help you understand whether the work you're doing in the pipe is improving your model's accuracy. There isn't a universal "good score." If the model score is lower than the baseline score, it could mean there isn't enough relevant data to accurately predict those values.

If you click on Advanced scores in the toolbar, you can see a more detailed view of your model's score. The baseline score can help you understand why you're getting a particular score.

Image: the Advanced scores view, showing the baseline score alongside the current accuracy score.

The gold bar is the baseline score and the purple bar is your current score. The baseline for classifiers (accuracy score) is the percentage of rows that fall in the majority class. The baseline for regressors (error score) is based on the mean of the target column.

In this case, our current score (84%) is better than the baseline.
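
If you're curious how those baselines are derived, here's a minimal sketch in Python of both calculations. The column names and values are made up for illustration; Symon.AI computes these baselines for you.

```python
import pandas as pd

# Hypothetical target column for a classifier.
churned = pd.Series(["no", "no", "yes", "no", "no", "yes", "no", "no"])

# Classifier baseline: the share of rows in the majority class.
# A model that always predicted "no" would already be right this often.
baseline_accuracy = churned.value_counts(normalize=True).max()
print(f"Classifier baseline (accuracy): {baseline_accuracy:.0%}")  # 75%

# Hypothetical target column for a regressor.
price = pd.Series([120.0, 95.0, 150.0, 80.0, 110.0])

# Regressor baseline: the mean of the target column, used as a constant
# prediction whose error is then scored.
baseline_prediction = price.mean()
print(f"Regressor baseline prediction: {baseline_prediction:.2f}")  # 111.00
```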

You should also compare your original score with any work you do to improve it. Here's an example:

Image: a pipe with two paths. One path has no added tools and shows the baseline score of 79. The other path has several tools added and ends with a score of 84.
  1. Without cleaning up or modifying our data in any way, we start with a score of 79. This isn't a bad score on its own, but we know we can do better.

  2. After adding some tools to clean up our data, we've improved our score to 84. This is a "better score" because it's better than our original score.

A good way to compare values in your data against predicted values is to use the Confusion Matrix tool.
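
The tool handles this comparison inside the pipe, but if you want to see the idea behind a confusion matrix, here's a rough Python sketch using made-up actual and predicted values (not the Symon.AI tool itself):

```python
import pandas as pd

# Hypothetical actual vs. predicted values for a classifier.
results = pd.DataFrame({
    "actual":    ["yes", "no", "yes", "no", "no", "yes", "no", "no"],
    "predicted": ["yes", "no", "no",  "no", "no", "yes", "yes", "no"],
})

# Rows are actual values, columns are predicted values.
# The diagonal counts correct predictions; everything off it is a miss.
matrix = pd.crosstab(results["actual"], results["predicted"])
print(matrix)
```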

Classifier scores

Classifiers train the model to predict a column with a fixed set of values. Classifiers get an accuracy score ranging from 0-100 percent. The closer the score is to 100 percent, the more accurate the classifier is.
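
As a rough illustration of what the accuracy score measures (Symon.AI's exact scoring may differ, for example in how it holds out test data), it's the share of predictions that match the actual values:

```python
actual    = ["yes", "no", "no", "yes", "no"]
predicted = ["yes", "no", "yes", "yes", "no"]

# Accuracy: the fraction of rows where the prediction matches the actual value.
correct = sum(a == p for a, p in zip(actual, predicted))
print(f"Accuracy score: {correct / len(actual):.0%}")  # 80%
```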

Remember that the score is always relative to your data set and the problem you're trying to solve. A score close to 100% could mean there's a column in the data set that predicts the outcome you're looking for. Or it could mean there's a strong correlation. For another data set, a score of 60% could be a good score.

Regressor scores

Regressors train the model to predict a column with many possible values. Regressors get a SMAPE (symmetric mean absolute percentage error) score. For simplicity, you can think of this as a measure of your model's error. The SMAPE score ranges from 0-200 percent. The closer the score is to 0 percent, the lower the model's error.
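
For reference, SMAPE compares each prediction to the actual value, dividing the absolute difference by the average magnitude of the two. Here's a minimal sketch of that formula in Python; Symon.AI's exact implementation may differ in details such as how zero values are handled.

```python
def smape(actual, predicted):
    """Symmetric mean absolute percentage error, on a 0-200% scale."""
    total = 0.0
    for a, p in zip(actual, predicted):
        denominator = (abs(a) + abs(p)) / 2
        if denominator:  # skip pairs where both values are zero
            total += abs(p - a) / denominator
    return 100 * total / len(actual)

# Predictions close to the actual values give a score near 0%.
print(round(smape([100, 200, 300], [110, 190, 330]), 1))  # 8.1
```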

Just like with classifiers, a regressor score is relative to your data set and the problem you're trying to solve. A score of 100 could still be a good score for your data set.