Definitions
This section clarifies some terms that are used throughout the documentation.
Datapoints are the inputs to your model. They are the data that you use to make predictions. For example, if you are building a model to classify images, the datapoints are the images themselves.
Predictions are the outputs of your model. For example, if you are building a model to classify images, the predictions are the labels that your model assigns to the images.
Ground truths are the labels for your datapoints. For example, if you are building a model to classify images, the ground truths are the labels that you know to be correct for the images.
Entropy is a measure of uncertainty. The higher the entropy, the more uncertain the model is about its prediction; the lower the entropy, the more certain the model is about its prediction. For our calculations of entropy we use Shannon entropy. This differs from metric entropy, which is a normalized version of Shannon entropy.
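As a sketch of the idea, Shannon entropy can be computed directly from a model's predicted class probabilities (the function name here is illustrative, not part of any documented API):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in nats) of a predicted probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A confident prediction concentrates mass on one class -> low entropy.
confident = shannon_entropy([0.98, 0.01, 0.01])

# A uniform prediction is maximally uncertain -> entropy equals ln(3).
uncertain = shannon_entropy([1/3, 1/3, 1/3])
```

Metric entropy would divide the result by the maximum possible value (here, `math.log(3)`) to normalize it to the range [0, 1].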
Variance is a measure of the spread of a distribution. The higher the variance, the more spread out the distribution is; the lower the variance, the more concentrated it is. This metric is used with models that have dropout layers. The specific equation we use is the one for sample variance.