A proper scoring rule that measures probabilistic forecast accuracy as the mean squared difference between predicted probabilities and binary outcomes; lower scores indicate more accurate, better-calibrated forecasts.
Cluster: Information Theory
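The definition above matches the classic Brier score. A minimal sketch of the computation (the function name and example numbers are illustrative, not from the source):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted probabilities
    and binary outcomes (0 or 1). Lower is better."""
    if len(probs) != len(outcomes):
        raise ValueError("probs and outcomes must have the same length")
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Confident, correct forecasts score near 0; confident, wrong ones near 1.
print(round(brier_score([0.9, 0.1, 0.8], [1, 0, 1]), 4))  # 0.02
```

Because it is a proper scoring rule, a forecaster minimizes their expected score only by reporting their true probabilities.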