Predictions and promises monitor

Ranking of Prediction Authors

screenshot.report - predictions and promises monitor

Here are the predictions and promises from experts, journalists, politicians, bloggers, and other authors. Each verified prediction is given a rating based on its accuracy, complexity, and confidence level. The author ranking is compiled based on these evaluations.

How is the rating calculated?

The authors' predictions are categorized as follows:

Status: Awaiting, Completely came true, Almost came true, Did not come true, Unverifiable.

Complexity: Complex (the prediction is made from scratch or chosen from many possible options) or Regular (the prediction is a choice from 2–3 available options).

Confidence: Confident or Careful (if words like "probably," "I suppose," "most likely," "80%," etc. are used).

Based on the status of the verified prediction, a base rating (B) is assigned, and additional points can be awarded for complexity (C) and confidence (Cn):

Result                 B    C    Cn
Completely came true   7   +2   +1
Almost came true       5   +2
Did not come true      2        -1
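The scoring rules above can be sketched as a small function. This is a hypothetical illustration, not the site's actual code; in particular, it assumes the -1 applies to predictions that were stated confidently but did not come true.

```python
# Base rating (B) by status, per the table above.
BASE = {"completely": 7, "almost": 5, "failed": 2}

def prediction_score(status: str, is_complex: bool, confident: bool) -> int:
    """Rating of one verified prediction: base points plus C/Cn modifiers."""
    score = BASE[status]
    if status in ("completely", "almost") and is_complex:
        score += 2  # complexity bonus (C)
    if status == "completely" and confident:
        score += 1  # confidence bonus (Cn)
    if status == "failed" and confident:
        score -= 1  # assumed Cn penalty: confident but wrong
    return score
```

Under these assumptions, a complex, confident prediction that completely came true scores 10, while a confident prediction that did not come true scores 1.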

The author's rating is calculated as the Bayesian average (parameters C = 5, m = 5.5) of the ratings of all their verified predictions: (C·m + sum of prediction ratings) / (C + n), where n is the number of predictions. This method is fairer than the arithmetic mean because it accounts for the fact that an author who has made only a few predictions may simply have guessed them (or failed to guess). With few predictions, it is impossible to judge an author's forecasting ability. The rating therefore starts near the neutral value of 5.5 and approaches the arithmetic mean as the number of predictions grows; the more predictions, the more accurate the rating.

For example, an author with a single prediction rated 2 receives a rating of 4.91, while an author with a single prediction rated 10 receives 6.25. The difference is small, since either author could simply have guessed. However, an author with ten predictions rated 2 receives 3.16, while an author with ten predictions rated 10 receives 8.5. The gap is now much more noticeable, and we can say with far greater confidence that the second author makes better predictions than the first. The more predictions verified, the more accurate the rating.
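The Bayesian average described above fits in a few lines. This is a sketch using the stated parameters C = 5 and m = 5.5, not the site's published code:

```python
def bayesian_average(ratings, C=5, m=5.5):
    """Bayesian average of an author's prediction ratings.

    Starts at the neutral value m for an author with no predictions and
    approaches the plain arithmetic mean as the number of ratings grows.
    """
    n = len(ratings)
    return (C * m + sum(ratings)) / (C + n)
```

For instance, bayesian_average([10]) gives 6.25 and bayesian_average([10] * 10) gives 8.5, matching the examples above.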

Only authors with 10 or more verified predictions are included in the ranking. A rating above 7 is considered excellent; one below 5.5 is considered poor.

Content

Predictions do not include assumptions, hopes, instructions, duplicates, or statements that are too abstract, simple, or insignificant, nor statements without references to public sources.

All published predictions are direct quotes, translations (usually automatic), or brief summaries of the authors' main ideas. Any errors are unintentional and will be corrected as soon as they are discovered.