Predictions and promises monitor

Screenshot: predictions and promises monitor

Predictions and promises from experts, journalists, politicians, bloggers, and other authors are monitored here. Each verified prediction is assigned a rating based on its accuracy, complexity, and confidence; these ratings are then used to compile the ranking of authors.

How is the rating calculated?

The authors' predictions are categorized as follows:

Result

  • Expected.
  • Completely came true.
  • Almost came true.
  • Partially came true.
  • Did not come true.
  • Cannot be verified.

Complexity

  • Creative/Complex. If a prediction is made from scratch or chosen from a variety of options.
  • Selective/Regular. If a prediction is made based on a choice from 2-3 provided options, or it is insignificant.

Confidence

  • Confident.
  • Careful. If words like "probably", "I believe", "most likely", "80%", etc., are used.

Depending on the result, the prediction is assigned a base rating (B), and additional points can be earned for creativity (Cr) and confidence (Cn).

Result               | B | Cr | Cn
---------------------|---|----|----
Completely came true | 7 | +2 | +1
Almost came true     | 6 | +2 |
Partially came true  | 5 | +2 |
Did not come true    | 2 |    | -1

For example, suppose an expert is asked: 'Do you think Bitcoin will surpass the $70,000 mark?' and responds: 'Bitcoin doesn't stand a chance.' This prediction is regular and confident. If it comes true, its rating is 7 + 1 = 8; if it does not come true, its rating is 2 - 1 = 1.
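The scoring rules above can be sketched as a small function. This is a hypothetical illustration based on the table and the example; the names and the exact handling of edge cases are mine, not the site's.

```python
# Base rating (B) by result, per the table above.
BASE = {
    "completely": 7,  # completely came true
    "almost": 6,      # almost came true
    "partially": 5,   # partially came true
    "failed": 2,      # did not come true
}

def prediction_rating(result: str, is_complex: bool, confident: bool) -> int:
    """Base rating plus creativity (Cr) and confidence (Cn) adjustments."""
    rating = BASE[result]
    if is_complex and result in ("completely", "almost", "partially"):
        rating += 2  # creativity bonus for predictions that came true
    if confident:
        if result == "completely":
            rating += 1  # confidence bonus
        elif result == "failed":
            rating -= 1  # confident but wrong: penalty
    return rating

# The Bitcoin example: a regular (non-complex), confident prediction.
print(prediction_rating("failed", is_complex=False, confident=True))      # → 1
print(prediction_rating("completely", is_complex=False, confident=True))  # → 8
```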

Statistics are calculated for each author, each tag, and the project as a whole:

  • "Authors". Number of authors.
  • "Predictions". The total number of predictions.
  • "Verified". The number of predictions whose outcome is known, i.e. whether they came true or not. Expected predictions and predictions that cannot be verified are excluded. The indicators below are calculated from verified predictions only.
  • "Came true". Percentage of predictions that came true (completely, almost or partially).
  • "Complex". Percentage of complex predictions.
  • "Confident". Percentage of confident predictions.
  • "Rating". Author's total rating.
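The indicators above could be computed along these lines. This is a hypothetical sketch; the field names and data shape are my assumptions, not the site's actual schema.

```python
def author_stats(predictions):
    """Compute the per-author indicators from a list of predictions.

    Each prediction is a dict with keys:
      'result'    - one of: expected, completely, almost, partially,
                    failed, unverifiable
      'complex'   - bool, True for creative/complex predictions
      'confident' - bool, True for confident predictions
    """
    # Verified = outcome known; exclude expected and unverifiable ones.
    verified = [p for p in predictions
                if p["result"] not in ("expected", "unverifiable")]
    n = len(verified)
    came_true = sum(p["result"] in ("completely", "almost", "partially")
                    for p in verified)
    return {
        "predictions": len(predictions),
        "verified": n,
        "came_true_pct": 100 * came_true / n if n else 0.0,
        "complex_pct": 100 * sum(p["complex"] for p in verified) / n if n else 0.0,
        "confident_pct": 100 * sum(p["confident"] for p in verified) / n if n else 0.0,
    }
```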

The author's rating is calculated as the Bayesian average (with parameters C = 5 and m = 5.5) of the ratings of all their predictions: rating = (C × m + sum of the prediction ratings) / (C + number of predictions). This method is fairer than the arithmetic mean because it accounts for the fact that an author with only a few predictions could simply have guessed (or failed to guess) by chance; a handful of predictions is not enough to judge an author's forecasting ability. The rating therefore starts near the neutral value of 5.5 and approaches the arithmetic mean as the number of predictions grows. The more predictions, the more accurate the rating.

For example, an author with a single prediction rated 2 receives a rating of 4.91, while an author with a single prediction rated 10 receives 6.25. The difference is insignificant, since either result could be luck. However, an author with ten predictions rated 2 receives 3.16, while an author with ten predictions rated 10 receives 8.5. The gap is now much more noticeable, and we can say with greater confidence that the second author makes better predictions than the first.
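The calculation can be sketched in a few lines; the formula is reconstructed from the parameters and worked examples above, and the function name is mine.

```python
def bayesian_rating(ratings, C=5, m=5.5):
    """Bayesian average of a list of prediction ratings.

    Shrinks the result toward the neutral value m; the pull weakens as
    the number of predictions grows, approaching the arithmetic mean.
    """
    n = len(ratings)
    return (C * m + sum(ratings)) / (C + n)

print(bayesian_rating([10]))       # one perfect prediction  → 6.25
print(bayesian_rating([10] * 10))  # ten perfect predictions → 8.5
```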

Only authors with 5 or more verified predictions are included in the ranking. The results for some authors may still be inaccurate, as many of their predictions have not yet been collected. A rating above 7 is excellent; below 5 is poor.

A low rating does not mean the author is a fool. Perhaps their successful predictions have not yet been added to the database. Perhaps they relied on incorrect assumptions. Perhaps they are good at analyzing and explaining past events. Moreover, making accurate predictions is genuinely difficult. Judge for yourself.

What predictions are disregarded?

  • Assumptions, hopes, and directives. For example: "I think Bitcoin might cross the $70,000 mark", "I hope that will happen", "We must complete the tasks we have set." Such statements are not considered predictions or promises.
  • Too obvious. Predictions with no real choice between options.
  • Duplicates. If the author has already made the same prediction.
  • Too abstract or vague. When it is impossible to understand what the author meant.
  • Unsourced. If there are no links to public sources.
  • Too insignificant.

Content

All published predictions are direct quotes, translations, or brief summaries of the authors' main ideas. Any errors made were unintentional and are corrected as soon as they are discovered.

Feedback

email: msg@screenshot.report