The Corona pandemic underlines the continued importance of improving foresight capabilities. Over the past decade, we have been confronted with an abundance of unexpected situations: the global refugee and migration crisis, the annexation of Crimea by Russia, the election of Donald Trump, and Brexit – all of which caught states and societies off guard. It is therefore no surprise that uneasiness about the volatility of global affairs is growing, particularly in politics. No policy-maker feels comfortable making decisions that affect the security of the population and the welfare of the nation – especially under time pressure and with great uncertainty about the effects of those decisions.
Fortunately, research on how well people anticipate future events shows that it does not have to be this way. But first a caveat: representative statistical studies show that experts fare little better at prediction than would be expected from mere chance. About half of their expectations turn out to be correct; the other half do not. Consulting experts is therefore not enough to obtain better predictions about the future.
However, it is still possible to increase the number of accurate predictions – so-called forecasts – of concrete events. The current pandemic would be one such event. A carefully designed and methodically controlled forecasting tournament has demonstrated how forecast accuracy can be increased effectively. More than 20,000 participants – laymen and experts alike – took part in the Good Judgment Project, answering questions about conceivable future events. At present, hypothetical questions could include, for example: Will Donald Trump win the US presidential election in November? Will the Dow Jones close above 20,000 points at the end of this year? Will North Korea conduct a nuclear test in 2020? Or simply: Will there be another global pandemic within the next 12 months?
At the end of the period to which a question refers, the participants' forecasts are checked against what actually happened. Analysing the results highlights differences: some participants are correct more often than others. This is not due to clairvoyant powers. Rather, good forecasters proceed methodically. They actively look for information that could be relevant to the question at hand. They take into account information that contradicts their own opinions. And if newly available facts require it, they adjust their assessments. Apparently, the most important prerequisite for accurate forecasting is to learn systematically from mistakes. Forecasters with above-average performance like to exchange information with their peers – about success factors, but also about failures and misjudgements. In practice, this means rigorous analysis of one's own forecasts and continuous monitoring of successes as well as failures.
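Forecasting tournaments of this kind typically score probabilistic forecasts with the Brier score: the squared difference between the stated probability and the binary outcome, averaged over all questions. The text does not spell out the scoring rule, so the following is a minimal illustrative sketch with made-up numbers:

```python
def brier_score(forecast_prob, outcome):
    """Squared error between the predicted probability and the 0/1 outcome.

    Lower is better; a constant 50% forecast always scores 0.25.
    """
    return (forecast_prob - outcome) ** 2

# Hypothetical forecaster: probabilities stated before resolution,
# outcomes recorded afterwards (1 = event occurred, 0 = it did not).
forecasts = [0.8, 0.3, 0.9, 0.6]
outcomes = [1, 0, 1, 0]

scores = [brier_score(p, o) for p, o in zip(forecasts, outcomes)]
mean_score = sum(scores) / len(scores)
print(round(mean_score, 3))  # -> 0.125
```

A forecaster's track record is then simply this mean score over many resolved questions, which is what makes the systematic comparison of participants described above possible.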
If above-average forecasters are combined into teams, the accuracy of their aggregate forecasts increases even further. Of course, this is no guarantee that every nasty surprise can be anticipated, even if forecasting tournaments are continued and developed further. But as the Good Judgment Project shows, a significant increase in forecast accuracy can be achieved: the goal of the project, initially set up for four years, was to exceed the average accuracy of a control group's forecasts by 50 per cent – a goal reached after only two years. The number of unforeseen events can thus be reduced.
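The aggregation step can be sketched in the same spirit. A simple baseline is to average the team members' probabilities; tournament research additionally found that "extremizing" the average – pushing it away from 0.5 – often sharpens aggregate forecasts, because averages of many opinions tend to be underconfident. The team numbers and the exponent below are illustrative assumptions, not values from the project:

```python
def mean_prob(probs):
    # Simple unweighted average of the team's probability estimates.
    return sum(probs) / len(probs)

def extremize(p, a=2.5):
    # Push an averaged probability away from 0.5; the exponent a
    # is an illustrative choice, not a value from the project.
    return p ** a / (p ** a + (1 - p) ** a)

# Hypothetical estimates from three team members for one question.
team = [0.7, 0.8, 0.65]

avg = mean_prob(team)
print(round(avg, 3))         # -> 0.717
print(extremize(avg) > avg)  # -> True: the aggregate is sharpened
```

Note that `extremize` leaves 0.5 unchanged and pulls probabilities below 0.5 further down, so it sharpens the aggregate in whichever direction the team already leans.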
But improving forecast accuracy, and thus reducing the number of unexpected events, is only one side of the coin. Even if future events can be predicted more often, there is no automatic guarantee that the appropriate preparations will be made. This is due to the process of policy-making: political as well as electoral majorities are needed to decide on and implement far-reaching and costly measures. In crisis situations such as the current pandemic, this is much easier, as we are experiencing in real time. Conversely, for less visible events in the future, it is notoriously difficult to mobilise the necessary majorities for far-reaching precautionary measures. Even scientifically well-founded demands for drastic measures are difficult to implement, as has been observed in recent years with regard to health studies and climate research.
However, it would be wrong to blame political leaders alone for the lack of preparation. It should not be forgotten that predictions can be wrong – there are plenty of examples of this as well. Time-consuming and costly preparations to ward off an anticipated pandemic could prove redundant if the pandemic only occurs in 10, 20, or 30 years. Decision-makers would have to take the blame for that as well. Deciding which measures to take in preparation for a forecast nasty surprise – whose impact is hardly knowable in advance – requires balancing conflicting interests. The outcome depends very much on expectations about public support for these measures – or opposition to them.
Conflicts about which priorities to set for future political action will be unavoidable, even after the Corona pandemic. At the heart of these conflicts often lie different assumptions about the future. Even if there is no automatic link between good predictions and political action, the standards of rigorous analysis and the continuous monitoring of successes as well as failures can help inform decisions in such conflicts, thus contributing to more targeted preparation.
The editor of the SWP study “While We Were Planning”, Lars Brozus, speaks about the potential benefits of scientifically grounded foresight.
Unexpected Developments in International Politics. Foresight Contributions 2018