The quality of reasoning in the intelligence game
Keynote by Professor Mark Burgman
Almost all risk analyses involve expert opinions about facts, because the necessary data are unavailable and decisions are imminent. Groups of experts, broadly defined, routinely and substantially outperform individual experts when they make forecasts or estimate facts. The ACE program supported by IARPA (the US Intelligence Advanced Research Projects Activity) established this robust result in a wide variety of geopolitical contexts, mirroring similar results from applications of structured expert judgement in engineering, ecology, medicine and transport. It also established that some individuals are much better at making judgements and predictions than others, and that this ability does not correlate with status, experience or reputation. A natural question then arises: why are some individuals and some groups better than others? This in turn raises the question: how do we measure the quality of reasoning that leads to a judgement or prediction?
This presentation outlines the steps taken by applied philosophers and others to identify the critical elements of good reasoning, and discusses approaches to measuring the quality of reasoning in reports or arguments in support of a claim.
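The claim that groups outperform individuals can be illustrated with a toy simulation, not drawn from the talk itself: simulated experts each report a noisy probability for a set of events, and forecasts are scored with the Brier score (mean squared error between the stated probability and the 0/1 outcome). All parameters here (noise level, numbers of experts and events) are illustrative assumptions; the point is only that averaging forecasts removes idiosyncratic noise.

```python
import random

def brier(forecast, outcome):
    """Squared error between a probability forecast and a 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

random.seed(0)
n_events, n_experts = 200, 10

# Hypothetical events with known true probabilities, and realised outcomes.
true_probs = [random.random() for _ in range(n_events)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

def noisy(p):
    """An expert's forecast: the true probability plus independent noise, clipped to [0, 1]."""
    return min(1.0, max(0.0, p + random.gauss(0, 0.2)))

expert_forecasts = [[noisy(p) for p in true_probs] for _ in range(n_experts)]

# Mean Brier score across individual experts.
individual = sum(
    brier(f, o)
    for fc in expert_forecasts
    for f, o in zip(fc, outcomes)
) / (n_experts * n_events)

# Brier score of the simple group-average forecast.
group_fc = [sum(fc[i] for fc in expert_forecasts) / n_experts
            for i in range(n_events)]
group = sum(brier(f, o) for f, o in zip(group_fc, outcomes)) / n_events

print(f"individual mean Brier: {individual:.3f}")
print(f"group-average Brier:   {group:.3f}")
```

Because each expert's error has an independent noise component, averaging ten forecasts shrinks the noise variance roughly tenfold, so the group forecast scores better than the typical individual. Real elicitation protocols use more sophisticated aggregation, but the variance-reduction mechanism is the same.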