I want to test whether communication such as deliberation (e.g. via the comment section) can improve aggregate forecasting accuracy.
Current evidence shows that communication can reliably improve *individual* accuracy, but suggests that its effect on aggregate "crowd" accuracy is unreliable, e.g. on platforms such as Manifold Markets or Metaculus.
However, I believe the methods used in this prior research generated social dynamics driven primarily by numeric anchoring effects, not genuine information exchange.
I will obtain RCT evidence on the effects of deliberation using a platform and sample population designed to generate "real" deliberation, i.e. dynamics driven by information exchange rather than mere numeric anchoring.
To develop this research I will seek input from existing platforms to identify the practical questions most relevant to forecast aggregators.
I have successfully collected estimation and forecasting data on my own custom-built platforms since 2014, recruiting participants from web sources such as Amazon Mechanical Turk, and I have been building web projects since about 1998.
My research has advanced basic theory about the effect of communication on the 'wisdom of the crowd.' The practical relevance of my prior work is demonstrated by inclusion in popular press such as the Harvard Business Review "Guide to Critical Thinking."
Not every experiment I tried has been successful: showing that communication is unreliable has proven straightforward, but showing how it can be reliably beneficial has proven difficult.
I am optimistic about this project because it takes advantage of recent theoretical advances to test for the presence of meaningful deliberation: the theory predicts signature effects that should appear only when social influence is driven primarily by numeric anchoring, so their absence indicates genuine information exchange. The analysis is designed such that even 'null' results will be informative.
If funded, I will seek guidance on design and recruitment.
I will use the funding to provide prize money for a forecasting competition on a platform that enables randomized controlled experiments. The requested amount is consistent with the recommendation of an experienced practitioner.
Your support will allow me to supplement my ongoing laboratory research with "real world" data that will make the results more directly applicable to practice.
I currently have institutional funding to pursue this project with participants from Amazon Mechanical Turk (MTurk). That institutional funding will allow me to develop the platform and collect data sufficient to demonstrate basic theoretical principles of group behavior.
However, my current plan is limited to a few basic scenarios and will not test the types of features likely to appear in a web platform (e.g. a comments section), which can introduce large variations in dynamics depending on their design. Moreover, if the MTurk experiment fails, we won't know whether deliberation is inherently unreliable or whether crowd workers simply aren't engaged in the task.
Your support will allow me to (1) examine questions specifically relevant to online crowdsourced forecasting, with (2) a population representative of the users of online forecasting platforms.
Application status across various funders