Futarchy is an old idea from Robin Hanson. It may be unrealistic for governments any time soon but it can, with the right UI, work great for small groups. Daniel Reeves and Bethany Soule call it "prognootling". It's their process for resolving a disagreement that involves not conflicting preferences but conflicting predictions. (They use auctions for the case of conflicting preferences, but that's another story.)
Say we're deciding whether to submit a Manifund proposal (this is a true story!). We first agree on some numbers, in this case the following:
If we submit a proposal and we're accepted, we'll personally derive $2k of utility, net of our own efforts (and the funding)
If we're rejected, we'll have wasted $100 of effort putting the proposal together
If we don't submit a proposal, that's $0 of utility
How did we pick those numbers? Almost entirely proctogenically. We did independently pick 80% confidence intervals on the developer time required, averaged them, and factored that into the $2k number. But we had no disagreement about any of that. The disagreement was about the likelihood of actually being accepted. If you math this out, it's worth submitting this proposal as long as the probability of getting accepted is at least 100/2100, about 4.8% (the point where the expected value p·$2000 − (1−p)·$100 turns positive). But what is that probability?
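Here's a minimal sketch of that break-even arithmetic in Python; the function name and signature are just for illustration, not part of any existing tool:

```python
def breakeven_probability(u_accept, u_reject, u_skip=0):
    """Acceptance probability above which submitting beats not submitting.

    Solves p*u_accept + (1-p)*u_reject >= u_skip for p.
    """
    return (u_skip - u_reject) / (u_accept - u_reject)

# Our agreed-on numbers: +$2000 if accepted, -$100 if rejected, $0 if we don't submit.
print(breakeven_probability(2000, -100))  # ~0.048, i.e., ~4.8%
```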
Daniel Reeves and his friend and potential partner in this project, Jake Coble, have different takes. Daniel's sense is that it's at least 50/50 and Jake's sense is it's at most a few percent. So we resolved that like so:
Conditional on choosing to submit the proposal, Daniel puts $1 on "we get accepted" to Jake's $1 on "we get rejected".
(Conditional on choosing not to submit, no bets are needed.)
The implied market probability for those preliminary bets is 50%, which makes it a no-brainer to submit.
Jake bumps his bet up to $20 on rejection, which pushes the market probability below 5% for an implied optimal action of not submitting.
Daniel bumps his bet to $2, for a market probability of 9% and an implied optimal action of submitting after all.
Jake is now happy with this. We agree to submit the proposal: Daniel will owe Jake $2 if we're rejected, and Jake will owe Daniel $20 if we're accepted.
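For reference, the implied market probability in each round is just the stake on acceptance divided by the total pot. A quick sketch of the three rounds above (again, names are illustrative only):

```python
def implied_probability(stake_on_accept, stake_on_reject):
    """Acceptance probability implied by a bet: the 'accept' stake over the total pot."""
    return stake_on_accept / (stake_on_accept + stake_on_reject)

# The betting war, round by round: (Daniel's stake on acceptance, Jake's stake on rejection)
for daniel, jake in [(1, 1), (1, 20), (2, 20)]:
    print(f"${daniel} vs ${jake}: implied probability {implied_probability(daniel, jake):.1%}")
# -> 50.0%, then 4.8%, then 9.1%; compare each against the ~4.8% break-even threshold above
```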
Clearly Jake didn't actually need much convincing in this case. You can imagine higher-stakes decisions where one person wants to do something their partner thinks is too risky. In that case the wager lets the risk-averse person hedge the risk: they wouldn't have wanted to risk it on their own, but the betting war pushed the wagered amounts high enough that if their fears come to pass, they'll get a nice payout from their partner. A reified told-you-so.
You can read a story like that here: https://doc.dreev.es/prognootle
And the proof of concept that we've used for these decisions is here: http://dreev.es/prognootle (world-editable; please don't actually edit it)
The objective of this project is to turn that into a tool that a small group can use to make decisions like this.
Daniel Reeves has a PhD in computer science (algorithmic game theory / AI) and has published academic papers on prediction markets. He is also a cofounder of Beeminder. He has built various tools for group decision-making such as http://bid.yootl.es and https://biddybuddy.dreev.repl.co.
$2500 will pay for the projected 35 hours of developer time. If our estimates are off, we'll eat the cost of the additional time needed to complete the task.