What We Learned from Blowing our Budget
Building a DAO involves coordinating and incentivizing a group of people toward a goal. For MetricsDAO, this group includes both our core team of contributors, who run operations, and the larger community of MetricsDAO analysts who take part in our three-step process for delivering on-chain analytics to web3 projects at scale.
Given the decentralized and permissionless nature of a DAO, participants need to be driven by incentives, mostly in the form of capital. One of the biggest risks in building a DAO, then, is pushing either your people or your capital to their limits, which in turn stagnates the DAO’s growth.
Recently, MetricsDAO pushed its available capital near its limits due to incentive misalignment, particularly around payments for analytics outputs. In short, we blew our budget. In this blog post we’ll explain how things were set up, why they failed, and how we’re experimenting with a new payment structure to mitigate these risks in the future.
Two Income Streams
Before diving in, it is important to understand how we pay for participant activities and incentivize contributors toward our mission of delivering quality analytics at scale.
We have two types of programs: Partner Programs and Self-Funded Experiments.
Partner Programs are funded through grants and deals with our ecosystem partners like Aave, Uniswap, and Messari. The criteria and desired outcomes for these analytics programs are decided mainly by our partners, so the budget is also partner-driven.
Self-Funded Experiments are funded via our loan from Flipside, with a budget of USD 5k per month for Season 2. Examples include our Hacks, Scandals, and Scams program and the Optimism Joke Race. Here, criteria and outcomes are driven mainly by the MetricsDAO team.
Three Incentivized Components to On-Demand Analytics
All programs feed into the three components of our On-Demand Analytics Process, namely Community Brainstorming, Analytics, and Peer Review.
Community Brainstorming, where anyone can suggest questions to answer with analytics, pays out USD 10 per question used. These questions then feed our Analytics Programs, wherein analysts create analytics and tools to answer them.
Analytics submissions are accepted on an ongoing basis within a predefined time range (usually a week), and the number of submissions per question is uncapped, at USD 50-250 per qualified submission.
Submissions are then Peer Reviewed by a dynamic group of top analysts who get paid USD 25 per hour.
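Putting the three components together, here is a minimal sketch in Python of the cost model under this structure. The rates come from the descriptions above; the function and the example counts are purely illustrative. Note that the submission term is the only unbounded one:

```python
# Hypothetical cost model for one analytics program under the old structure.
# Rates are taken from the post; the example counts below are illustrative.

QUESTION_REWARD = 10     # USD per community question used
SUBMISSION_REWARD = 150  # USD, midpoint of the 50-250 range per qualified submission
REVIEW_RATE = 25         # USD per peer-review hour

def program_cost(questions_used: int, qualified_submissions: int, review_hours: float) -> float:
    """Total payout for one program; only qualified_submissions has no cap."""
    return (questions_used * QUESTION_REWARD
            + qualified_submissions * SUBMISSION_REWARD
            + review_hours * REVIEW_RATE)

# 10 questions, 40 qualified submissions, 20 review hours -> $6,600:
# the uncapped submission term alone can blow past a USD 5k monthly budget.
print(program_cost(questions_used=10, qualified_submissions=40, review_hours=20))
```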
How We Blew Our Budget
Where our incentives fall apart is the uncapped payout for analytics submissions.
We went over budget in two of our three analytics programs so far, and the remaining one is in danger of going over budget too.
Digging deeper into the distribution of scores given to submissions over time, a large share of our spend has gone to work we consider subpar: C- to B+ work.
Figure: Score Distribution (Aave, Hacks, True Freeze Programs)
Considering that our goal at MetricsDAO is to empower web3 projects by delivering the best quality on-chain analytics, we need to align our incentives so that they are biased toward the best submissions.
New Payment Structure: Per Challenge Budgets & Payouts
We are trying out a new experiment: Per-Challenge Budgets and Payouts. The steps are to (1) set a budget per challenge, (2) peer review qualified submissions, and (3) stack rank the qualified submissions. Each challenge is parameterized by a budget, percentage cutoffs over the ranked submissions, and a budget weighting for each cutoff.
The following table shows a specific example for our upcoming Aave bounty, with a total budget of $1,500 (paid in AAVE tokens). Each qualified submission falls into a tier based on its rank, and each tier’s share of the budget is split evenly among its submissions. With 40 qualified submissions, the payments break down as follows:

| Tier | Rank cutoff | Share of budget | Submissions | Payout each |
| --- | --- | --- | --- | --- |
| 1 | Top 5% | 50% ($750) | 2 | $375.00 |
| 2 | Next 15% | 35% ($525) | 6 | $87.50 |
| 3 | Next 25% | 15% ($225) | 10 | $22.50 |
| 4 | Remaining 55% | 0% | 22 | $0.00 |
Tier 1 is only available to submissions graded 11 or higher. The maximum payout per submission is capped at $500, with any leftover budget distributed proportionally to the lower tiers.
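To make these mechanics concrete, here is a minimal sketch in Python of the payout computation, assuming the $500 cap applies per submission and simplifying the proportional rule by rolling any capped excess into the next tier down. The parameter values come from the Aave example; the function itself is illustrative, not a finalized implementation:

```python
# Sketch of per-challenge payouts: a fixed budget is split across tiers by
# weighting, each tier's share is divided evenly among its submissions, and
# payouts are capped per submission, with the excess rolling to lower tiers.

BUDGET = 1500.0      # USD, total challenge budget
MAX_PAYOUT = 500.0   # USD, per-submission cap
# (fraction of ranked submissions, fraction of budget) per tier, top tier first
TIERS = [(0.05, 0.50), (0.15, 0.35), (0.25, 0.15), (0.55, 0.00)]

def tier_payouts(n_submissions: int) -> list[tuple[int, float]]:
    """Return (submission count, payout each) for every tier."""
    counts = [round(frac * n_submissions) for frac, _ in TIERS]
    payouts = []
    carry = 0.0  # budget freed up by the per-submission cap
    for (_, weight), count in zip(TIERS, counts):
        share = BUDGET * weight + carry
        carry = 0.0
        each = share / count if count else 0.0
        if each > MAX_PAYOUT:
            carry = share - MAX_PAYOUT * count  # simplification of the proportional rule
            each = MAX_PAYOUT
        payouts.append((count, each))
    return payouts

# With 40 qualified submissions: 2 at $375.00, 6 at $87.50, 10 at $22.50, 22 at $0.00.
for tier, (count, each) in enumerate(tier_payouts(40), start=1):
    print(f"Tier {tier}: {count} submissions at ${each:.2f} each")
```

Note that the grading threshold for Tier 1 (a score of 11 or higher) is not modeled here; a full implementation would also need to account for it.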
Our goal is to have reliable budgets on a reliable schedule, and analytics programs whose difficulty is market-regulated. We will know this experiment is working if we can fund programs within their budgets while simultaneously improving the quality of qualified and top-ranked submissions.
Join us for a community call next week on Tuesday August 23rd at 4pm ET | 8pm UTC, where we will further discuss this new experiment and take any questions. We look forward to hearing your feedback and ideas!