How should the Generalized Reviewer network be paid? Hourly? Per review?
Have bounty ops staff choose three trusted people who commit to completing all of the bounties assigned to them. Each of these lead reviewers reviews as many of the bounties as they have time for, and they also coordinate the rest of the reviewers, who
- Are paid by the bounty, and
- Are allowed to commit to reviewing a smaller number of bounties, possibly as few as one. These people get assigned a batch of 1-5 bounties, depending on what they commit to from their end and on the level of trust the lead reviewers place in them.
The per-bounty pay should be based on a benchmark average review time, set at the best balance between a fair, helpful review and eating too much into the bounty prize money.
As a reviewer and analyst, my number is 10-15 minutes, so let’s say 4 bucks per bounty. That’s my vote, and it may not be the right formula, but decide this and then pay accordingly.
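To make the arithmetic explicit, here is a minimal sketch of the formula I have in mind. The 15-minute benchmark and the 16 USDC/hour rate are my placeholder assumptions, not decided numbers:

```python
# Back-of-the-envelope sketch of the per-bounty pay idea.
# BENCHMARK_MINUTES and HOURLY_RATE_USDC are placeholder
# assumptions, not agreed-upon figures.

BENCHMARK_MINUTES = 15   # target time for a fair, helpful review
HOURLY_RATE_USDC = 16.0  # implied hourly rate; tune to taste

def per_bounty_pay(minutes: float = BENCHMARK_MINUTES,
                   hourly_rate: float = HOURLY_RATE_USDC) -> float:
    """Flat per-bounty pay = benchmark review time * hourly rate."""
    return round(minutes / 60 * hourly_rate, 2)

print(per_bounty_pay())  # 4.0 USDC, matching my 10-15 minute / $4 vote
```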
It sucks that if you pay by the hour, reviewers are incentivized to pad their hours. I don’t pad mine, but I am constantly concerned that I am claiming more money than my peers because I am too thorough. Unfortunately, paying by the bounty means that some uncaring schmuck could just randomly score people without really even reading the bounties, and that is unfair. This is why you have to feed them a small number of bounties at a time until they become trustworthy. You also have to pay attention to what they are doing.
Who should be a Generalized Reviewer? What should be the requirements?
I believe the bar for being eligible as an entry-level or candidate reviewer should be much lower than the current 2000 xMETRIC. A huge part of the analyst’s job is to make their analysis understandable, so it is a good thing to have people who may not be expert analysts but who can still tell whether an analysis is worthwhile. I think the requirement should be that you have successfully completed one bounty that got paid. Reviewers can be assessed, and priority should be given to better reviewers. Some things are better done in-house, and farming the reviewing job out to networks seems dubious to me when it is MDAO that is solely responsible for delivering a fair grade in a reasonably fast amount of time.
I have another idea that I am kicking around, and I think I like it. Given a bounty that is to be graded, only those who actually submitted that bounty would be eligible to review it. They would be required to grade their own submission, but that grade would not count toward their score. I think it is a plus, nay almost a necessity, that a person has worked the bounty, so that they know what the queries are and what the filters should be and will notice if figures seem wrong. Just an idea, though …
What should be the process for losing the right to review?
The choice of who reviews should be at the sole discretion of bounty ops. They should rank reviewers and prioritize based on the quality of their work. You just can’t outsource, decentralize, or automate this, I don’t think. Slashing xMETRIC sounds like something you do to someone who has done something immoral; I would just not rehire them. If they cheated, then yeah, slash their xMETRIC. But just curious: surely everyone has thought about the fact that you can send someone a negative Badger badge just as easily as a positive one, so that might be better than slashing xMETRIC.
What Specialized Reviewer groups should exist in the future? Who should be on them?
I don’t know from specialized review groups, but I do like tiers based on quality and experience. Use Badger for this, since badges are easy to revoke and would make it simple to see the list of folks in each group. The amount of xMETRIC could be a factor in determining the tier levels. I have another idea which, if implemented, could help with the quality and consistency of reviewing: have reviewers scored from one to five. This would be open to any MetricsDAO participant. My idea is that you have an Amazon-reviews-type section, where each bounty has a set of reviews. Most of these would be unofficial. Everyone could add reviews for no pay, but their participation, and the quality of their reviews, could enhance the likelihood of them getting to review bounties themselves.
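To make the shape of that idea concrete, here is a rough sketch of the review section as a data structure, assuming numeric 1-5 scores and an official/unofficial flag. Every name and field here is illustrative, not a spec:

```python
# Sketch of the Amazon-style review section: each bounty carries a
# list of reviews, most of them unofficial, each scoring the work
# from 1 to 5. All classes and fields are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str
    score: int              # 1-5, like an Amazon star rating
    comment: str
    official: bool = False  # only official reviews count toward the grade

@dataclass
class Bounty:
    title: str
    reviews: list[Review] = field(default_factory=list)

    def official_score(self) -> float | None:
        """Average of official review scores, or None if ungraded."""
        graded = [r.score for r in self.reviews if r.official]
        return sum(graded) / len(graded) if graded else None

b = Bounty("DEX volume analysis")
b.reviews.append(Review("alice", 4, "Solid queries, clear charts", official=True))
b.reviews.append(Review("dave", 5, "Great walkthrough"))  # unofficial, unpaid
print(b.official_score())  # 4.0
```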
How should they be paid?
In cold hard USDC. Going with my tiered system, you would have to decide how much more per hour each tier’s work is worth and raise the per-bounty pay based on that: pay = expected hours to be spent × the hourly wage you think that tier is worth.
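Carrying the earlier sketch forward, here is what that tiered formula could look like. The tier names and hourly rates are made-up illustrations, not proposals for specific numbers:

```python
# Sketch of the tiered pay formula: pay = expected hours * tier hourly rate.
# Tier names and rates below are illustrative assumptions only.

TIER_HOURLY_RATE_USDC = {
    "candidate": 16.0,
    "trusted": 24.0,
    "lead": 32.0,
}

def tiered_bounty_pay(expected_hours: float, tier: str) -> float:
    """Per-bounty pay scaled by the reviewer's tier."""
    return round(expected_hours * TIER_HOURLY_RATE_USDC[tier], 2)

# A 15-minute (0.25 hour) review at each tier:
for tier in TIER_HOURLY_RATE_USDC:
    print(tier, tiered_bounty_pay(0.25, tier))
# candidate 4.0, trusted 6.0, lead 8.0
```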
How should membership be maintained?
With Badger, of course! Bounty ops determine that a reviewer is worthy of a promotion and assign that person a new tier level. This person then maybe gets paid more or gets to grade more advanced bounties or something.
I think we need a proper review form that takes each scoring category and breaks it down into a few subcategories. Reviewers would answer the subcategory questions and then score and comment, doing this for each category, and they should also make one overall comment about the bounty. This requirement, plus only giving them a small batch to work with at a time, would force them to slow down a bit. The subcategory answers would also raise alarms about the quality of a reviewer if their answers were way off from everyone else’s.

If an analyst has a complaint about a review, they could go to the reviewer first with their complaint. The reviewer then has a chance to explain their grade to the analyst. If the reviewer is unavailable or unwilling to respond, or if the answer they give is unsatisfactory, the analyst could then submit a ticket for regrading. This would also be another way to screen out bad reviewers.
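As a rough sketch of how those alarms could work, assuming the subcategory answers are numeric scores: flag any reviewer whose answers sit far from the group mean. The data shape and the threshold here are my assumptions for illustration:

```python
# Rough sketch of the "alarm" idea: for each subcategory question,
# compare a reviewer's score against the group average and flag
# reviewers who are consistently way off. Data shape and threshold
# are illustrative assumptions.

from statistics import mean

def flag_outlier_reviewers(scores: dict[str, dict[str, float]],
                           threshold: float = 1.5) -> list[str]:
    """scores maps reviewer -> {subcategory: score}. A reviewer is
    flagged when their average gap from the group mean exceeds the
    threshold (in score points)."""
    subcats = next(iter(scores.values())).keys()
    group_mean = {s: mean(r[s] for r in scores.values()) for s in subcats}
    flagged = []
    for reviewer, answers in scores.items():
        avg_gap = mean(abs(answers[s] - group_mean[s]) for s in subcats)
        if avg_gap > threshold:
            flagged.append(reviewer)
    return flagged

reviews = {
    "alice": {"methodology": 4, "clarity": 5, "accuracy": 4},
    "bob":   {"methodology": 4, "clarity": 4, "accuracy": 5},
    "carol": {"methodology": 1, "clarity": 1, "accuracy": 1},  # barely read it
}
print(flag_outlier_reviewers(reviews))  # ['carol']
```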
I know I have a lot of suggested implementations of things, but I actually think this could be rolled out pretty quickly, with some manual bookkeeping required for a while and the process smoothing out over time. I have zero understanding, at this point, of what the app will and won’t be able to do, so I realize that even if you love my ideas, they might not be workable given decisions already made about the process.