Is it feasible to review OSS projects or provide a reporting mechanism?

As other posts have pointed out, there are a lot of low-quality projects on the testnet, including automatically generated ones. Do we need a human review mechanism to evaluate projects, if only to filter out the ineffective ones?

In addition, providing a way to report ineffective projects seems likely to be effective, but it could also be abused (e.g., deliberately creating low-quality projects just to report them), so reporting rewards may need to be capped. Alternatively, the mechanism could be restricted to genuine community volunteers, with a per-reporter cap on rewards.
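
To make the abuse concern concrete, here is a minimal sketch of one possible mitigation: only verified volunteers may report, and each reporter's rewards are capped per period. All names and numbers here (`Reporter`, `REWARD_PER_VALID_REPORT`, `PERIOD_CAP`) are hypothetical; this is not how tea actually implements anything.

```python
from dataclasses import dataclass

# Hypothetical parameters -- not part of any actual tea mechanism.
REWARD_PER_VALID_REPORT = 10  # points per confirmed report
PERIOD_CAP = 50               # max points one reporter can earn per period

@dataclass
class Reporter:
    username: str
    is_verified_volunteer: bool
    earned_this_period: int = 0

def reward_for_report(reporter: Reporter, report_confirmed: bool) -> int:
    """Reward for one report under a capped, volunteer-only policy."""
    if not reporter.is_verified_volunteer or not report_confirmed:
        return 0
    # The cap bounds the payoff of farming reports against self-created projects.
    remaining = max(PERIOD_CAP - reporter.earned_this_period, 0)
    reward = min(REWARD_PER_VALID_REPORT, remaining)
    reporter.earned_this_period += reward
    return reward
```

The point of the cap is that any report-farming loop has bounded upside: once a reporter hits the cap, creating more throwaway projects to report yields nothing for the rest of the period.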

Similarly, we may be able to examine characteristics of a project developer's GitHub account: account creation time, commit timing and patterns, interaction history (e.g., whether the account has opened pull requests on other projects), the number of projects under the account, and so on. I don't know whether all of these signals are obtainable, or whether there are targeted strategies to evade such checks, but at the very least this could significantly raise the cost of a multi-account strategy.
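
As a rough illustration, the sketch below pulls a few of these signals from GitHub's public REST API. The endpoints (`/users/{username}` and `/search/issues`) are real, but the suspicion thresholds are purely illustrative assumptions, and a determined attacker could age accounts or farm interactions to get past them.

```python
import requests
from datetime import datetime, timezone

GITHUB_API = "https://api.github.com"

def account_signals(username: str, token: str | None = None) -> dict:
    """Fetch a few public signals about a GitHub account via the REST API."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"

    user = requests.get(f"{GITHUB_API}/users/{username}", headers=headers).json()
    prs = requests.get(
        f"{GITHUB_API}/search/issues",
        params={"q": f"author:{username} type:pr"},
        headers=headers,
    ).json()

    created = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))
    return {
        "age_days": (datetime.now(timezone.utc) - created).days,
        "public_repos": user.get("public_repos", 0),
        "followers": user.get("followers", 0),
        "prs_authored": prs.get("total_count", 0),
    }

def looks_suspicious(sig: dict) -> bool:
    """Hypothetical thresholds: young account with no external interaction."""
    return sig["age_days"] < 90 and sig["prs_authored"] == 0 and sig["followers"] == 0
```

Even if any single threshold is easy to game, an attacker running hundreds of accounts has to satisfy all of them on every account, which is exactly the cost increase mentioned above.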

Finally, the existing teaRank algorithm appears to assign a project an initial value that already meets the reward threshold, which may be exploitable, although screening of projects should reduce this effect.
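
If that reading is correct, the issue fits in a few lines: when new registrations start at or above the threshold, mass registration pays off immediately, whereas starting at zero (or below threshold with a probation period) removes the instant payoff. This is a toy model only; the constants and the `eligible_for_rewards` check are assumptions, not teaRank's actual behavior.

```python
# Toy model of how the initial-rank policy changes the payoff of spam registration.
# THRESHOLD and the starting values are invented for illustration; they are not
# taken from the real teaRank algorithm.
THRESHOLD = 40.0

def eligible_for_rewards(rank: float) -> bool:
    return rank >= THRESHOLD

# Policy A (what the post suspects holds today): new projects start at the threshold.
assert eligible_for_rewards(40.0)       # a freshly registered spam project earns at once

# Policy B: new projects start at zero and must accumulate rank organically.
assert not eligible_for_rewards(0.0)    # spam earns nothing until it passes screening
```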

Thank you, tea developers, for your contributions; I look forward to where this project goes.


tea username: warmsnow-sh
tea address: 0x293b35659ec37FAE89408bf8A72d89e430aC86c9

