This repository has been archived by the owner on Jan 2, 2023. It is now read-only.

Weight idea: Consider the fraction of total commits per contributor #132

Open
fdietze opened this issue Aug 24, 2020 · 14 comments

Comments

@fdietze
Contributor

fdietze commented Aug 24, 2020

Another weight idea:

(number of commits I made) / (total number of commits)
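A minimal sketch of what that could look like (illustrative only; it computes the fraction from the plain git history and is not OpenSelery code):

# Illustrative sketch only: commit fraction per contributor from the local git history.
import subprocess
from collections import Counter

def commit_fractions(repo_path="."):
    # One author name per commit on the current branch.
    authors = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%an"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    counts = Counter(authors)
    total = sum(counts.values())
    return {author: n / total for author, n in counts.items()}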
@krux02
Contributor

krux02 commented Aug 24, 2020

This was one of the first ideas for weights, but it was rejected. People who want to increase their salary will massively inflate their commit count without actually improving much. For me this weight feels like basing somebody's salary on how much physical weight they contributed to an airplane. Once the airplane is finished, this metric would not be too bad, but as an up-front work incentive it would have pretty negative effects.

@fdietze
Contributor Author

fdietze commented Aug 24, 2020

Please elaborate on why it's bad.

@Ly0n
Member

Ly0n commented Aug 24, 2020

Let's just look at our project. When you click on contributors on GitHub you will get this ordered list:
https://github.com/protontypes/openselery/graphs/contributors

It looks like @kikass13 has only contributed a little, but actually he defined the software architecture in many ways.
When we order the list by additions it looks better, but @kikass13 is still underestimated. @mw-mw-mw has a lot of additions since she added the logo. Another quantitative approach that might be better: how many successful merges / pull requests were created with commits from these contributors?

@Ly0n
Member

Ly0n commented Aug 24, 2020

The problem with my proposal is that I pushed many times directly to master. You actually have to look into the PRs and filter out the direct pushes to master.

@fdietze
Contributor Author

fdietze commented Aug 24, 2020

I thought the general idea was to combine many different metrics, because no single metric is sufficient on its own.

@kikass13
Contributor

kikass13 commented Aug 24, 2020

Regardless of my personal involvement in anything, we can hopefully agree on some wrong interpretations of hard metrics:

  • amount of commits != work done
    • git commit conventions favor a commit-rich, feature-poor style
    • strictly speaking, 100 commits with 1 or 2 lines of code changed per commit is regarded as good design
  • amount of lines touched != work done
    • features can have millions of lines and can still be irrelevant
    • binary files are a thing
    • you don't want to encourage bad software design [sometimes slick designs are necessary / good]

I like the idea of soft metrics like the one mentioned by @Ly0n (a rough sketch of splitting contributions by file type follows after this list):

  • "contribution to pull requests merged into master branch" regardless of amount of work done
    • could differentiate between "code contributions" for common files and endings regarding programming
    • could differentiate between "configuration contributions" for configs, ci and overall workflow files
    • could differentiate between "documentation contributions" for readmes, wikis, design files and others
    • could differentiate between "review contributions" for code reviews, mentions and overall involvement in the flow of pull requests
  • "contribution to issue related pull requests"
    • commits on pull requests which are linked to issues (which are closed) are a good measurement of actual integration, bugfixing and community work
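
Purely as an illustration of such a split by file type (the suffix lists and category names here are made up, not OpenSelery's):

# Purely illustrative: map a merged PR's changed files to rough contribution types.
from pathlib import Path

CODE = {".py", ".c", ".cpp", ".h", ".js", ".go", ".rs"}
CONFIG = {".yml", ".yaml", ".toml", ".ini", ".cfg"}
DOCS = {".md", ".rst", ".txt"}

def contribution_types(changed_files):
    types = set()
    for path in changed_files:
        suffix = Path(path).suffix.lower()
        if suffix in CODE:
            types.add("code contribution")
        elif suffix in CONFIG:
            types.add("configuration contribution")
        elif suffix in DOCS:
            types.add("documentation contribution")
    return types

print(contribution_types(["main.py", "README.md", ".github/workflows/ci.yml"]))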

@kikass13
Contributor

kikass13 commented Aug 24, 2020

I am also not a fan of overall amounts of something. It is not meaningful to compare 1000 apples against 1 pear.
I would prefer a boolean approach that degrades with time.

Consider a list of users [A, B, C, D] and a list of metrics [M1, M2, M3, M4]:

fulfilled_metrics = {}
for user in users:
    for metric in metrics:
        # placeholder check: does this user fulfill this metric, and when did they do so?
        fulfilled, timestamps = does_fulfill_metric_requirements(user, metric)
        fulfilled_metrics[(user, metric)] = (fulfilled, timestamps)

Then apply a linear or exponential filter to degrade the fulfillment of every metric as time passes.
For example (a small sketch follows after this list):

  • Did user (Nick) fulfill metric (contribute commit to master)? YES = True = 1.0
  • When did user (Nick) do that? ~About 90 days ago
  • When was the last commit made? ~1 day ago
  • What is the average rate of commits made? ~Every 14 days
  • User (Nick) did not do anything for the last 76 days
  • Degrading by [linear] 1 - 14/76 = 0.81579
  • User Nick's contribution for this specific metric = 14/76 = 0.18421
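
A small sketch of that linear degradation, using the numbers from the example (the 0.01 floor is just an arbitrary minimum so old contributions never drop to zero):

# Small sketch of the linear degradation from the example above.
def linear_decay(days_since_fulfillment, average_interval_days, floor=0.01):
    # Full credit while the contribution is newer than the average interval,
    # otherwise decay proportionally, never below a small floor.
    if days_since_fulfillment <= average_interval_days:
        return 1.0
    return max(average_interval_days / days_since_fulfillment, floor)

# Nick fulfilled the metric 76 days ago; commits land roughly every 14 days:
print(linear_decay(76, 14))  # ~0.184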

@krux02
Contributor

krux02 commented Aug 24, 2020

Just as an idea, a metric could take the activity change (first-order derivative) into account to encourage people to do more than last month. I am not suggesting that this is a good metric, though. The point is, metrics are always complex and you always have to think about people who try to exploit them to appear very active while in reality contributing as little as possible, or even harming the project in practice.

@kikass13
Contributor

kikass13 commented Aug 24, 2020

@krux02
I agree

  • "contribution growth" (see my mentioned types of contribution above)

seems like a nice thing to add.

The point is, metrics are always complex and you always have to think about people who try to exploit them to appear very active while in reality contributing as little as possible, or even harming the project in practice.

That is exactly why these metrics have to depend on things (commits, actions, reviews) [what @Ly0n said] that have to do with merges, and should not involve raw amounts of things (1000 commits vs 10). Instead, these should be a checklist (true/false) of weights that can be achieved (cumulatively).

In my opinion, we should start looking at those metrics as "goals to achieve". A developer either achieves something or he doesn't. If so, the weight of this achievement will be added to the sum of all other weights, therefore it is more likely for a person with more achievements to receive a payout.

To ensure that these achievements are not all flagged as true after a period of time (because at some point, someone will eventually tick all the boxes), they should "degrade" over time. That way, an achievement which would add 1.0 to the sum of weights for a specific user is downgraded if that achievement lies long in the past, bottoming out at a minimum of (for example) 0.01 [1%] so that everyone still gets a chance, even if their contributions are far in the past relative to fresher contributions from other contributors (users).

@Ly0n
Member

Ly0n commented Aug 25, 2020

you always have to think about people who try to exploit them to appear very active while in reality contributing as little as possible, or even harming the project in practice.
Yes, that's true, but you should keep multiple boundary conditions in mind:

  1. People have a public account on GitHub and have a strong interest in keeping their history nice and shiny.
  2. We only consider things that happen on the master branch, and normally you have multiple people approving and reviewing the code, especially on larger projects; OpenSelery is interesting for larger projects with multiple people donating and contributing.
  3. We publish the donation history in the wiki with every push to master. When people misuse the system, it will be visible to everyone.

One important sentence in this context that I also want to cite here from our README:
The distribution will never be fair since fairness can not be objectively measured.
I personally have a lot of commits on the README and from testing the CI integration on the master branch.
This was necessary since testing the CI integration only works live. Many people might think this was unnecessary and created many commits on the master branch we don't need.

To get close to something that many people will call "fair" you need to experiment with different weight calculation concepts and with constant feedback from your contributors. We have the chance here to experiment with that so that others can learn from our experience. That's why we should discuss every weight and even integrate it even if it may not be useful for OpenSelery; we can just show that by giving it a low weight.
Imagine a travel blog financed with OpenSelery and hosted on GitHub by a group of people. Here the lines of "writing" could be more important than the pull requests.

In my opinion, we should start looking at those metrics as "goals to achieve":
Yes, that's a good point. The GitHub API should give us everything we need here.
Also, the people contributing to the issue discussion should be rewarded somehow.
Reporting and describing an issue in the right way can be as important and complex as solving it.
When multiple people are involved in reaching a "goal" together, as @kikass13 suggested, we should also reward them as one team.

@kikass13
Contributor

@Ly0n

People have a public account on GitHub and have a strong interest in keeping their history nice and shiny.

  • I do not agree

We only consider things that happen on the master branch, and normally you have multiple people approving and reviewing the code, especially on larger projects; OpenSelery is interesting for larger projects with multiple people donating and contributing.

  • I do agree

We publish the donation history in the wiki with every push to master. When people misuse the system, it will be visible to everyone.

  • the tool should not have to be "overwatched" to be useful. If you cannot trust your tool, you will avoid it.

@kikass13
Contributor

kikass13 commented Aug 25, 2020

@Ly0n
Objective morality can be achieved if the objective is clear. If we boil down the metrics in the way I described earlier, that is, by reducing the "things that improve your weight in relation to others" [which give you a higher chance of receiving money] to simple tasks, we inherently create a situation where:

  • we can define multiple types of contribution (aka little boolean checkboxes of goals/achievements)
    • where an OpenSelery admin can define which of these contribution types will be rewarded more/less by weighting them for their respective repositories
  • a maximum cap is reached once every contribution type is fulfilled
    • therefore a user is maximally rewarded (in relation to others) if he has fulfilled EVERY contribution type in existence in the last overall release (for example)
  • each and every contribution type is weighted equally in itself, meaning that contributing 10 lines of code to a release is weighted the same as contributing 10000 lines of code
    • likewise, contributing a code review to the latest release is weighted equally for every reviewer, no matter how much they wrote
  • users who did contribute "a thing" (contribution type True) shall be rewarded more than users who did not contribute (contribution type False)
  • users who contributed "a thing" (contribution type True) recently shall be rewarded slightly more than users who did the same "thing" (contribution type also True) in the near past
    • and even more with respect to users who did the same thing 1 year ago (example) where the average contribution interval for this type is 14 days (example)
    • an OpenSelery admin can not only configure the weights of the overall contribution types but also the rate at which these types "degrade" over time
  • users are encouraged to achieve multiple contribution types (aka check all the boxes) and therefore contribute not only code, but in different ways
  • it will inherently encourage users not only to write code (and never do other stuff), but also to review the code of others, clean up the documentation, fix some bugs, or even respond to issues and keep lengthy discussions alive
    • which is the most valuable thing in open-source development

We do not have to worry about fairness, because we simply let users tick their respective boxes. They cannot really abuse that because once they "did a thing" (contribution type = true) they cannot "benefit" from doing it again. The user will be punished if he does not do it again in a timely manner (because his contribution will degrade over time if the project is moving on).

We therefore minimize the "damage by exploitation" to a simple boolean flag with simple validity checks.
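
A rough sketch of how that summation could look (my own illustration; the type names, weights, and decay factors are placeholders an admin would configure):

# Rough illustration of the achievement-based weighting described above.
# Admin-configurable weight per contribution type (placeholder values).
TYPE_WEIGHTS = {"code": 1.0, "review": 1.0, "documentation": 0.8, "configuration": 0.5}

def user_weight(fulfilled, decay_factors):
    # fulfilled:     {contribution_type: True/False} for one user
    # decay_factors: {contribution_type: value in [0.01, 1.0]}, 1.0 = very recent
    return sum(
        TYPE_WEIGHTS[ctype] * decay_factors.get(ctype, 1.0)
        for ctype, done in fulfilled.items()
        if done and ctype in TYPE_WEIGHTS
    )

# A versatile contributor outweighs a single-type contributor,
# no matter how many commits either of them made.
print(user_weight({"code": True, "review": True, "documentation": True},
                  {"code": 1.0, "review": 1.0, "documentation": 0.35}))
print(user_weight({"code": True}, {"code": 1.0}))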

@kikass13
Contributor

kikass13 commented Aug 25, 2020

I have found a nice little random blog [CLICK] which can be summed up as follows:

  • Recognize contributions, monetary or otherwise
    • 👍 that is the purpose of this discussion i guess
  • Keep it transparent
    • 👍 we already tick that box
  • Don’t underplay the achievements of the top sellers
    • my mentioned approach defines "top seller" a little differently. The top seller is not the one doing the most stuff in one category, but rather doing some stuff in all categories.
    • therefore I'll give it a 👍
  • Don’t forget the average performers
    • in my approach, there are only average performers, but some are more versatile and active in more fields than others
    • I can give it a 👍
  • Pay bonuses quickly
    • 👍 openselery already ticks that one
  • Reward teams as well as players
    • 👍

@Ly0n
Member

Ly0n commented Aug 25, 2020

The article you linked also has many other good ideas:

Employees are encouraged to vote for staff members who have done a great job and rewards are distributed accordingly.

You could use emojis to upvote and downvote issues and use that as an additional bonus on the weights when the issue is solved.
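
Purely as an illustration of the shape such a bonus could take (the per-vote scaling and the cap are arbitrary values, not anything OpenSelery defines):

# Illustrative only: turn issue reactions into a small bonus on the solver's weight.
def reaction_bonus(thumbs_up, thumbs_down, per_vote=0.05, cap=0.5):
    net = thumbs_up - thumbs_down
    return max(0.0, min(net * per_vote, cap))

# An issue with 8 upvotes and 1 downvote would boost the solver's weight by 35%.
print(1.0 * (1.0 + reaction_bonus(8, 1)))  # 1.35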

if employees who do the same job and get the same results get different rewards, it’s abundantly clear what’s going on. An automated platform with prearranged rewards for certain achievements avoids this.

Where is such a platform? Does somebody know of a company besides our project that is doing this in a truly automated way?

@kikass13 the combination of multiple soft contributions and a degradation time could be a good next weight to work on.
We have to check what the GitHub API gives us here and how many requests we will need for that.
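
As a first, untested sketch of what the API offers (unauthenticated requests are heavily rate-limited, so a token would be needed in practice):

import requests

# Untested sketch: list the authors of merged pull requests via the GitHub REST API.
def merged_pr_authors(owner, repo):
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls"
    resp = requests.get(url, params={"state": "closed", "per_page": 100})
    resp.raise_for_status()
    return [pr["user"]["login"] for pr in resp.json() if pr.get("merged_at")]

print(merged_pr_authors("protontypes", "openselery"))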

@Ly0n Ly0n changed the title Weight idea: Consider the fration of total commits per contributer Weight idea: Consider the fraction of total commits per contributer Aug 29, 2020