Status: Future consideration
Categories: Features
Created by: Bjorn Aannestad
Created on: Feb 6, 2022

My Ideal Scorecard

Here's what I think would be ideal for scorecards, using a RICE model:

  1. Allow the user to select a text value that corresponds to each numeric score. This would help consistency across users and projects: a score for "Reach" might be expressed as "Few", "Many", or "All" instead of 1, 2, 3, and "Impact" would have values like "Differentiator" vs. "Minor usability" as guidance.
    The work-around of listing the numbers alongside their meanings in the text description is clumsy and won't lead to consistency, since that text isn't visible when you "Open Scorecard" and drag the sliders around.

  2. Allow the scores to be non-integer. (There's another idea for this with 150+ votes)

  3. Allow scorecard values to be shared between score fields, so that we can sort or rank issues by R*I*C/E and also by just their "value" R*I -- for when E isn't known yet, or isn't relevant. I don't see a way to accomplish this today; perhaps there is one?

  4. It is nice that the components of a score can be added individually to a list report. It would be helpful to be able to sync those individually to Jira too, where developers can see them.

  5. Finally, a way to formally handle "Unknown": a lack of (C)onfidence, or an (E)ffort you don't know yet. Currently you can only express that with 0s, which usually makes the rest of the score 0 too. I'm thinking of something like what spreadsheets do -- propagate the "unknown" through the calculation. Or better, indicate "Unknown" but allow a default value, which might be middle of the road, or a high value until the factor is proven lower, or vice versa (see the sketch after this list).
    This will reduce the temptation of making up a value whose origin we can't later trace. If we don't know yet, let's mark it as such.
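
As a minimal sketch of #1 and #5 together (Python, with hypothetical scale labels and a hypothetical rice() helper -- an illustration of the request, not how scoring works today):

```python
from typing import Optional

# Hypothetical labeled scales (idea #1): each text label maps to a number.
REACH = {"Few": 1, "Many": 2, "All": 3}
IMPACT = {"Minor usability": 1, "Major usability": 2, "Differentiator": 3}

def rice(reach: Optional[float], impact: Optional[float],
         confidence: Optional[float], effort: Optional[float],
         effort_default: Optional[float] = None) -> Optional[float]:
    """RICE = R * I * C / E, where None means "Unknown" (idea #5).

    An unknown factor propagates through the calculation instead of
    zeroing out the score, unless a default placeholder is supplied.
    """
    if effort is None:
        effort = effort_default  # e.g. a deliberately pessimistic value
    factors = (reach, impact, confidence, effort)
    if any(f is None for f in factors):
        return None  # "Unknown" stays visible rather than pretending to be 0
    return reach * impact * confidence / effort

# A labeled score plus an unknown effort covered by a pessimistic default:
print(rice(REACH["Many"], IMPACT["Differentiator"], 0.8, None, effort_default=5))
# -> 0.96, computed from the placeholder and traceable as provisional
```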

Thank you!

  • Austin Merritt (Admin) | Feb 9, 2022

    Thank you for the additional background. One idea would be to leverage the features list and calculated columns. You could add each of the metrics as individual columns, then add two calculated columns -- one for the value score you noted and another that also includes effort. You could then sort by either of these. The downside is that this would not be reflected in your ranking on the features board, but it would give you the ability to get the sorting you are looking for.
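
    For illustration, a rough Python sketch of that workaround (feature names and metric values are made up; the real calculated columns would be defined in the features list, not in code):

```python
# Hypothetical features with individually scored RICE metrics.
features = [
    {"name": "Feature A", "R": 2, "I": 2, "C": 0.8, "E": 4},
    {"name": "Feature B", "R": 3, "I": 1, "C": 1.0, "E": 1},
]

# The two calculated columns: "value" (R * I) and the full RICE score.
for f in features:
    f["value"] = f["R"] * f["I"]
    f["rice"] = f["value"] * f["C"] / f["E"]

# Either derived column can then drive the sort order of the report.
by_value = sorted(features, key=lambda f: f["value"], reverse=True)
by_rice = sorted(features, key=lambda f: f["rice"], reverse=True)
print([f["name"] for f in by_value])  # ['Feature A', 'Feature B']
print([f["name"] for f in by_rice])   # ['Feature B', 'Feature A']
```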

  • Bjorn Aannestad | Feb 8, 2022

    Thank you for the reply, Austin.

    For #3, what I mean is that we want to report on R * I, which would give us a "Value" order. But we would also want to be able to sort a report by R * I / E.

    Today, I think we'd have to set R and I twice, once for each of two scorecards? Admittedly, I haven't tried it to see whether a factor with the same name in two scorecard definitions is shared between them when given a value using the slider.

  • Austin Merritt (Admin) | Feb 7, 2022

    Thank you for your very thoughtful feedback! This is good timing as we do have a series of improvements coming for scoring and prioritization. We will keep your feedback in mind as we make progress.

    Can you clarify what you are ultimately hoping for with #3? Currently the score will calculate using the metrics that have been scored (and will incorporate the minimum value for any metrics that have not been scored). So if you have scored R & I, you would still have a score which could be used for sorting/ranking. Thanks!
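
    As a rough sketch of that behavior (metric names and minimum values are assumed here, not taken from the product):

```python
# Assumed per-metric minimums; unscored metrics fall back to these.
MINIMUMS = {"R": 1, "I": 1, "C": 0.5, "E": 1}

def current_score(scored: dict) -> float:
    """Score whatever has been set, using the minimum for the rest."""
    r = scored.get("R", MINIMUMS["R"])
    i = scored.get("I", MINIMUMS["I"])
    c = scored.get("C", MINIMUMS["C"])
    e = scored.get("E", MINIMUMS["E"])
    return r * i * c / e

# R and I scored, C and E unscored: still yields a sortable number.
print(current_score({"R": 3, "I": 2}))  # 3 * 2 * 0.5 / 1 = 3.0
```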