We would like to extract some insights from our Aha! data. For each release (a rough automation sketch follows this list):
- How many points over capacity are we?
- After the first sprint in a 2-sprint release is completed, are we halfway done with story points?
- How many features are estimated at the feature/epic level (riskier) vs. at the groomed requirement/user story level (less risky, higher confidence)?
- For features estimated only at the feature level, how many of their requirements/user stories are unestimated (and therefore not counted in capacity)?
- We have a few time-boxed features: basically, a PM works with one of the teams to get as much done as possible within a fixed budget. For these features, how do the groomed requirement story points compare to the time-boxed amount (the feature-level estimate)?
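Most of these per-release rollups look scriptable against the Aha! REST API. Below is a minimal Python sketch of that idea; the endpoint paths, the `requirements` and `original_estimate` fields, and the assumption that estimates come back as bare point numbers are all from memory, so they would need checking against the Aha! API docs before trusting any output:

```python
import requests

# Hypothetical subdomain and token. Endpoint paths and response shapes
# below are assumptions -- verify against the Aha! REST API docs.
BASE = "https://yourcompany.aha.io/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def fetch_features(release_id):
    """Pull each feature in a release, with its requirements embedded."""
    resp = requests.get(f"{BASE}/releases/{release_id}/features", headers=HEADERS)
    resp.raise_for_status()
    features = []
    for f in resp.json()["features"]:
        # The list endpoint may not embed requirements, so fetch details.
        detail = requests.get(f"{BASE}/features/{f['id']}", headers=HEADERS)
        detail.raise_for_status()
        features.append(detail.json()["feature"])
    return features

def release_rollup(features, capacity_points):
    """Aggregate the per-release numbers asked about above.

    Assumes original_estimate is a plain story-point number on both
    features and requirements.
    """
    total = 0
    feature_level = requirement_level = unestimated_reqs = 0
    for f in features:
        reqs = f.get("requirements", [])
        estimated = [r["original_estimate"] for r in reqs if r.get("original_estimate")]
        if estimated:                     # groomed: trust the story points
            requirement_level += 1
            total += sum(estimated)
            unestimated_reqs += len(reqs) - len(estimated)
        else:                             # only a feature/epic-level guess
            feature_level += 1
            total += f.get("original_estimate") or 0
            unestimated_reqs += len(reqs)
    return {
        "points_over_capacity": total - capacity_points,
        "feature_level": feature_level,
        "requirement_level": requirement_level,
        "unestimated_requirements": unestimated_reqs,
    }

def timebox_delta(feature):
    """For a time-boxed feature: groomed requirement points minus the box
    size (the feature-level estimate). Positive means more groomed work
    than fits in the box."""
    box = feature.get("original_estimate") or 0
    groomed = sum(r.get("original_estimate") or 0 for r in feature.get("requirements", []))
    return groomed - box
```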
And we'd like to be able to compare these over time and across releases.
- For example, if we look at our last 3 releases, do we average a 45% completion rate at the end of the first sprint of a two-sprint release, or 65%? (See the sketch after this list.)
In other words, if our current release is 40% complete at the halfway point, do we panic, or is that normal for us?
- For a given feature, do the feature-level estimates end up being XX% lower than the more detailed requirement estimates?
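If the per-release numbers above get snapshotted somewhere (even an export of the existing spreadsheet), the cross-release questions reduce to small aggregations. A sketch with made-up history values standing in for wherever the snapshots actually live:

```python
from statistics import mean

# Hypothetical history, oldest first: fraction of story points complete at
# the end of sprint 1 of each two-sprint release.
sprint1_completion = {"2024.1": 0.48, "2024.2": 0.41, "2024.3": 0.52}

def midpoint_baseline(history, window=3):
    """Average completion rate at the sprint-1 mark over the last `window` releases."""
    return mean(list(history.values())[-window:])

def estimate_drift(feature_estimate, groomed_total):
    """How far the early feature-level estimate undershot the final groomed
    requirement total, as a fraction of the groomed total (positive means
    the feature-level estimate was low)."""
    return (groomed_total - feature_estimate) / groomed_total

baseline = midpoint_baseline(sprint1_completion)
print(f"normal for us: {baseline:.0%}; a current 40% is "
      f"{'fine' if abs(0.40 - baseline) <= 0.10 else 'worth a look'}")
```

The 10-point tolerance in the last line is an arbitrary placeholder; the real threshold would come from how much those historical rates actually vary.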
We're capturing some of these statistics manually, storing them in a spreadsheet, and comparing them over time. Very labor-intensive.
And identifying whether a feature is estimated at the feature or requirement level is also manual. If there's any way to automate that, that would be great!
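One possible automation is to infer the level from where the estimates live: if a feature's requirements carry their own estimates, call it requirement-level; if the only estimate is on the feature itself, call it feature-level. A sketch of that heuristic (the `requirements` and `original_estimate` fields are the same assumptions as in the earlier sketch; the coverage threshold handles the mixed case where only some requirements are groomed):

```python
def estimation_level(feature, coverage=0.5):
    """Heuristic classifier -- not an Aha! built-in.

    Call a feature 'requirement'-level estimated when at least `coverage`
    of its requirements carry their own estimates; otherwise 'feature'-level.
    """
    reqs = feature.get("requirements", [])
    if not reqs:
        return "feature"                  # nothing groomed yet
    estimated = sum(1 for r in reqs if r.get("original_estimate"))
    return "requirement" if estimated / len(reqs) >= coverage else "feature"
```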