You are right that the Average Response Time chart is not as user-friendly as we would like; revisiting it is on the roadmap. Things work pretty well when questions are being asked and answered consistently: average response times hover around a day or two, any deviation from that becomes immediately apparent, and that prompts you to investigate. By exporting you get more detailed timings, so you can see whether the averages fluctuate even a few minutes day-to-day. There are two scenarios where it works less well. First, even when questions are generally answered promptly, a single big outlier can throw the average way off -- answering one question after several months produces a ridiculous spike. Second, if questions generally tend to sit around for a while, it is hard to get a good feel for things: fluctuations don't end up telling you much. If you export the detail and compare it carefully to the questions chart you can actually learn some interesting things, but this is only really feasible for very low-volume scenarios and is a pain anyway.
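To make the outlier problem concrete, here is a minimal sketch with invented numbers (not real export data): a single months-old answer dominates the mean response time, while the median barely moves.

```python
import statistics

# Hypothetical response times in hours; most questions answered within a day or two.
times = [2, 5, 18, 30, 12, 8]

# One question finally answered after roughly three months (~2160 hours).
times_with_outlier = times + [2160]

mean_before = statistics.mean(times)                 # 12.5
mean_after = statistics.mean(times_with_outlier)     # jumps to ~319
median_after = statistics.median(times_with_outlier) # stays at 12
```

This is why a robust summary (median, or the buckets discussed below in the thread) tends to be more informative than a plain average for this kind of chart.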
One thing we considered was showing buckets instead of an average: how many questions were answered within an hour, how many within a day, how many within a week, and how many took longer than a week, or something like that. This would help, but additional work is needed before it behaves well in all scenarios.
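The bucketing idea above can be sketched in a few lines. The thresholds and sample data here are assumptions for illustration, not the product's actual logic:

```python
from collections import Counter

# Assumed bucket boundaries, in hours, matching the thresholds described above.
BUCKETS = [
    ("< 1 hour", 1),
    ("< 1 day", 24),
    ("< 1 week", 24 * 7),
]

def bucket(hours):
    """Return the label of the first bucket a response time falls into."""
    for label, limit in BUCKETS:
        if hours < limit:
            return label
    return ">= 1 week"

# Hypothetical response times (hours) for a batch of answered questions.
times = [0.5, 3, 20, 30, 200, 500]
counts = Counter(bucket(t) for t in times)
```

Unlike an average, the `>= 1 week` count surfaces slow answers directly instead of letting them distort the rest of the summary.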
Having buckets instead of averages would be super useful. This is something we already do for a weekly report on the overall health of "question resolution" across our spaces, so the teams that support the community can keep an eye on how things are looking for their spaces.
I'd be glad to share a copy of our report with you if you're interested. The way we have it set up, it's a one-stop shop for our support teams to understand the actionable activity within the spaces they own. Being able to offload some of that reporting to CMR for "self service" would be fantastic and would save us a lot of time vs. doing it outside of the tool.