QDQ: What is good?
In January 2020, ICA hosted multiple trainings, in-person and virtual, to introduce the new Quarterly Data Quality (QDQ) process. A few questions came up repeatedly from the audience: "What is a good score?" and "How is a good score determined?" The answers, then and now, are not simple.
The problem lies with the weight given to the word "good." In baseball, a good batting average might also mean that a batter fails two-thirds of the time. A good test score at school might lead to a letter grade of an A. But maybe a good test score should lead to a C, showing that the work is neither too easy for the student nor beyond the student's reach.
So, forget the word "good" when thinking about QDQ scores and think of them instead as measurements.
A QDQ score of 88% in Completeness gives us some information. It tells us that 12% of the data falls into a category that includes missing data elements and answers of "Don't Know" or "Client Refused." It may be that most of that 12% is "Client Refused" and that, barring the clients changing their minds, the data should remain as it is because it is correct. In that case the QDQ score will not change, but the data is accurate.
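To make the arithmetic behind that example concrete, here is a minimal sketch of how such a Completeness percentage could be calculated. It is not the actual QDQ rubric; it simply assumes the score is the share of data elements that are neither missing nor answered with "Don't Know" or "Client Refused," and every field value shown is hypothetical.

```python
# A minimal sketch, not the actual QDQ rubric: here a Completeness score is
# simply the share of data elements that are neither missing nor answered with
# "Don't Know" or "Client Refused". All values below are hypothetical.

INCOMPLETE_ANSWERS = {None, "", "Don't Know", "Client Refused"}

def completeness_score(responses):
    """Return the percentage of responses that count as complete."""
    if not responses:
        return 0.0
    complete = sum(1 for answer in responses if answer not in INCOMPLETE_ANSWERS)
    return 100.0 * complete / len(responses)

# 88 usable answers out of 100 -> 88%, matching the example above.
sample = ["Yes"] * 85 + ["No"] * 3 + ["Client Refused"] * 9 + [None] * 3
print(f"Completeness: {completeness_score(sample):.0f}%")
```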
One year after those initial trainings, a new monitoring process has begun. Each quarter, the Monitoring Partners (the CoC Coordinators and State Funding Partners) will choose a score category along with a high target threshold and a low target threshold for that category. For the QDQ submissions in Q4 2020, the targeted score category was Completeness and the high threshold was 95%. Providers of applicable project types scoring 95% or higher in Completeness were contacted and recognized for their high achievement.
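For illustration only, the quarterly threshold check might look something like the sketch below. The 95% high threshold for Q4 2020 Completeness comes from the process described above; the low threshold value, the provider names, and their scores are invented for the example.

```python
# A sketch of the quarterly threshold check described above. The 95% high
# threshold for Q4 2020 Completeness comes from the article; the low threshold,
# provider names, and scores are invented purely for illustration.

HIGH_THRESHOLD = 95.0   # Q4 2020 high target for Completeness
LOW_THRESHOLD = 75.0    # hypothetical low target

provider_scores = {
    "Provider A": 97.0,
    "Provider B": 88.0,
    "Provider C": 70.0,
}

for name, score in provider_scores.items():
    if score >= HIGH_THRESHOLD:
        print(f"{name}: {score:.0f}% - at or above the high threshold (recognized)")
    elif score < LOW_THRESHOLD:
        print(f"{name}: {score:.0f}% - below the low threshold")
    else:
        print(f"{name}: {score:.0f}% - between the thresholds")
```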
Scoring at or above the high threshold should not be viewed as a good QDQ score, just as scoring below the low threshold should not be considered a bad one. Rather, scores indicate how the data measures against the QDQ Rubric. It's important to remember that the QDQ scoring rubric is designed to account for all possible circumstances that reduce the overall data quality of the system; it is not meant to reflect the quality of the effort put into data entry. It is one piece of the story.
High-quality data will likely yield a high QDQ score. However, there are instances when getting your data correct might actually reduce a QDQ score. For example, correcting an entry/exit originally attributed to the wrong provider can lead to a lower Timeliness score, even though the data is now more accurate. In these circumstances, we hope providers are proud of their work to make the data correct, even if it doesn't positively affect their QDQ score. Because the score is just one piece of the story, adding a narrative to any QDQ score submission paints a better picture of what is happening, especially when improving the quality of the data comes at the expense of a QDQ score.
Though the QDQ process is complex, we want users to remember its simple goal: to provide a regular process for checking data across the state and cleaning it up where needed. With high-quality data, a community can accurately tell the story of the individuals and families it serves. For the information in the system to be useful in measuring our progress or understanding our system, it must be accurate, complete, consistent, and timely.
Statewide data has improved greatly over the past year. That is something we can all be proud of.