Streamlining the QC process
Quality checking on Numerator's data had always been done, but stakeholder questions about accuracy were hard to answer because there was no dedicated tool for collecting that kind of data. The QC process was also not as smooth as it could be, with a few too many steps that could be streamlined. Additionally, the team wanted to transition the frontend away from Django and over to React for something more modern and user-friendly.
The MVP of the QC Tool was being tested by a few select users: team leads who needed to check their direct reports' work before the end of the day, and machine learning quality checkers who were training the models used for most of the attributions. Because the tool's basic functionality was still being built, there was a lot of ambiguity about what could be done and what the team wanted, so requirements changed fairly often during the process.
Impact
- The tool sped up the QC process by at least 20%: a review pass that used to take 2 hours came down to about 1 hour 30 minutes.
- User feedback on the changes was positive; the work felt less like fighting the tool. Users also felt they were listened to, and feedback continued to be collected for further improvements.
Process
In the MVP, the search filters took up a lot of space, and the primary actions, the “correct” and “wrong” buttons, scrolled out of view as users moved down the table, forcing them to scroll back up to complete each task.
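A common remedy for this kind of problem is to pin the primary actions so they stay reachable while the table scrolls. Below is a minimal sketch in React, the stack the team was moving to; the component and prop names (ReviewActions, onCorrect, onWrong) are my own illustrative assumptions, not the tool's actual code.

```tsx
// Minimal sketch of a sticky action bar that stays visible while the
// review table scrolls. Names and styling are illustrative only.
import React from "react";

type ReviewActionsProps = {
  onCorrect: () => void;
  onWrong: () => void;
};

function ReviewActions({ onCorrect, onWrong }: ReviewActionsProps) {
  return (
    <div
      style={{
        position: "sticky", // keeps the bar pinned while the list scrolls
        bottom: 0,
        display: "flex",
        gap: "0.5rem",
        padding: "0.5rem",
        background: "white",
        borderTop: "1px solid #ddd",
      }}
    >
      <button onClick={onCorrect}>Correct</button>
      <button onClick={onWrong}>Wrong</button>
    </div>
  );
}

export default ReviewActions;
```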
One of the big questions for the team was finding the right balance between accuracy and speed. The table needed to show enough information that reviewers rarely felt the need to open a separate flow to investigate an item's details, since each detour added minutes per item and was only worth it when the item in question was especially impactful.
I iterated on a few designs as I worked through the task flows the tool needed to cover, based on the workflow users were already familiar with. Having been a data associate myself, working on attribution and quality checking, I knew the current workflow firsthand.
The team wanted to see whether we should remove the checkboxes entirely and offer only the “approve” and “reject” buttons on each line. This is something I set out to test through user testing sessions and conversations with users.
I spoke with four users during the design of the tool and set up user testing sessions with three of them.
I learned a few things from talking to users:
- They all had their own preference for how to process the data, either one item at a time or by bulk editing (the sketch after this list supports both).
- The order of the columns mattered: they preferred to see the “new brand / new category” columns first, followed by the “old brand / old category” columns, because it let them know right away what their direct report had attributed.
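To make those two findings concrete, here is a minimal sketch of a review row that accommodates both working styles: a checkbox for bulk editing alongside per-row decision buttons, with the new attribution columns placed ahead of the old ones. The types and handlers (ReviewRow, Item, onDecide, and so on) are illustrative assumptions, not the tool's actual code.

```tsx
// Minimal sketch of a review row supporting both bulk selection and
// per-row decisions. Names and data shapes are illustrative assumptions.
import React from "react";

type Item = {
  id: string;
  newBrand: string;
  newCategory: string;
  oldBrand: string;
  oldCategory: string;
};

type ReviewRowProps = {
  item: Item;
  selected: boolean;
  onToggleSelect: (id: string) => void;
  onDecide: (id: string, decision: "approve" | "reject") => void;
};

function ReviewRow({ item, selected, onToggleSelect, onDecide }: ReviewRowProps) {
  return (
    <tr>
      {/* Checkbox keeps bulk editing available for users who batch decisions */}
      <td>
        <input
          type="checkbox"
          checked={selected}
          onChange={() => onToggleSelect(item.id)}
        />
      </td>
      {/* New attribution first, so reviewers see right away what was changed */}
      <td>{item.newBrand}</td>
      <td>{item.newCategory}</td>
      <td>{item.oldBrand}</td>
      <td>{item.oldCategory}</td>
      {/* Per-row buttons serve users who review one item at a time */}
      <td>
        <button onClick={() => onDecide(item.id, "approve")}>Approve</button>
        <button onClick={() => onDecide(item.id, "reject")}>Reject</button>
      </td>
    </tr>
  );
}

export default ReviewRow;
```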
Takeaways
The feedback I heard from users was positive; some changes that might not seem impactful turned out to be significant for decision-making. Anything that sped up the process, eased their workload, and gave them more breathing room was appreciated, since it meant they didn't have to fight the tool to get the job done before the end of the day.