We built this feature after seeing how much time annotation teams spend on quality assurance. In some cases, quality assurance accounted for 40% of all time spent on an annotation project. What took up that time was not necessarily correcting errors, but finding them. Simpler approaches such as consensus scoring exist, but they typically surface only a sample of potential issues, so annotation teams still end up reviewing every single image by hand.