Quality Control of Image Annotation in CVAT

Every day, the vast world of artificial intelligence (AI) becomes increasingly interconnected with our lives. In essence, the backbone of AI technology is data. More specifically, for AI to understand and interpret the visual world around us, we need data in the form of images. These images must be labeled or annotated, turning them from raw pixels into a language that AI can understand. This process, called image annotation, is an integral part of AI development.

However, image annotation is not simply a case of attaching labels to pictures. It requires meticulous work to ensure the data is labeled correctly and consistently. This is where the importance of image annotation quality comes into play.

Imagine you have a dataset full of labeled images. But how do we know the labels are accurate? How can we be sure that these labels are reliable enough to train AI models? To answer these questions, we need to assess the quality of our data. Fortunately, in CVAT, an image labeling tool specifically designed for such tasks, there's a simple way to do this using a method known as the 'Honeypot'.

The Magic of CVAT Honeypot 

In the world of image annotation, quality is king. That's where CVAT and its Honeypot method come in. The Honeypot method is all about comparing actual annotations against a 'ground truth', that is, a set of known correct annotations. This ground truth lives in a dedicated job within CVAT.
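
Comparisons like this typically boil down to overlap metrics such as intersection over union (IoU): the closer an annotation's IoU with its ground-truth counterpart is to 1, the better the match. The snippet below is a minimal, self-contained illustration of that idea; it is a conceptual sketch with made-up boxes, not CVAT's internal implementation.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A regular annotation vs. the ground-truth box for the same object:
predicted = (10, 10, 50, 50)
ground_truth = (12, 8, 52, 48)
print(f"IoU = {iou(predicted, ground_truth):.2f}")  # ~0.82: a solid match
```

In CVAT's quality checks, a configurable threshold on this kind of overlap value decides whether two shapes count as the same object.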

Worried about double-annotating an entire dataset for the ground truth? Fret not. Just a fraction of the images, say 5-15%, is enough to estimate the overall quality. The size of the ground truth job is flexible: you can set it as a specific number of frames or as a percentage of the dataset.
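
If you script your workflow, a ground truth job of the right size can also be created through the REST API. The sketch below assumes the request shape used by recent CVAT versions (POST /api/jobs with type='ground_truth' and a frame selection method); the exact endpoint and field names may differ on your server, and the URL, credentials, and task id are placeholders.

```python
import requests

# Hypothetical server URL, credentials, and task id -- replace with your own.
CVAT_URL = "https://app.cvat.ai"
session = requests.Session()
session.auth = ("user", "password")

task_id = 42
total_frames = 1000
gt_share = 0.10  # 10% of frames, within the suggested 5-15% range

# Assumed request shape: recent CVAT versions accept POST /api/jobs with
# type='ground_truth' and a frame selection method; verify the field names
# against your server's API schema before use.
payload = {
    "task_id": task_id,
    "type": "ground_truth",
    "frame_selection_method": "random_uniform",
    "frame_count": int(total_frames * gt_share),
}
resp = session.post(f"{CVAT_URL}/api/jobs", json=payload)
resp.raise_for_status()
print("Created ground truth job:", resp.json()["id"])
```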

These ground truth jobs behave differently from regular jobs: they are excluded from dataset export, import, and automatic annotation, so they never mingle with your main dataset. And if you ever need to tweak the parameters, you can simply delete the ground truth job and create a fresh one.

Once your ground truth job is complete, annotate the rest of the dataset and let CVAT crunch the numbers. When processing finishes, the results appear on the Task Analytics page, which is dedicated to annotation quality. There you'll find your task's quality score, including the average annotation quality, the number of conflicts and issues, and a per-job breakdown. For a closer look, you can download a detailed report for the whole task or for each individual job.
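
If you'd rather pull these numbers programmatically than read them off the Analytics page, a minimal sketch follows. It assumes the quality reports endpoints (/api/quality/reports and its /data sub-path) exposed by recent CVAT versions; field names such as 'accuracy' and 'conflict_count' are assumptions to check against your server's schema.

```python
import requests

# Hypothetical server URL, credentials, and task id -- replace with your own.
CVAT_URL = "https://app.cvat.ai"
session = requests.Session()
session.auth = ("user", "password")

task_id = 42

# Assumed endpoint: list the quality reports computed for the task.
reports = session.get(
    f"{CVAT_URL}/api/quality/reports", params={"task_id": task_id}
).json()["results"]

latest = reports[0]
summary = latest["summary"]  # assumed field names below
print("Average quality:", summary["accuracy"])
print("Conflicts:", summary["conflict_count"])

# Assumed endpoint: download the full detailed report as JSON.
details = session.get(
    f"{CVAT_URL}/api/quality/reports/{latest['id']}/data"
).json()
```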

Need to customize the quality score requirements? CVAT's got you covered. You can tune the comparison parameters, for example the overlap threshold that decides what counts as a 'bad' overlap, and the new values will be applied in the next quality check, as sketched below.
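
For example, tightening the overlap threshold through the API might look like this. The sketch assumes the /api/quality/settings endpoint and a low_overlap_threshold parameter as in recent CVAT versions; verify both against your server's API schema, and treat the URL, credentials, and task id as placeholders.

```python
import requests

# Hypothetical server URL, credentials, and task id -- replace with your own.
CVAT_URL = "https://app.cvat.ai"
session = requests.Session()
session.auth = ("user", "password")

task_id = 42

# Assumed endpoint: fetch the quality settings attached to the task.
settings = session.get(
    f"{CVAT_URL}/api/quality/settings", params={"task_id": task_id}
).json()["results"][0]

# Assumed parameter name: raise the overlap cut-off below which a match
# is flagged as a 'bad' (low) overlap. Takes effect at the next quality check.
resp = session.patch(
    f"{CVAT_URL}/api/quality/settings/{settings['id']}",
    json={"low_overlap_threshold": 0.8},
)
resp.raise_for_status()
```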

So, there you have it: ensuring high-quality annotations is a breeze with CVAT and the Honeypot feature. Check out the video about this new feature:

Thank you for choosing CVAT!

Stay tuned and follow the news here: Facebook | Discord | LinkedIn | Gitter | GitHub

July 20, 2023
CVAT Team