AI assistants

AI assistants are our AI tools that automate parts - or all - of your annotation work.

The concept behind them is simple: instead of relying on pre-trained or large, generalized models, we train models on the data you provide. This way, we can deliver assistance no matter what your use case is. Think of it as having models custom-tailored to what you are working on.

The learning process

We train the first model(s) and activate the assistant(s) after you've labeled 10 images and set their status to "To review" or "Done".

Which assistants we train depends on the data you provide. If you add image tags, we train our Image Classification Assistant; if you annotate with bounding boxes, we train our Object Detection Assistant, and so on.
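As a sketch, the mapping from the labels you provide to the assistant that gets trained could look like the following. The label-type keys and the helper function are illustrative, not an actual API; the assistant names are the ones described in this article.

```python
# Illustrative mapping from the type of labels you provide to the
# assistant trained on them (keys are examples, not an actual API).
ASSISTANT_FOR_LABEL_TYPE = {
    "image_tag": "Image Classification Assistant",
    "bounding_box": "Object Detection Assistant",
    "instance_mask": "Instance Segmentation Assistant",
    "semantic_mask": "Semantic Segmentation Assistant",
}

def assistants_to_train(label_types):
    """Return the set of assistants that would be trained for the
    label types present in a project (unknown types are ignored)."""
    return {
        ASSISTANT_FOR_LABEL_TYPE[t]
        for t in label_types
        if t in ASSISTANT_FOR_LABEL_TYPE
    }
```

For example, a project containing both image tags and bounding boxes would train two assistants, one per annotation type.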

We then retrain the assistant(s) whenever we've received 20% more data. So, given that you have labeled 10 images, the next model is trained after 12 images. Then 15. Then 19. And so on.

Our assistants

Use it to speed up image classification.

Use it to get predictions for potential metadata to add to your annotations.

Predicts the class of any annotation you make, so you don't have to switch between classes manually.

Shows you predictions of where our model thinks bounding boxes should be placed on an image; you can then accept them one by one or all at once. Use it to speed up object detection.

Shows you predictions from our AI model of where you can add masks/polygons for your instance segmentation task. As with the Object Detection Assistant, you can accept them one by one or all at once.

The same as the Instance Segmentation Assistant, but for semantic segmentation.