When to use which tool

When you first pick up a new annotation tool, it can be hard to know which tools to use when. To help, we have prepared this cheat sheet for you.

Image classification

  • Start by using image tags

  • After 10 images, activate the image classification assistant and test how it is performing

  • If it's not giving you the results you need, add tags to 10 more images manually

  • Rinse and repeat until the assistant is accurate enough

  • When you are happy with the assistant's suggestions, simply click "Accept all" to classify an image

  • When you find yourself not editing or changing any suggestions for an hour or so, try automated labeling to batch-process the rest of your dataset
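The loop above (label a small batch, re-test the assistant, repeat until it is good enough, then hand the rest over to automated labeling) can be sketched as pseudocode. Note that every name below is a hypothetical placeholder standing in for a manual step in the Hasty UI, not a real API call, and the assistant's accuracy curve is simulated purely for illustration:

```python
# Minimal sketch of the annotate-evaluate-repeat workflow, assuming
# hypothetical placeholder functions (not Hasty's actual API).

BATCH_SIZE = 10          # label this many images before re-testing
ACCURACY_TARGET = 0.9    # stop manual labeling once suggestions are this good


def simulated_assistant_accuracy(num_labeled: int) -> float:
    # Stand-in for "activate the assistant and see how it performs":
    # here, accuracy simply improves as more labeled images are added.
    return min(1.0, num_labeled / 50)


def label_dataset(total_images: int) -> int:
    """Return how many images had to be labeled manually."""
    labeled = 0
    while labeled < total_images:
        # Manually tag the next batch of images in the UI
        labeled += min(BATCH_SIZE, total_images - labeled)
        if simulated_assistant_accuracy(labeled) >= ACCURACY_TARGET:
            break  # good enough: "Accept all" and automated labeling take over
    return labeled


manual = label_dataset(200)
print(manual)  # prints 50; the other 150 images go to automated labeling
```

The same pattern applies to every assistant described below; only the annotation tool and the amount of data needed change.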

Object detection

  • Start by using the bounding box tool

  • After 10 images, activate the Object Detection Assistant and see how it is performing

    • Are you getting good predictions? Great - accept them and add any missing bounding boxes

    • Is the assistant not giving you anything worthwhile? Annotate 10 more images using the bounding box tool and try again

  • As you continue to label, the Object Detection Assistant will keep learning and improving - so keep trying it as you annotate

  • When you find yourself accepting suggestions without editing them for an hour or so, try automated labeling to batch process the rest of your dataset

Instance segmentation

  • Start with either the polygon or the brush tool if you want to annotate completely manually

  • ...but don't forget to try DEXTR and the ATOM segmenter, as they might speed up the process considerably (especially for complex annotations)

  • After 10 images, activate the Instance Segmentation Assistant and see how it is performing

    • Are the suggestions helpful? Great - accept the good ones. Accept and edit the weaker ones, or annotate them manually

    • If the suggestions are not great, don't give up hope. The more data you add, the better they become. Annotate 10 more images and then test again.

  • As you label, the Instance Segmentation Assistant will learn and become more accurate - so keep trying it throughout

  • When you find yourself pressing "Enter" on image after image - accepting the suggestions from the assistant - give automated labeling a try to batch process the rest of your dataset

Semantic segmentation

  • Before you start, make sure that the classes you annotate with are listed under "semantic classes". If not, any data you annotate will go toward training our instance segmentation assistant instead

  • Ideally, start with the brush tool or the ATOM segmenter

    • Quick aside - the polygon tool or DEXTR can also be used, but neither is great for annotating non-object regions: Hasty doesn't support grouping many polygons into one annotation, and DEXTR is pre-trained on instance segmentation data

  • By now, you know the drill. After 10 images, check the performance of the Semantic Segmentation Assistant

    • If the results are good, use them - accept the great ones; accept and edit the good ones

    • If the results leave something to be desired, annotate ten more images and try again

  • Since our underlying AI model continually learns while you label, keep adding data to automate more and more of the annotation process

  • When results are great consistently (which will take longer for semantic segmentation) - try automated labeling to batch process the rest of the dataset

Panoptic segmentation

  • To do panoptic segmentation in Hasty, you essentially have to do both instance and semantic segmentation

    • One tip: try the ATOM segmenter, as it is well suited to both tasks

  • Use the guidance provided above to get your first annotations done

  • When both the Instance Segmentation Assistant and the Semantic Segmentation Assistant become available, try both and see how they perform

  • As usual, keep adding more data to increase the performance of both assistants

  • When one or both of the assistants are performing well, you can use automated labeling to label thousands of images in one click - please note that you need to run two separate automated labeling jobs to get a complete panoptic segmentation

Attributes

  • To use attributes, first, complete an object detection or segmentation annotation task on an image

  • When you've added all annotations, select each annotation individually and fill out its attributes (right menu - second accordion)

  • After you've annotated 10 images, the Label Attribute Assistant will be available

  • As usual, activate it and see how it is performing

  • With enough data, the Label Attribute Assistant will perform well enough to save you considerable time - you can "Accept all" and save a couple of seconds per annotation. For easier use cases where the risk of bias is low, around 100 images may be enough; the more subjective your use case, the more data you will need