Everything on this page is subject to change. This is an experimental labelling guideline obtained through a trial-and-error process.
Create an account at https://labelbox.com/ and give the email you used for your account to a sonia software administrator. You should receive an invite once the administrator has added you to the project.

This is a valid label:

Even if the Jiangshi label includes a little space that isn't part of the Jiangshi, we need to preserve the integrity of the image.
This is also a valid label for the same reasons:

Try to have around 50% of the object visible in the label. For example, this should be labelled:

Because we can see many features of the Jiangshi. This one, on the other hand, should be skipped:

Because we only see a small number of features, which could increase false positives in the trained model.
If there are two objects in the image and only one of them is clearly visible, you should skip the image:

An image that is too blurry could mislead the training. This is an example of an image that is too blurry:

This image is a little blurry, but not enough to mislead the training. It could even help the training by introducing a variation of a feature already learned, so we need to label it:
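The guideline leaves "too blurry" to the labeller's judgment, but if you want a rough programmatic sanity check, a common sharpness measure is the variance of the Laplacian (low variance means few edges, i.e. likely blur). This is only a sketch, not part of the official process: the threshold value below is an assumption and would need tuning on a few frames you already consider too blurry.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness score: variance of a 3x3 Laplacian. Low values = blurry."""
    # Apply the Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]] to the
    # interior of the image using plain array slicing.
    lap = (
        -4.0 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return float(lap.var())

def too_blurry(gray: np.ndarray, threshold: float = 100.0) -> bool:
    # The threshold is a guess; calibrate it against images the team
    # has already agreed are too blurry to label.
    return laplacian_variance(gray) < threshold
```

A perfectly flat image scores 0 (maximally blurry), while a high-contrast checkerboard scores very high, so the measure orders images the way you'd expect even if the cutoff itself is arbitrary.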

Ok, let's do a little thought experiment on one specific situation, this one:

In this image you can see a gun and a badge. You can see enough features of the gun to correctly say that it is a gun. But the badge? You would need to already know what the complete badge looks like in order to say that this is a badge.
Now, what should you label in this image? The gun, obviously, but what about the badge? Should you label it or not? Let's examine what happens in each case.
If we label the badge, the model will associate the features you see with a badge. That would be problematic: what if some background noise looks exactly like these badge features? Our model would detect that noise as a badge, which increases our chances of false positives.
Now, what happens if we don't label it? During training, the model could guess that these features belong to the badge; but since we didn't label it, the model will "unlearn" that these features belong to the badge, which will decrease precision in the future.
So after this little thought experiment, we can deduce that since we don't know exactly how the model will behave during training, the best course of action is simply to avoid showing the model this situation at all, by SKIPPING THE IMAGE.
As a rule of thumb: if you can see more than 1% but less than 50% of an object, SKIP the entire image.
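The rule of thumb above can be sketched as a simple check. Note that the visibility fractions are still estimated by eye by the labeller; the function name and the 1%/50% thresholds simply mirror the rule as stated.

```python
def should_skip_image(visible_fractions) -> bool:
    """Rule of thumb: if any object is more than 1% visible but less
    than 50% visible, skip the entire image.

    visible_fractions: the labeller's estimate (0.0 to 1.0) of how much
    of each object in the image is visible.
    """
    return any(0.01 < f < 0.50 for f in visible_fractions)
```

This captures both ends of the rule: an object under 1% visible is treated as background (no reason to skip), and one over 50% visible is labelled normally; only the ambiguous middle range, like the badge in the thought experiment, forces a skip.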