...
So far we've worked with two different approaches for image tagging.
**It's important to note that only the Image Type model training is managed by us. The color model, however, was created and is managed by another company and is not under our control.**
ML Results caching
Most of the advertisers' inventories don't change too much from day to day, so in order to avoid classifying the same images multiple times, we use a cache system in the ML servers.
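As a rough illustration of the idea (not the actual implementation), the cache behaves like a key-value store keyed by image_key|image_url; the `run_ml_models` stub and the in-memory dict below are assumptions made for this sketch.

```python
from typing import Dict, Tuple

def run_ml_models(image_url: str) -> Tuple[str, str]:
    # Stand-in for the real Image Type and color models; returns (image_type, color).
    return ("Dealer", "White")

ml_cache: Dict[str, Tuple[str, str]] = {}  # image_key|image_url -> (image_type, color)

def classify_with_cache(image_key: str, image_url: str) -> Tuple[str, str]:
    cache_key = f"{image_key}|{image_url}"
    if cache_key in ml_cache:
        # Inventories barely change day to day, so most lookups hit the cache.
        return ml_cache[cache_key]
    result = run_ml_models(image_url)
    ml_cache[cache_key] = result
    return result
```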
...
Dealer Images: Images of real vehicles
Stock Images: Computer-generated images with a white background
In previous models we also included all types of CGIs, but doing so lowered the overall accuracy of the model and caused other issues.
Example: Different criteria for image types between the advertiser and us.
...
Understanding the results from ML
...
Image Type received from ML system
This value defaults to “Placeholder” if the ML system could not return a value
Color received from ML System
This value defaults to null if the ML system could not return a value
If the color value is configured in the scrape, the classification from the ML color model is not applied
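As a small sketch of how these defaults could be applied when reading an ML response (the function and parameter names are illustrative, not the real code):

```python
from typing import Optional, Tuple

def normalize_ml_result(ml_image_type: Optional[str],
                        ml_color: Optional[str],
                        scrape_color: Optional[str]) -> Tuple[str, Optional[str]]:
    # Image Type defaults to "Placeholder" when the ML system returns no value.
    image_type = ml_image_type if ml_image_type is not None else "Placeholder"
    # Color defaults to null (None) when the ML system returns no value.
    color = ml_color
    # A color configured in the scrape takes precedence over the ML color model.
    if scrape_color is not None:
        color = scrape_color
    return image_type, color
```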
Image Type Manual fix
The Image Type manual fix list selects the value returned from the ML system, or defaults to Placeholder if None is received.
Color Manual fix
The Color manual fix list selects the value returned from the ML system, or defaults to “Black” if None is received.
Example of Placeholder tagged from ML system: {Image Type = Placeholder, Color = None}
As you can see, since the color is not tagged, 5 does not show anything.
Previously tagged Placeholders were {Image Type = Placeholder, Color = 'N/A'}. If you find some of these, they are cached values.
When you change any value from the Manual fixes list and press the “update” button at the bottom of the list, the following happens (a rough sketch follows this list):
A historical entry of the manual fix is stored in the DB with information about what was changed, who changed it, and when
The change is pushed to the ML server to modify the cached values for the image type/color of that image
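In rough terms, and assuming hypothetical table, endpoint and function names, the update amounts to something like this:

```python
import datetime
import json
import urllib.request

def apply_manual_fix(db_conn, image_key: str, image_type: str, color: str, user: str) -> None:
    # 1. Store a historical entry of the manual fix: what was changed, by whom, and when.
    db_conn.execute(
        "INSERT INTO ml_manual_fixes (image_key, image_type, color, changed_by, changed_at) "
        "VALUES (?, ?, ?, ?, ?)",
        (image_key, image_type, color, user, datetime.datetime.utcnow()),
    )
    # 2. Push the change to the ML server so the cached values for this image are updated.
    payload = json.dumps({"image_key": image_key, "image_type": image_type, "color": color}).encode()
    request = urllib.request.Request(
        "http://ml-server.internal/manual-fix",  # illustrative URL, not the real endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```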
...
Understanding "Weird" results
...
Different criteria for image types between the advertisers and us
This case happened on 2021-11 and is documented in Jira; it is a good example of discrepancies in classifications.
There were two problematic situations happening simultaneously:
...
Examples of conflicting images:
...
...
Vehicles 1 & 2 are real cars or very realistic CGI (image 1?) over a static background.
Vehicles 3 & 4 are very similar images from two different dealers.
Image | ML Classification | Client classification | Issues |
---|---|---|---|
1 | Dealer | Stock | Incorrect classification by the ML system (“Not in Stock … on Order”). Cannot add this type of image to the dataset as a Stock image because it would damage the accuracy on dealer images. |
2 | Dealer | Dealer | Correct classification. Classifications like this one would be damaged by adding examples such as Image 1 to the Stock training set. |
3 | Dealer | Stock | Very similar to Image 4. Clients differ on which type it is. |
4 | Dealer | Dealer | Very similar to Image 3. Clients differ on which type it is. |
Different criteria between advertisers that share an image CDN
We currently have no images with conflicting classifications, but there is potential for them when advertisers share inventory. When storing the classification of an image we store it by image_key|image_url, without linking it to an advertiser. This can lead to a scenario where two advertisers that share a CDN keep manually reclassifying the same images against each other.
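A tiny illustration of the problem, assuming a plain dict as the cache: because the advertiser is not part of the key, the second fix silently overwrites the first.

```python
# Illustration only: the classification is keyed by image_key|image_url with no advertiser.
cache = {}

def manual_fix(image_key: str, image_url: str, image_type: str) -> None:
    cache[f"{image_key}|{image_url}"] = image_type

manual_fix("abc123", "https://cdn.example.com/abc123.jpg", "Stock")   # advertiser A's fix
manual_fix("abc123", "https://cdn.example.com/abc123.jpg", "Dealer")  # advertiser B's fix
print(cache["abc123|https://cdn.example.com/abc123.jpg"])  # "Dealer" -- A's fix is gone
```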
ML Manual fixes and model retrains
We added the option to manually reclassify any image present in a Sitemap in order to handle conflicting images and to have historical data on which types of images are most commonly misclassified.
When a manual fix is requested from the backend panel or frontend:
A request is sent to the ML server to change the image type or color of that image (identified by its image_key).
A historical entry of the manual fix is saved in the DB with the data of the reclassification, the advertiser the image belongs to, and who did the fix.
Training a new model
After some time has passed since the last retrain, or if we find very obvious issues with the current model, a new one is trained.
The steps followed to train a new model are the following:
Analysis Phase
This step is only done if the retrain is triggered by a particular issue. We try to understand why the issue is happening by analyzing both the incorrectly tagged images and the original dataset.
From this phase, we extract information about which images may be causing the issue, so we can filter them out of the original dataset and avoid including more examples of that type. Sometimes we also notice things such as imbalances in the number of images per image type in the dataset, which can be fixed by looking for new images of that type or by performing some data augmentation.
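For instance, a quick balance check over the dataset could look like the sketch below, assuming one sub-folder per image type; the path and file pattern are illustrative.

```python
from collections import Counter
from pathlib import Path

def class_balance(dataset_dir: str) -> Counter:
    # Count images per image type, assuming one sub-folder per class,
    # e.g. dataset/dealer/, dataset/stock/, dataset/placeholder/
    counts = Counter()
    for class_dir in Path(dataset_dir).iterdir():
        if class_dir.is_dir():
            counts[class_dir.name] = sum(1 for _ in class_dir.glob("*.jpg"))
    return counts

# Example: class_balance("dataset") -> Counter({'dealer': ..., 'stock': ..., 'placeholder': ...})
```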
Dataset Creation Phase
In this step, we work on the creation of the new dataset. We usually use the previous model's dataset as a baseline and add/remove images from it in order to improve the results. This is the longest and most tedious phase because it has to be done manually by the dev team.
If in the analysis phase we detected images that could cause issues, the first step is to manually check the original dataset and prune all images of those types from it.
Once this is done, we begin the regular process of adding images to the dataset (a sketch of the filtering and splitting steps follows the list):
Download all the ML manual fixes for image type made since the last retrain date.
Apply automatic filters to delete images that are too small
Manually check them to delete misclassifications from the client and conflicting images
Apply the crop model on them if required
Split the resulting images into train/test datasets
Extend the previous dataset with the new data.
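A simplified sketch of the automatic filtering and the train/test split (the size threshold, folder name and split ratio are assumptions; the manual review and crop model steps are left out):

```python
import random
from pathlib import Path
from PIL import Image

MIN_SIDE = 100  # assumed threshold for "too small" images

def filter_small_images(image_paths):
    # Keep only images whose shortest side reaches the threshold.
    kept = []
    for path in image_paths:
        with Image.open(path) as img:
            if min(img.size) >= MIN_SIDE:
                kept.append(path)
    return kept

def train_test_split(image_paths, test_ratio=0.2, seed=42):
    # Shuffle deterministically and split into train/test sets.
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * (1 - test_ratio))
    return paths[:cut], paths[cut:]

images = filter_small_images(Path("manual_fixes").glob("*.jpg"))
train_set, test_set = train_test_split(images)
```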
Model Training/Test Phase
With the new dataset, we are ready to begin the training phase.
The training of the new models has to be done on one of the ML servers, and the service is down while we train them. We create 4 to 8 new models with different parameter configurations and test them against the test set to see if the preliminary results are valid and the accuracy/recall is high enough to be put in production.
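As an illustration of the selection step (not the real training code, and assuming a scikit-learn-style interface for the candidate models), each candidate can be scored on the held-out test set like this:

```python
from sklearn.metrics import accuracy_score, recall_score

def evaluate_candidate(model, test_images, test_labels):
    # Score one candidate model on the held-out test set.
    predictions = model.predict(test_images)
    accuracy = accuracy_score(test_labels, predictions)
    # Macro-averaged recall so every image type counts equally, regardless of class size.
    recall = recall_score(test_labels, predictions, average="macro")
    return accuracy, recall
```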
If the training was triggered by very bad results for a particular advertiser, we sometimes create an ad-hoc test dataset for that advertiser to check the results.
Once one of the new models is selected for production, it is deployed and monitored for the following days. If none of the models are good enough we go back to the Analysis phase and work on improving the dataset again.