
It's important to note that only the Image Type model is managed by us. The color model was created by another company and is not under our control.

ML Results caching

Most advertisers' inventories don't change much from day to day, so to avoid classifying the same images multiple times we use a caching system on the ML servers.

The first time an image arrives at the ML system, we calculate an image key based on data from the request, such as the HTTP headers. After the image is classified, we store an entry for that image key with the image type and color.
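As a minimal sketch of this step (the exact header fields and hashing scheme are assumptions, since the document does not specify how the image key is derived):

```python
import hashlib

def image_key_from_headers(url: str, headers: dict) -> str:
    """Build a cache key for an image from request metadata.

    The specific fields used here (ETag, Last-Modified, Content-Length)
    are illustrative assumptions, not the system's actual recipe.
    """
    parts = [
        url,
        headers.get("ETag", ""),
        headers.get("Last-Modified", ""),
        headers.get("Content-Length", ""),
    ]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()
```

The same metadata always yields the same key, so a repeat visit hits the cached classification instead of triggering a re-download.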

This process greatly improves ML performance because it avoids downloading and classifying every advertiser image every day. It also allows our staff or the client to apply manual fixes to the image classifications.
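The lookup itself can be sketched as a get-or-classify pattern; `cache` and `classify_fn` are hypothetical stand-ins for the ML servers' cache and classifier:

```python
def classify_with_cache(cache: dict, key: str, classify_fn):
    """Return the cached result when available; classify and store otherwise.

    `cache` is a plain dict standing in for the ML servers' cache, and
    `classify_fn` is a hypothetical classifier returning (image_type, color).
    """
    if key not in cache:
        cache[key] = classify_fn()
    return cache[key]
```

Manual fixes then amount to overwriting the stored entry for a key.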

There is an important caveat, however: for the cache system to work optimally, the CDNs used by advertisers must provide HTTP headers with the image metadata so that we can create a correct image_key.

Some advertisers work with CDNs that don't provide this information correctly, or at all. In these cases we have to download the image and use its raw data to create the key.
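A hedged sketch of that fallback, assuming the key prefers metadata headers and only hashes the raw bytes when they are missing (the real trigger condition is an assumption):

```python
import hashlib

def image_key_with_fallback(url: str, headers: dict, raw_bytes: bytes = b"") -> str:
    """Derive a cache key, falling back to the image content itself.

    Field names and the fallback rule are illustrative assumptions.
    """
    etag = headers.get("ETag")
    last_modified = headers.get("Last-Modified")
    if etag or last_modified:
        basis = f"{url}|{etag or ''}|{last_modified or ''}".encode("utf-8")
    else:
        # CDN provided no usable metadata: hash the downloaded bytes instead.
        basis = raw_bytes
    return hashlib.sha256(basis).hexdigest()
```

Hashing the raw bytes is more expensive (the image must be downloaded first) but still yields a stable key for unchanged images.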

Known weaknesses and issues


Weakness: Model Accuracy decays over time

Our datasets are created from images that are currently within our systems.
This causes classification accuracy to decay over time, as new advertisers join the system with different types of images.
Periodic retraining of the model mitigates this decay.

The accuracy decay for non-car vehicles is even worse because most new advertisers are car-only dealers. As a result, the percentage of non-car vehicles in the training dataset gets lower and lower over time, making the retrains less effective at mitigating the accuracy decay for other vehicle types.

Weakness: High overall accuracy and extreme local accuracy values

After a new iteration of models is trained, we test them against the test dataset to extract metrics such as accuracy and recall. The accuracy of the models against the test set is usually around 90-94%.

Images from the same advertiser tend to be similar to each other. This causes the local accuracy (accuracy within a single advertiser) to present more extreme values:

  • Advertisers with easy-to-classify images will have accuracies of around 100%

  • Advertisers with conflicting images will have lower accuracies
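The bullet points above can be checked numerically with a per-advertiser accuracy computation; the record layout used here is an assumption for illustration:

```python
from collections import defaultdict

def local_accuracy(records):
    """Compute per-advertiser (local) accuracy.

    `records` is an iterable of (advertiser_id, predicted, actual) tuples;
    this structure is a hypothetical stand-in for the real test data.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for advertiser, predicted, actual in records:
        totals[advertiser] += 1
        hits[advertiser] += int(predicted == actual)
    return {a: hits[a] / totals[a] for a in totals}
```

A high global accuracy can therefore hide advertisers whose local accuracy is far below the 90-94% average.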

Issue: Crop of Images with multiple vehicles

When the crop of an image is extracted, the model identifies all the vehicles in the image and extracts the biggest one. This usually doesn't cause issues, because the vehicle the advertiser wants to highlight is normally the biggest. However, some problematic Placeholder images were found where the crop made it very difficult to identify them as placeholder images.
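The "biggest vehicle wins" rule could look like the following sketch, assuming detections come back as (x1, y1, x2, y2) bounding boxes (the actual detection format is not specified in this document):

```python
def pick_crop(boxes):
    """Select the bounding box with the largest area.

    Boxes are (x1, y1, x2, y2) tuples; the format is an assumption.
    Mirrors the 'biggest vehicle wins' rule described above.
    """
    def area(box):
        x1, y1, x2, y2 = box
        return max(0, x2 - x1) * max(0, y2 - y1)
    return max(boxes, key=area) if boxes else None
```

A placeholder image whose largest detected "vehicle" is a decorative graphic would be cropped to that graphic, which is the failure mode described above.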

Criteria for Image tagging

We use the following criteria to tag the images that go into the training dataset.

Here you can find a list of all the datasets that were used to train the current and past ML models.

Understanding the results from ML

The data returned from the ML server is appended to the scraped vehicles and stored in the advertiser's Sitemap.

When you take a look at the sitemap image and color classification you will see something like this:

  1. Image Type received from ML system

    1. This value defaults to “Placeholder” if the ML system could not return a value

  2. Color received from ML System

    1. This value defaults to null if the ML system could not return a value

    2. If the color value is configured in the scrape, the ML color classification is not applied

  3. Image Type Manual fix

    1. The Image Type manual fix list selects the value returned from the ML system, or defaults to Placeholder if None is received.

  4. Color Manual fix

    1. The Color manual fix list selects the value returned from the ML system, or defaults to “Black” if None is received.

  5. Example of Placeholder tagged from ML system: {Image Type = Placeholder, Color = None}

    1. As you can see, since the color is not tagged, no color is displayed in example 5

    2. Previously tagged Placeholders were {Image Type = Placeholder, Color = 'N/A'}. If you find some of these, they are cached values.
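The defaults described in the list above can be summarised in a small helper; the field names are assumptions, but the fallback values (Placeholder for the image type, null/None for the color) come from the text:

```python
def apply_defaults(ml_result):
    """Fill in the documented fallbacks when the ML system returns nothing.

    Field names are hypothetical; the defaults mirror the doc: the image
    type falls back to "Placeholder", the color falls back to None.
    """
    ml_result = ml_result or {}
    return {
        "image_type": ml_result.get("image_type") or "Placeholder",
        "color": ml_result.get("color"),  # stays None when absent
    }
```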


When you change any value in the Manual fixes list and press the “update” button at the bottom of the list, the following happens:

  • A historical manual-fix entry is stored in the DB, recording what was changed, who changed it, and when

  • The change is pushed to the ML server to modify the cached values for the image type/color of that image
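The two steps above can be sketched as one function; `db` and `ml_cache` are hypothetical stand-ins (a list and a dict) for the real database and ML-server cache API, which the document does not detail:

```python
import datetime

def apply_manual_fix(db, ml_cache, image_key, field, new_value, user):
    """Sketch of the manual-fix flow: record history, then update the cache."""
    # 1. Store a historical entry: what changed, who changed it, and when.
    db.append({
        "image_key": image_key,
        "field": field,
        "value": new_value,
        "user": user,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # 2. Push the change to the ML server's cache so future lookups
    #    for this image key return the corrected value.
    ml_cache.setdefault(image_key, {})[field] = new_value
```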
