
Measuring the efficiency of creativity in advertising

Date: December 29, 2022
Category: Blog article

Measuring the efficiency of creativity in advertising has historically been challenging. Discover the 6 pitfalls to avoid and the 4 key paths that lead to success.


Measuring the efficiency of creativity in advertising has historically been challenging. It is hard to isolate the impact of creativity from other factors that drive performance, such as execution tactics or brand health. As the advertising landscape evolves, with brands using several creatives at once and with more and better data, it has become increasingly important to understand the impact of creativity.

Ekimetrics conducted a study providing a technical Marketing Mix Modelling (MMM) approach (object detection algorithms and multi-stage econometric modelling) that demonstrates an objective approach to creative measurement.

Here is a list of the pitfalls to avoid and the key paths to complete the project successfully.

Pitfall n°1 – Failing to Define Labels at the Start

Define the set of labels to be studied at the very beginning. Adding labels in the middle of the study means going back to time-consuming tasks, such as manual labelling and the repeated extraction and processing of labels.

Pitfall n°2 – Ambiguous Object Definitions

Describe clearly from the start what each label covers (e.g. “Person” can be any body part, not just a whole body with a face). This is especially helpful when the manual labelling is done as a team rather than by one person. Furthermore, if using a pre-trained model, ensure that your definition of the object aligns with what the model detects. For example, you may define “Car” as just the exterior of the car, while the OD model is trained to detect both the interior and exterior of cars.
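As a concrete illustration of Pitfalls 1 and 2, the label set can be frozen in code at the start of the study, with each definition spelled out for the whole labelling team. This is a minimal sketch; the label names, definitions, and class mapping below are assumptions for illustration, not the schema used in the study.

```python
# label_schema.py -- agreed and frozen at the start of the study.
# Label names and definitions are illustrative, not the study's actual schema.
LABEL_DEFINITIONS = {
    "Person": "Any visible body part (a hand alone counts), not only a full body with a face.",
    "Logo": "The brand logo at any size or orientation, including partial occlusions.",
    "Product": "The packaged product, in or out of use.",
    "Car": "Exterior of the vehicle only; interior shots are NOT labelled as Car.",
}

# If a pre-trained detector is used, record how each custom label maps to a
# model class, and document any scope mismatch (e.g. a model whose 'car'
# class covers interiors as well as exteriors would need post-filtering).
PRETRAINED_CLASS_MAP = {"Person": "person", "Car": "car"}

def check_label(label: str) -> None:
    """Fail fast if an annotation uses a label outside the frozen schema."""
    if label not in LABEL_DEFINITIONS:
        raise ValueError(
            f"Unknown label {label!r}: adding labels mid-study forces re-labelling, "
            "so extend the schema only at a planned checkpoint."
        )

check_label("Person")  # OK; check_label("Hand") would raise
```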

Pitfall n°3 – Non-Generalizable Labels

If you are studying two separate sub-brands within one brand, it is advisable to have two separate studies, rather than one. That is, instead of defining objects “Brand A Logo” and “Brand B Logo”, it may be better to separate the brands into different streams and have the same object labels for both (e.g. Logo, Brand Cue, Product, and Person). This will ensure that your code is reusable for studies of brands that have different numbers of sub-brands.
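One way to keep the pipeline reusable is a single generic label set driven by per-stream configuration. A minimal sketch, with placeholder brand names and paths:

```python
# One generic label set reused across brand streams, instead of
# brand-specific labels such as 'Brand A Logo' / 'Brand B Logo'.
GENERIC_LABELS = ["Logo", "Brand Cue", "Product", "Person"]

# Placeholder stream names and directories, for illustration only.
BRAND_STREAMS = {
    "brand_a": {"creative_dir": "creatives/brand_a/", "labels": GENERIC_LABELS},
    "brand_b": {"creative_dir": "creatives/brand_b/", "labels": GENERIC_LABELS},
}

def run_study(stream: str) -> None:
    """The same pipeline code runs for every stream; only the config differs."""
    cfg = BRAND_STREAMS[stream]
    print(f"Running {stream} on {cfg['creative_dir']} with labels {cfg['labels']}")

for stream in BRAND_STREAMS:
    run_study(stream)
```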

Pitfall n°4 – Lazy Manual Labelling

Make sure to manually label all objects in a creative. For example, if there are three cars, label all three, not just one. The manually labelled validation set is the ground truth against which a model’s performance is compared, so if some objects are missed, the performance results will misleadingly suggest that the model is over-detecting objects.
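A quick way to catch incomplete labelling is to compare, per creative and label, the number of detected instances against the number of annotated instances: a detector that consistently finds more objects than the ground truth often points to missed annotations rather than over-detection. A hedged sketch, with a hypothetical ground-truth/prediction format:

```python
from collections import Counter

def flag_possible_underlabelling(ground_truth: dict, predictions: dict) -> list:
    """
    ground_truth / predictions: {creative_id: ["Car", "Car", "Person", ...]}
    Flags creatives where the model detects more instances of a label than
    were manually annotated -- often missed ground-truth objects that would
    otherwise be scored (wrongly) as false positives.
    """
    flagged = []
    for creative_id, true_labels in ground_truth.items():
        true_counts = Counter(true_labels)
        pred_counts = Counter(predictions.get(creative_id, []))
        for label, n_pred in pred_counts.items():
            if n_pred > true_counts.get(label, 0):
                flagged.append((creative_id, label, true_counts.get(label, 0), n_pred))
    return flagged

# Example: only one of three cars was labelled, so the detector looks like
# it is 'over-detecting' when it is actually correct.
gt = {"creative_001": ["Car", "Person"]}
pred = {"creative_001": ["Car", "Car", "Car", "Person"]}
print(flag_possible_underlabelling(gt, pred))  # [('creative_001', 'Car', 1, 3)]
```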

Pitfall n°5 – Trying to be Exhaustive

Avoid testing every available open-source resource, as this can be very time-consuming and not very fruitful. Choose two or three to test, and spend the time saved on improving their performance on your dataset. This could be done, for example, through hyperparameter tuning (e.g. testing different learning rates, batch sizes, confidence thresholds, etc.) or in the processing of the results (e.g. correcting any text labels inside logos or products).
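For instance, a small grid over a few hyperparameters is usually more fruitful than trialling yet another model. The sketch below uses an illustrative search space and a dummy evaluate_f1 placeholder standing in for real training and scoring on the validation set:

```python
import itertools

# Illustrative search space, not the values used in the study.
learning_rates = [1e-4, 5e-4, 1e-3]
batch_sizes = [8, 16]
confidence_thresholds = [0.3, 0.5, 0.7]

def evaluate_f1(lr: float, bs: int, conf: float) -> float:
    """Placeholder: replace with real training + scoring on the validation set.
    The dummy formula just lets the loop run end to end."""
    return 1.0 - abs(conf - 0.5) - 10 * lr

# Exhaustive sweep over the small grid; keep the best-scoring combination.
best = max(
    itertools.product(learning_rates, batch_sizes, confidence_thresholds),
    key=lambda combo: evaluate_f1(*combo),
)
print("best (learning rate, batch size, confidence threshold):", best)
```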

Pitfall n°6 – Lack of Automated Data Checks

Due to the many different data manipulation steps in this project, there are many potential sources of error. The key is to set up automatic checks at each stage to avoid a trickle-down effect of avoidable errors. For example, removing false positives in face detection by only ‘accepting’ a detected face if a person was also detected in the creative ensures that the feature time series used for MMM does not suddenly have more impressions for faces than for people. Similarly, during feature engineering, a simple check that there are no negative values, no missing data, and that the impressions and spend data are consistent across each sub-model ensures that time is not wasted in the MMM stage modelling with incorrect data.
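A minimal pandas sketch of such checks; the column names are illustrative, not the study’s actual schema:

```python
import pandas as pd

def validate_feature_timeseries(df: pd.DataFrame) -> None:
    """Fail fast before the MMM stage if the engineered features look wrong.
    Column names ('impressions_face', 'impressions_person', ...) are illustrative."""
    assert not df.isna().any().any(), "Missing values in feature time series"
    numeric = df.select_dtypes("number")
    assert (numeric >= 0).all().all(), "Negative values in feature time series"
    # A face is only 'accepted' where a person was also detected, so face
    # impressions should never exceed person impressions in any period.
    bad = df[df["impressions_face"] > df["impressions_person"]]
    assert bad.empty, f"Face impressions exceed person impressions:\n{bad}"

df = pd.DataFrame({
    "impressions_person": [1000, 1200, 900],
    "impressions_face": [800, 950, 700],
    "spend": [50.0, 60.0, 45.0],
})
validate_feature_timeseries(df)  # passes silently on clean data
```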

Beyond the pitfalls to avoid, implementing the following good practices was key to the success of a project such as the Ekimetrics x Meta study: Exploring the links between creative execution and marketing effectiveness.

(see the last paragraph: “One methodology, many different audiences and use cases”)

1 – Data, of course

Complete dataset: Having a complete data set at the start means you will not need to add ad hoc layers of consolidation later. This is easily verified by, for example, checking that all the creative IDs in the impressions data are accounted for in the actual creative files, and vice versa.
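That check amounts to a set comparison in both directions. A minimal sketch, with hypothetical IDs:

```python
def check_creative_coverage(impression_ids: set, file_ids: set) -> None:
    """Creative IDs in the impressions data and in the creative files should
    match exactly; report mismatches in both directions before modelling."""
    missing_files = impression_ids - file_ids        # impressions with no creative file
    missing_impressions = file_ids - impression_ids  # files that never served
    if missing_files or missing_impressions:
        raise ValueError(
            f"{len(missing_files)} creative ids lack files; "
            f"{len(missing_impressions)} files lack impressions"
        )

check_creative_coverage({"c1", "c2", "c3"}, {"c1", "c2", "c3"})  # OK, no error
```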

2 – But also, storage

Cloud storage: Due to the volume of creatives included in this study, cloud storage was crucial. Particularly if extracting frames from videos, it is important to account for the significant volume of additional images that need to be stored. This project used Azure Storage.
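To give a sense of the frame-extraction step, here is a hedged sketch that samples frames from a video and writes them straight to blob storage rather than local disk. It assumes the opencv-python and azure-storage-blob packages; the connection string, container name, and sampling rate are placeholders:

```python
import cv2  # opencv-python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string; in practice this comes from a secret store.
service = BlobServiceClient.from_connection_string("<AZURE_STORAGE_CONNECTION_STRING>")

def extract_and_upload_frames(video_path: str, creative_id: str, every_n: int = 25) -> None:
    """Sample one frame every `every_n` frames (~1 per second at 25 fps) and
    upload it directly, so frames never accumulate on local disk."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            _, jpg = cv2.imencode(".jpg", frame)
            blob = service.get_blob_client(
                container="creative-frames",  # placeholder container name
                blob=f"{creative_id}/frame_{idx:06d}.jpg",
            )
            blob.upload_blob(jpg.tobytes(), overwrite=True)
        idx += 1
    cap.release()
```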

3 – Not to mention… Object Detection

Labelling Software: For this project, the Azure Machine Learning Studio Data Labelling functionality was used. Intuitive software that can import creatives directly from cloud storage and export labels in a familiar format (such as a JSON file) is useful.
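Once exported, the labels typically need flattening into one row per bounding box for downstream processing. A sketch under a generic schema; the keys below (image_url, annotations, bbox) are an illustration, not the exact Azure ML export format, so adapt them to your export:

```python
import json
import pandas as pd

def labels_to_table(json_path: str) -> pd.DataFrame:
    """Flatten an exported label file into one row per bounding box.
    The key names are a generic illustration -- adapt to your export format."""
    with open(json_path) as f:
        records = json.load(f)
    rows = []
    for rec in records:
        for ann in rec["annotations"]:
            rows.append({
                "creative_id": rec["image_url"],
                "label": ann["label"],
                "x": ann["bbox"][0], "y": ann["bbox"][1],
                "w": ann["bbox"][2], "h": ann["bbox"][3],
            })
    return pd.DataFrame(rows)
```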

External Computation Resources: As the training, validation and final labelling of images are all computationally expensive processes, the use of external computation resources (clusters) is recommended. For pre-processing and feature engineering, individual CPU-enabled single-node clusters are sufficient. For the training, validation, and labelling processes, it is recommended to use GPU-enabled clusters. For this project, Databricks was used as it can connect to Azure storage, facilitates the use of clusters, supports various programming languages, and allows for collaboration on Notebooks.

4 – Last but not least: MMM

Contextual Knowledge: A strong understanding of the brand is a crucial foundation for every step of this project. It is not only required for making informed decisions about which objects should be detected, but also vital for defining the features to be measured in the MMM. For example, knowing that products often appear in creatives alongside just a hand raises the question of whether this is the most effective use of a person, or whether including the person’s face would be more effective; this in turn leads to the creation of features testing products alongside ‘face-less’ people vs. people with faces. Contextual knowledge can also be gained throughout the project by stopping to analyse the data. For example, checking the distributions of manually labelled objects can give an early indication of the performance of custom-trained models (feasibility of successful training and detection) as well as the expected impact in the regression models.
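To make the ‘face-less person’ example concrete, such a feature can be derived directly from the per-creative detections. A minimal sketch with a hypothetical detections table:

```python
import pandas as pd

def faceless_person_feature(detections: pd.DataFrame) -> pd.Series:
    """Per creative: 1 if a Person was detected but no Face, else 0.
    `detections` has one row per detected object with columns
    ['creative_id', 'label'] (illustrative schema)."""
    labels_per_creative = detections.groupby("creative_id")["label"].agg(set)
    return labels_per_creative.apply(
        lambda labels: int("Person" in labels and "Face" not in labels)
    )

det = pd.DataFrame({
    "creative_id": ["c1", "c1", "c2", "c2", "c2"],
    "label": ["Person", "Product", "Person", "Face", "Product"],
})
print(faceless_person_feature(det))  # c1 -> 1 (hand-only person), c2 -> 0
```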

Base model: Having strong base models is also key for success in this project. Since the target variable of the sub-models is determined by the contribution of the Meta variables in the base model, a poor base model will directly impact the performance of the sub-model. The quality of the base model will largely depend on the dataset used, so ensuring that sufficient, relevant, and good quality data relating to the baseline, market variations, and marketing activity is crucial.
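The dependency between the two stages can be seen in a deliberately toy sketch of the structure described above: the Meta contribution extracted from the base model’s decomposition becomes the target of the sub-model. Plain OLS on synthetic data is used here purely for illustration; the study’s actual econometric specification is richer.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 104  # e.g. two years of weekly data, illustrative

# Stage 1: base MMM -- KPI explained by baseline drivers + Meta activity.
X_base = rng.uniform(size=(n, 3))  # e.g. seasonality, price, Meta impressions
kpi = X_base @ np.array([2.0, -1.0, 3.0]) + rng.normal(0, 0.1, n)
base = LinearRegression().fit(X_base, kpi)

# The Meta contribution from the base decomposition is the sub-model target,
# so any error in the base model propagates directly into the sub-model.
meta_contribution = base.coef_[2] * X_base[:, 2]

# Stage 2: sub-model -- Meta contribution explained by creative features
# (e.g. share of impressions with Logo, Product, face-less Person, ...).
X_creative = rng.uniform(size=(n, 2))
sub = LinearRegression().fit(X_creative, meta_contribution)
print("creative feature effects:", sub.coef_)
```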

Programmatic Sub-Modelling: Depending on the number of feature groups, KPIs, and sub-brands included in the study, it may be infeasible to run the sub-models one by one. For context, this project had 156 sub-models (13 base models x 12 feature groups). For that reason, it is recommended to create a methodology that allows for the programmatic creation of sub-models.
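In practice this can be as simple as iterating over the cross-product of base models and feature groups. A sketch with placeholder names and a stub fitting function:

```python
import itertools

# Illustrative names; the study ran 13 base models x 12 feature groups = 156 sub-models.
base_models = [f"kpi_model_{i}" for i in range(13)]
feature_groups = [f"feature_group_{j}" for j in range(12)]

def fit_sub_model(base_model: str, feature_group: str) -> dict:
    """Stub: pull the Meta contribution for `base_model`, regress it on the
    features in `feature_group`, and return coefficients/diagnostics."""
    return {"base": base_model, "features": feature_group}

results = [
    fit_sub_model(b, g) for b, g in itertools.product(base_models, feature_groups)
]
print(len(results))  # 156 -- one sub-model per (base model, feature group) pair
```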

One methodology, many different audiences and use cases

Thanks to the methodology developed in this study, marketers can answer questions such as: “How can marketing returns be improved through creative execution?” Marketing effectiveness professionals, meanwhile, can answer questions such as: “How can we codify the creative elements of marketing campaigns?” and “How can I improve measurement when varied creatives are used simultaneously?”

The methodology can be applied as a whole, or in parts: Object Detection (OD) alone can answer questions such as sponsorship content detection or product placement, while the MMM sub-modelling can be adapted to other investigations into Meta activity, such as creative type, creative placement, format, or caption/messaging in the creative. The same methodology can also be applied to other social media platforms, like YouTube or TikTok.

It also means that in such cases, the pitfalls to avoid and the key paths that lead to success could be the same!

Learn more about Data Science for Marketing with Ekimetrics!
