
Learning Structural Causal Models through Deep Generative Models: Methods, guarantees, and challenges

In today's data-driven world, addressing causal questions is crucial for decision-making. However, it is highly challenging when dealing with biased data or unmeasured quantities. This article explores these complexities and the opportunity to use Deep Generative Models to answer causal questions.

Author: Audrey Poinsot, PhD student

Date: 1 August 2024

Category: Thought Leadership

In today’s data-driven decision-making era, we aim for ever finer analyses of increasingly challenging problems. When answering causal questions (such as “Is it the new tax law driving the increase in sales, or is it the advertising campaign?”, “How effective is an anti-smoking prevention campaign at reducing cigarette sales?” or “Would my product have achieved better sales if a different media mix had been used?”), dealing with biased data and unmeasured quantities is one of the major challenges. Practitioners therefore work on datasets containing spurious correlations, which can lead to biased estimates and incorrect conclusions. For instance, a country’s chocolate consumption is famously highly correlated with its number of Nobel laureates. Does this mean that chocolate makes you smarter? Obviously not.
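To see how such a spurious correlation can arise, here is a minimal simulation (purely illustrative; the variables, coefficients, and the hidden “wealth” confounder are invented) in which two causally unrelated quantities become strongly correlated through a common cause:

```python
# Illustrative simulation: a hidden confounder creates a strong correlation
# between two variables that have no causal link to each other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

wealth = rng.normal(size=n)                    # hidden confounder (e.g. GDP per capita)
chocolate = 2.0 * wealth + rng.normal(size=n)  # chocolate consumption driven by wealth
nobels = 1.5 * wealth + rng.normal(size=n)     # Nobel laureates also driven by wealth

corr = np.corrcoef(chocolate, nobels)[0, 1]
print(f"correlation(chocolate, nobels) = {corr:.2f}")  # strong, yet purely spurious
```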

Causal inference methods are developed to answer such causal questions while accounting for and debiasing spurious effects. These methods estimate causal queries (the mathematical translation of causal questions) given a set of hypotheses and some data, and they are broadly applicable across disciplines and applications. To assess whether a causal estimate can be fully trusted as unbiased, one must investigate the property of “identification”: under the stated assumptions, the causal query is uniquely determined by the distribution of the observed data. This property is of paramount importance for decision-makers to have confidence in the estimate.
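As a classic textbook illustration of identification (not specific to the paper under review), the backdoor adjustment shows how an interventional query can be rewritten purely in terms of observable quantities whenever a set of measured covariates Z blocks all confounding paths between the treatment X and the outcome Y:

```latex
% Backdoor adjustment: the interventional query on the left is identified,
% i.e. computable from the observational distribution on the right.
P\bigl(Y = y \mid \mathrm{do}(X = x)\bigr)
  \;=\; \sum_{z} P\bigl(Y = y \mid X = x,\, Z = z\bigr)\, P(Z = z)
```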

Various methods have been proposed in the literature to answer causal questions. Among them are methods known as Deep Structural Causal Models (DSCMs). In this paper, we propose a review of these DSCM methods, focusing on the assumptions they make, the theoretical guarantees they provide, and their performance in practice. Given this wide range of methods, we aim to help practitioners find the most appropriate methods for their needs. Our analysis focuses in particular on the ability of these methods to answer counterfactual questions.

Since many methods share similarities, we first classified them according to two components: the Structural Causal Model (SCM) class being modeled and the Deep Generative Model (DGM) class being used, see Figures 1 and 2. This classification aims to clarify the landscape of these methods. For more details and references to the cited works, please refer to the original version of the paper.

[Figures 1 and 2: classification of the DSCM methods by SCM class and by DGM class; see the original paper.]
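To make the SCM component concrete, below is a minimal hand-written SCM (the graph, equations, and coefficients are invented for illustration). Each endogenous variable is a deterministic function of its parents and an exogenous noise term; a DSCM replaces these hand-coded functions with learned neural networks. The `counterfactual_y` function sketches Pearl’s three-step counterfactual procedure (abduction, action, prediction):

```python
# Minimal hand-written SCM (illustrative; the graph Z -> X, Z -> Y, X -> Y
# and all coefficients are invented). A DSCM would instead learn the
# structural functions and noise distributions from data.
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n):
    """Draw n samples from the toy SCM."""
    u_z, u_x, u_y = rng.normal(size=(3, n))  # exogenous noise terms
    z = u_z                                  # Z := U_Z
    x = 0.8 * z + u_x                        # X := f_X(Z, U_X)
    y = 1.2 * x - 0.5 * z + u_y              # Y := f_Y(X, Z, U_Y)
    return z, x, y

def counterfactual_y(z, x, y, x_cf):
    """Pearl's three counterfactual steps on the toy SCM."""
    u_y = y - (1.2 * x - 0.5 * z)       # abduction: recover the noise from the observation
    return 1.2 * x_cf - 0.5 * z + u_y   # action + prediction under X := x_cf
```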

Second, we analyzed each of the existing DSCMs individually, examining its hypotheses, guarantees, evaluation, and applications in order to provide practitioners with a complete and detailed comparison. The full comparative work is available in the original version of the paper. The three key findings of this analysis are:

  1. Although all the methods can theoretically answer counterfactual queries, only eight (out of seventeen) have been implemented in practice to do so.
  2. All the methods come with counterfactual identification guarantees, but these hold only under strong assumptions that most real-world use cases cannot validate. In such situations, one can use the NeuralID algorithm to test for identification automatically, which constitutes a great opportunity for practitioners.
  3. The experimental evaluations of these methods are highly heterogeneous and, therefore, incomparable, and no rigorous benchmark exists. As a result, we recommend that practitioners carry out their own comparisons using simulated data similar to the real data they are interested in (a sketch of such a comparison follows this list). We also suggest assessing the robustness of the methods to the violation of selected assumptions (the choice of assumptions can differ in each use case).
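As a sketch of what such a comparison could look like (all variable names are invented, and `estimate_cf` is a placeholder for any trained DSCM under evaluation), one can use a known simulated SCM as ground truth and measure the error of a method’s counterfactual predictions:

```python
# Illustrative benchmark on simulated data. The simulator plays the role of
# ground truth; `estimate_cf` stands in for the method under evaluation.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Ground-truth simulator: toy SCM with Z -> X, Z -> Y, X -> Y
u_z, u_x, u_y = rng.normal(size=(3, n))
z = u_z
x = 0.8 * z + u_x
y = 1.2 * x - 0.5 * z + u_y

# True counterfactual "what if X had been x + 1", keeping the same noise
x_cf = x + 1.0
y_cf_true = 1.2 * x_cf - 0.5 * z + u_y

def estimate_cf(z, x, y, x_cf):
    # Placeholder: a real benchmark would query a trained DSCM here.
    return 1.2 * x_cf - 0.5 * z

rmse = np.sqrt(np.mean((estimate_cf(z, x, y, x_cf) - y_cf_true) ** 2))
print(f"counterfactual RMSE: {rmse:.3f}")
```

Because the simulator is known, the same setup also lets one perturb the data-generating process (for example, by adding hidden confounding) to probe a method’s robustness to assumption violations.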


Finally, from an ethical standpoint, this work aims to prevent malicious or naive use of DSCM methods by being transparent about their limitations and the plausibility of their assumptions. We recommend that practitioners involve domain experts in their projects. More precisely, we strongly advise against using these methods to draw causal conclusions without validation from qualified experts: under different assumptions, causal estimates can change or even cease to be identifiable, leading to different decisions.

At Ekimetrics, a large part of our business applications, such as Marketing Mix Modelling or Customer Relationship Management, aims to answer causal questions (“How effective is an advertising campaign at building the brand?” or “Would this customer have been retained if they had received a discount?”). Causal inference theory is therefore a central part of our statistical toolkit, which is why we have embraced the Causal Revolution. By incorporating these new state-of-the-art tools into our solutions, we enhance and refine the analysis of our customers’ data to provide more detailed and actionable insights.


Link to the original paper: Learning Structural Causal Models through Deep Generative Models: Methods, guarantees, and challenges
