Date: Aug 3, 2021
Time: 4:00 pm IST
Efficient Causal Inference in High Dimensions
In this talk, I'll discuss several fundamental problems from causal inference in high dimensions. I'll start by motivating a general probabilistic framework for formally studying these problems. Any algorithm in this framework must not only use as few samples as possible, but also run efficiently as the number of dimensions grows. We'll then look at three broad problems in this framework.
The first problem we'll look at is that of learning causal effects in high dimensions. Such effects are formalized using Bayesian networks, which are widely used in computer science. I'll describe the central problem of inferring causal effects from observational data before going into the details of our contributions. I'll end this section with some future directions.
Next, we'll look at the important problem of structure learning in high-dimensional Bayesian networks. I'll introduce the concept of Markov equivalence in Bayes nets and discuss the current state of algorithms for structure learning and their inadequacies. I'll then talk about our recent work in structure learning, before ending with a few open problems.
Finally, I'll discuss the problem of learning Bayes nets on a given structure. I'll end with our work on this problem in both discrete and continuous settings.