About Me

I am a Ph.D. student at the University of Southern California advised by Prof. Greg Ver Steeg. Broadly, I am interested in unsupervised learning, representation learning, and using machine learning for scientific advancement. More recently, I have been working on fairness (1, 2, 3) and privacy (4, 5) problems in machine learning and federated learning (6).

Before that, I worked on finding optimal strategies for Stackelberg security games with deep reinforcement learning and on mitigating catastrophic forgetting in neural networks. Before coming to USC, I spent two years at Visa Inc., Bangalore, as a senior software developer, and five wonderful years at IIT Delhi, graduating with a Dual Degree (B.Tech. and M.Tech.) in Electrical Engineering.

News

  • [May 17, 2021] Starting a (virtual) internship at Amazon with the Trustworthy Alexa-AI group.
  • [Mar 31, 2021] Paper on membership inference attacks on neuroimaging architectures accepted to MIDL 2021.
  • [Feb 4, 2021] Our AAAI-21 work was featured in USC Viterbi School News.
  • [Jan 8, 2021] Our work on improving brain age estimation was accepted to the International Symposium on Biomedical Imaging (ISBI) 2021.
  • [Dec 1, 2020] Our work on controlling fairness with contrastive mutual information estimators was accepted to AAAI 2021. Watch the short explainer video on YouTube.

Selected Papers

Membership Inference Attacks on Deep Regression Models for Neuroimaging
Umang Gupta, Dimitris Stripelis, Pradeep K. Lam, Paul Thompson, Jose Luis Ambite, Greg Ver Steeg

We illustrate that allowing access to model parameters may leak private information about the training dataset. In particular, we show that it is possible to infer whether a sample was part of the training set given only the model's predictions (black-box) or the model itself (white-box), along with some leaked samples from the training data distribution. Such attacks are commonly referred to as membership inference attacks. We demonstrate realistic membership inference attacks on deep learning models trained for brain age prediction in both centralized and decentralized (federated) setups. We further observe a strong correlation between attack success and overfitting, an indication that reducing overfitting may help preserve privacy and prevent these attacks.
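As a toy illustration of the black-box setting (not the attack used in the paper), a simple loss-threshold attack exploits the overfitting correlation noted above: an overfit model tends to have lower loss on training members than on unseen samples. The loss distributions below are hypothetical numbers chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample losses: an overfit model yields lower loss on
# training members than on unseen non-members (illustrative numbers only).
member_losses = rng.normal(loc=0.2, scale=0.1, size=1000)
nonmember_losses = rng.normal(loc=1.0, scale=0.4, size=1000)

def infer_membership(loss, threshold=0.5):
    """Predict 'member' when the model's loss on the sample is small."""
    return loss < threshold

# Attack accuracy, averaged over both populations.
tp = infer_membership(member_losses).mean()        # true positive rate
tn = (~infer_membership(nonmember_losses)).mean()  # true negative rate
accuracy = 0.5 * (float(tp) + float(tn))
print(round(accuracy, 2))
```

The larger the gap between member and non-member losses (i.e., the more the model overfits), the higher this attack's accuracy; regularizing the model shrinks the gap and degrades the attack.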

Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation
Umang Gupta, Aaron Ferber, Bistra Dilkina, Greg Ver Steeg

We propose a new representation learning algorithm to control the statistical parity of any downstream decision algorithm. We theoretically establish that limiting the mutual information between representations and protected attributes provably limits the statistical parity of any classifier trained on those representations. We demonstrate an effective method for controlling fairness via contrastive mutual information estimators and show that they outperform approaches that rely on variational bounds based on complex generative models.
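A toy numerical illustration of the guarantee (not the paper's estimator or bound): when a representation carries information about the protected attribute, a classifier on it can violate parity, whereas a representation independent of the attribute forces any threshold classifier toward zero parity gap. All distributions here are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a = rng.integers(0, 2, size=n)  # protected attribute (binary)

def parity_gap(z, a, threshold=0.0):
    """Statistical parity gap of the classifier 1[z > threshold]."""
    pred = z > threshold
    return abs(pred[a == 1].mean() - pred[a == 0].mean())

# Representation that leaks the attribute: its mean shifts with a.
z_leaky = rng.normal(loc=a.astype(float), scale=1.0)
# Representation independent of the attribute: I(Z; A) = 0.
z_fair = rng.normal(loc=0.0, scale=1.0, size=n)

print(round(float(parity_gap(z_leaky, a)), 2))  # large gap
print(round(float(parity_gap(z_fair, a)), 2))   # near zero
```

The paper's contribution is making this trade-off controllable: by bounding the (estimated) mutual information between Z and A during training, one can dial the achievable parity gap of downstream classifiers.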

Improved Brain Age Estimation with Slice-based Set Networks
Umang Gupta, Pradeep K. Lam, Greg Ver Steeg, Paul M. Thompson

We propose a new architecture for brain age (BrainAGE) prediction, which encodes each 2D slice of an MRI with a deep 2D-CNN model and combines the information from these slice encodings using set networks, i.e., permutation-invariant layers. Experiments on the brain age prediction problem using the UK Biobank dataset show that the model with permutation-invariant layers trains faster and provides better predictions than other state-of-the-art approaches.
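A minimal sketch of the permutation-invariance idea, with a linear-plus-ReLU stand-in for the 2D-CNN slice encoder and mean pooling as the permutation-invariant aggregation (the actual encoder, pooling, and dimensions in the paper differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_slice(x, w):
    """Stand-in for a 2D-CNN slice encoder: a linear map + ReLU."""
    return np.maximum(x @ w, 0.0)

def predict_age(slices, w, head):
    """Encode each 2D slice, mean-pool (permutation invariant), regress."""
    encodings = np.stack([encode_slice(s.ravel(), w) for s in slices])
    pooled = encodings.mean(axis=0)  # slice order does not matter
    return float(pooled @ head)

# Toy "MRI": 10 slices of 8x8, with hypothetical random weights.
slices = [rng.normal(size=(8, 8)) for _ in range(10)]
w = rng.normal(size=(64, 16))
head = rng.normal(size=16)

y1 = predict_age(slices, w, head)
y2 = predict_age(slices[::-1], w, head)  # reversed slice order
print(abs(y1 - y2) < 1e-9)  # prediction is invariant to slice order
```

Because pooling is symmetric in its inputs, the model can process slices in any order (or even a subset), which is what lets a lightweight 2D encoder replace a full 3D-CNN.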

DeepFP for Finding Nash Equilibrium in Continuous Action Spaces
Nitin Kamra, Umang Gupta, Kai Wang, Fei Fang, Yan Liu, Milind Tambe

We propose DeepFP, an approximate extension of fictitious play to continuous action spaces. DeepFP represents players' approximate best responses with implicit density approximators and trains them in a model-based learning regime. We demonstrate stable convergence to Nash equilibrium on several classic games and in a forest security domain. DeepFP learns strategies that are robust to adversarial exploitation and scales well with players' resources.
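For intuition, here is classical (discrete) fictitious play on matching pennies, the scheme DeepFP approximates in continuous action spaces: each player best-responds to the opponent's empirical action frequencies, and those frequencies converge to the mixed Nash equilibrium (0.5, 0.5). This is a textbook sketch, not DeepFP itself.

```python
import numpy as np

# Matching pennies payoff for the row player; the column player's payoff
# is its negation. The unique Nash equilibrium is (0.5, 0.5) for both.
payoff = np.array([[1, -1],
                   [-1, 1]])

counts_row = np.ones(2)  # empirical action counts (uniform prior)
counts_col = np.ones(2)

for _ in range(20_000):
    # Each player best-responds to the opponent's empirical mixture.
    col_mix = counts_col / counts_col.sum()
    row_mix = counts_row / counts_row.sum()
    best_row = int(np.argmax(payoff @ col_mix))
    best_col = int(np.argmax(-(row_mix @ payoff)))
    counts_row[best_row] += 1
    counts_col[best_col] += 1

print(counts_row / counts_row.sum())  # approximately [0.5, 0.5]
```

DeepFP replaces the explicit count tables with learned implicit density models, which is what makes the best-response computation feasible when actions are continuous.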

Deep Generative Dual Memory Network for Continual Learning
Nitin Kamra, Umang Gupta, Yan Liu

Drawing inspiration from complementary learning systems in the human brain (the hippocampus and neocortex), we develop a dual-memory architecture capable of learning continually from sequentially arriving tasks while averting catastrophic forgetting. We perform memory consolidation via generative replay of past experiences and demonstrate improved retention on the task of learning from sequential non-i.i.d. examples.
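A minimal sketch of the generative-replay idea, with per-class Gaussians standing in for the deep generative memory (the paper uses learned generative models, not this): before training on a new task, synthetic samples of past tasks are replayed and mixed into the training batch so old classes are not forgotten.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianGenerator:
    """Toy stand-in for a deep generative memory: fits per-class
    Gaussians and replays synthetic samples of past tasks."""
    def __init__(self):
        self.stats = {}  # label -> (mean, std)

    def fit_task(self, x, y):
        for label in np.unique(y):
            xs = x[y == label]
            self.stats[int(label)] = (xs.mean(axis=0), xs.std(axis=0) + 1e-3)

    def replay(self, n_per_class):
        xs, ys = [], []
        for label, (mu, sd) in self.stats.items():
            xs.append(rng.normal(mu, sd, size=(n_per_class, mu.shape[0])))
            ys.append(np.full(n_per_class, label))
        return np.concatenate(xs), np.concatenate(ys)

# Task 1: classes 0/1. Task 2: classes 2/3. Before learning task 2,
# replayed task-1 samples are mixed in to avert catastrophic forgetting.
gen = GaussianGenerator()
x1 = np.concatenate([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y1 = np.concatenate([np.zeros(50), np.ones(50)])
gen.fit_task(x1, y1)

x2 = np.concatenate([rng.normal(6, 1, (50, 4)), rng.normal(9, 1, (50, 4))])
y2 = np.concatenate([np.full(50, 2), np.full(50, 3)])
replay_x, replay_y = gen.replay(50)
train_x = np.concatenate([x2, replay_x])  # new task + replayed past
train_y = np.concatenate([y2, replay_y])
print(sorted(set(train_y.astype(int))))   # all four classes present
```

The consolidation step in the paper then trains the long-term (neocortex-like) model on exactly this kind of mixture of fresh and replayed experiences.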