Ph.D. Proposal Oral Exam - Mohit Prabhushankar

Event Details

Monday, March 8, 2021

3:00pm - 4:59pm



Title: Contrastive Reasoning in Neural Networks


Dr. AlRegib, Advisor

Dr. Davenport, Chair

Dr. Riedl

Abstract: The objective of the proposed research is to rethink the inductive nature of reasoning in neural networks and, hence, to provide contextual explanations for a network's decisions and address its robustness. Neural networks represent data as projections onto trained weights in a high-dimensional manifold. The trained weights act as a knowledge base consisting of causal class dependencies. Inference built on features that identify dependencies within this manifold is termed inductive feed-forward inference. This is a classical cause-to-effect inference model that is widely used because of its simple mathematical form and ease of operation. Nevertheless, feed-forward models do not generalize well to untrained situations. To alleviate this generalization challenge, we propose an effect-to-cause inference model that falls under the abductive reasoning framework. Here, the features represent the change from existing weight dependencies given a certain effect. We term this change contrast and the ensuing inference mechanism contrastive inference. The proposed research tackles contrastive reasoning and inference in three phases: 1) we formalize the structure of contrastive reasons, 2) we provide a simple mechanism to extract contrastive reasons from any pre-trained neural network, and 3) we use the extracted contrastive reasons to make robust decisions. We also provide representational and explanatory insight into the contrastive reasoning scheme for applications including robustness to distortions, domain shift, image quality assessment, and human visual saliency.
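To make the "why P, rather than Q?" framing concrete, the following is a minimal sketch for a toy linear classifier, where the trained weights play the role of the knowledge base and the contrastive reason reduces to a difference of weight rows. All names, shapes, and the linear-model simplification are illustrative assumptions, not the proposal's actual mechanism for deep networks.

```python
import numpy as np

# Toy linear classifier: logits = W @ x. The trained weights W act as the
# knowledge base; ordinary feed-forward ("cause-to-effect") inference
# simply takes the argmax of the logits.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # 3 classes, 4 features (hypothetical sizes)
x = rng.normal(size=4)

logits = W @ x
predicted = int(np.argmax(logits))

def contrastive_reason(W, predicted, contrast):
    """Gradient of the logit margin (logit_P - logit_Q) w.r.t. the input:
    the change against the existing weight dependencies that answers
    'why class P, rather than class Q?'. For a linear model this is
    just the difference of the two weight rows."""
    return W[predicted] - W[contrast]

contrast = (predicted + 1) % 3           # an arbitrary contrast class Q
reason = contrastive_reason(W, predicted, contrast)

# Projecting the input onto the contrastive direction recovers the
# logit margin between the predicted class and the contrast class.
margin = reason @ x
assert np.isclose(margin, logits[predicted] - logits[contrast])
```

In a deep network the same quantity would be obtained by backpropagating a loss measured against the contrast class through the frozen, pre-trained weights, rather than by subtracting weight rows directly.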

Last revised March 8, 2021