My research was supported by the IARPA Biometric Recognition and Identification at Altitude and Range (BRIAR) program, which aims to develop advanced biometric recognition systems capable of operating in challenging real-world conditions with limited or degraded data quality. My research focused on multimodal fusion of incomplete face, body, and gait information in severe operational conditions, with contributions in the following areas:
The Principal Investigators who oversaw my research include Dr. Joshua Gleason, Dr. Jennifer Xu, Dr. Soraya Stevens, Dr. Nathan Shnidman, Dr. Mark Keck, Professor Vishal Patel, and Professor Rama Chellappa. My team at Systems & Technology Research (STR) partnered with Johns Hopkins University and the University of Texas at Dallas; I currently collaborate with the Johns Hopkins team on 3D/4D reconstruction research under the IARPA WRIVA program at the University of Maryland Institute for Advanced Computer Studies.
TransFIRA: Transfer Learning for Face Image Recognizability Assessment
IEEE FG, 2026.
Redefines template-based recognition through encoder-grounded recognizability prediction that learns directly from embedding geometry via class-center similarity and angular separation, enabling principled filtering, calibrated weighting, and cross-modal explainability that surpass prior FIQA methods in accuracy, interpretability, and generality.
Allen Tu, Kartik Narayan, Joshua Gleason, Jennifer Xu, Matthew Meyn, Tom Goldstein, Vishal Patel
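For intuition, here is a minimal sketch of how recognizability signals can be read off embedding geometry: the cosine similarity of an embedding to its own class center, and the angular gap to the nearest impostor center. The encoder outputs, class centers, and exact definitions below are placeholders I chose for illustration, not the TransFIRA implementation.

```python
# Minimal sketch (not the TransFIRA implementation): deriving recognizability
# signals from embedding geometry via class-center similarity and angular
# separation. Embeddings and class centers are random stand-ins.
import numpy as np

def recognizability_signals(embedding, class_centers, label):
    """Return (class-center cosine similarity, angular separation in radians).

    embedding:     (D,) face embedding from the recognition encoder
    class_centers: (C, D) per-identity centers in the same embedding space
    label:         index of the embedding's ground-truth identity
    """
    emb = embedding / np.linalg.norm(embedding)
    centers = class_centers / np.linalg.norm(class_centers, axis=1, keepdims=True)

    cosines = centers @ emb                       # cosine similarity to every class center
    ccs = cosines[label]                          # class-center similarity to the true identity
    nearest_other = np.max(np.delete(cosines, label))

    # Angular separation: gap between the angle to the nearest impostor center
    # and the angle to the true center (a larger gap suggests higher recognizability).
    ang_sep = np.arccos(np.clip(nearest_other, -1, 1)) - np.arccos(np.clip(ccs, -1, 1))
    return ccs, ang_sep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = rng.normal(size=(10, 512))           # 10 hypothetical identities, 512-D embeddings
    emb = centers[3] + 0.1 * rng.normal(size=512)  # a probe embedding near identity 3
    print(recognizability_signals(emb, centers, label=3))
```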
Learned Frame Feature Aggregation for Face Recognition with Low-Quality Video
Replaces heuristic frame averaging in low-quality video face recognition with a learned cluster-and-aggregate network that predicts frame reliability from visual and embedding-space attributes, filtering unreliable embeddings and fusing informative ones into a single probe template, improving identification accuracy by up to 11.54%.
Allen Tu, Joshua Gleason, Soraya Stevens, Matthew Meyn, Nathan Shnidman, Jennifer Xu
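As a rough illustration of the aggregation step only, the sketch below filters per-frame embeddings by a predicted reliability score and fuses the survivors into one normalized probe template. The reliability predictor here is a placeholder logistic model over two toy embedding-space attributes; it stands in for, and is not taken from, the learned cluster-and-aggregate network.

```python
# Minimal sketch (not the paper's network): reliability-weighted aggregation of
# per-frame face embeddings into a single probe template, with a placeholder
# logistic reliability predictor over simple embedding-space attributes.
import numpy as np

def frame_attributes(embeddings):
    """Toy per-frame attributes: embedding norm and similarity to the set mean."""
    norms = np.linalg.norm(embeddings, axis=1)
    mean = embeddings.mean(axis=0)
    mean /= np.linalg.norm(mean)
    sims = (embeddings / norms[:, None]) @ mean
    return np.stack([norms, sims], axis=1)

def aggregate_probe(embeddings, weights_w, bias_b, keep_threshold=0.5):
    """Filter unreliable frames and fuse the rest into one probe template."""
    attrs = frame_attributes(embeddings)
    reliability = 1.0 / (1.0 + np.exp(-(attrs @ weights_w + bias_b)))  # sigmoid scores in [0, 1]
    keep = reliability >= keep_threshold
    if not keep.any():                        # fall back to the single most reliable frame
        keep = reliability == reliability.max()
    w = reliability[keep] / reliability[keep].sum()
    template = (w[:, None] * embeddings[keep]).sum(axis=0)
    return template / np.linalg.norm(template)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = rng.normal(size=(30, 512))       # 30 hypothetical frame embeddings
    print(aggregate_probe(frames, weights_w=np.array([0.1, 2.0]), bias_b=-1.0).shape)
```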
Style-Based Appearance Flow for Clothing-Robust Body Representation Learning
Enables clothing-invariant whole-body recognition by generating realistic cross-subject garment transfers that preserve biometric identity, augmenting training data to improve robustness to cross-garment variation.
Allen Tu, Matthew Meyn, Mark Keck
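The sketch below illustrates only the augmentation idea: a hypothetical `garment_transfer` function stands in for the style-based appearance-flow generator, re-dressing a subject in another subject's clothing while the identity label is kept unchanged so each identity is seen in more outfits during training.

```python
# Minimal sketch (not the actual appearance-flow model): cross-subject garment
# transfer used as identity-preserving data augmentation for body
# representation learning. `garment_transfer` is a hypothetical stand-in.
import random

def garment_transfer(body_image, garment_image):
    """Hypothetical generator: re-dress `body_image` in the clothing of `garment_image`."""
    ...  # appearance-flow warping + style-based synthesis would go here
    return body_image  # placeholder: the subject's identity (and label) are preserved by design

def augment_with_garment_swaps(dataset, swap_prob=0.5):
    """Yield (image, identity_label) pairs, randomly re-dressing subjects in
    clothing borrowed from other subjects while keeping the identity label."""
    for image, label in dataset:
        if random.random() < swap_prob:
            donor_image, _ = random.choice(dataset)       # garment donor, any identity
            image = garment_transfer(image, donor_image)  # label stays the same
        yield image, label

if __name__ == "__main__":
    toy = [(f"image_{i}", i % 3) for i in range(6)]       # stand-in images and identity labels
    print(list(augment_with_garment_swaps(toy)))
```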
Huge thanks to Systems & Technology Research (STR), where I spent four years as a computer vision intern and was lucky to grow from an undergraduate into a PhD student under the mentorship of the incredible VIU group. It was through our IARPA BRIAR project that I met my long-term collaborators at the Johns Hopkins Whiting School of Engineering, who later partnered with our University of Maryland Institute for Advanced Computer Studies (UMIACS) team on IARPA WRIVA for 3D reconstruction research. The mentorship, research environment, and opportunities at STR were instrumental in shaping my research direction and enabling this line of work.
If you're interested in computer vision internships or full-time roles, I'd be happy to connect — feel free to reach out for a referral or apply via my referral link.
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via [2022-21102100005]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.