I am a thesis-based M.Sc. student in Computing Science at Simon Fraser University, supervised by Dr. Ali Mahdavi-Amiri. My research interests include computer vision, computer graphics, generative modeling, self-supervised learning, machine learning theory, and representation learning.
My current research focuses on 3D-aware hair modeling from images, and I also work part-time as a Machine Learning Engineer with the VanityAI team at MARZ. Previously, I earned my B.Sc. in Computer Engineering from Sharif University of Technology, where I worked on multimodal learning and machine learning theory.
Before SFU, I spent two years at Tapsi as a Software Engineer and Data Scientist. I was also awarded a silver medal at the 13th International Olympiad on Astronomy and Astrophysics in 2019, after earning a gold medal and ranking first in Iran's national olympiad.
@inproceedings{heidari2026hairport,
  title     = {HairPort: In-context 3D-Aware Hair Import and Transfer for Images},
  author    = {Heidari, Alireza and Alimohammadi, A. and Lira, W. Michel Pinto and Bar-Lev, A. and Mahdavi-Amiri, A.},
  booktitle = {ACM SIGGRAPH},
  year      = {2026},
  publisher = {ACM SIGGRAPH}
}
We propose a novel framework for incorporating unlabeled data into semi-supervised classification problems, covering scenarios that minimize either i) adversarially robust or ii) non-robust loss functions. Notably, we allow the unlabeled samples to deviate slightly (in the total variation sense) from the in-domain distribution. The core idea behind our framework is to combine Distributionally Robust Optimization (DRO) with self-supervised training. As a result, we can also leverage efficient polynomial-time algorithms for the training stage. From a theoretical standpoint, we apply our framework to the classification of a mixture of two Gaussians in ℝ^d, where, in addition to the m independent labeled samples from the true distribution, a set of n (usually with n ≫ m) out-of-domain and unlabeled samples is given as well. Using only the labeled data, it is known that the generalization error can be bounded by a term proportional to (d/m)^{1/2}. However, using our method on both isotropic and non-isotropic Gaussian mixture models, one can derive a new set of analytically explicit and non-asymptotic bounds that show substantial improvement in the generalization error compared to ERM. Our results underscore two significant insights: 1) out-of-domain samples, even when unlabeled, can be harnessed to narrow the generalization gap, provided that the true data distribution adheres to a form of the "cluster assumption", and 2) the semi-supervised learning paradigm can be regarded as a special case of our framework when there are no distributional shifts. We validate our claims through experiments conducted on a variety of synthetic and real-world datasets.
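A minimal sketch of the labeled-data-only baseline the abstract refers to (not the paper's DRO method): classifying a two-component Gaussian mixture in ℝ^d with m labeled samples via a plug-in mean estimator. The means, separation, and estimator here are illustrative assumptions; the point is that test error degrades as d/m grows, consistent with the (d/m)^{1/2} bound mentioned above.

```python
# Hypothetical simulation: supervised baseline for a two-Gaussian mixture
# in R^d, using only m labeled samples. Not the paper's algorithm; it only
# illustrates how the generalization gap grows with d/m.
import numpy as np

rng = np.random.default_rng(0)

def gmm_test_error(d, m, n_test=20000, sep=2.0):
    # Class means at +/- mu along the first coordinate, isotropic noise.
    mu = np.zeros(d)
    mu[0] = sep / 2
    y = rng.integers(0, 2, size=m) * 2 - 1               # labels in {-1, +1}
    X = y[:, None] * mu + rng.standard_normal((m, d))    # m labeled samples
    w = (X * y[:, None]).mean(axis=0)                    # plug-in mean estimate
    yt = rng.integers(0, 2, size=n_test) * 2 - 1
    Xt = yt[:, None] * mu + rng.standard_normal((n_test, d))
    return float(np.mean(np.sign(Xt @ w) != yt))         # test error

# Fixed m: error rises toward chance as d (hence d/m) increases.
for d in (5, 50, 500):
    print(f"d={d:4d}  test error={gmm_test_error(d, m=100):.3f}")
```

With m fixed at 100, the estimation noise in w has squared norm on the order of d/m, so the effective margin shrinks as d grows and the error climbs well above the Bayes rate, which is the regime where the paper's unlabeled out-of-domain samples help.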
@inproceedings{saberi2024outofdomain,
  title     = {Unlabeled Out-of-Domain Data Improves Generalization},
  author    = {Saberi, Amir Hossein and Najafi, Amir and Heidari, Alireza and Movasaghinia, Mohammad Hosein and Motahari, Abolfazl and Khalaj, Babak},
  booktitle = {The Twelfth International Conference on Learning Representations},
  year      = {2024},
  url       = {https://openreview.net/forum?id=Bo6GpQ3B9a},
  publisher = {International Conference on Learning Representations}
}