My current research focuses on rotational dynamics in neural networks, both natural and artificial. I am interested in using concepts and techniques from Lie theory and linear algebra to understand the geometry of neuronal transformation spaces, how these spaces can be differentiated across various classes of tasks (specifically in the context of working memory), and how they evolve over time. We hope the results of our studies will open new horizons in understanding the brain, as well as novel approaches to studying deep neural networks for AI.

Prior to Princeton, I was at the Zuckerman Mind Brain Behavior Institute, where I studied how neural networks represent the concepts they have learned. In the Qian Lab, we studied the topological and geometrical properties of such representations in recurrent neural networks of firing-rate cells, particularly in the context of visual perception and its relationship with (working) memory.

During my PhD, I worked on large-scale human-in-the-loop data analytics (HILDA). My research interests span a wide range of data management and analysis topics, from building frameworks and tools for large-scale HILDA to applying data management techniques to Big Data. As part of AnHai Doan's group, we built frameworks for the next generation of entity matching, data integration, and data cleaning tools.

Here's my CV (as of Apr 2022).

Ph.D. in Computer Sciences, University of Wisconsin-Madison (2018).
M.Sc. in Computer Sciences, University of Wisconsin-Madison (2015).
M.Sc. in Information Technology Engineering, University of Tehran, Iran (2008).
B.Sc. in Software Engineering, University of Tehran, Iran (2005).

Version: 3.1
GCS/MU d-(+)@x s+: a C++$ ULC++(+++)$
P+ L++(++++) !E !W++ !N !o K--? w++
!O M+ !V PS+ PE- Y+ !PGP !t !5 X- R
tv+ b+ DI D+ G e++++ h r y++