About Me
I am passionate about applying innovative AI technologies for positive societal impact, promoting equitable opportunities, and driving meaningful change. My main interest is in leveraging non-verbal cues to gain deeper insight into human behavior and to understand how people think and act 👀🧠. With over 7 years of experience in machine learning, I specialize in projects involving eye-gaze movement, behavioral data, time-series data, and multi-modal approaches. I focus on integrating domain knowledge with AI to develop solutions that are both innovative and impactful, ensuring they address real-world challenges effectively. My work emphasizes optimization and applicability, particularly in Human-Computer Interaction (HCI) and multimodal AI systems. By drawing on behavioral insights, I enhance interaction design, adaptive interfaces, and user-centric applications, bridging the gap between foundational AI research and practical implementation.
Education
Experience
Meta, Redmond, WA
Reality Labs - Audio team
Amazon, Seattle, WA
Computer Vision and NLP team
Early-fusion multi-modal model for product type classification
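For context, early fusion means combining features from multiple modalities (for example, a product image and its title) before a single classification head, rather than training separate per-modality classifiers and merging their outputs. The sketch below is a minimal illustration of that idea only; it is not the model built at Amazon, and the framework choice (PyTorch), module names, and dimensions are all assumptions.

```python
# Illustrative early-fusion classifier: project each modality, concatenate,
# then classify. All names, dimensions, and the PyTorch framework choice
# are assumptions for illustration, not the production model.
import torch
import torch.nn as nn


class EarlyFusionClassifier(nn.Module):
    def __init__(self, image_dim=512, text_dim=768, hidden_dim=256, num_product_types=100):
        super().__init__()
        # Project each modality into a shared space before fusing.
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Fusion happens early: the classifier sees the concatenated features.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_product_types),
        )

    def forward(self, image_emb, text_emb):
        fused = torch.cat([self.image_proj(image_emb), self.text_proj(text_emb)], dim=-1)
        return self.classifier(fused)  # logits over product types


# Example usage with random tensors standing in for encoder outputs.
model = EarlyFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 100])
```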