ACLab is situated at the convergence of computer vision and machine learning, with a particular focus on representation learning for visual perception challenges. Our applied research contributes to understanding, detecting, and predicting human and animal behaviors in natural settings by assessing their physical, physiological, and emotional states. Our algorithms efficiently encapsulate the state of the world in a low-dimensional “pose” embedding that offers a clear, interpretable summary of critical state information.

Our group specifically tackles machine learning and vision issues in scenarios where data acquisition or labeling is constrained by cost, privacy, or security concerns—commonly known as Small Data domains. To navigate these data restrictions, our approach incorporates specific domain knowledge into the learning process through generative models, simultaneously leveraging the latest in data-efficient machine learning techniques.

Challenges in Small Data domains often involve a complex array of elements with profound implications for both engineering and science. We therefore actively seek collaboration with experts across various fields, including medicine, psychology, therapy, physics, and neuroscience. These interdisciplinary partnerships have significantly shaped the direction and impact of our research.

For many of these projects, augmented reality (AR) and virtual reality (VR) tools are essential to both the assessment and enhancement components. Below, you can find some of our active research projects. Also visit The Signal Processing, Imaging, Reasoning, and Learning (SPIRAL) Group for more information about our collaborative cluster.

ACLab Funded Projects:

NSF-HCC-Small 2417448 ($250K)
“Collaborative Research: HCC: Small: Graph-Centric Exploration of Nonlinear Neural Dynamics in Visuospatial-Motor Functions during Immersive Human-Computer Interactions” (Role: PI, Date: 10/2024-9/2027)

Sony Faculty Innovative Award ($100K)
“Live Stream Temporally Embedded 3D Human Body Pose and Shape Estimation” (Role: PI, Date: 10/2023-9/2024)

NSF-PFI-RP 2234346 ($550K)
“PFI-RP: Use of Augmented Reality and Electroencephalography for Visual Unilateral Neglect Detection, Assessment and Rehabilitation in Stroke Patients” (Role: CoPI, Date: 12/2023-11/2026)

NSF-DARE 2327066 ($150K)
“Collaborative Research: Development of a precision closed loop BCI for socially fearful teens with depression and anxiety” (Role: PI, Date: 11/2023-10/2026)

NSF-CAREER 2143882 ($600K) 
“CAREER: Learning Visual Representations of Motor Function in Infants as Prodromal Signs for Autism” (Role: PI, Date: 5/2022-4/2027)

Northeastern-UMaine Seed Funding ($50K)
“Using artificial intelligence to examine the interplay between pacifier use and sudden infant death syndrome” (Role: CoPI, Date: 11/2020-10/2023)

NSF-IIS 2005957 ($190K)
“CHS: Small: Collaborative Research: A Graph-Based Data Fusion Framework Towards Guiding A Hybrid Brain-Computer Interface” (Role: PI, Date: 10/2020-09/2023)

Northeastern Tier 1 Award ($50K)  
“Novel methods to quantify the affective impact of virtual reality for motor skill learning” (Role: CoPI, Date: 07/2020-09/2021)

Northeastern GapFund360 ($50K)
“A-Eye: A Nanotechnology and AI-assisted Artificial Cone Cell Capable of Color and Spectral Recognition” (Role: CoPI, Date: 01/2020-12/2020)

ADVANCE Mutual Mentoring Grant ($3K) 
“Translating the ‘Mastermind’ Concept from Business to Academia: Facilitating Peer mentorship among female PIs leading active research labs” [Project Link] (Role: CoI, Date: 01/2020-12/2020)

NSF-NRI 1944964 ($102K)  
“NRI: EAGER: Teaching Aerial Robots to Perch Like a Bat via AI-Guided Design and Control” (Role: PI, Date: 10/2019-09/2020)

NSF-IIS 1915065 ($395K)  
“SCH: INT: Collaborative Research: Detection, Assessment and Rehabilitation of Stroke-Induced Visual Neglect Using Augmented Reality (AR) and Electroencephalography (EEG)” [Project Link] (Role: PI, Date: 09/2019-08/2023)

NSF-NCS 1835309 ($999K)  
“NCS-FO: Leveraging Deep Probabilistic Models to Understand the Neural Bases of Subjective Experience” (Role: CoPI, Date: 08/2018-07/2021)

Amazon AWS ($30K)  
“Synthetic Data Augmentation for Deep Learning” [Project Link] (Role: PI, Date: 07/2018-09/2020)

Biogen ($20K)  
“Low-Cost Kinematic Measures for In-Clinic Assessments of Neurodegenerative Diseases” (Role: PI, Date: 05/2019-12/2019)

NSF-IIS 1755695 ($169K)  
“CRII: SCH: Semi-Supervised Physics-Based Generative Model for Data Augmentation and Cross-Modality Data Reconstruction” [Project Link] (Role: PI, Date: 06/2018-06/2020)

Northeastern Tier 1 Award ($50K)  
“Decoding Situational Empathy: A Graph Theoretic Approach towards Introducing a Quantitative Empathy Measure” [Project Link] (Role: PI, Date: 07/2017-09/2018)

MathWorks Microgrant ($20K)  
“Compressive Sensing for In-Shoe Pressure Monitoring” [Project Link] (Role: PI, Date: 06/2017-06/2018)

MathWorks Microgrant ($20K)  
“Curriculum Development: Biomedical Sensors and Signals” (Role: PI, Date: 03/2016-06/2017)

NSF SBIR PHASE I-1248587 ($150K) 
“Pressure Map Analytics for Ulcer Prevention” (Role: PI, Date: 01/2013-06/2013)

QUANTIFYING INFANT REACH AND GRASP 

AUGUST 20, 2023

Instructions on Quantifying Infant Reach and Grasp
Reported by: Victoria Berry, Franklin Chen, Muskan Kumar (ACLab Students)
Mentors: Dr. Elaheh Hatami and Pooria Daneshvar
YSP/REU Summer 2023
Abstract: In the United States, approximately 7% p..

INFANT VIDEO-BASED STATE ESTIMATION 

JULY 28, 2023

If you are a USA-located parent of an infant under age one and want to participate in this data collection study (supported by an NSF grant and several industry awards), please email us at aclabdata@northeastern.edu or fill out the sign-up fo..

IEEE VIP CUP 2021: PRIVACY-PRESERVING IN-BED HUMAN POSE ESTIMATION 

MAY 6, 2021

[Sponsored by the IEEE Signal Processing Society] Motivation: Every person spends around 1/3 of their life in bed. For an infant or a young toddler, this percentage can be much higher, and for bed-bound patients it can go up to 100% of their tim..

QUANTIFYING THE NOTION OF BIAS IN FACE INFERENCE MODELS 

JANUARY 11, 2021

Overview: In this research, we present a general pipeline for interpreting the internal representations of face-centric inference models. Our pipeline is inspired by Network Dissection, a popular model interpretability pipeline that is..
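As a rough illustration of this kind of unit-level probing, here is a minimal sketch (in the spirit of Network Dissection, not our actual pipeline) that scores how strongly each internal unit of a model responds to a binary concept via a rank-based AUC; all names are hypothetical placeholders.

```python
# Minimal sketch: score unit-concept alignment with a rank-based AUC.
# "activations" would come from, e.g., forward hooks on the face model;
# "labels" are binary concept annotations. Illustrative only.
import numpy as np

def unit_concept_auc(activations, labels):
    """activations: (n_images, n_units) pooled unit responses.
    labels: (n_images,) binary concept annotations (0/1)."""
    ranks = activations.argsort(axis=0).argsort(axis=0)   # 0-based ranks per unit
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    u = ranks[pos].sum(axis=0) - n_pos * (n_pos - 1) / 2  # Mann-Whitney U
    return u / (n_pos * n_neg)                            # per-unit AUC in [0, 1]
```

Units whose AUC lies far from 0.5 respond selectively to the concept; comparing how many such units exist across, say, demographic attributes gives one quantitative handle on representational bias.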

INFANT POSE LEARNING WITH SMALL DATA 

OCTOBER 9, 2020

With the increasing maturity of the human pose estimation domain, its applications have broadened considerably. Yet the performance of state-of-the-art pose estimation models degrades significantly in applications that include novel su..
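One standard small-data tactic, sketched below in PyTorch, is to freeze a backbone pretrained on large adult pose datasets and fine-tune only a lightweight regression head on the few labeled infant images. This is an illustrative baseline under assumed sizes (17 joints, 512-d features), not necessarily the method used in this project.

```python
# Illustrative small-data baseline: frozen pretrained backbone + tiny head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()              # expose the 512-d feature vector
for p in backbone.parameters():
    p.requires_grad = False              # keep pretrained weights frozen

n_joints = 17                            # assumed keypoint count
head = nn.Linear(512, n_joints * 2)      # regress (x, y) per joint
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

def finetune_step(images, keypoints):
    """One step on a small labeled infant batch; keypoints: (B, n_joints, 2)."""
    with torch.no_grad():
        feats = backbone(images)         # (B, 512) frozen features
    loss = nn.functional.mse_loss(head(feats), keypoints.flatten(1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```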

HUMAN BODY 3D SCANNING AND VR IMPLEMENTATION MANUAL 

AUGUST 3, 2019

Instructions on Creating 3D Human Models and Virtual Reality Implementation of Them
Reported by: Sophia Franklin and Caleb Lee (ACLab Students)
Mentors: Shuangjun Liu, Naveen Sehgal, Xiaofei Huang, and Isaac McGonagle
YSP Summer 2019
Overvi..

SLP DATASET FOR MULTIMODAL IN-BED POSE ESTIMATION 

JUNE 27, 2019

This research is funded by NSF Award #1755695. Special thanks also to Amazon for the AWS Cloud Credits Award. This research is also highlighted by News@Northeastern in August 2019, by Experience Magazine in October 2019, and by Si..

AN AR-EEG HYBRID SYSTEM FOR VISUAL NEGLECT REHABILITATION 

JUNE 19, 2019

This research is funded by NSF Award #1915065, entitled “SCH: INT: Collaborative Research: Detection, Assessment and Rehabilitation of Stroke-Induced Visual Neglect Using Augmented Reality (AR) and Electroencephalography (EEG)”…

INNER SPACE PRESERVING GENERATIVE POSE MACHINE 

JULY 23, 2018

Image-based generative methods, such as generative adversarial networks (GANs), have already been able to generate realistic images with substantial context control, especially when they are conditioned. However, most successful frameworks share a comm..
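For readers unfamiliar with conditioning, the PyTorch sketch below shows the basic mechanism: the generator receives a context code concatenated with its noise vector, so the context steers what is synthesized. The architecture and dimensions are assumptions for illustration, not this work's generator.

```python
# Minimal conditional generator: context enters by concatenation with noise.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=64, cond_dim=16, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, cond):
        # Conditioning = concatenating the context code to the noise vector.
        return self.net(torch.cat([z, cond], dim=1))

# g = ConditionalGenerator()
# fake = g(torch.randn(1, 64), torch.randn(1, 16))  # (1, 4096) image vector
```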

UNDERSTANDING POSE OF THE WORLD AND 3D SCENE RECONSTRUCTION 

JANUARY 18, 2018

Our understanding of the world started with research that aimed to improve indoor navigation for first-person navigators by fusing IMU data collected from their smartphones with the vision information concurrently obtained through the ph..
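A minimal sketch of this style of IMU-vision fusion, assuming a simple complementary filter over heading: the smartphone gyro is integrated between frames, and occasional drift-free visual heading fixes are blended in. The gain, rates, and variable names are illustrative, not the project's actual estimator.

```python
# Complementary filter: gyro dead-reckoning corrected by sparse visual fixes.
# Ignores angle wrapping for brevity.
import numpy as np

def fuse_heading(gyro_z, vision_heading, dt=0.01, alpha=0.98):
    """gyro_z: angular-rate samples (rad/s); vision_heading: absolute
    heading (rad) per sample, NaN when no visual fix is available."""
    fused = np.empty(len(gyro_z))
    h = vision_heading[0] if not np.isnan(vision_heading[0]) else 0.0
    for i, (w, z) in enumerate(zip(gyro_z, vision_heading)):
        h += w * dt                          # integrate the gyro (drifts)
        if not np.isnan(z):
            h = alpha * h + (1 - alpha) * z  # vision fix pulls drift back
        fused[i] = h
    return fused
```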

IN-BED POSTURE ESTIMATION 

NOVEMBER 19, 2017

Tracking human sleeping postures over time provides critical information for biomedical research, including studies on sleeping behaviors and bedsore prevention. In this work, we introduce a vision-based tracking system for pervasive yet unob..

HUMAN POSE ESTIMATION: DEEP LEARNING WITH SMALL DATASET 

SEPTEMBER 1, 2017

Although human pose estimation for various computer vision (CV) applications has been studied extensively over the last few decades, in-bed pose estimation using camera-based vision methods has been ignored by the CV community because it is ..

HUMAN 3D FACIAL POSE ESTIMATION AND TRACKING 

FEBRUARY 9, 2017

To track the relative movements of facial landmarks from a video, we have developed a robust tracking approach in which head movement is also tracked and decoupled from the facial landmark movements. We first employed a state-of-the-art 2D f..
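A minimal sketch of the decoupling idea, assuming a rigid similarity (Procrustes) alignment stands in for the head-motion model: aligning each frame's landmarks to a reference removes the rigid head component, and the residual is the non-rigid (expression) motion.

```python
# Remove rigid head motion from 2D landmarks via Procrustes alignment.
import numpy as np

def remove_head_motion(landmarks, reference):
    """landmarks, reference: (n_points, 2) arrays of 2D landmark positions."""
    mu_l, mu_r = landmarks.mean(0), reference.mean(0)
    L, R = landmarks - mu_l, reference - mu_r
    scale = np.linalg.norm(R) / np.linalg.norm(L)
    U, _, Vt = np.linalg.svd(L.T @ R)      # optimal rotation via SVD
    rot = U @ Vt
    if np.linalg.det(rot) < 0:             # guard against reflections
        U[:, -1] *= -1
        rot = U @ Vt
    aligned = scale * (L @ rot) + mu_r     # landmarks with head motion removed
    return aligned, aligned - reference    # residual = non-rigid facial motion
```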

MULTIPLE DEFORMABLE OBJECTS TRACKING AND POSE DETECTION 

SEPTEMBER 5, 2016

The pervasive use of rodents as animal models in biological and psychological studies has generated a growing interest in developing automated laboratory apparatus for long-term monitoring of animal behaviors. Classically, the animal’s beha..

DEVELOPING DIGITAL PROSTHETICS VIA EMPLOYING AVATARS, VR, AND AR ENVIRONMENTS 

AUGUST 29, 2016

The Augmented Cognition Lab (ACLab) primarily researches digital prosthetics: a class of cognitive and neurological assistive devices that can be used for rehabilitation by Parkinson’s patients, diabetics, and individuals on..

SPARSE REPRESENTATION/RECONSTRUCTION OF PLANTAR PRESSURE IMAGE 

AUGUST 8, 2016

Foot complications constitute a tremendous challenge for diabetic patients, caregivers, and the healthcare system. The proposed sparse representation/reconstruction system can: (1) reconstruct the pressure image of the foot using a personali..
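A minimal sketch of the sparse-reconstruction step under stated assumptions: the pressure image has a sparse code x in a personalized dictionary D (y ≈ Dx), recovered here with plain iterative soft thresholding (ISTA). The dictionary, regularization weight, and iteration count are illustrative.

```python
# ISTA: solve min_x 0.5*||D x - y||^2 + lam*||x||_1 for a sparse code.
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz const. of gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - y))     # gradient step on the LS term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# A full pressure image is then re-synthesized as D @ ista(D, y), with D
# playing the role of the personalized dictionary described above.
```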