If you are a U.S.-based parent of an infant under age one and want to participate in this data collection study (supported by an NSF grant and several industry awards), please email us at: aclabdata@northeastern.edu. Participants will receive a $100 Amazon e-gift card, a commercial baby monitor, and toys.

The Augmented Cognition Lab (ACLab) and our cross-disciplinary team of collaborators are pioneering an artificial intelligence (AI)-driven proactive health system that unobtrusively tracks infant development (see our YouTube demo). By observing and analyzing everything from an infant’s pose, posture, and actions to breathing, feeding, and sleep patterns, extracted from footage recorded by affordable and readily available cameras, and by developing advanced machine learning tools to quantify multiple dimensions of infant development, we can potentially enable early detection of neurodevelopmental conditions within the first few months of life. Despite recommended pediatric check-ups, families often face barriers to regular visits, and clinical assessments miss an estimated 60-80% of developmental delays. This effort seeks to bridge these gaps by offering continuous, objective monitoring in infants’ natural environments, thereby providing a scalable solution. With an interdisciplinary team of AI, engineering, and cloud computing experts alongside clinical developmental researchers, we are uniquely equipped to accomplish this objective.

But we also need the support of other key stakeholders, particularly parents of young infants, to turn this ambitious goal into a reality.

Did you know that around 7% of U.S. children are affected by neurodevelopmental disabilities? This underscores the need for early detection and intervention to promote optimal long-term development. In addition, approximately 10,000 infants end up in the emergency room each year due to crib-related injuries, with an alarming 100 fatalities resulting from unsafe sleeping environments. We’re on a mission to significantly reduce these numbers by harnessing the power of AI.

Your participation is crucial! By supporting our efforts, you are helping create a healthier developmental trajectory for all infants. Our video-based system offers a non-invasive way to monitor your baby’s movements, and the baby monitor we provide requires no manual handling during recording. We envision our AI-driven infant monitoring technology as an extra set of eyes watching over little ones, easing parental worries and contributing to a safer and healthier infancy.

At ACLab, we have been actively working on this topic, with a strong track record of publications, publicly released code and novel datasets, and coverage in the media:

Cade Prize for Inventivity (September 2024): The winner of the 2024 Cade Prize for Inventivity Technology category is Sarah Ostadabbas from Northeastern University in Boston. Congratulations Sarah!

Equalize2024 Pitch Competition (August 2024): Equalize2024 Pitch Competition and Symposium Celebrates Academic Women Innovators

TiE Boston Women Pitch Competition (August 2024): Meet the finalists for the TiE Boston Women Pitch Competition 2024!

BostInno and Boston Business Journal article (July 2024): From childcare to urban planning, these startups take AI to new areas

C10 Labs (June 2024): Introducing C10 Labs’ “AI-First” Summer Cohort ’24!

The Cade Museum Inventivity Podcast (April 2024): The Future of Baby Monitoring with Sarah Ostadabbas

9th Annual LDV Vision Summit (April 2023): Professor Ostadabbas spoke about machine learning algorithms that detect signs of future developmental disorders in infants.

Acorn Innovation Award (January 2023): MassVentures Announces $195,000 in Seed Funding for 12 Faculty Research Projects

EAI Post (September 2022): Game-Changing Startup by EAI Faculty Member Introduces AI to Baby Monitors

Northeastern CRII (May 2022): Opening new horizons for infant health through machine learning

CNBC (May 2022): Meta is opening its first store as VR headsets inch closer to mainstream reality

EAI Post (April 2022): Sarah Ostadabbas on the Intersection of Computer Vision and Healthcare Technologies

Northeastern University and the Deloitte AI Institute (May 2021): Leading Conversations in AI, Future of AI: From Academia to Industry

Gizmodo (April 2021): Is There VR for Senses Other Than Sight?

Gizmodo (September 2020): Why Are VR Headsets So Bulky?

Northeastern COE (March 2020): Invasion of the Bias Snatchers

Experience Magazine (October 2019): ‘Cuddlebots.’ Space-age masks. How future technology will help us sleep

Computer Vision News (December 2017): Women in Computer Vision 

Publications:

1. Infant 2D Pose Estimation: “Invariant representation learning for infant pose estimation with small data” 

2. Infant 3D Pose Estimation: “Heuristic Weakly Supervised 3D Human Pose Estimation” 

3. Infant Posture Classification: “Appearance-Independent Pose-Based Posture Classification in Infants” 

4. Infant Action Recognition: “Posture-based Infant Action Recognition in the Wild with Very Limited Data” 

5. Infant Body Symmetry Estimation: “Computer vision to the rescue: Infant postural symmetry estimation from incongruent annotations”

6. Infant Face and Body Symmetry Estimation: “Automatic Assessment of Infant Face and Upper-Body Symmetry as Early Signs of Torticollis” 

7. Infant Face Pose Estimation: “InfAnFace: Bridging the infant–adult domain gap in facial landmark estimation in the wild”

8. Infant Sucking Pattern Action Recognition: “A Video-based End-to-end Pipeline for Non-nutritive Sucking Action Recognition and Segmentation in Young Infants” 

9. Infant Respiratory Pattern Recognition: “Automatic Infant Respiration Estimation from Video: A Deep Flow-based Algorithm and a Novel Public Benchmark” 

These papers are also accompanied by several open-source datasets including:

  1. Infant Action (InfAct)—A dataset consisting of 200 fully annotated home videos representing a wide range of common infant actions, intended as a public benchmark. 
  2. Infant Annotated Faces (InfAnFace)—A dataset consisting of 410 images of infant faces with labels for 68 facial landmark locations and various pose attributes.
  3. Synthetic and Real Infant Pose (SyRIP)—An infant 2D/3D body pose dataset with diverse, fully annotated real infant images and generated synthetic infant images (see the annotation-loading sketch after this list).
  4. Simultaneously-collected multimodal Lying Pose (SLP)—The first-ever large-scale dataset on in-bed poses in four imaging modalities.
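
For readers who want to explore these datasets programmatically, below is a minimal sketch of how one might inspect a COCO-style keypoint annotation file, the JSON layout commonly used for 2D pose datasets such as SyRIP. The file path and schema details here are illustrative assumptions only; please consult each dataset’s own documentation (and the license below) for its actual format.

```python
# Minimal sketch for inspecting a COCO-style keypoint annotation file.
# The path and exact schema are illustrative assumptions; check the
# dataset's own documentation for its actual layout.
import json

with open("annotations.json") as f:  # hypothetical annotation file path
    coco = json.load(f)

print(f"{len(coco['images'])} images, {len(coco['annotations'])} annotations")

# COCO keypoints are stored as flat [x1, y1, v1, x2, y2, v2, ...] triplets,
# where visibility v is 0 (unlabeled), 1 (labeled but occluded), or 2 (visible).
kps = coco["annotations"][0]["keypoints"]
points = [(kps[i], kps[i + 1], kps[i + 2]) for i in range(0, len(kps), 3)]
visible = sum(1 for _, _, v in points if v == 2)
print(f"first annotation: {len(points)} keypoints, {visible} visible")
```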

Please be sure to follow the license terms of these datasets: “By downloading or using any of the datasets provided by the ACLab, you are agreeing to the “Non-commercial Purposes” condition. “Non-commercial Purposes” means research, teaching, scientific publication and personal experimentation. Non-commercial Purposes include use of the Dataset to perform benchmarking for purposes of academic or applied research publication. Non-commercial Purposes does not include purposes primarily intended for or directed towards commercial advantage or monetary compensation, or purposes intended for or directed towards litigation, licensing, or enforcement, even in part. These datasets are provided as-is, are experimental in nature, and not intended for use by, with, or for the diagnosis of human subjects for incorporation into a product.”