Using computer vision to enforce sleeping pose with the Jetson Nano and OpenCV

(special thanks to Eon-kun for helping demonstrate what it looks like)

Imagine you HAVE to sleep on your back for some reason, and possibly need to restrict neck movement during the night too. Here are some options:

  • Tennis balls strapped to your sides
  • Placing an iPhone on your chest or in a pocket and using an app (SomnoPose) that monitors position with the accelerometer and beeps when it detects angle changes. (It works okay, but the app is old and has some quirks, like not running in the background.)
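The accelerometer trick an app like SomnoPose relies on boils down to a little trigonometry: compute the tilt angle from the gravity vector and complain when it drifts past a threshold. Here's a rough sketch of that idea — the function names, axis convention (z-axis pointing out of the phone's face), and 45° threshold are my assumptions, not anything from the actual app:

```python
import math

def tilt_angle_deg(ax, ay, az):
    """Angle between the device's z-axis and gravity, in degrees.
    Lying flat on your back with the phone face-up on your chest
    gives roughly 0 degrees."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

def rolled_over(ax, ay, az, threshold_deg=45.0):
    """True when the tilt exceeds the threshold, i.e. you have
    likely turned onto your side."""
    return tilt_angle_deg(ax, ay, az) > threshold_deg

# Flat on your back: gravity is all on the z-axis.
print(rolled_over(0.0, 0.0, 1.0))   # False
# On your side: gravity shifts onto the x-axis.
print(rolled_over(1.0, 0.0, 0.05))  # True
```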

The above methods are missing something though – they don’t detect head rotation. If you look at the wall instead of the ceiling without moving your body, they have no idea.

The tiny $99 Jetson Nano computer combined with a low-light USB camera can solve this problem in under 100 lines of Python code! (A Raspberry Pi would work too.)

The open source software OpenCV is used to process the camera images. When the program can’t detect a face, it plays an annoying sound until it can, forcing you to wake up and move back into the correct position so you can enjoy sweet silence.
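The timing logic behind that behavior comes down to two comparisons: only start nagging after the face has been gone for a while, and rate-limit the nagging so it isn't a continuous blare. Here's that rule as a tiny, hardware-free function (the names and defaults are mine, pulled from the same constants used in the full listing further down):

```python
def should_warn(now, last_face_time, last_warning_time,
                grace_period=22.0, warn_interval=10.0):
    """True when the face has been missing longer than grace_period
    seconds AND at least warn_interval seconds have passed since the
    last warning sound."""
    return (now - last_face_time > grace_period
            and now - last_warning_time > warn_interval)

print(should_warn(now=100.0, last_face_time=90.0, last_warning_time=0.0))   # False: only 10s without a face
print(should_warn(now=100.0, last_face_time=70.0, last_warning_time=0.0))   # True: 30s without a face
print(should_warn(now=100.0, last_face_time=70.0, last_warning_time=95.0))  # False: already warned 5s ago
```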

If you’re interested in playing with stuff like this, I recommend Paul McWhorter’s “AI on the Jetson Nano” tutorial series; the code below can be used with it.

I’m really excited about the potential of DIY electronics projects like this to solve real-life problems.

The Pi and Nano have GPIO pins, so instead of playing a sound we could just as easily activate a motor, flip a power switch, whatever.
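As a sketch of that idea, the warning could drive a GPIO pin instead of playing a sound. This assumes the Jetson.GPIO library (it mirrors the RPi.GPIO API, so the same code works on a Pi with the import swapped) and a buzzer or relay wired to board pin 12 – both assumptions, not part of the project above. It falls back to a print on machines without GPIO so you can test it at a desk:

```python
import time

try:
    import Jetson.GPIO as GPIO  # on a Raspberry Pi: import RPi.GPIO as GPIO
    HAVE_GPIO = True
except ImportError:
    HAVE_GPIO = False  # lets the sketch run on a desktop for testing

WARN_PIN = 12  # assumed board pin wired to a buzzer or relay

def setup_warning_pin():
    """Configure the warning pin as an output, starting low (off)."""
    if HAVE_GPIO:
        GPIO.setmode(GPIO.BOARD)
        GPIO.setup(WARN_PIN, GPIO.OUT, initial=GPIO.LOW)

def pulse_warning(seconds=1.0):
    """Drop-in replacement for the sound in PlayWarningIfNeeded():
    pulse the pin high for a moment, then back low."""
    if HAVE_GPIO:
        GPIO.output(WARN_PIN, GPIO.HIGH)
        time.sleep(seconds)
        GPIO.output(WARN_PIN, GPIO.LOW)
    else:
        print("WARNING! (would pulse pin {} for {}s)".format(WARN_PIN, seconds))
```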

Of course, instead of just tracking faces, it’s also possible to look for eyes, colors, shapes, or cars – anything really.
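Swapping targets is mostly a matter of swapping the cascade file (OpenCV ships others, like haarcascade_eye.xml) or switching to color thresholding with cv2.inRange. To show what thresholding actually computes without needing a camera or even OpenCV installed, here is a dependency-free toy version of cv2.inRange working on a nested-list “frame” – an illustration of the concept, not the real implementation:

```python
def in_range(frame, lower, upper):
    """Toy version of cv2.inRange: 255 where the BGR pixel falls inside
    [lower, upper] on every channel, 0 everywhere else."""
    return [[255 if all(lo <= ch <= hi for ch, lo, hi in zip(px, lower, upper)) else 0
             for px in row] for row in frame]

# A 2x2 "frame" of BGR pixels: one bright green pixel among non-green ones.
frame = [[(10, 200, 10), (0, 0, 0)],
         [(30, 30, 30), (255, 255, 255)]]
mask = in_range(frame, lower=(0, 150, 0), upper=(100, 255, 100))
print(mask)  # [[255, 0], [0, 0]]
```

In the real loop you would call cv2.inRange(frame, lower, upper) on the numpy frame and hunt for white blobs in the mask instead of faces.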

The Python code listing for forcing you to sleep on your back

import cv2
import time
import os

dispW=1024
dispH=768
timeAllowedWithNoFaceDetectBeforeWarning = 22
timeBetweenWarningsSeconds = 10

timeOfLastFaceDetect = time.time()
timeSinceLastDetect = 0.0  # elapsed seconds, updated every frame
timeOfLastWarning = time.time()
warningCount = 0

def PlayWarningIfNeeded():
    global timeOfLastWarning
    global warningCount

    # Rate-limit the warnings so the sound doesn't play continuously
    if time.time() - timeOfLastWarning > timeBetweenWarningsSeconds:
        print("WARNING!")
        warningCount += 1
        os.system("gst-launch-1.0 -v filesrc location=/home/nano/win.wav ! wavparse ! audioconvert ! audioresample ! pulsesink")
        timeOfLastWarning = time.time()


bCloseProgram = False

cv2.namedWindow('nanoCam')
cv2.moveWindow('nanoCam', 0,0)
cam = cv2.VideoCapture("/dev/video0")

cam.set(cv2.CAP_PROP_FRAME_WIDTH,int(dispW))
cam.set(cv2.CAP_PROP_FRAME_HEIGHT,int(dispH))
cam.set(cv2.CAP_PROP_FPS, int(10))
face_cascade = cv2.CascadeClassifier('/home/nano/Desktop/Python/haarcascades/haarcascade_frontalface_default.xml')
fnt = cv2.FONT_HERSHEY_DUPLEX

while True:

    ret, frame = cam.read()
    if not ret:  # camera hiccup: skip this frame instead of crashing on None
        continue
    frame = cv2.flip(frame, 0) #vertical flip

    #rotate 90 degrees
    #frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    gray=cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 4)
        timeOfLastFaceDetect = time.time()
   
    timeSinceLastDetect = time.time()-timeOfLastFaceDetect
   
    if timeSinceLastDetect > timeAllowedWithNoFaceDetectBeforeWarning:
        PlayWarningIfNeeded()
        
    text = "Seconds since face: {:.1f} ".format(timeSinceLastDetect)
    frame = cv2.putText(frame, text, (10,dispH-65),fnt, 1.5,(0,0,255), 2)

    text = "Warnings: {} ".format(warningCount)
    frame = cv2.putText(frame, text, (10,dispH-120),fnt, 1.5,(0,255,255), 2)

    cv2.imshow('nanoCam',frame)
    if cv2.waitKey(10) == ord('q') or cv2.getWindowProperty('nanoCam', cv2.WND_PROP_AUTOSIZE) < 1:
        bCloseProgram = True
    
    if (bCloseProgram):
        break

cam.release()
cv2.destroyAllWindows()