Hanger @ Athens Science Festival
Technopolis, City of Athens
April 24-29, 2018

HanGeR - Hand Gesture Recognition

The Hand Gesture Recognition application is a wearable application that captures data from smartwatches equipped with 3-axis accelerometers and gyroscopes while the wearer performs specific hand gestures. The collected data form a dataset for training a Deep Learning model that supports hand gesture recognition. The application is available in two versions: one for Apple Watch and one for Android Wear devices.

Capturing data is currently available for six gestures:


  1.     “Eating”
    The user is asked to use a fork, providing accelerometer and gyroscope measurements for detecting eating.
  2.     “Reach for wallet”
    The user is asked to reach for their wallet in their back pocket, providing accelerometer and gyroscope measurements for detecting this action.
  3.     “Reach for wallet (fake action)”
    The user is asked to make a specific movement (tapping twice on their back pocket) before reaching for their wallet, providing accelerometer and gyroscope measurements for detecting this distress-signal movement, which could indicate a robbery.
  4.     “Hands up”
    The user is asked to put their hands up, providing accelerometer and gyroscope measurements for detecting a possible life-threatening situation.
  5.     “Distress signal”
    The user is asked to shake their hand three times while pointing down, providing accelerometer and gyroscope measurements for detecting a stressful or dangerous situation.
  6.     “Wave”
    The user is asked to wave, as if hailing someone with their hand up, providing accelerometer and gyroscope measurements for detecting waving.

The applications collect data continuously, while recording of the above gestures can be triggered from the user’s mobile phone.
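The labeled recordings described above can be serialized into a simple tabular dataset for later training. The sketch below is illustrative only: the field names, gesture identifiers, and CSV layout are assumptions, not taken from the actual applications.

```python
import csv
import io

# Illustrative gesture identifiers for the six gestures listed above
# (the actual label encoding used by the apps is not specified).
GESTURES = [
    "eating", "reach_for_wallet", "reach_for_wallet_fake",
    "hands_up", "distress_signal", "wave",
]

# One labeled sample: a timestamp, 3-axis accelerometer readings,
# 3-axis gyroscope readings, and the gesture being performed.
FIELDS = ["timestamp", "acc_x", "acc_y", "acc_z",
          "gyro_x", "gyro_y", "gyro_z", "gesture"]

def write_samples(samples, fileobj):
    """Dump labeled motion samples to CSV to build a training dataset."""
    writer = csv.DictWriter(fileobj, fieldnames=FIELDS)
    writer.writeheader()
    for s in samples:
        writer.writerow(s)

# Example: one fabricated sample for the "wave" gesture.
sample = {"timestamp": 0.0, "acc_x": 0.1, "acc_y": 9.8, "acc_z": 0.0,
          "gyro_x": 0.5, "gyro_y": 0.0, "gyro_z": -0.2, "gesture": "wave"}
buf = io.StringIO()
write_samples([sample], buf)
```

In practice each continuous recording session would append many such rows per second, tagged with whichever gesture the phone triggered.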


In particular, the captured motion data are streamed to the tethered smartphone and fed to the trained Deep Learning model deployed there, which classifies them in real time into the six categories above.
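On-device classification of a continuous sensor stream is typically done over fixed-size windows of samples. The sketch below shows that windowing logic only; the window length is an assumption, and the classifier is a stand-in stub, since the actual trained network is not described in the text.

```python
from collections import deque

GESTURES = ["eating", "reach_for_wallet", "reach_for_wallet_fake",
            "hands_up", "distress_signal", "wave"]

WINDOW = 50  # samples per classification window (assumed, e.g. ~1 s at 50 Hz)

def classify_window(window):
    """Stand-in for the trained Deep Learning model on the phone.
    A real implementation would run a neural network over the 6-channel
    window; here a fixed label is returned so the sketch is runnable."""
    return GESTURES[5]  # placeholder: always "wave"

def stream_classify(samples):
    """Slide a fixed-size window over the incoming 6-axis sensor stream
    and emit one gesture label each time the window fills."""
    buf = deque(maxlen=WINDOW)
    labels = []
    for s in samples:
        buf.append(s)
        if len(buf) == WINDOW:
            labels.append(classify_window(list(buf)))
            buf.clear()  # non-overlapping windows, for simplicity
    return labels

# 100 dummy 6-axis samples fill two windows, yielding two labels.
stream = [(0.0,) * 6] * 100
print(stream_classify(stream))
```

A production version might use overlapping windows and debounce repeated labels before acting on a detected gesture.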


  • Development status: Under development
  • Project Responsible: (two contact email addresses, obfuscated on the source page)
