Model 4: Static Gesture Classification
This final model takes the embedding and classifies it into a known, pre-defined gesture.
- Model: `canned_gesture_classifier.xml`
- Purpose: To classify the gesture embedding from Stage 3, identifying which of several pre-trained static gestures (such as "Fist" or "Open Palm") is being performed.
- Input: `[1, 128]` - the gesture embedding from Stage 3.
- Output: `[1, 8]` - a vector of scores, one for each of the 8 possible gestures.
- Key Functions & Logic:
  - Inference & Classification (`classify_gesture` in `hand_landmark.py`): The embedding is fed into the `canned_gesture_classifier` model. The function applies a sigmoid activation to the raw output scores to obtain per-gesture probabilities, then selects the gesture with the highest probability. If that confidence exceeds a threshold, the gesture name (e.g., `"Closed_Fist"`) is stored in the `HandRegion` object.
- Custom Mapping: Users can choose which gesture triggers which action, offering flexibility and control.
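The sigmoid-then-threshold step above can be sketched as follows. This is a minimal illustration, not the project's actual `classify_gesture` implementation; the gesture label list and the default threshold value are assumptions for the example.

```python
import numpy as np

# Hypothetical 8-class label set; the real model defines its own labels.
GESTURE_NAMES = [
    "None", "Closed_Fist", "Open_Palm", "Pointing_Up",
    "Thumb_Down", "Thumb_Up", "Victory", "ILoveYou",
]

def classify_gesture(raw_scores, threshold=0.5):
    """Map a raw [1, 8] classifier output to a gesture name.

    Applies a sigmoid to the raw scores, picks the highest-scoring
    class, and returns its name only if the score clears `threshold`.
    Returns (name_or_None, best_score).
    """
    scores = 1.0 / (1.0 + np.exp(-np.asarray(raw_scores, dtype=float).reshape(-1)))
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return GESTURE_NAMES[best], float(scores[best])
    return None, float(scores[best])
```

A call like `classify_gesture(model_output, threshold=0.6)` would then yield the gesture name to store on the `HandRegion` object, or `None` when no gesture is confident enough.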
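The custom mapping mentioned above can be as simple as a dictionary from gesture name to a user-chosen callback. The names and actions below are illustrative only; the source does not specify how the mapping is stored.

```python
# Hypothetical gesture-to-action table; users rebind entries freely.
GESTURE_ACTIONS = {
    "Closed_Fist": lambda: print("pause playback"),
    "Open_Palm": lambda: print("resume playback"),
}

def dispatch(gesture_name):
    """Run the action bound to `gesture_name`; return True if one fired."""
    action = GESTURE_ACTIONS.get(gesture_name)
    if action is None:
        return False
    action()
    return True
```

Keeping the mapping in plain data like this is what makes the triggers user-configurable: swapping an action means replacing a dictionary entry, not changing the classification code.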