Gunes -

1. Detect the Face

The first step is to identify the face within a video frame. Researchers like Gunes often use standard detection techniques to isolate the facial area, ensuring that background noise does not interfere with feature extraction.

2. Compute Optical Flow

To capture movement over time, optical flow is calculated between two consecutive frames. This process determines the magnitude and direction of pixel movement within the detected facial region.

3. Extract the Angle Feature

For specific gestures like head nods or shakes, the angle of the optical-flow vectors is considered the distinguishing feature. This "angle feature" represents the trajectory of the head's movement in 2D space.

4. Normalize and Segment

Movement is segmented into temporal phases (e.g., neutral → onset → apex → offset). This ensures the machine can recognize when a gesture starts, peaks in intensity, and ends.

5. Fuse Multi-modal Data
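The flow-angle and segmentation ideas above can be sketched in a few lines of numpy. This is an illustrative toy, not Gunes's actual implementation: in practice the dense flow field would come from an optical-flow routine such as OpenCV's `calcOpticalFlowFarneback`, and the threshold and third-based phase split here are arbitrary choices.

```python
import numpy as np

def dominant_angle(flow):
    """Mean 2D motion direction (radians) of a dense optical-flow field
    of shape (H, W, 2), where flow[..., 0] = dx and flow[..., 1] = dy."""
    return np.arctan2(flow[..., 1].mean(), flow[..., 0].mean())

def classify_gesture(angles):
    """Label a sequence of per-frame angles as a 'nod' (predominantly
    vertical trajectory) or a 'shake' (predominantly horizontal one)."""
    angles = np.asarray(angles)
    vertical = np.abs(np.sin(angles)).mean()
    horizontal = np.abs(np.cos(angles)).mean()
    return "nod" if vertical > horizontal else "shake"

def segment_phases(magnitudes, thresh=0.5):
    """Rough temporal segmentation by motion energy: frames below
    `thresh` are 'neutral'; the above-threshold run is split into
    onset / apex / offset thirds."""
    active = np.flatnonzero(np.asarray(magnitudes) >= thresh)
    labels = ["neutral"] * len(magnitudes)
    if active.size:
        start, end = active[0], active[-1] + 1
        third = max(1, (end - start) // 3)
        for i in range(start, end):
            if i < start + third:
                labels[i] = "onset"
            elif i >= end - third:
                labels[i] = "offset"
            else:
                labels[i] = "apex"
    return labels
```

For example, a flow field that alternates between uniform downward and upward image motion produces angles near ±π/2, which `classify_gesture` labels a nod.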

Gunes’s work often emphasizes multi-modal fusion, where facial features are combined with other modalities, such as body gestures or audio markers (e.g., MFCCs), to improve the accuracy of emotion recognition.
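Fusion can happen at the feature level (stacking per-frame descriptors from each modality before classification) or at the decision level (combining per-modality classifier scores). A minimal sketch of both, with all names and dimensions illustrative rather than taken from Gunes's papers:

```python
import numpy as np

def fuse_features(head_angles, body_feats, mfcc):
    """Feature-level fusion: concatenate one frame's descriptors from
    each modality into a single vector for a downstream classifier."""
    return np.concatenate([head_angles, body_feats, mfcc])

def fuse_decisions(probs_per_modality, weights=None):
    """Decision-level fusion: (weighted) average of per-modality class
    probabilities; each row is one modality's probability distribution."""
    probs = np.asarray(probs_per_modality, dtype=float)
    return np.average(probs, axis=0, weights=weights)
```

For instance, fusing a 4-dimensional head-angle descriptor, a 6-dimensional body descriptor, and 13 MFCCs (a common audio choice) yields a single 23-dimensional feature vector.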