Sensor Based (American) Sign Language Recognition
@kaggle.mouadfiali_sensor_based_american_sign_language_recognition
This dataset contains detailed sensor data from smart gloves designed to translate sign language into spoken language. Each glove is equipped with 5 flex sensors and a "Grove - 6-Axis Accelerometer & Gyroscope". The dataset consists of several files, each recorded by a different individual, covering a variety of hand movements and signs. Key features include:
- Flex-Left/Right-p-Frame-n: Measures the degree of flexion for each finger over 20 frames, where 'n' indicates the frame and 'p' the finger number.
- Acceleration-X/Y/Z-Left/Right-Frame-n: Captures the hand's acceleration in 3D space across 20 frames.
- Orientation-X/Y/Z-Left/Right-Frame-n: Details the hand's orientation in 3D space for each frame.
For each hand there are 3 types of measurements (Flex, Acceleration, Orientation): 3 axes (X, Y, Z) each for Acceleration and Orientation, plus 5 Flex measurements (one per finger), giving 11 measurements per hand per frame. Across 20 frames and both hands, this amounts to 440 columns (11 × 20 × 2). Additionally, there is a "SIGN" column, the target label identifying the specific sign language gesture for each row.
Each recording in the dataset represents a sequence of 20 frames, capturing detailed motion for both left and right hands.
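Because each recording is stored as a single flat row of 440 sensor columns, a common first step is to reshape it back into a (frames, features) sequence for sequence models. The sketch below is a minimal example assuming a hypothetical file name ("person_1.csv") and that the column names follow the patterns listed above with frames numbered 1 through 20; check both against the actual files before use.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("person_1.csv")  # hypothetical file name; one file per individual

N_FRAMES = 20
FEATURES_PER_FRAME = 22  # (5 flex + 3 acceleration + 3 orientation) x 2 hands

def frame_columns(n):
    """Collect the 22 sensor columns belonging to frame n, in a fixed order.

    Assumes the column-name patterns described above, e.g. "Flex-Left-1-Frame-1".
    """
    cols = []
    for hand in ("Left", "Right"):
        cols += [f"Flex-{hand}-{p}-Frame-{n}" for p in range(1, 6)]
        cols += [f"Acceleration-{axis}-{hand}-Frame-{n}" for axis in "XYZ"]
        cols += [f"Orientation-{axis}-{hand}-Frame-{n}" for axis in "XYZ"]
    return cols

# Order all 440 columns frame-major, then reshape each row into a
# (20 frames, 22 features) sequence suitable for e.g. an LSTM.
ordered = [c for n in range(1, N_FRAMES + 1) for c in frame_columns(n)]
X = df[ordered].to_numpy().reshape(len(df), N_FRAMES, FEATURES_PER_FRAME)
y = df["SIGN"].to_numpy()
print(X.shape, y.shape)  # (num_recordings, 20, 22), (num_recordings,)
```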
This dataset is not yet complete and is still being collected.