Abstract. This article reviews recent literature and research on the topic and describes the tasks involved in recognizing and predicting human movements.


Implementation of the program
The code was written in Python using the Google Colab interactive environment, which provides ready access to the modern capabilities of popular Python libraries for data analysis and visualization. The work used numpy for working with arrays, pandas for processing the dataset, the TensorFlow framework and its high-level API Keras for training the neural network, and the matplotlib library for visualizing data with graphs. OpenPose is a multi-person system [6] that detects key points of the human body, hands, face, and feet in individual images. The framework is released as a Python API, a C++ implementation, and a Unity plugin.
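As a minimal sketch of how these libraries fit together in this setup, the snippet below imports the named packages and defines a small Keras classifier with six sigmoid outputs, one per binary target. The article does not specify the network architecture, so the input size, layer widths, and optimizer are illustrative assumptions only.

```python
# Illustrative sketch only: the layer sizes, input dimension, and optimizer
# are assumptions; the article does not describe the exact architecture.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras

# Six independent binary targets -> sigmoid outputs with binary cross-entropy.
model = keras.Sequential([
    keras.layers.Input(shape=(50,)),            # e.g. 25 keypoints x (x, y), flattened
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(6, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```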


Fig. 1. Operation of the OpenPose module
The operation of the method is illustrated in Figure 1. The system takes a w×h color image as input and outputs the 2D locations of anatomical key points for each person in the image.
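A sketch of extracting such keypoints with the OpenPose Python bindings is shown below. It assumes pyopenpose is built and the pose models are available locally; the model path is a placeholder, and the exact call signature varies slightly between OpenPose versions.

```python
# Sketch, not the authors' exact pipeline: assumes pyopenpose is installed
# and the model folder path is adjusted to the local OpenPose installation.
import cv2
from openpose import pyopenpose as op

params = {"model_folder": "openpose/models/"}   # placeholder path
opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("frame.jpg")     # w x h color image
# In older OpenPose releases this call takes a plain list: emplaceAndPop([datum])
opWrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints has shape (num_people, num_keypoints, 3): x, y, confidence
keypoints = datum.poseKeypoints
```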

Fig. 2. Operation of the OpenPose system on the collected video data


Description of the dataset. During a laboratory data collection experiment, about 10 hours of material was filmed from two angles. All video data was cut into 5-minute segments for ease of storage. For each video, a markup table was created indicating the frame number and the six binary target variables to be predicted for that frame (a sketch of such a table follows the list below). Each column contains the target variable for one action: 1 means the action is being performed, 0 means it is not. The values of the target variables were filled in manually by frame-by-frame review of the recorded video. Thus, the input is video shot from two different angles showing people performing six different actions, and the output consists of the following binary labels:
1. Grabbing an object
2. Lifting a large object
3. Lowering a large object
4. Working at a table
5. Working bent over
6. Working overhead
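The following snippet sketches the structure of such a per-frame label table in pandas. The column names and values are illustrative and are not taken from the original markup files.

```python
# Illustrative per-frame label table; column names and values are assumptions.
import pandas as pd

labels = pd.DataFrame({
    "frame":         [0, 1, 2],
    "grab_object":   [0, 1, 1],   # 1 = action performed in this frame, 0 = not
    "lift_large":    [0, 0, 1],
    "lower_large":   [0, 0, 0],
    "table_work":    [1, 0, 0],
    "bent_work":     [0, 0, 0],
    "overhead_work": [0, 0, 0],
})
```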
