Information technology for determination of characteristics on the face for emotional recognition
DOI: https://doi.org/10.31891/CSIT-2020-2-7

Keywords: Facial Expression Recognition Systems

Abstract
One way to process an image represented as a set of pixels, in order to subsequently identify and classify the objects present in it, is to map that set to a collection of features. Such features are not universal; they depend significantly on the task under consideration. For a given class of problems, the features (the model) are chosen so that appropriate methods can best be applied to solve that problem. This paper considers the class of problems concerned with recognizing the emotional state from a person's face. A convolutional neural network (CNN) is used to detect emotions. A CNN differs from a multilayer perceptron (MLP) in that its hidden layers include convolutional layers. The proposed method is based on a two-tier CNN system. At the first tier, the background of the image is removed so that facial expressions are represented more clearly. A standard CNN module is then used to obtain the primary expression vector (EV). The EV is formed by tracking relevant key points of the face and is directly related to changes in facial expression. The EV is obtained by applying a basic perceptron unit to the face image with the background removed. In the proposed model, the last stage is a non-convolutional perceptron layer. Each of the convolutional layers receives input data (images), transforms it, and passes the result to the next layer. After a face is detected, the filters of the second CNN stage capture facial regions such as the eyes, ears, lips, nose, and cheeks. The authors acknowledge that the method has some limitations; in particular, training the CNN requires high computing power. A technology for determining characteristic facial features for the recognition of emotional manifestations is presented and experimentally investigated.
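The distinction the abstract draws between convolutional layers and a final perceptron (fully connected) layer can be illustrated with a minimal sketch. This is not the authors' implementation: the image, kernel, and weight values below are hypothetical, and the code only shows the forward pass of one convolutional layer producing a feature map, which is then flattened and fed to a dense layer that yields a toy "expression vector".

```python
# Illustrative sketch only: one convolutional layer followed by one
# perceptron (fully connected) layer, as contrasted in the abstract.
# All numeric values are hypothetical.

def conv2d(image, kernel):
    """Valid 2-D convolution of a grayscale image with a square kernel."""
    k = len(kernel)
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(k) for dj in range(k))
            row.append(s)
        out.append(row)
    return out

def relu(x):
    """Standard rectified-linear activation."""
    return x if x > 0 else 0.0

def dense(vector, weights, bias):
    """Perceptron (fully connected) layer: one output per weight row."""
    return [relu(sum(w * v for w, v in zip(row, vector)) + b)
            for row, b in zip(weights, bias)]

# Toy 4x4 "image" with a vertical edge, and a 3x3 edge-detecting kernel.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

feature_map = conv2d(image, kernel)            # 2x2 map of local responses
flat = [v for row in feature_map for v in row] # flatten for the dense layer
# Hypothetical weights producing a 2-component "expression vector".
ev = dense(flat,
           [[0.1, 0.1, 0.1, 0.1],
            [0.2, -0.1, 0.2, -0.1]],
           [0.0, 0.0])
```

The key contrast: `conv2d` reuses the same small kernel at every spatial position (local, shared weights), while `dense` connects every input to every output (global, independent weights), which is why the perceptron layer appears only at the final classification stage of the pipeline.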