CN111414895A - Face recognition method and device and storage equipment - Google Patents
Face recognition method and device and storage equipment Download PDFInfo
- Publication number
- CN111414895A CN111414895A CN202010280653.5A CN202010280653A CN111414895A CN 111414895 A CN111414895 A CN 111414895A CN 202010280653 A CN202010280653 A CN 202010280653A CN 111414895 A CN111414895 A CN 111414895A
- Authority
- CN
- China
- Prior art keywords
- face
- picture
- network
- recognition method
- coordinate points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
The invention provides a face recognition method, a face recognition device and a storage device capable of accurately acquiring a face image in a complex environment. The method comprises the steps of: acquiring a face picture to be recognized, and detecting the face area in the picture by adopting a target detection algorithm; processing the picture by using a WGAN (Wasserstein Generative Adversarial Network), removing occlusions and restoring a complete and clear face picture; processing the face picture by adopting an FCN (Fully Convolutional Network), distinguishing the face from other parts of the head, and recording face edge coordinate points; and selecting the face area according to the face edge coordinate points, and extracting a clear and complete face picture.
Description
Technical Field
The invention relates to the technical field of machine learning, in particular to a face recognition method, a face recognition device and storage equipment.
Background
With the continued advancement of the "streamline administration, delegate power, and improve services" reform, artificial intelligence technology is needed to provide an evaluation reference for the service quality of government-affairs service windows.
Therefore, adopting deep-learning-based image recognition to perform facial expression recognition on customers, so as to score the service according to the customer's facial expression, is a feasible idea. However, one factor restricting improvement of the expression recognition effect is that the customer's face is not detected accurately enough in complex scenes, and the recognition result is easily affected by various occlusions, backgrounds and the like.
Disclosure of Invention
The invention aims to provide a face recognition method, a face recognition device and storage equipment, which are used for accurately acquiring a face image in a complex environment.
In order to achieve the above object, an aspect of the present invention provides a face recognition method, including:
acquiring a face picture to be recognized, and detecting a face area in the picture by adopting a target detection algorithm;
processing the expression picture by using a WGAN (Wasserstein Generative Adversarial Network), removing occlusions and restoring a complete and clear face picture;
processing the face picture by adopting an FCN (Fully Convolutional Network), distinguishing the face from other parts of the head, and recording face edge coordinate points;
and selecting a face area according to the face edge coordinate points, and extracting a clear and complete face picture.
Furthermore, a face detection algorithm based on the SSD network is used for obtaining the face position in the image, the image is input into the SSD network, and the coordinates of the face area are obtained from the output of the network.
Further, the WGAN network model includes a first generator network and a first discriminator network; the first generator network includes a convolution layer, a dilated (hole) convolution layer, a batch normalization layer, an average pooling layer, an LReLU activation layer, a ReLU activation layer, and a residual block, and the first discriminator network includes 5 convolution blocks.
Furthermore, a WGAN network model is adopted to learn face picture data sets with and without occlusions and to restore complete and clear face pictures, wherein the occlusions include glasses, beards, hats, forehead coverings and the like.
Further, the FCN network upsamples the convolution result using a deconvolution layer, thereby making a prediction for each pixel of the face picture, and performs pixel-level classification to complete the image segmentation.
Further, a face area is selected according to the face edge coordinate points, a background area outside the face area is removed, and the face area is subjected to standardization processing.
In another aspect, the present invention further provides a face recognition apparatus, including:
the face image acquisition module is used for acquiring a face image to be recognized;
the occlusion removal module is used for generating an expressionless standard image from the existing expression picture through the WGAN network;
the face region distinguishing module is used for processing the face picture by adopting an FCN (Fully Convolutional Network), distinguishing the face from other parts of the head, and recording face edge coordinate points;
and the face picture extraction module is used for selecting a face area according to the face edge coordinate points and extracting a clear and complete face picture.
In another aspect, the present invention further provides a storage device, which stores a plurality of instructions suitable for being loaded by a processor to perform the steps of the face recognition method described above.
The invention provides a face recognition method, a face recognition device and a storage device capable of accurately acquiring a face image in a complex environment. The method comprises the steps of: acquiring a face picture to be recognized, and detecting the face area in the picture by adopting a target detection algorithm; processing the picture by using a WGAN (Wasserstein Generative Adversarial Network), removing occlusions and restoring a complete and clear face picture; processing the face picture by adopting an FCN (Fully Convolutional Network), distinguishing the face from other areas of the head, and recording face edge coordinate points; and selecting the face area according to the face edge coordinate points, and extracting a clear and complete face picture.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of a method of face recognition according to an embodiment of the present invention.
Fig. 2 is a system architecture diagram of a face recognition apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention; it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where the terms "first", "second" and "third" appear in this specification, they are used merely to distinguish between similar items and do not denote a particular ordering; it should be understood that "first", "second" and "third" may be interchanged in a particular order or sequence where appropriate, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
A face recognition method, apparatus, and storage device according to embodiments of the present invention are described below with reference to the accompanying drawings, and first, a face recognition method according to embodiments of the present invention will be described with reference to the accompanying drawings.
Fig. 1 is a flowchart of a face recognition method according to an embodiment of the present invention. As shown in fig. 1, the method includes the following steps:
and step S1, acquiring a human face picture to be recognized, and detecting the human face area in the picture by adopting a target detection algorithm.
As one embodiment, the invention uses a face detection algorithm based on an SSD network to obtain the face position in an image, inputs the image into the SSD network, and obtains the coordinates of a face area from the output of the network.
Specifically, in the SSD (Single Shot MultiBox Detector) algorithm, the backbone network is based on a traditional image classification network such as VGG or ResNet. In this embodiment, feature maps of the human face are obtained through processing by the convolutional (conv) layers. Regression is then performed on the feature maps to obtain the position and category of the face, thereby obtaining the coordinates of the face region.
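As an illustrative aside (not part of the patent), the post-processing of the SSD output into face region coordinates can be sketched as follows. The `(N, 7)` detection layout (`[image_id, class_id, confidence, x1, y1, x2, y2]` with normalized coordinates) is an assumption borrowed from common SSD face detector implementations; the function name and threshold are hypothetical.

```python
import numpy as np

def select_face_boxes(detections, frame_w, frame_h, conf_threshold=0.5):
    """Filter raw SSD detections and scale the normalized box
    coordinates back to pixel coordinates.

    `detections` is assumed to be an (N, 7) array where each row is
    [image_id, class_id, confidence, x1, y1, x2, y2] with the box
    coordinates normalized to [0, 1].
    """
    boxes = []
    for det in detections:
        confidence = float(det[2])
        if confidence < conf_threshold:
            continue  # discard low-confidence detections
        x1 = int(det[3] * frame_w)
        y1 = int(det[4] * frame_h)
        x2 = int(det[5] * frame_w)
        y2 = int(det[6] * frame_h)
        boxes.append((x1, y1, x2, y2, confidence))
    return boxes

# Synthetic output: one confident face, one low-confidence detection.
dets = np.array([
    [0, 1, 0.93, 0.25, 0.25, 0.5, 0.75],
    [0, 1, 0.12, 0.7, 0.1, 0.9, 0.4],
])
print(select_face_boxes(dets, frame_w=640, frame_h=480))
```

Only the first detection survives the threshold; its normalized box is scaled to the 640x480 frame.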
Step S2, processing the expression picture by using the WGAN network, removing occlusions and restoring a complete and clear face picture;
A Generative Adversarial Network (GAN) is a generative deep learning network model that has been applied to good effect in the field of computer vision in recent years. With the rapid development of deep learning and mobile devices, deep learning has been applied with great success in image processing, image generation, image style transfer and other fields. A GAN can generate a target data set to make up for insufficient training data, and is therefore of great significance for deep learning.
In one embodiment, the WGAN network model includes a first generator network and a first discriminator network; the first generator network includes a convolution layer, a dilated (hole) convolution layer, a batch normalization layer, an average pooling layer, an LReLU activation layer, a ReLU activation layer, and a residual block, and the first discriminator network includes 5 convolution blocks.
The principle of the WGAN network is:
(1) Train a first-generation generator network and a first-generation discriminator network. Samples are drawn from a noise distribution as input to the first-generation generator, which produces relatively poor face pictures. The first-generation discriminator then compares the existing (real) face pictures with the generated ones, and is trained until it can accurately distinguish which pictures are real and which are generated.
(2) Train a second-generation generator network and a second-generation discriminator network. The second-generation generator can produce better face pictures, so that the second-generation discriminator has difficulty judging which pictures are real and which are generated.
(3) Continue with the 3rd generation, 4th generation, ..., n-th generation, stopping when the n-th-generation discriminator can hardly distinguish the generated face pictures from the real ones. The n-th-generation generator is then the best face picture generator and can be used for face picture generation.
Specifically, the training process of the WGAN network model of this embodiment is as follows: load the data, normalize it, and reshape it; construct the Wasserstein distance as the evaluation index of the discriminator network; sample a noise picture X from a Gaussian distribution as the input of the generator network G; train the network through full connections, taking the output of the generator as one input of the discriminator network D and a real training sample as the other input; compute the loss function through the discriminator; and finally obtain the image required for deep convolutional neural network recognition by minimizing the loss function.
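The critic (discriminator) and generator objectives implied by this training procedure can be sketched numerically; the function names and sample critic scores below are hypothetical illustrations, not taken from the patent.

```python
import numpy as np

def critic_loss(d_real, d_fake):
    """WGAN critic maximizes E[D(real)] - E[D(fake)];
    expressed as a loss to minimize, that objective is negated."""
    return -(np.mean(d_real) - np.mean(d_fake))

def generator_loss(d_fake):
    """The generator maximizes E[D(fake)], i.e. minimizes -E[D(fake)]."""
    return -np.mean(d_fake)

# Hypothetical critic scores for a batch of real and generated images.
d_real = np.array([0.9, 0.8, 1.1])
d_fake = np.array([0.1, -0.2, 0.0])
print(critic_loss(d_real, d_fake))  # strongly negative: critic separates well
print(generator_loss(d_fake))
```

Unlike the original GAN's log-sigmoid losses, these linear losses follow from the Wasserstein objective described above (in full WGAN training the critic's weights are additionally constrained, e.g. by clipping).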
The loss function of the WGAN network model of the present embodiment is:
$$W(P_r, P_g) = \inf_{\gamma \in \Pi(P_r, P_g)} \mathbb{E}_{(x, y) \sim \gamma}\big[\lVert x - y \rVert\big]$$
where $\Pi(P_r, P_g)$ is the set of all possible joint distributions of $P_r$ and $P_g$, i.e. the marginal distributions of every $\gamma \in \Pi(P_r, P_g)$ are $P_r$ and $P_g$. For each possible joint distribution $\gamma$, a real sample $x$ and a generated sample $y$ can be obtained by sampling $(x, y) \sim \gamma$; computing the distance between this pair of samples gives the expected sample distance $\mathbb{E}_{(x, y) \sim \gamma}[\lVert x - y \rVert]$ under $\gamma$, and the Wasserstein distance is the infimum (lower bound) of this expectation taken over all possible joint distributions.
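In the one-dimensional, equal-weight empirical case, the infimum over couplings is attained simply by pairing the sorted samples, which allows a tiny illustrative sketch of the Wasserstein distance (the function name and sample values are hypothetical):

```python
import numpy as np

def wasserstein_1d(xs, ys):
    """Empirical Wasserstein-1 distance between two equal-size 1-D
    samples: with equal weights, the optimal coupling pairs the
    sorted samples, so the infimum reduces to a mean of gaps."""
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.mean(np.abs(xs - ys)))

real = np.array([0.0, 1.0, 2.0])
fake = np.array([0.5, 1.5, 2.5])   # the real samples shifted by 0.5
print(wasserstein_1d(real, fake))  # 0.5
```

Shifting a distribution by a constant shifts its Wasserstein distance by exactly that constant, which illustrates why this metric gives smoother gradients than divergences that saturate when supports do not overlap.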
Specifically, in this embodiment, 2000 images each with and without occlusions are used to train the WGAN network model, which learns the image characteristics needed to convert an occluded picture into an unoccluded one.
Step S3, extracting the sensitive-area coordinate points of the expressionless standard image, and calculating a first angle value for the line connecting each coordinate point to the center point.
In one embodiment, there are 15 sensitive-area coordinate points in the expressionless standard image, including the center points of the upper and lower eyelids of each eye, the end points of each eyebrow, the center points of the upper and lower lips, the end points of the two mouth corners, and the corner point of each eye.
Further, the angle of the line connecting the center point of the nose to each sensitive-area coordinate point is calculated, generating 14 first angle values (X1, X2, ..., X14) for the expressionless standard image.
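A minimal sketch of this angle computation (the coordinates are hypothetical; note that in image coordinates the y-axis points downward, so the signs differ from the usual mathematical convention):

```python
import math

def landmark_angles(center, points):
    """Angle (in degrees) of the line from the nose center point to
    each sensitive-area coordinate point, measured with atan2.
    Points are (x, y) pairs in image coordinates (y grows downward)."""
    return [math.degrees(math.atan2(y - center[1], x - center[0]))
            for (x, y) in points]

nose = (50, 60)
landmarks = [(30, 40), (70, 40), (50, 90)]  # hypothetical landmark points
print(landmark_angles(nose, landmarks))
```

With 14 landmarks this produces exactly the 14 angle values (X1, ..., X14) described above.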
Step S3, processing the face picture using the FCN network, distinguishing the face from other parts of the head, and recording face edge coordinate points.
As will be appreciated by those skilled in the art, an FCN classifies images at the pixel level, thereby solving the semantic-level image segmentation problem. Unlike a classic CNN, which uses fully connected layers after the convolutional layers to obtain a fixed-length feature vector for classification (fully connected layer + softmax output), an FCN can accept an input image of any size. It uses a deconvolution layer to upsample the feature map of the last convolutional layer, restoring it to the same size as the input image, so that a prediction is generated for each pixel while the spatial information of the original input image is retained; finally, pixel-by-pixel classification is performed on the upsampled feature map.
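The upsample-then-classify idea can be illustrated with nearest-neighbor upsampling standing in for the learned deconvolution layer (a simplification: a real FCN learns the upsampling weights; the shapes and scores here are hypothetical):

```python
import numpy as np

def upsample_nearest(score_map, factor):
    """Upsample an (H, W, C) class-score map by an integer factor,
    a stand-in for the FCN's deconvolution (transposed conv) layer."""
    return np.repeat(np.repeat(score_map, factor, axis=0), factor, axis=1)

# Coarse 2x2 score map with 2 classes (face vs. non-face).
coarse = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.7, 0.3], [0.1, 0.9]]])
full = upsample_nearest(coarse, 4)   # restored to the 8x8 "input" size
mask = np.argmax(full, axis=-1)      # pixel-level classification
print(full.shape, mask.shape)
```

The argmax over the upsampled scores yields one class label per input pixel, which is exactly the pixel-by-pixel classification that produces the face/non-face segmentation.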
And step S4, selecting a face area according to the face edge coordinate points, and extracting a clear and complete face picture.
Specifically, a face region is selected according to the face edge coordinate points, a background region outside the face region is removed, the face region is standardized, and a standardized face picture is generated.
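A sketch of this selection and standardization step, assuming the face edge coordinate points are (x, y) pairs and that "standardization" means scaling pixel values to [0, 1] (both are assumptions; the patent does not specify the exact normalization):

```python
import numpy as np

def crop_and_normalize(image, edge_points):
    """Select the face region bounded by the recorded face edge
    coordinate points, discard the background outside it, and scale
    pixel values to [0, 1] as a simple standardization."""
    xs = [p[0] for p in edge_points]
    ys = [p[1] for p in edge_points]
    x0, x1 = min(xs), max(xs) + 1  # inclusive bounding box
    y0, y1 = min(ys), max(ys) + 1
    face = image[y0:y1, x0:x1].astype(np.float32)
    return face / 255.0

img = np.arange(100, dtype=np.uint8).reshape(10, 10)  # toy 10x10 "image"
edges = [(2, 3), (7, 3), (2, 8), (7, 8)]  # hypothetical edge points
face = crop_and_normalize(img, edges)
print(face.shape)  # (6, 6)
```

A production version would follow the actual face contour rather than its bounding box, but the bounding-box crop conveys the background-removal step.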
As shown in fig. 2, in another aspect, the present invention further provides a face recognition apparatus, including:
the face image acquisition module 101 is used for acquiring a face image to be recognized;
the occlusion removal module 102 is used for generating an expressionless standard image from the existing expression picture through the WGAN network;
the face region distinguishing module 103 is used for processing the face picture by adopting an FCN (Fully Convolutional Network), distinguishing the face from other parts of the head, and recording face edge coordinate points;
the face image extraction module 104 is configured to select a face region according to the face edge coordinate points, and extract a clear and complete face image.
In another aspect, the present invention further provides a storage device, which stores a plurality of instructions suitable for being loaded by a processor to perform the steps of the face recognition method described above.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A face recognition method, comprising:
acquiring a face picture to be recognized, and detecting a face area in the picture by adopting a target detection algorithm;
processing the expression picture by using a WGAN (Wasserstein Generative Adversarial Network), removing occlusions and restoring a complete and clear face picture;
processing the face picture by adopting an FCN (Fully Convolutional Network), distinguishing the face from other parts of the head, and recording face edge coordinate points;
and selecting a face area according to the face edge coordinate points, and extracting a clear and complete face picture.
2. A face recognition method as claimed in claim 1, characterized in that the face detection algorithm based on the SSD network is used to obtain the face position in the image, the image is input to the SSD network, and the coordinates of the face area are obtained from the output of the network.
3. The face recognition method of claim 1,
the WGAN network model comprises a first generator network and a first discriminator network, wherein the first generator network comprises a convolution layer, a dilated (hole) convolution layer, a normalization layer, an average pooling layer, an LReLU activation layer, a ReLU activation layer and a residual block, and the first discriminator network comprises 5 convolution blocks.
4. The face recognition method of claim 3,
learning face picture data sets with and without occlusions by adopting the WGAN network model, and restoring a complete and clear face picture, wherein the occlusions comprise glasses, beards, hats, forehead coverings and the like.
5. The face recognition method of claim 1,
the FCN upsamples the convolution result using a deconvolution layer, predicts the pixels of the face picture, and then performs pixel-level classification to complete the image segmentation.
6. The face recognition method of claim 1,
and selecting a face area according to the face edge coordinate points, removing a background area outside the face area and carrying out standardization processing on the face area.
7. A face recognition apparatus, comprising:
the face image acquisition module is used for acquiring a face image to be recognized;
the occlusion removal module is used for generating an expressionless standard image from the existing expression picture through the WGAN network;
the face region distinguishing module is used for processing the face picture by adopting an FCN (Fully Convolutional Network), distinguishing the face from other parts of the head, and recording face edge coordinate points;
and the face picture extraction module is used for selecting a face area according to the face edge coordinate points and extracting a clear and complete face picture.
8. A storage device, wherein the storage device stores a plurality of instructions adapted to be loaded by a processor to perform the steps of the face recognition method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010280653.5A CN111414895A (en) | 2020-04-10 | 2020-04-10 | Face recognition method and device and storage equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010280653.5A CN111414895A (en) | 2020-04-10 | 2020-04-10 | Face recognition method and device and storage equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111414895A true CN111414895A (en) | 2020-07-14 |
Family
ID=71493533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010280653.5A Pending CN111414895A (en) | 2020-04-10 | 2020-04-10 | Face recognition method and device and storage equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111414895A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112365416A (en) * | 2020-11-10 | 2021-02-12 | 浙江大华技术股份有限公司 | Picture occlusion processing method and device, storage medium and electronic device |
CN112836654A (en) * | 2021-02-07 | 2021-05-25 | 上海卓繁信息技术股份有限公司 | Expression recognition method and device based on fusion and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103927520A (en) * | 2014-04-14 | 2014-07-16 | 中国华戎控股有限公司 | Method for detecting human face under backlighting environment |
CN105631406A (en) * | 2015-12-18 | 2016-06-01 | 小米科技有限责任公司 | Method and device for recognizing and processing image |
CN110660076A (en) * | 2019-09-26 | 2020-01-07 | 北京紫睛科技有限公司 | Face exchange method |
Non-Patent Citations (2)
Title |
---|
Lin Yun et al.: "Liveness detection algorithm based on semantic segmentation" * |
Chen Canlin: "Research and implementation of face recognition under partial occlusion" * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110909651B (en) | Method, device and equipment for identifying video main body characters and readable storage medium | |
CN108334848B (en) | Tiny face recognition method based on generative adversarial network | |
CN111178183B (en) | Face detection method and related device | |
WO2021139324A1 (en) | Image recognition method and apparatus, computer-readable storage medium and electronic device | |
CN109389129A (en) | Image processing method, electronic device and storage medium | |
CN111353385B (en) | Pedestrian re-identification method and device based on mask alignment and attention mechanism | |
CN112257665A (en) | Image content recognition method, image recognition model training method, and medium | |
CN113487610B (en) | Herpes image recognition method and device, computer equipment and storage medium | |
CN113869449A (en) | Model training method, image processing method, device, equipment and storage medium | |
CN112836625A (en) | Face living body detection method and device and electronic equipment | |
CN117095180B (en) | Embryo development stage prediction and quality assessment method based on stage identification | |
CN113177892B (en) | Method, apparatus, medium and program product for generating image restoration model | |
CN113780145A (en) | Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium | |
CN112052730B (en) | 3D dynamic portrait identification monitoring equipment and method | |
CN112819008B (en) | Method, device, medium and electronic equipment for optimizing instance detection network | |
CN111414895A (en) | Face recognition method and device and storage equipment | |
CN115641317B (en) | Pathological image-oriented dynamic knowledge backtracking multi-example learning and image classification method | |
CN110472673B (en) | Parameter adjustment method, fundus image processing device, fundus image processing medium and fundus image processing apparatus | |
CN115761834A (en) | Multi-task mixed model for face recognition and face recognition method | |
CN112766028A (en) | Face fuzzy processing method and device, electronic equipment and storage medium | |
CN112818899B (en) | Face image processing method, device, computer equipment and storage medium | |
CN114170662A (en) | Face recognition method and device, storage medium and electronic equipment | |
CN116152576B (en) | Image processing method, device, equipment and storage medium | |
CN112183336A (en) | Expression recognition model training method and device, terminal equipment and storage medium | |
CN111723688A (en) | Human body action recognition result evaluation method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-07-14 |