CN101976360A - Sparse characteristic face recognition method based on multilevel classification - Google Patents
Sparse characteristic face recognition method based on multilevel classification
- Publication number
- CN101976360A, CN201010522281A
- Authority
- CN
- China
- Prior art keywords
- face
- sub-bank
- facial image
- training
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Collating Specific Patterns (AREA)
Abstract
The invention discloses a sparse characteristic face recognition method based on multilevel classification, which mainly overcomes the defect that traditional face recognition methods cannot be used effectively for multi-class face recognition. The realization process comprises the following steps: (1) randomly dividing the face database used for training into n sub-banks, reducing the dimension of each sub-bank, and retaining the dimension-reduced training face data and the transformation matrix corresponding to each sub-bank; (2) inputting a test face image, reducing its dimension with the transformation matrix of each sub-bank, and retaining the dimension-reduced test face data; (3) carrying out an inner-product operation between the dimension-reduced test face data and the training face data in each sub-bank, taking the first k sub-banks with the largest inner products as candidate sub-banks, and narrowing the search range to these k sub-banks; (4) recognizing the face in each of the k sub-banks and determining the class of the test face image. Compared with the prior art, the invention can effectively extract face features and reduce computational complexity, and is suitable for multi-class face recognition.
Description
Technical field
The invention belongs to the technical field of image processing and relates to a sparse representation face recognition method, which can be used for identity verification and search in fields such as criminal investigation, access control systems, and camera surveillance systems.
Background technology
Face recognition refers specifically to the computer technology of performing identity discrimination by analyzing and comparing the visual feature information of human faces. Face recognition is one of the most challenging research directions in pattern recognition, machine learning and computer vision, and it is a high-dimensional pattern recognition problem. Therefore, features are usually extracted from the face image and discrimination is carried out in a low-dimensional subspace. Up to now, various feature extraction methods have been widely used in the face recognition field.
Recently, Wright et al. proposed a new face recognition method, SRC, based on the sparse representation of signals, successfully applying compressed sensing theory to face recognition. This method is based on the theory of sparse signal representation: the classification problem of face recognition is regarded as a set of linear regression models, the test sample is regarded as a linear combination of the samples of the same class in the training database, and the linear weighting coefficients are naturally sparse with respect to the whole sample set, so the sparse reconstruction problem can be converted into an L1-norm optimization problem. However, this SRC method has the following two defects:
(1) Because no feature extraction is carried out, the dimension of the face image is large, so the computational complexity of solving the L1-norm problem is very high;
(2) When the number of face classes is large, the SRC method cannot perform effective recognition.
The Fisher linear discriminant analysis method can effectively overcome the defect described in (1). Fisher linear discriminant analysis finds several directions in the original sample space such that the samples are best separated after being projected onto them, that is, it finds the projection lines along which classification is easiest under the actual conditions. Its basic idea is to make the within-class distance of the samples as small as possible and the between-class distance as large as possible. For a c-class problem it can find c-1 projection directions, thereby compressing the dimension to c-1. Therefore, this method can not only effectively merge the class information of the training samples and extract features according to classification capacity, but also has outstanding data compression ability and can effectively reduce the amount of data in subsequent processing.
The sparse representation face recognition process using Fisher feature extraction is shown in Fig. 2. Although this method can effectively extract face features and reduce computational complexity, its recognition rate decreases when the number of face classes is large.
Summary of the invention
The objective of the invention is to overcome the deficiencies of the above prior art and to propose a sparse representation face recognition method based on multilevel classification, so as to reduce the dimension of the face image, lower the computational complexity, and improve the recognition rate when the number of face classes is large.
The technical scheme for realizing the purpose of the present invention is to adopt the Fisher feature extraction method to reduce the dimension of face images, thereby reducing the computational complexity, and, on the basis of the Fisher feature extraction method, to utilize a multilevel classification strategy: feature extraction is carried out on each group separately, and a criterion is constructed to narrow the face image search range to only a few groups, so that multi-class faces can be recognized effectively while the computational complexity is reduced. The concrete steps are as follows:
(1) Randomly divide the face database used for training into n sub-banks, with n set to 4; apply the Fisher criterion to each sub-bank to realize dimensionality reduction, and retain the dimension-reduced training face data and the transformation matrix W corresponding to each sub-bank;
(2) Input a test face image, carry out a matrix transformation under the transformation matrix W of each sub-bank to realize fast dimensionality reduction, and retain the dimension-reduced test face data;
(3) Perform an inner-product operation between the dimension-reduced test face data and the training face data B in each sub-bank, select the first k sub-banks with the largest inner product values as candidate sub-banks, with k set to 2, and narrow the search range to these k sub-banks;
(4) In each of the k sub-banks, recognize the face with the face recognition method SRC based on the sparse representation of signals, and determine the class to which the test face image belongs.
Compared with the prior art, the present invention has the following advantages:
1. Effective extraction of face features and reduced computational complexity
Because the present invention uses the Fisher criterion to carry out feature extraction on the face images and thus realizes dimensionality reduction, it effectively reduces the computational complexity when the face recognition method SRC based on the sparse representation of signals is used to recognize faces. Fisher linear discriminant analysis finds several directions in the original sample space such that the samples are best separated after being projected onto them, that is, it finds the projection lines along which classification is easiest under the actual conditions. Its basic idea is to make the within-class distance of the samples as small as possible and the between-class distance as large as possible. For a c-class problem it can find c-1 projection directions, thereby compressing the dimension to c-1. Therefore, this method can not only effectively merge the class information of the training samples and extract features according to classification capacity, but also has outstanding data compression ability and can effectively reduce the amount of data in subsequent processing.
2. Suitability for multi-class face recognition.
By introducing the multilevel classification strategy, the present invention widens the range of application of the Fisher method: the multilevel classification strategy divides a multi-class face database into suitable groups, thereby overcoming the limitation that the dimensionality reduction of the Fisher method is ineffective in the multi-class case, and improving the face recognition rate.
Description of drawings
Fig. 1 is the flow chart of the sparse representation face recognition process based on multilevel classification according to the present invention.
Fig. 2 is the flow chart of the existing sparse representation face recognition process using Fisher feature extraction.
Embodiment
With reference to Fig. 1, the implementation steps of the present invention are as follows:
Step 1: Randomly divide the face database used for training into n sub-banks, with n set to 4, and apply the Fisher criterion to the training face images in each sub-bank to realize dimensionality reduction according to the following steps:
(1a) Let the set of training face image samples in a sub-bank be X = {x_i}, i = 1, 2, ..., N, where N is the total number of training face images in the sub-bank and c is the number of classes of training face images in the sub-bank. Compute the between-class scatter matrix S_b and the within-class scatter matrix S_w of the training face image samples in the sub-bank:

S_b = \sum_{i=1}^{c} n_i (\mu_i - \mu)(\mu_i - \mu)^T

S_w = \sum_{i=1}^{c} \sum_{x \in D_i} (x - \mu_i)(x - \mu_i)^T

where n_i is the number of training face images of class i, μ_i is the mean of the face images of class i, μ is the mean of all face images, D_i is the set of training face image samples of class i, and x is a face image in D_i;
(1b) Compute the criterion function J(W):

J(W) = \frac{|W^T S_b W|}{|W^T S_w W|}

where W is the optimal matrix that maximizes the criterion function J(W);
(1c) Let w_i be the i-th column vector of the optimal matrix W. Then w_i is the eigenvector corresponding to the largest eigenvalue of the following equation:

S_b w_i = \lambda_i S_w w_i

where λ_i is an eigenvalue. Because S_b is the sum of c matrices of rank 1 or 0, of which only c-1 are independent, the rank of S_b is at most c-1; thus there are at most c-1 non-zero eigenvalues and at most c-1 corresponding eigenvectors, so the optimal matrix W has at most c-1 column vectors;
(1d) Project the training face images in the sub-bank onto the c-1 column vectors of the optimal matrix W to obtain the projected training face images, whose dimension is c-1, thereby realizing the dimensionality reduction of the training face images; retain the dimension-reduced training face data and the transformation matrix W corresponding to each sub-bank.
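As an illustration of step 1, the following is a minimal Python sketch of the Fisher dimensionality reduction for one sub-bank, assuming the sub-bank is supplied as a matrix of vectorized face images with integer class labels; the function name fisher_reduce and the small ridge added to S_w (to keep the generalized eigenproblem well posed when the image dimension exceeds the sample count) are illustrative choices, not part of the patent:

```python
import numpy as np
from scipy.linalg import eigh

def fisher_reduce(X, labels):
    """Fisher (LDA) dimensionality reduction for one sub-bank.

    X: (m, N) array whose columns are vectorized training face images.
    labels: length-N array of class indices 0..c-1.
    Returns the (m, c-1) transformation matrix W and the (c-1, N)
    dimension-reduced training face data B = W^T X.
    """
    m, N = X.shape
    classes = np.unique(labels)
    c = len(classes)
    mu = X.mean(axis=1, keepdims=True)            # mean of all face images

    Sb = np.zeros((m, m))                         # between-class scatter S_b
    Sw = np.zeros((m, m))                         # within-class scatter S_w
    for i in classes:
        Di = X[:, labels == i]                    # class-i samples D_i
        mu_i = Di.mean(axis=1, keepdims=True)     # class mean
        Sb += Di.shape[1] * (mu_i - mu) @ (mu_i - mu).T
        Sw += (Di - mu_i) @ (Di - mu_i).T

    # Generalized eigenproblem S_b w = lambda S_w w; keep the c-1 largest.
    evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(m))
    W = evecs[:, np.argsort(evals)[::-1][:c - 1]]  # (m, c-1)
    return W, W.T @ X
```

In practice the raw image dimension m = u*v is large, so implementations usually project onto a lower-dimensional (e.g. PCA) subspace before forming the m×m scatter matrices; the direct form above simply follows the formulas of steps (1a)-(1d).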
Step 2: Reduce the dimension of the test face image y
Let the size of the test face image y be u*v and the size of the transformation matrix W of each sub-bank be m*(c-1). Stretch the test face image into an m-dimensional column vector, where m = u*v. Take the transpose of the transformation matrix W of each sub-bank to obtain a transformation matrix W' of size (c-1)*m, and multiply the stretched test face image y by the matrix W' to obtain the test face data of dimension c-1:

\tilde{y} = W' y = W^T y

thereby realizing the dimensionality reduction of the test face image.
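A corresponding sketch of step 2, assuming the test image arrives as a u×v NumPy array; the name reduce_test_image is illustrative:

```python
def reduce_test_image(y_img, W):
    """Stretch a u*v test face image into an m-dimensional column vector
    and project it onto the (c-1)-dimensional Fisher subspace of one sub-bank."""
    y = y_img.reshape(-1, 1)   # m-dimensional column vector, m = u*v
    return W.T @ y             # (c-1, 1) dimension-reduced test face data
```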
Step 3: Determine the candidate sub-banks to narrow the search range
Let the training face data of each sub-bank be B = [b_1, b_2, ..., b_P]. Perform an inner-product operation between the dimension-reduced test face data ỹ and the training face data B in each sub-bank to obtain the inner product value P of the test face data ỹ with each sub-bank. Select the first k sub-banks with the largest inner product values as candidate sub-banks, with k set to 2, and narrow the search range to these k sub-banks.
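A sketch of step 3. The patent does not spell out how the inner product value P of the test vector with a whole sub-bank is aggregated over the training columns b_1..b_P, so the largest absolute inner product is used here as an assumption; select_candidate_subbanks is an illustrative name:

```python
import numpy as np

def select_candidate_subbanks(y_img, subbanks, k=2):
    """Score every sub-bank by inner product and keep the top-k as candidates.

    subbanks: list of (W, B) pairs returned by fisher_reduce, one per sub-bank.
    Returns the indices of the k candidate sub-banks.
    """
    scores = []
    for W, B in subbanks:
        y_red = W.T @ y_img.reshape(-1, 1)          # step 2 for this sub-bank
        scores.append(np.max(np.abs(B.T @ y_red)))  # assumed aggregation of inner products
    return list(np.argsort(scores)[::-1][:k])
```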
Step 4: In each of the k sub-banks, recognize the face with the face recognition method SRC based on the sparse representation of signals, and determine the class to which the test face image belongs:
(4a) Solve for the sparse representation vector x̂ of the test face image in one of the following two modes:

Mode one:

\hat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \|B x - \tilde{y}\|_2 \le \varepsilon

Mode two:

\hat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \|[B, I] x - \tilde{y}\|_2 \le \varepsilon

where I is an identity matrix, x is the sparse representation vector of the test face image to be solved for, and ε is an error threshold;
(4b) For each face class i, compute the residual r_i from the sparse representation vector x̂ of the test face image:

r_i(\tilde{y}) = \|\tilde{y} - B \delta_i(\hat{x})\|_2

where δ_i(x̂) is the new vector obtained from the sparse representation vector x̂ in which the entries corresponding to face class i are identical to the corresponding entries of x̂ and all other entries are zero;
(4c) Take the class with the minimum residual as the final face recognition result:

\text{identity}(\tilde{y}) = \arg\min_i r_i(\tilde{y})
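A sketch of step 4 for one candidate sub-bank. The constrained L1 problem is approximated here with an L1-regularized least-squares (Lasso) solve, a common practical substitute rather than the solver specified by the patent, and the regularization weight alpha is illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(y_red, B, labels):
    """SRC classification inside one candidate sub-bank.

    y_red: (c-1, 1) dimension-reduced test face data.
    B: (c-1, P) dimension-reduced training face data with columns b_1..b_P.
    labels: length-P array giving the class of each training column.
    Returns (best_class, best_residual).
    """
    labels = np.asarray(labels)
    # Approximate  min ||x||_1  s.t.  ||B x - y||_2 <= eps  by a Lasso solve.
    x_hat = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(
        B, y_red.ravel()).coef_

    best_class, best_residual = None, np.inf
    for i in np.unique(labels):
        delta_i = np.where(labels == i, x_hat, 0.0)        # keep only class-i entries
        r_i = np.linalg.norm(y_red.ravel() - B @ delta_i)  # residual r_i
        if r_i < best_residual:
            best_class, best_residual = i, r_i
    return best_class, best_residual
```

Running src_classify in each of the k candidate sub-banks and taking the class with the smallest residual over all of them gives the final recognition result.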
The advantages of the present invention are further illustrated by the data of the following simulations.
1. Simulation conditions
(1) The Extended Yale B face database is chosen for the simulation experiments, and the recognition performance of the method of the present invention is compared with that of the face recognition method based on Fisher feature extraction.
(2) The Extended Yale B face database consists of 2414 images of 38 classes in total; 1216 of these images are selected as training face images, and the remaining 1198 images are used as test face images.
(3) In the simulation experiments, the training face database is divided into 4 sub-banks, i.e. n = 4.
(4) In the simulation experiments, the 4 sub-banks contain 10, 10, 10 and 8 classes of training face images respectively, the feature dimensions after dimensionality reduction are 9, 9, 9 and 7 respectively, and the numbers of training face images in the sub-banks are 320, 320, 320 and 256 respectively.
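A quick consistency check of these figures; the assumption that the 1216 training images are split evenly (32 per class) over the 38 classes is inferred, not stated in the patent:

```python
classes_per_subbank = [10, 10, 10, 8]
imgs_per_class = 1216 // 38                                    # 32, assuming an even split
print([c - 1 for c in classes_per_subbank])                    # feature dims: [9, 9, 9, 7]
print([c * imgs_per_class for c in classes_per_subbank])       # [320, 320, 320, 256]
print(sum(c * imgs_per_class for c in classes_per_subbank))    # 1216 training images
```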
2. Simulation content and results
(1) A simulation experiment is carried out with the existing face recognition method based on Fisher feature extraction; the results are shown in Table 1:
Table 1: Recognition rate and recognition time of the face recognition method based on Fisher feature extraction
Feature dimension | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37
---|---|---|---|---|---|---|---|---
Recognition rate | 0.9249 | 0.9307 | 0.9341 | 0.937 | 0.9366 | 0.9391 | 0.9516 | 0.9574
Recognition time (s) | 0.1854 | 0.1896 | 0.1903 | 0.1998 | 0.2001 | 0.2036 | 0.2133 | 0.2204
Here the recognition time refers to the recognition time for a single face, i.e. the total recognition time divided by the number of test face images.
As can be seen from Table 1, when the feature dimension is between 30 and 37, the average recognition time of the face recognition method based on Fisher feature extraction is about 0.2003 s and the average recognition rate is about 0.9389.
(2) A simulation experiment is carried out with the method of the present invention. The training face database is divided into 4 sub-banks containing 10, 10, 10 and 8 classes of training face images respectively, the feature dimensions after dimensionality reduction are 9, 9, 9 and 7 respectively, and the numbers of training face images are 320, 320, 320 and 256 respectively. Eight experiments are carried out with different numbers of test face images; the results are shown in Table 2:
Table 2: Recognition rate and recognition time of the face recognition method of the present invention
Number of test face images | 1198 | 1175 | 1150 | 1125 | 1100 | 1075 | 1050 | 1025
---|---|---|---|---|---|---|---|---
Recognition rate | 0.9466 | 0.9478 | 0.9399 | 0.9401 | 0.9503 | 0.9456 | 0.9468 | 0.9557
Recognition time (s) | 0.1051 | 0.0998 | 0.1032 | 0.1102 | 0.1078 | 0.1098 | 0.1086 | 0.1115
Here the recognition time refers to the recognition time for a single face, i.e. the total recognition time divided by the number of test face images.
As can be seen from Table 2, the average recognition time of the method of the present invention is 0.1070 s and the average recognition rate is 0.9466. Because the multilevel classification strategy is introduced, the method of the present invention maintains a good recognition rate while reducing the computational complexity.
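A small check that the averages quoted for Tables 1 and 2 are the arithmetic means of the rows above:

```python
table1_rate = [0.9249, 0.9307, 0.9341, 0.937, 0.9366, 0.9391, 0.9516, 0.9574]
table1_time = [0.1854, 0.1896, 0.1903, 0.1998, 0.2001, 0.2036, 0.2133, 0.2204]
table2_rate = [0.9466, 0.9478, 0.9399, 0.9401, 0.9503, 0.9456, 0.9468, 0.9557]
table2_time = [0.1051, 0.0998, 0.1032, 0.1102, 0.1078, 0.1098, 0.1086, 0.1115]

mean = lambda xs: round(sum(xs) / len(xs), 4)
print(mean(table1_rate), mean(table1_time))   # 0.9389 0.2003
print(mean(table2_rate), mean(table2_time))   # 0.9466 0.107
```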
(3) Simulation experiment on the influence of the number k of candidate sub-banks on the final face recognition rate
In this simulation, on the basis of the Extended Yale B face database, the training face database is randomly divided into 4 sub-banks and the Fisher criterion is applied to each sub-bank to realize dimensionality reduction; on this basis, the influence of the number k of candidate sub-banks chosen according to the maximum inner product principle on the final face recognition rate is investigated. The results are shown in Table 3:
Table 3: Mis-selection rate of the candidate sub-banks chosen according to the maximum inner product principle and the final face recognition rate
Here the mis-selection rate of the candidate sub-banks refers to the probability that the correct class of the test face is not selected into the candidate sub-banks during sub-bank selection.
As can be seen from Table 3, the final face recognition rate is highest when k is 2 and is higher than the face recognition rate when k is 3, so a larger k value is not always better: too large a k value introduces interference from incorrect sub-banks and thus lowers the final face recognition rate.
In summary, the sparse representation face recognition method based on multilevel classification of the present invention not only retains the advantage of the Fisher feature extraction method in dimensionality reduction during face feature extraction, but also makes it possible to use the Fisher feature extraction method effectively in the case of a multi-class face database, so that the face recognition method SRC based on the sparse representation of signals can be better utilized.
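Under the assumptions above, the four sketches compose into the overall flow of Fig. 1; groups (the per-sub-bank training data and labels) and y_img (a test image) are hypothetical inputs:

```python
# Hypothetical end-to-end use of the sketches above on one test image.
subbanks = [fisher_reduce(X_g, lab_g) for X_g, lab_g in groups]      # step 1
candidates = select_candidate_subbanks(y_img, subbanks, k=2)         # steps 2-3
results = []
for g in candidates:                                                 # step 4 (SRC)
    W, B = subbanks[g]
    cls, res = src_classify(reduce_test_image(y_img, W), B, groups[g][1])
    results.append((res, cls))
print(min(results)[1])   # class with the smallest residual over the candidate sub-banks
```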
Claims (4)
1. A sparse representation face recognition method based on multilevel classification, comprising the steps of:
(1) randomly dividing the face database used for training into n sub-banks, with n set to 4, applying the Fisher criterion to each sub-bank to realize dimensionality reduction, and retaining the dimension-reduced training face data and the transformation matrix W corresponding to each sub-bank;
(2) inputting a test face image, carrying out a matrix transformation under the transformation matrix W of each sub-bank to realize fast dimensionality reduction, and retaining the dimension-reduced test face data;
(3) performing an inner-product operation between the dimension-reduced test face data and the training face data B in each sub-bank, selecting the first k sub-banks with the largest inner product values as candidate sub-banks, with k set to 2, and narrowing the search range to these k sub-banks;
(4) in each of the k sub-banks, recognizing the face with the face recognition method SRC based on the sparse representation of signals, and determining the class to which the test face image belongs.
2. The face recognition method according to claim 1, wherein the applying of the Fisher criterion to each sub-bank to realize dimensionality reduction in step (1) is carried out according to the following steps:
(2a) letting the set of training face image samples in a sub-bank be X = {x_i}, i = 1, 2, ..., N, where N is the total number of training face images in the sub-bank and c is the number of classes of training face images in the sub-bank, and computing the between-class scatter matrix S_b and the within-class scatter matrix S_w of the training face image samples in the sub-bank:

S_b = \sum_{i=1}^{c} n_i (\mu_i - \mu)(\mu_i - \mu)^T

S_w = \sum_{i=1}^{c} \sum_{x \in D_i} (x - \mu_i)(x - \mu_i)^T

where n_i is the number of training face images of class i, μ_i is the mean of the face images of class i, μ is the mean of all face images, D_i is the set of training face image samples of class i, and x is a face image in D_i;
(2b) computing the criterion function J(W):

J(W) = \frac{|W^T S_b W|}{|W^T S_w W|}

where W is the optimal matrix that maximizes the criterion function J(W);
(2c) letting w_i be the i-th column vector of the optimal matrix W, where w_i is the eigenvector corresponding to the largest eigenvalue of the following equation:

S_b w_i = \lambda_i S_w w_i

where λ_i is an eigenvalue; because S_b is the sum of c matrices of rank 1 or 0, of which only c-1 are independent, the rank of S_b is at most c-1, so there are at most c-1 non-zero eigenvalues and at most c-1 corresponding eigenvectors, and the optimal matrix W has at most c-1 column vectors;
(2d) projecting the training face images in the sub-bank onto the c-1 column vectors of the optimal matrix W to obtain the projected training face images, whose dimension is c-1, thereby realizing the dimensionality reduction of the training face images.
3. The face recognition method according to claim 1, wherein the recognizing of the face in each of the k sub-banks with the face recognition method SRC based on the sparse representation of signals in step (4) is carried out according to the following steps:
(3a) solving for the sparse representation vector x̂ of the test face image by the following formula:

\hat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \|B x - \tilde{y}\|_2 \le \varepsilon

where x is the sparse representation vector of the test face image to be solved for and ε is an error threshold;
(3b) for each face class i, computing the residual r_i from the sparse representation vector x̂ of the test face image:

r_i(\tilde{y}) = \|\tilde{y} - B \delta_i(\hat{x})\|_2

where δ_i(x̂) is the new vector obtained from the sparse representation vector x̂ in which the entries corresponding to face class i are identical to the corresponding entries of x̂ and all other entries are zero;
(3c) taking the class with the minimum residual as the final face recognition result:

\text{identity}(\tilde{y}) = \arg\min_i r_i(\tilde{y})
4. The face recognition method according to claim 1, wherein the recognizing of the face in each of the k sub-banks with the face recognition method SRC based on the sparse representation of signals in step (4) is carried out according to the following steps:
(4a) solving for the sparse representation vector x̂ of the test face image by the following formula:

\hat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \|[B, I] x - \tilde{y}\|_2 \le \varepsilon

where I is an identity matrix, x is the sparse representation vector of the test face image to be solved for, and ε is an error threshold;
(4b) for each face class i, computing the residual r_i from the sparse representation vector x̂ of the test face image:

r_i(\tilde{y}) = \|\tilde{y} - B \delta_i(\hat{x})\|_2

where δ_i(x̂) is the new vector obtained from the sparse representation vector x̂ in which the entries corresponding to face class i are identical to the corresponding entries of x̂ and all other entries are zero;
(4c) taking the class with the minimum residual as the final face recognition result:

\text{identity}(\tilde{y}) = \arg\min_i r_i(\tilde{y})
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201010522281 CN101976360B (en) | 2010-10-27 | 2010-10-27 | Sparse characteristic face recognition method based on multilevel classification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101976360A true CN101976360A (en) | 2011-02-16 |
CN101976360B CN101976360B (en) | 2013-02-27 |
Family
ID=43576244
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201010522281 Expired - Fee Related CN101976360B (en) | 2010-10-27 | 2010-10-27 | Sparse characteristic face recognition method based on multilevel classification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101976360B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7242807B2 (en) * | 2003-05-05 | 2007-07-10 | Fish & Richardson P.C. | Imaging of biometric information based on three-dimensional shapes |
JP2008276406A (en) * | 2007-04-26 | 2008-11-13 | Toyota Motor Corp | Face image processor |
CN101464950A (en) * | 2009-01-16 | 2009-06-24 | 北京航空航天大学 | Video human face identification and retrieval method based on on-line learning and Bayesian inference |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102298703B (en) * | 2011-04-20 | 2015-06-17 | 中科院成都信息技术股份有限公司 | Classification method based on projection residual errors |
CN102298703A (en) * | 2011-04-20 | 2011-12-28 | 中科院成都信息技术有限公司 | Classification method based on projection residual errors |
CN102915436A (en) * | 2012-10-25 | 2013-02-06 | 北京邮电大学 | Sparse representation face recognition method based on intra-class variation dictionary and training image |
CN102915436B (en) * | 2012-10-25 | 2015-04-15 | 北京邮电大学 | Sparse representation face recognition method based on intra-class variation dictionary and training image |
CN103218609A (en) * | 2013-04-25 | 2013-07-24 | 中国科学院自动化研究所 | Multi-pose face recognition method based on hidden least square regression and device thereof |
CN103218609B (en) * | 2013-04-25 | 2016-01-20 | 中国科学院自动化研究所 | A kind of Pose-varied face recognition method based on hidden least square regression and device thereof |
CN103745465A (en) * | 2014-01-02 | 2014-04-23 | 大连理工大学 | Sparse coding background modeling method |
CN103984918B (en) * | 2014-04-21 | 2015-06-10 | 郑州轻工业学院 | Human face image recognition method based on intra-class and inter-class variation |
CN104318261A (en) * | 2014-11-03 | 2015-01-28 | 河南大学 | Graph embedding low-rank sparse representation recovery sparse representation face recognition method |
CN104318261B (en) * | 2014-11-03 | 2016-04-27 | 河南大学 | A kind of sparse representation face identification method representing recovery based on figure embedding low-rank sparse |
CN105574475A (en) * | 2014-11-05 | 2016-05-11 | 华东师范大学 | Common vector dictionary based sparse representation classification method |
CN105574475B (en) * | 2014-11-05 | 2019-10-22 | 华东师范大学 | A kind of rarefaction representation classification method based on common vector dictionary |
CN104463148A (en) * | 2014-12-31 | 2015-03-25 | 南京信息工程大学 | Human face recognition method based on image reconstruction and Hash algorithm |
CN104463148B (en) * | 2014-12-31 | 2017-07-28 | 南京信息工程大学 | Face identification method based on Image Reconstruction and hash algorithm |
US9747494B2 (en) | 2015-11-16 | 2017-08-29 | MorphoTrak, LLC | Facial matching system |
CN105868309A (en) * | 2016-03-24 | 2016-08-17 | 广东微模式软件股份有限公司 | Image quick finding and self-service printing method based on facial image clustering and recognizing techniques |
CN105868309B (en) * | 2016-03-24 | 2019-05-24 | 广东微模式软件股份有限公司 | It is a kind of quickly to be searched and self-help print method based on facial image cluster and the image of identification technology |
CN106066994A (en) * | 2016-05-24 | 2016-11-02 | 北京工业大学 | A kind of face identification method of the rarefaction representation differentiated based on Fisher |
CN113239917A (en) * | 2021-07-12 | 2021-08-10 | 南京邮电大学 | Robust face recognition method based on singular value decomposition |
Also Published As
Publication number | Publication date |
---|---|
CN101976360B (en) | 2013-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101976360B (en) | Sparse characteristic face recognition method based on multilevel classification | |
US10255691B2 (en) | Method and system of detecting and recognizing a vehicle logo based on selective search | |
CN106971174B (en) | CNN model, CNN training method and CNN-based vein identification method | |
CN102609716B (en) | Pedestrian detecting method based on improved HOG feature and PCA (Principal Component Analysis) | |
CN101226590B (en) | Method for recognizing human face | |
CN102521561B (en) | Face identification method on basis of multi-scale weber local features and hierarchical decision fusion | |
CN101739555B (en) | Method and system for detecting false face, and method and system for training false face model | |
CN103870811B (en) | A kind of front face Quick method for video monitoring | |
CN102722708B (en) | Method and device for classifying sheet media | |
CN103279768B (en) | A kind of video face identification method based on incremental learning face piecemeal visual characteristic | |
CN102855496A (en) | Method and system for authenticating shielded face | |
CN102163281B (en) | Real-time human body detection method based on AdaBoost frame and colour of head | |
CN102982349A (en) | Image recognition method and device | |
CN102156887A (en) | Human face recognition method based on local feature learning | |
CN103164710B (en) | A kind of choice set based on compressed sensing becomes face identification method | |
CN106909946A (en) | A kind of picking system of multi-modal fusion | |
CN105046205A (en) | Method for identifying palm print on the basis of fusion of local feature and global feature | |
CN102254183A (en) | Face detection method based on AdaBoost algorithm | |
CN114241564A (en) | Facial expression recognition method based on inter-class difference strengthening network | |
Sharma et al. | Deep convolutional neural network with ResNet-50 learning algorithm for copy-move forgery detection | |
Wohlhart et al. | Discriminative Hough Forests for Object Detection. | |
CN105608443A (en) | Multi-feature description and local decision weighting face identification method | |
CN104408468A (en) | Face recognition method based on rough set and integrated learning | |
CN104318224A (en) | Face recognition method and monitoring equipment | |
CN104463234A (en) | Face recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130227 Termination date: 20181027 |
CF01 | Termination of patent right due to non-payment of annual fee |