CN114283052A - Method and device for cosmetic transfer and training of cosmetic transfer network


Info

Publication number
CN114283052A
Authority
CN
China
Prior art keywords
image
makeup
local
migration
migrated
Prior art date
Legal status
Pending
Application number
CN202111653519.6A
Other languages
Chinese (zh)
Inventor
吴文岩
郑程耀
甘世康
唐斯伟
张丽
钱晨
Current Assignee
Beijing Datianmian White Sugar Technology Co ltd
Original Assignee
Beijing Datianmian White Sugar Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Datianmian White Sugar Technology Co ltd
Priority to CN202111653519.6A
Publication of CN114283052A
Priority to PCT/CN2022/125086 (WO2023124391A1)

Classifications

    • G06N 3/02: Neural networks
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; learning methods
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Evolutionary Computation
  • Computing Systems
  • Health & Medical Sciences
  • General Health & Medical Sciences
  • Multimedia
  • Software Systems
  • Computer Vision & Pattern Recognition
  • Artificial Intelligence
  • Life Sciences & Earth Sciences
  • Medical Informatics
  • Databases & Information Systems
  • Biomedical Technology
  • Biophysics
  • Computational Linguistics
  • Data Mining & Analysis
  • Molecular Biology
  • General Engineering & Computer Science
  • Mathematical Physics
  • Image Processing

Abstract

Embodiments of the present disclosure provide a makeup migration method and apparatus and a method and apparatus for training a makeup migration network. The makeup migration method includes: acquiring a target image to be migrated and a local image to be migrated, where the target image to be migrated includes a target object and the local image to be migrated includes a local region of the target object; migrating a preset makeup style onto the target image to be migrated and onto the local image to be migrated, respectively, through a target makeup migration network, to obtain a migration target image and a migration local image; and fusing the migration target image and the migration local image to obtain a makeup migration result of the target object.

Description

Method and device for cosmetic transfer and training of cosmetic transfer network
Technical Field
The present disclosure relates to the field of computer vision technology, and in particular to a makeup migration method and apparatus, a method and apparatus for training a makeup migration network, a computer device, and a storage medium.
Background
Makeup migration is an important direction in the image-generation field of computer vision. It refers to migrating a makeup style onto an image that does not have that style; for a face image, for example, makeup migration may transfer a given makeup style onto a plain (un-made-up) face image. However, conventional makeup migration methods achieve only a low reduction degree, i.e., they reproduce the reference makeup with low fidelity.
Disclosure of Invention
In a first aspect, embodiments of the present disclosure provide a makeup migration method, including: acquiring a target image to be migrated and a local image to be migrated, where the target image to be migrated includes a target object and the local image to be migrated includes a local region of the target object; migrating a preset makeup style onto the target image to be migrated and onto the local image to be migrated, respectively, through a target makeup migration network, to obtain a migration target image and a migration local image; and fusing the migration target image and the migration local image to obtain a makeup migration result of the target object.
In the embodiments of the present disclosure, the target makeup migration network performs makeup migration separately on the target image to be migrated and on the local image to be migrated. Migrating onto the local image preserves the makeup details of the local region more faithfully, which improves the reduction degree of the makeup migration; migrating onto the target image, which contains the complete target object, improves the naturalness of the migration. The scheme of the embodiments therefore balances the naturalness and the reduction degree of makeup migration.
In some embodiments, obtaining the local image to be migrated includes: performing target detection on an original image of the target object to determine the key point positions of the target object in the original image; and cropping the local image to be migrated from the original image based on those key point positions. Through key point detection, the local image to be migrated can be cropped accurately from the original image.
In some embodiments, cropping the local image to be migrated from the original image based on the key point positions of the target object includes: cropping, based on the key point positions, an image region of a first preset size from the original image, where the first preset size is smaller than the size of the target object and larger than the size of the local region, and the local region is located at a first preset position within the cropped region; and taking the cropped region of the first preset size as the local image to be migrated. Because the first preset size is larger than the local region, the cropped image contains the complete local region. Furthermore, because the local region sits at the first preset position within the crop, its location is known without re-detecting it, and the fixed placement simplifies subsequent processing of the local region.
In some embodiments, the target image to be migrated is a video frame containing the target object acquired in real time, and the local image to be migrated is a local video frame containing the local region, cropped from that video frame. This embodiment can perform makeup migration on each consecutive video frame acquired in real time, with high real-time performance.
In some embodiments, the target makeup migration network is obtained by joint training on a sample image to be migrated, a local sample image to be migrated, a reference sample image, and a reference local sample image, where the reference sample image includes a target object having the preset makeup style, and the sample image to be migrated includes a target object having a makeup style other than the preset makeup style. The local sample image to be migrated includes a local region of the target object in the sample image to be migrated, the reference local sample image includes a local region of the target object in the reference sample image, and the local region in the local sample image to be migrated is the same as the local region in the reference local sample image. Trained jointly on these four kinds of images, the target makeup migration network learns the makeup features of the local region and of the complete target object simultaneously, so the trained network balances the naturalness and the reduction degree of makeup migration.
In some embodiments, the reference sample image is selected from a first image set containing multiple images, each of which includes the same target object having the preset makeup style; the sample image to be migrated is selected from a second image set containing multiple images, each of which includes a target object having a makeup style other than the preset makeup style, and the target objects in at least two images of the second image set are different.
The target makeup migration network of the embodiments of the present disclosure is trained on the multiple images in the first image set as sample images. The target objects in different images of the first image set often have different angles and/or expressions, and may be affected differently by illumination, shadow, and occlusion, so the images in the first image set cover the visual effects a specific makeup exhibits under various angles (for example, frontal face, side face, or tilted head) and expressions (for example, laughing, opening the mouth, or narrowing the eyes). The trained network can therefore learn the subtle variations of the same makeup style across the images in the first image set, which improves the reduction degree of makeup migration. Further, since at least two images in the second image set contain different target objects, the network can also learn the subtle variations across target objects with different id information (for example, single versus double eyelids, or thick versus thin lips). This embodiment can therefore balance the retention of the user's id information against the intensity of makeup migration (i.e., the reduction degree).
In some embodiments, the reference sample image comprises a plurality of video frames in a sample video, each of the plurality of video frames in the sample video comprising a target object having the preset cosmetic style.
In some embodiments, the target makeup migration network includes a first sub-network and a second sub-network: the first sub-network migrates the preset makeup style onto the target image to be migrated, and the second sub-network migrates the preset makeup style onto the local image to be migrated. Using two different sub-networks for the overall and the local makeup migration gives the target makeup migration network as a whole both the ability to migrate the overall makeup features of the target object and the ability to migrate the local makeup features of the local region.
In some embodiments, fusing the migration target image and the migration local image to obtain the makeup migration result of the target object includes: performing semantic segmentation on the migration target image to obtain the position of the local region in the migration target image; and fusing the migration local image into the migration target image based on that position. Semantic segmentation locates the local region in the migration target image accurately, so the migration local image is fused in precisely and the final makeup migration result achieves high naturalness and reduction degree.
In some embodiments, after fusing the migration target image and the migration local image, the method further includes: acquiring the color of the target object after migration; and adjusting the color of the regions of the target object on which makeup migration was not performed, based on the post-migration color. Adjusting these regions so that their color transitions naturally into the color of the migrated regions further improves the naturalness of the makeup migration result.
In a second aspect, embodiments of the present disclosure provide a method for training a makeup migration network, the method including: acquiring a sample image to be migrated and a local sample image to be migrated, where the sample image to be migrated includes a target object and the local sample image to be migrated includes a local region of the target object; migrating the makeup style of a target object in a reference sample image onto the sample image to be migrated through an original makeup migration network to obtain a migration sample image; migrating the makeup style of the target object in a reference local sample image onto the local sample image to be migrated through the original makeup migration network to obtain a migration local sample image, where the reference local sample image includes a local region of the target object in the reference sample image, and the local region included in the local sample image to be migrated is the same as the local region included in the reference local sample image; and training the original makeup migration network based on the migration sample image and the migration local sample image to obtain a target makeup migration network.
Training the target makeup migration network jointly on the sample image to be migrated, the local sample image to be migrated, the reference sample image, and the reference local sample image lets the network learn the makeup features of the local region and of the complete target object simultaneously, so the trained target makeup migration network balances the naturalness and the reduction degree of makeup migration.
In some embodiments, training the original makeup migration network based on the migration sample image and the migration local sample image includes: establishing a first loss function based on the migration sample image; establishing a second loss function based on the migration local sample image; and training the original makeup migration network based on the first loss function and the second loss function to obtain the target makeup migration network.
In some embodiments, the target makeup migration network includes a first sub-network for migrating a preset makeup style onto the target image to be migrated and a second sub-network for migrating a preset makeup style onto the local image to be migrated; training the original makeup migration network based on the first loss function and the second loss function includes: training an original first sub-network based on the first loss function to obtain the first sub-network; and training an original second sub-network based on the second loss function to obtain the second sub-network. Two different sub-networks handle the overall and the local makeup migration respectively, and because the two sub-networks are independent and do not affect each other, the target makeup migration network as a whole can migrate the overall makeup features of the target object and the local makeup features of the local region at the same time.
In some embodiments, the loss function used to train a sub-network includes at least one of: a first target loss function characterizing the realism loss of the sub-network's output image; a second target loss function characterizing the attribute-similarity loss between the sub-network's output image and the image to be migrated input to the sub-network; a third target loss function characterizing the makeup-similarity loss between the sub-network's output image and the reference image input to the sub-network; and a fourth target loss function characterizing the similarity loss between a target sample image and the image to be migrated input to the sub-network, where the target sample image is obtained by migrating the makeup style of the image to be migrated input to the sub-network onto the sub-network's output image. The sub-network is the first sub-network or the second sub-network.
The first target loss function improves the realism of the makeup migration result; the second keeps the attribute features of the target object (for example, whether the eyelids are single or double) as consistent as possible before and after migration; the third improves the reduction degree of the makeup migration result; and the fourth keeps the structural information of the target object as consistent as possible before and after migration.
In some embodiments, the reference sample image comprises a plurality of video frames in a video, each reference local sample image comprises a local region of the target object in one video frame, and the target objects in the video frames are the same and have the same makeup style; and/or the number of sample images to be migrated is greater than 1, the target objects in at least two sample images to be migrated are different objects, and each local sample image to be migrated comprises a local region of the target object in one sample image to be migrated.
Because the target objects in different video frames often have different angles and/or expressions, and may be affected differently by illumination, shadow, and occlusion, the multiple video frames cover the visual effects a specific makeup exhibits under various angles and expressions. The trained target makeup migration network can therefore learn the subtle variations of the same makeup style across video frames, which improves the reduction degree of makeup migration.
In a third aspect, embodiments of the present disclosure provide a makeup migration apparatus, including: an acquisition module configured to acquire a target image to be migrated and a local image to be migrated, where the target image to be migrated includes a target object and the local image to be migrated includes a local region of the target object; a migration module configured to migrate a preset makeup style onto the target image to be migrated and the local image to be migrated, respectively, through a target makeup migration network to obtain a migration target image and a migration local image; and a fusion module configured to fuse the migration target image and the migration local image to obtain a makeup migration result of the target object.
In a fourth aspect, embodiments of the present disclosure provide a training apparatus for a makeup migration network, including: an acquisition module configured to acquire a sample image to be migrated and a local sample image to be migrated, where the sample image to be migrated includes a target object and the local sample image to be migrated includes a local region of the target object; a first migration module configured to migrate the makeup style of a target object in a reference sample image onto the sample image to be migrated through an original makeup migration network to obtain a migration sample image; a second migration module configured to migrate the makeup style of the target object in a reference local sample image onto the local sample image to be migrated through the original makeup migration network to obtain a migration local sample image, where the reference local sample image includes a local region of the target object in the reference sample image, and the local region included in the local sample image to be migrated is the same as the local region included in the reference local sample image; and a training module configured to train the original makeup migration network based on the migration sample image and the migration local sample image to obtain a target makeup migration network.
In a fifth aspect, the embodiments of the present disclosure provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method according to any of the embodiments.
In a sixth aspect, the embodiments of the present disclosure provide a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any of the embodiments when executing the program.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic view of makeup migration.
Fig. 2 is a flow chart of a makeup transfer method of an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a target image to be migrated and a partial image to be migrated according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of an output result of an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a training process of an embodiment of the present disclosure.
Fig. 6 is a flow chart of a makeup transfer method according to another embodiment of the present disclosure.
Fig. 7 is a flowchart of a training method of a makeup migration network according to an embodiment of the present disclosure.
Fig. 8 is a flowchart of a training method of a makeup migration network according to another embodiment of the present disclosure.
Fig. 9 is a block diagram of a makeup transfer device according to an embodiment of the present disclosure.
Fig. 10 is a block diagram of a makeup transfer device according to another embodiment of the present disclosure.
Fig. 11 is a block diagram of a training apparatus of a makeup migration network according to an embodiment of the present disclosure.
Fig. 12 is a block diagram of a training apparatus of a makeup migration network according to another embodiment of the present disclosure.
Fig. 13 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In order to make the technical solutions in the embodiments of the present disclosure better understood and make the above objects, features and advantages of the embodiments of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.
Makeup migration refers to migrating a makeup style onto an image that does not have that style. Typically, an image with a certain makeup style is obtained, the features of the makeup in that image are extracted through a neural network, and the extracted makeup features are migrated onto the image without the makeup style. The makeup may be produced by any means, such as painting, tattooing, stickers, or cosmetics, and the image may show a body part (e.g., back, face) of a human or an animal. For ease of explanation, the scheme of the embodiments of the present disclosure is described below using a face image as an example. Fig. 1 is a schematic diagram of makeup migration: a preset makeup style is migrated onto a plain (un-made-up) face image 101, producing a migration result 102 that has the preset makeup style. The face in the migration result 102 and the face in the face image 101 belong to the same person. Besides the makeup styles of the eye region 1021, the cheek region 1022, and the lip region 1023 shown in the figure, the preset makeup style may include makeup styles of other regions. The plain image in this embodiment is a face image without makeup; of course, the preset makeup style may also be migrated onto a face image that already has another makeup style, either covering the original makeup or blending with it to form a new makeup style.
A makeup migration algorithm must preserve the reduction degree of each region of the facial makeup: for the eye region, the eye shadow, eyeliner, and pupil effects; for the mouth region, the highlight, color, and texture of the lipstick must all be transferred onto the user's face with adequate fidelity. In the related art, however, a sufficiently high reduction degree is often hard to achieve.
In addition, the makeup migration method in the related art often has the following problems:
(1) The robustness and naturalness of the migration result are low. Because the face image to be migrated may come with arbitrary illumination, angle, face shape, or occlusion, the related art can hardly guarantee that the migrated image remains natural across all such face images, and the migration result often looks incongruous.
(2) While migrating the makeup, the migration algorithm often alters the recognizable id information (also called id attribute information) of the user. The id information characterizes the user's identity; changes to the shape of the facial features, the expression, the face angle, the single/double eyelid attribute of the eyes, the opening or closing of the mouth, and so on can all change the id information, i.e., cause one user to be recognized as another. Under a sufficient migration strength, the related art therefore struggles to keep the user's id information unchanged, i.e., to guarantee a high degree of id retention.
(3) Only a single image can be used as the makeup reference, which cannot express the makeup under different viewing angles and expressions, so the reduction degree and naturalness of the migrated makeup are poor.
Moreover, the related art can generally strengthen only one dimension among makeup reduction degree, naturalness, and retention of the user's recognizable id information, rather than all of them at once: it either guarantees post-migration naturalness and id retention at the cost of a lower reduction degree, or sacrifices naturalness and visibly modifies id attribute information of the face image to achieve a stronger reduction degree.
Based on this, embodiments of the present disclosure provide a method of cosmetic transfer, see fig. 2, comprising:
step 201: acquiring a target image to be migrated pic_global and a local image to be migrated pic_local, where pic_global includes a target object and pic_local includes a local region of the target object;
step 202: migrating a preset makeup style onto the target image to be migrated pic_global and the local image to be migrated pic_local, respectively, through a target makeup migration network, to obtain a migration target image gen_global and a migration local image gen_local;
step 203: fusing the migration target image gen_global and the migration local image gen_local to obtain a makeup migration result of the target object.
The scheme of the embodiments of the present disclosure can be used in products such as interactive entertainment, makeup beautification, and virtual makeup try-on. The target makeup migration network performs makeup migration on the target image to be migrated pic_global and on the local image to be migrated pic_local; migrating on pic_local restores the makeup details of the local region better, which improves the reduction degree, while migrating on pic_global, which contains the complete target object, improves the naturalness. The scheme therefore balances the naturalness and the reduction degree of makeup migration.
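To make the three steps concrete, here is a minimal, hypothetical sketch in Python; `net`, `crop_local_regions`, and `fuse` stand in for the target makeup migration network and for the cropping and fusion operations described below, and none of these names come from the patent itself.

```python
import numpy as np

def makeup_transfer(pic_global: np.ndarray, net, crop_local_regions, fuse) -> np.ndarray:
    """Hypothetical wiring of steps 201-203 (all names are illustrative)."""
    # Step 201: acquire the target image and its local images (e.g. eyes, mouth).
    pic_locals = crop_local_regions(pic_global)   # e.g. {"left_eye": ndarray, ...}

    # Step 202: migrate the preset makeup style onto the whole image and
    # onto each local image through the target makeup migration network.
    gen_global = net.migrate_global(pic_global)
    gen_locals = {name: net.migrate_local(name, img)
                  for name, img in pic_locals.items()}

    # Step 203: fuse the migrated local images back into the migrated
    # target image to produce the final result gen_face.
    return fuse(gen_global, gen_locals)
```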
In step 201, the target image to be migrated may be a single image or one or more video frames of a video, and it may be acquired in real time or acquired and stored in advance. In some embodiments, the target image to be migrated is a video frame containing the target object acquired in real time, and the local image to be migrated is a local video frame containing the local region, cropped from that video frame. The video frames acquired in real time may include multiple consecutive or non-consecutive frames; makeup migration may be performed on every video frame containing the target object, or only on video frames that contain the target object and satisfy preset conditions. The preset conditions may include, but are not limited to, a definition (sharpness) condition and a size condition of the target object: the definition condition is satisfied when the definition of the video frame exceeds a preset threshold, and the size condition is satisfied when the size of the target object in the frame falls within a preset interval.
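As one illustration of such gating (not taken from the patent), the definition condition could be a variance-of-Laplacian sharpness score and the size condition a bound on the detected face box; the thresholds below are made-up placeholders.

```python
import cv2
import numpy as np

def frame_qualifies(frame: np.ndarray, face_box, sharpness_thresh: float = 100.0,
                    size_range=(200, 900)) -> bool:
    """Illustrative gate: sharpness via variance of the Laplacian, plus a
    check that the detected face size lies within a preset interval."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    x, y, w, h = face_box
    return sharpness > sharpness_thresh and size_range[0] <= max(w, h) <= size_range[1]
```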
The target image to be migrated pic_global may include the complete target object, for example a human face, which is the object on which makeup migration is to be performed. The target object may contain multiple local regions, for example the eye, nose, and mouth regions of a face. The local image to be migrated pic_local may include one or more local regions; considering the makeup type and effect, it may include at least one of a left-eye region, right-eye region, nose region, forehead region, apple-muscle (cheek) region, and so on. To restore details better, pic_local may also include only a single local region, according to the requirements of the makeup migration.
The local image to be migrated pic _ local may be obtained by performing target detection and image segmentation on the target image to be migrated pic _ global. Specifically, target detection may be performed on an original image of the target object, and a position of a key point of the target object in the original image is determined; and cutting out the local image to be migrated from the original image based on the key point position of the target object. For example, face key points such as a left eye key point, a right eye key point, a nose key point, a mouth key point, and the like of a human face may be detected, and a partial image to be migrated in a left eye region may be cut out from the original image based on the position of the left eye key point.
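A crop of this kind might look as follows; `detect_landmarks` is a hypothetical detector, and centering the key point in a fixed-size window anticipates the first preset size and first preset position discussed next.

```python
import numpy as np

def crop_local(image: np.ndarray, center_xy, size: int = 256) -> np.ndarray:
    """Crop a size x size region whose center (the first preset position)
    coincides with a key point, e.g. the left-eye center. Coordinates are
    clamped so the crop stays inside the image."""
    cx, cy = int(center_xy[0]), int(center_xy[1])
    half = size // 2
    h, w = image.shape[:2]
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    x1, y1 = min(x0 + size, w), min(y0 + size, h)
    return image[y0:y1, x0:x1]

# Usage sketch with a hypothetical landmark detector:
# landmarks = detect_landmarks(original_image)   # e.g. {"left_eye": (x, y), ...}
# pic_local_left_eye = crop_local(original_image, landmarks["left_eye"])
```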
In some embodiments, the partial image to be migrated, which is cut out from the original image, may be an image area having a first preset size in the original image, where the first preset size is smaller than the size of the target object and larger than the size of the partial area, and the partial area is located at a first preset position in the image area of the first preset size. Alternatively, the first preset position may be a central position of the image, or an intersection position of a transverse three-line and a longitudinal three-line in the image, or other positions. Because the first preset size is larger than the size of the local area, the cut local image to be migrated can include the complete local area.
Further, target detection can be performed on an original image of the target object, the position and the angle of the target object in the original image are determined, and the target image to be migrated is cut out from the original image based on the position and the angle of the target object in the original image. An affine matrix can be established based on the position and the angle of the target object in the original image, and the target image to be migrated can be cut out from the original image based on the affine matrix.
In some embodiments, the target image to be migrated, which is cut out from the original image, may be an image area having a second preset size in the original image, the second preset size is larger than the size of the target object, and the target object is located at a second preset position in the image area having the second preset size. Alternatively, the second preset position may be a central position of the image, or an intersection position of the transverse three-line and the longitudinal three-line in the image, or another position. And the second preset size is larger than the size of the target object, so that the cut target image to be migrated can comprise the complete target object.
In some embodiments, the first predetermined size is 256 × 256, the second predetermined size is 1024 × 1024, and the size of the target object is 800 × 800. Of course, the above numerical values are merely illustrative and are not intended to limit the present disclosure. In practical application, other sizes can be set according to requirements. Under the condition that the size of the original image is too large or too small, the original image can be zoomed first, and then the target image to be migrated with the required size can be cut out from the zoomed original image. Alternatively, an image area including the target object may be cut out from the original image, and then the cut-out image area may be scaled to a desired size.
Further, since the original image including the target object may include a background region, the original image may be cropped or background-segmented to obtain an image region where the target object is located (i.e., the target image to be migrated).
In some embodiments, the target object in the target image to be migrated pic_global may also be adjusted to a preset angle, for example one at which the crown and the chin of the target object are aligned along the vertical direction. The angle adjustment can be realized through an affine transformation. After the angle adjustment, various processes such as image segmentation and feature extraction can be performed on pic_global more conveniently.
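One common way to sketch such an alignment (an assumption, not the patent's specific affine matrix) is to rotate about the midpoint between the eyes by the tilt of the eye line, then warp into the working size:

```python
import cv2
import numpy as np

def align_face(image: np.ndarray, left_eye, right_eye, out_size: int = 1024) -> np.ndarray:
    """Rotate the face upright around the eye midpoint, then warp the
    result into an out_size x out_size target image to be migrated."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))           # tilt of the eye line
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)  # 2x3 affine matrix
    return cv2.warpAffine(image, M, (out_size, out_size))
```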
In the above embodiments, the original image may be an image uploaded by a user (for example, an image stored in a phone album) or an image acquired in real time by an image acquisition device. The user may also crop the original image into a qualifying target image to be migrated and upload that directly for makeup migration. The local image to be migrated is cropped based on the key point positions of the target object: it may be cropped directly from the original image, or the target image to be migrated may be cropped from the original image first and the local image then cropped from it.
Fig. 3 shows the target image to be migrated pic_global and the local images to be migrated pic_local of some embodiments, where pic_local includes a right-eye local image, a left-eye local image, and a mouth local image. In practice, pic_local is not limited to these three local images; images of other local areas may be configured according to the actual makeup migration requirements, which the present disclosure does not limit.
In step 202, the target makeup migration network has learned the makeup features of the preset makeup style in advance, so it can migrate the preset makeup style onto the target image to be migrated pic_global and onto the local image to be migrated pic_local. In some embodiments, one target makeup migration network corresponds to one preset makeup style, and images having that style are used as sample images to train it. In other embodiments, one target makeup migration network may correspond to multiple preset makeup styles; images having each of those styles are used as sample images, with each image carrying label information identifying the preset makeup style it corresponds to.
In some embodiments, the target makeup migration network may include a first sub-network for migrating the preset makeup style onto pic_global and a second sub-network for migrating the preset makeup style onto pic_local. The number of second sub-networks is at least one; when there are at least two, different second sub-networks perform makeup migration on local images pic_local of different local regions. For example, a face image may contain left-eye, right-eye, nose, and mouth regions, so at least four second sub-networks may handle those four regions respectively. Each sub-network may include a makeup style extractor that extracts a makeup feature F_ref from a makeup image ref, and a generator that produces the image after style migration (i.e., the migration target image gen_global or the migration local image gen_local) from the extracted makeup feature F_ref and the image to be migrated (i.e., pic_global or pic_local).
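In PyTorch-style pseudo-architecture, the extractor/generator split might be sketched like this; the layer choices and feature dimension are placeholders, not the patent's network.

```python
import torch
import torch.nn as nn

class MakeupSubNetwork(nn.Module):
    """Sketch of one sub-network: a style extractor encodes the makeup
    image ref into a feature F_ref, and a generator renders the image to
    be migrated under that style. All architecture details are assumed."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.style_extractor = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # -> (N, feat_dim, 1, 1)
        )
        self.generator = nn.Sequential(              # placeholder decoder
            nn.Conv2d(3 + feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, pic: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        f_ref = self.style_extractor(ref)            # makeup feature F_ref
        f_map = f_ref.expand(-1, -1, pic.size(2), pic.size(3))
        return self.generator(torch.cat([pic, f_map], dim=1))
```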
This step treats the target image to be migrated pic_global as a whole and obtains the overall makeup migration effect for the target object; at the same time, it fully mines the makeup features of the local regions of the target object, obtaining the makeup migration effect for each local region.
In step 203, the migration local image gen_local is fused into the migration target image gen_global to obtain the final migrated image gen_face (i.e., the makeup migration result), where gen_face contains the target object from the target image to be migrated, now bearing the preset makeup style. For example, if pic_global contains the plain face of user A and the preset makeup style includes gray eye shadow, red lipstick, and blue pupils, then gen_face is an image of user A with gray eye shadow, red lipstick, and blue pupils.
In some embodiments, semantic segmentation may be performed on the migration target image to obtain the position of the local region in it, and the migration local image is fused into the migration target image based on that position to obtain the makeup migration result of the target object. The fusion can be realized with algorithms such as Laplacian (pyramid) fusion or feathered fusion; the present disclosure does not limit the specific algorithm.
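As one hedged example of feathered fusion: soften the segmentation mask with a Gaussian blur and alpha-composite the migration local image into the migration target image. Laplacian-pyramid blending would be a drop-in alternative; the function below is a sketch, not the patent's implementation.

```python
import cv2
import numpy as np

def feather_fuse(gen_global: np.ndarray, gen_local: np.ndarray,
                 mask: np.ndarray, top_left, feather: int = 11) -> np.ndarray:
    """Paste gen_local into gen_global at top_left. `mask` is the local
    region's segmentation mask in [0, 1]; feathering hides the seam."""
    x, y = top_left
    h, w = gen_local.shape[:2]
    soft = cv2.GaussianBlur(mask.astype(np.float32), (feather, feather), 0)
    alpha = soft[..., None]                           # (h, w, 1)
    out = gen_global.astype(np.float32).copy()
    out[y:y + h, x:x + w] = (alpha * gen_local
                             + (1.0 - alpha) * out[y:y + h, x:x + w])
    return np.clip(out, 0, 255).astype(np.uint8)
```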
The fusion process can be realized outside the target makeup migration network and can also be realized inside the target makeup migration network. In an embodiment where image fusion is implemented inside the target makeup migration network, the target makeup migration network may include a first sub-network, a second sub-network, and a third sub-network. The first sub-network is used for transferring a preset makeup style to the target image to be transferred to obtain a transfer target image; the second sub-network is used for transferring a preset makeup style to the local image to be transferred to obtain a transferred local image. And the third sub-network is used for acquiring the migration target image and the migration local image and fusing the migration target image and the migration local image.
In some embodiments, to obtain a better makeup migration effect, the original makeup migration network may be trained in advance on the sample image to be migrated, the local sample image to be migrated, the reference sample image, and the reference local sample image to obtain the target makeup migration network. The reference sample image ref_global includes a complete target object having the preset makeup style, and the target object in ref_global and the target object in the sample image to be migrated samp_global are of the same category, for example both human faces. samp_global includes a target object having a makeup style other than the preset one, i.e., the target object in ref_global wears a different makeup than the one in samp_global. In particular, in the embodiments of the present disclosure a plain face may be treated as a special makeup and thus counts among the makeup categories samp_global may contain.
The local sample image to be migrated samp_local includes a local region of the target object in samp_global, the reference local sample image ref_local includes a local region of the target object in ref_global, and the local region in samp_local is the same as the local region in ref_local. Both samp_local and ref_local may include one or more local regions; for example, both include a left-eye region, or both include a left-eye region and a nose region. samp_local may be obtained by performing target detection and image segmentation on samp_global.
In some embodiments, the reference sample image is selected from a first set of images comprising a plurality of images, each image in the first set of images comprising the same target object having the preset makeup style. Optionally, the reference sample image includes a plurality of video frames in a sample video, each of the plurality of video frames in the sample video including a target object having the preset makeup style. The sample video can be a video acquired by directly acquiring through an image acquisition device or an edited video. The plurality of video frames in the sample video may include a plurality of video frames that are temporally continuous, or may include discontinuous video frames.
In some embodiments, the plurality of video frames in the sample video satisfy at least one of the following conditions: the angles and/or expressions of the target object differ between at least two video frames; the illumination intensity differs between at least two video frames. Using multiple video frames as the reference sample image ref_global supplies enough makeup images for the trained target makeup migration network to mine the detailed variations of the preset makeup style under different angles, illuminations, and expressions, which improves the reduction degree of makeup migration.
In some embodiments, the sample image to be migrated is selected from a second image set containing multiple images, each of which includes a target object with a makeup style other than the preset one, and at least two images in the set contain different target objects. Using images of different target objects as the sample image samp_global lets the trained target makeup migration network fully learn to migrate the makeup style onto objects with different ids, so the makeup migration result is more natural.
Specifically, during training, the makeup style in the reference sample image ref_global is migrated onto the sample image to be migrated samp_global through the original makeup migration network to obtain a migration sample image samp_gen_global; the makeup style in the reference local sample image ref_local is migrated onto the local sample image to be migrated samp_local through the original makeup migration network to obtain a migration local sample image samp_gen_local; and the original makeup migration network is trained based on samp_gen_global and samp_gen_local to obtain the target makeup migration network.
Before training the original makeup migration network, the sample image to be migrated samp_global, the local sample image to be migrated samp_local, the reference sample image ref_global, and the reference local sample image ref_local may be preprocessed, including resizing the images and adjusting the angle of the target object in them. Here, samp_global and ref_global may be adjusted to the same size (e.g., the second preset size), and samp_local and ref_local may be adjusted to the same size (e.g., the first preset size).
In some embodiments, a first loss function may be established based on the migration sample image samp_gen_global, a second loss function may be established based on the migration local sample image samp_gen_local, and the original makeup migration network is trained based on the first and second loss functions to obtain the target makeup migration network. Specifically, when the target makeup migration network includes a first sub-network and a second sub-network, the original first sub-network in the original makeup migration network may be trained based on the first loss function to obtain the first sub-network, and the original second sub-network may be trained based on the second loss function to obtain the second sub-network.
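Structurally, the joint training could proceed as below, with independent optimizers so the two sub-networks do not affect each other; `loss_global` and `loss_local` are placeholders for the first and second loss functions, whose possible terms are sketched after the list that follows.

```python
import torch

def train_step(net_global, net_local, opt_global, opt_local,
               samp_global, ref_global, samp_local, ref_local,
               loss_global, loss_local):
    """One illustrative joint step: each sub-network trains on its own loss."""
    # First sub-network: whole-object makeup migration.
    samp_gen_global = net_global(samp_global, ref_global)
    l_global = loss_global(samp_gen_global, samp_global, ref_global)
    opt_global.zero_grad()
    l_global.backward()
    opt_global.step()

    # Second sub-network: local-region makeup migration.
    samp_gen_local = net_local(samp_local, ref_local)
    l_local = loss_local(samp_gen_local, samp_local, ref_local)
    opt_local.zero_grad()
    l_local.backward()
    opt_local.step()
    return l_global.item(), l_local.item()
```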
The loss function for training a sub-network may include at least one of:
(1) A loss function characterizing the realism loss of the sub-network's output image. The output image may be fed to a discriminator that judges whether it is a synthesized image produced by makeup migration; the objective is that the discriminator cannot tell whether the output image is real or synthesized, so the loss function can be derived from the difference between the discriminator's output and the ground truth. This loss improves the realism and naturalness of the output image. Because it is obtained through the competition between the generator and the discriminator, it may also be called an adversarial generation loss function.
(2) A loss function characterizing the attribute-similarity loss between the sub-network's output image and the image to be migrated that was input to it. Different local regions correspond to different attributes: the eye regions (left and right) may have an eyelid attribute indicating single or double eyelids; the nose region may have a bridge-height attribute; the mouth region may have a mouth-corner-curvature attribute; and so on. The sub-network's output image may be fed to an attribute classifier to obtain its attribute categories, which are compared for similarity against those of the input image to be migrated (samp_global or samp_local) to obtain this loss function. It keeps the id attribute information of the target object after migration as consistent as possible with that before migration, so it may also be called an attribute-preserving loss function.
(3) A loss function characterizing the makeup-similarity loss between the sub-network's output image and the reference image input to it. The output image may be fed to the makeup style extractor included in the sub-network to extract its makeup feature, which is compared for similarity against the makeup feature of the reference image (ref_global or ref_local). By keeping the migrated makeup style as consistent as possible with the style in the reference image, this loss improves the reduction degree of the makeup migration, so it may also be called a style-consistency loss function.
(4) A loss function characterizing the similarity loss between a target sample image and the image to be migrated input to the sub-network, where the target sample image is obtained by migrating the makeup style of the image to be migrated back onto the sub-network's output image. That is, the output image serves as a new image to be migrated, the original input image serves as the reference image, makeup migration is performed again through the sub-network, and the loss is determined from the similarity between the resulting migration and the original input image; it may therefore be called a cycle-consistency loss function. This loss keeps the sub-network's output consistent with the structural information of the target object in the original image to be migrated, where the structural information comprises the semantic information of each point in the image, indicating the local region a pixel belongs to, for example whether it lies in the nose region or the mouth region.
In the above embodiment, the sub-network may be the first sub-network or the second sub-network. When the sub-network is the first sub-network, the image to be migrated, the reference image, and the output image of the sub-network are all images containing the complete target object; when the sub-network is the second sub-network, they are images containing a local region of the target object. When there are multiple second sub-networks, the first sub-network and each second sub-network may be trained based on at least one of the four loss functions, and the attribute-preserving loss functions used by different second sub-networks may be obtained from different attribute categories. For example, the attribute-preserving loss function of the second sub-network that processes the left-eye and right-eye regions may be based on the eyelid attribute category, while that of the second sub-network that processes the mouth region may be based on the lip thickness category and/or the mouth-corner curvature category. A combined sketch of the four losses follows.
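As a concrete illustration, the following is a minimal PyTorch sketch of the four loss functions described above. All module names (discriminator, attr_classifier, style_extractor, feature_extractor, generator) are hypothetical stand-ins for the networks in the training framework, not the actual implementation of this disclosure.

```python
import torch
import torch.nn.functional as F

def adversarial_generation_loss(discriminator, output_image):
    # (1) The generator tries to make the discriminator score the
    # synthesized image as real, so the target label is 1.
    pred = discriminator(output_image)
    return F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))

def attribute_preserving_loss(attr_classifier, output_image, image_to_migrate):
    # (2) The attribute class of the output (e.g. single vs. double
    # eyelid) should match that of the image to be migrated.
    with torch.no_grad():
        target = attr_classifier(image_to_migrate).argmax(dim=1)
    logits = attr_classifier(output_image)
    return F.cross_entropy(logits, target)

def style_consistency_loss(style_extractor, output_image, reference_image):
    # (3) The makeup features (e.g. 64x1 vectors) extracted from the
    # output and the reference image should be similar.
    return F.l1_loss(style_extractor(output_image),
                     style_extractor(reference_image))

def cycle_consistency_loss(feature_extractor, generator,
                           output_image, image_to_migrate):
    # (4) Migrate the makeup of the original input back onto the
    # output; the result should reconstruct the original input.
    makeup_feat = feature_extractor(image_to_migrate)
    reconstructed = generator(output_image, makeup_feat)
    return F.l1_loss(reconstructed, image_to_migrate)
```

In practice these losses would be weighted and summed; the weights are design choices that the disclosure does not specify.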
In some embodiments, makeup transfer may leave the color of the migrated local regions of the target object different from the color of the regions that were not migrated; for example, after makeup transfer the face may differ in color from the neck. Therefore, to further reduce the sense of incongruity of the makeup transfer result and improve naturalness, color transfer may be performed, after the migration target image and the migration local images are fused, on the regions of the target object where makeup transfer was not performed. Specifically, the color of the migrated target object may be obtained, and the color of the regions where makeup transfer was not performed may be adjusted based on it, as illustrated in the sketch below.
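One possible way to realize this adjustment is a Reinhard-style mean/standard-deviation transfer in the LAB color space; the disclosure does not specify the exact algorithm, so the sketch below is only an assumption. The mask is assumed to select the exposed skin regions (e.g., neck and ears) that did not undergo makeup transfer, and source_bgr is assumed to be a crop of the migrated skin whose color statistics serve as the target.

```python
import cv2
import numpy as np

def transfer_color(source_bgr, target_bgr, mask):
    """Shift the colors of the masked (non-migrated) region of
    target_bgr toward the color statistics of source_bgr."""
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    region = mask.astype(bool)
    out = tgt.copy()
    for c in range(3):
        # Match mean and standard deviation channel by channel.
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        t_vals = tgt[..., c][region]
        t_mean, t_std = t_vals.mean(), t_vals.std() + 1e-6
        out[..., c][region] = (t_vals - t_mean) / t_std * s_std + s_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```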
Further, after the makeup transfer result (the output image) of the target object is obtained, the output image may be restored to the size of the region it was cropped from. For example, assuming the target image to be migrated cropped from the original image has a size of 1024 × 1024, the output image can be restored from 1024 × 1024 back to the original size.
The following describes the overall flow of an embodiment of the present disclosure, taking as an example a target image to be migrated that is a face image and local images to be migrated that correspond to the main makeup areas on the face. The local images corresponding to the main makeup areas may include a left-eyebrow, right-eyebrow, left-eye, right-eye, nose, and/or mouth local image; in the following embodiments they are described as including the left-eye, right-eye, and mouth local images. The overall flow of the makeup transfer method of this embodiment is as follows:
Given an original image of any size, the face image and the local images corresponding to the main makeup areas on the face are cropped from the original image and resized to specified sizes; these data are prepared for the subsequent makeup migration. The detailed process is as follows (a condensed code sketch of this pipeline is given after step (4.3)):
(1) Face key point detection is performed on the original image to obtain key point coordinates together with the position and angle information of the face; an affine matrix can be generated based on this position and angle information.
(2) From the original image, a 1024 × 1024 face image can be cropped according to the affine matrix, with the face centered and occupying approximately 800 × 800.
(3) In the 1024 × 1024 face image, the left-eye, right-eye, and mouth local images are cropped according to the face key points and normalized to 256 × 256.
(4) Makeup migration is performed on the face image, the left-eye local image, the right-eye local image, and the mouth local image, and the four migration results are finally fused into one image. Meanwhile, because the color of skin after makeup transfer differs from the original skin color, color transfer is applied to the originally exposed skin regions that did not undergo makeup transfer (such as the neck and ears), reducing the sense of incongruity and improving naturalness. The detailed process is as follows:
(4.1) Based on the key points obtained in (1) and the four migrated images (the migrated face image gen_face, the migrated left-eye local image gen_left_eye, the migrated right-eye local image gen_right_eye, and the migrated mouth local image gen_mouth), a segmentation map of the local regions of the face can be drawn, including the mouth, left-eye, and right-eye regions.
(4.2) According to the segmentation map, the migrated local images gen_left_eye, gen_right_eye, and gen_mouth are fused into the migrated face image gen_face, yielding the fusion result blend_face; the fusion algorithm may be Laplacian (pyramid) blending or another fusion method.
(4.3) Affine transformation is performed on the fusion result blend_face using the inverse of the affine matrix obtained in (1), so that blend_face is restored from 1024 × 1024 to the original size, giving the final migration result, as shown in fig. 4.
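The following condenses steps (1) through (4.3) into a single sketch. detect_keypoints, build_affine_from_keypoints, crop_region, migrate, segment_regions, and fuse are hypothetical placeholders for the operations described above; only the OpenCV calls are real APIs.

```python
import cv2
import numpy as np

def makeup_transfer_pipeline(original_bgr, reference_images):
    # (1) Detect face key points and derive a 2x3 affine matrix from
    # the face position and angle.
    keypoints = detect_keypoints(original_bgr)           # hypothetical
    affine = build_affine_from_keypoints(keypoints)      # hypothetical

    # (2) Crop a 1024x1024 face image, face centered (~800x800).
    face = cv2.warpAffine(original_bgr, affine, (1024, 1024))

    # (3) Crop the local regions and normalize them to 256x256.
    locals_ = {name: crop_region(face, keypoints, name, size=256)  # hypothetical
               for name in ("left_eye", "right_eye", "mouth")}

    # (4) Migrate makeup for the face image and the local images.
    gen_face = migrate(face, reference_images["face"])             # hypothetical
    gen_locals = {n: migrate(img, reference_images[n])
                  for n, img in locals_.items()}

    # (4.1)-(4.2) Draw a segmentation map of the local regions and
    # blend the migrated local images into the migrated face image
    # (e.g. Laplacian pyramid blending).
    seg = segment_regions(gen_face, keypoints)                     # hypothetical
    blend_face = fuse(gen_face, gen_locals, seg)                   # hypothetical

    # (4.3) Warp the fused result back to the original geometry using
    # the inverse affine transform.
    inv = cv2.invertAffineTransform(affine)
    h, w = original_bgr.shape[:2]
    return cv2.warpAffine(blend_face, inv, (w, h))
```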
The target makeup migration network of an embodiment of the present disclosure may include a first sub-network and at least one second sub-network, each of which may include the feature extractor and the generator. A training framework for the second sub-network that performs makeup migration on the left-eye local image is shown in fig. 5; this second sub-network can be trained jointly with a discriminator, an eyelid attribute classifier, and a makeup style extractor. The two feature extractors in the training framework share the same network structure. The other second sub-networks and the first sub-network may be trained with similar frameworks, replacing only the eyelid attribute classifier and the eyelid attribute-preserving loss function with the corresponding attribute classifier and attribute-preserving loss function. The following takes the training of the second sub-network for the left-eye local image as an example; the training of the other sub-networks proceeds analogously. The overall training process is as follows (a sketch of one training iteration is given after step (7)):
(1) A group of video frames is obtained by frame extraction from a complete single-id video, and a single-id makeup data set (i.e., the aforementioned first image set) is built from this group of video frames. Single-id means that all target objects in the video frames have the same id information, and the target object in every video frame has the same makeup style. Depending on duration, a video typically contains 1000 to 5000 frames; a specified number of frames may be extracted according to a chosen extraction strategy (e.g., at fixed frame intervals, or randomly). Plain-face (non-makeup) images with different id information may also be collected to build a multi-id plain-face data set (i.e., the aforementioned second image set), which may include, for example, 15,000 images, each containing an independent plain face. Each image in the single-id makeup data set and the multi-id plain-face data set may undergo face key point detection, face cropping, local region cropping, and similar processing.
(2) The second sub-network is jointly trained with the discriminator, the eyelid attribute classifier, and the makeup style extractor.
(3) For each training iteration, a left-eye local image src_left_eye is randomly drawn from the multi-id plain-face data set as the plain-face image (i.e., the local sample image to be migrated), and a left-eye local image is randomly drawn from the single-id makeup data set as the reference local sample image of the left-eye region, denoted ref_left_eye.
(4) ref_left_eye is fed to the feature extractor to obtain the makeup feature (a 64 × 1 tensor).
(5) src_left_eye is input to the generator together with the makeup feature to generate the migrated image gen_left_eye, the effect of migrating the reference makeup onto the user's left eye. This image should retain the user's id information, such as eyelid attributes and eye size and shape, while carrying the makeup information of the reference image, such as pupil color, eyelashes, and eyeshadow.
(6) The generated result is supervised from the following four aspects, which form the core of the migration training algorithm:
(6.1) The discriminator judges whether the migrated image is a generated image or a real image, establishing the adversarial generation loss function. This loss improves the fidelity (i.e., realism) of the generated result.
(6.2) Eyelid attribute-preserving loss function: the migrated image is input to the eyelid attribute classifier, whose classification result indicates whether the eyelid in the migrated image is a single or double eyelid. Because the user's id information should not change, this classification result should be consistent with that of the plain-face image.
(6.3) Style-consistency loss function: to ensure that the migrated makeup style is the same as that of the reference makeup image, the generated result is also input to the makeup style extractor to obtain a 64 × 1 makeup feature, which should be similar to the one obtained from ref_left_eye.
(6.4) Cycle-consistency loss function: the generated image (i.e., the makeup migration result) is taken as a new plain-face image, the original plain-face image src_left_eye is taken as the reference local sample image, migration is performed again through the framework in the figure, and the resulting image should be similar to src_left_eye.
(7) For the other regions (face, mouth, right eye), the makeup migration training framework is similar to that of the left-eye region described above.
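A minimal sketch of one training iteration for the left-eye sub-network, tying steps (3) through (6) together. It reuses the hypothetical loss functions from the earlier sketch; the dataset objects and network modules are likewise assumptions, not taken from the disclosure.

```python
import random

def train_step(feature_extractor, generator, discriminator,
               eyelid_classifier, style_extractor, optimizer,
               multi_id_plain_faces, single_id_makeup_set):
    # (3) Randomly draw a plain-face left-eye image and a reference
    # left-eye image carrying the target makeup.
    src_left_eye = random.choice(multi_id_plain_faces)
    ref_left_eye = random.choice(single_id_makeup_set)

    # (4) Extract the makeup feature (a 64 x 1 tensor) from the reference.
    makeup_feat = feature_extractor(ref_left_eye)

    # (5) Generate the migrated left-eye image.
    gen_left_eye = generator(src_left_eye, makeup_feat)

    # (6.1)-(6.4) The four supervisions; unit weights are assumed here.
    loss = (adversarial_generation_loss(discriminator, gen_left_eye)
            + attribute_preserving_loss(eyelid_classifier, gen_left_eye, src_left_eye)
            + style_consistency_loss(style_extractor, gen_left_eye, ref_left_eye)
            + cycle_consistency_loss(feature_extractor, generator,
                                     gen_left_eye, src_left_eye))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```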
A user can upload his or her own photo and apply the makeup transfer method of the present disclosure to obtain the migrated photo.
The disclosed embodiments have the following advantages:
(1) A makeup video can be used as input to train the target makeup migration network, where the makeup video comprises multiple video frames, each containing a target object with a preset makeup style. This improves the level of detail and the reduction degree of makeup migration.
(2) During training, an attribute-preserving loss function for each local area is adopted so that the user's local attributes are not changed, improving the retention of user id information during makeup transfer. For example, when the local area is the left-eye region, an eyelid attribute-preserving loss function is adopted when training the second sub-network that performs makeup migration between the local sample image to be migrated and the reference local sample image of the left-eye region: if the eyelid attribute changes in the generated image produced by the second sub-network, the value of this loss function is large; if it does not change, the value is small. By adjusting the network parameters of the second sub-network so that the eyelid attribute-preserving loss takes a small value, the eyelid attributes before and after makeup transfer are preserved, thereby improving the retention of user id information.
(3) The multi-id plain-face data set and the single-id makeup data set are used as sample data. Different images in the single-id makeup data set cover the visual effects of a specific makeup under various angles and expressions, while different images in the multi-id plain-face data set cover the attribute information of target objects with various id information; a target makeup migration network trained on both data sets can therefore learn the fine variations of the same makeup style across different id information. This balances the retention of user id information with the strength (i.e., reduction degree) of makeup migration.
(4) Both local images and the complete image of the target object are used for makeup transfer and for training the target makeup migration network: with the local images, the network better captures the makeup details of each local region; with the complete image, it better grasps the makeup features of the whole target object. This secures both the naturalness of the target object and a high reduction degree in the key makeup areas.
In summary, the embodiments of the present disclosure balance the detail reduction degree and the id retention degree of makeup migration, cover face images in a wide range of conditions, and improve the robustness of makeup migration, without sacrificing performance in any dimension (reduction degree, id retention, robustness, etc.).
Referring to fig. 6, embodiments of the present disclosure also provide a method of cosmetic transfer, the method including:
step 601: acquiring a target image to be migrated;
step 602: transferring a preset makeup style to the target image to be transferred through a pre-trained target makeup transfer network to obtain a transfer target image;
the target makeup migration network is obtained by training an original makeup migration network with each of a plurality of video frames and the migration sample image corresponding to each video frame, where the plurality of video frames contain target objects with the same makeup style, and the migration sample image corresponding to a video frame is an image obtained by migrating, through the original makeup migration network, the makeup style of the target object in that video frame onto a sample image to be migrated.
For details of this embodiment, reference may be made to the makeup migration method in the previous embodiment, which is not described herein again.
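A minimal usage sketch of steps 601 and 602, assuming a hypothetical loader and a network object exposing a migrate method; neither name comes from the disclosure.

```python
import cv2

network = load_target_makeup_network("target_makeup_net.pth")  # hypothetical loader
target_image = cv2.imread("user_photo.jpg")                    # step 601
migrated = network.migrate(target_image)                       # step 602
cv2.imwrite("migrated_photo.jpg", migrated)
```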
Referring to fig. 7, embodiments of the present disclosure also provide a method for training a makeup migration network, which may include:
step 701: acquiring a sample image to be migrated and a local sample image to be migrated, wherein the sample image to be migrated comprises a target object, and the local sample image to be migrated comprises a local area on the target object;
step 702: transferring the makeup style of a target object in a reference sample image to the sample image to be transferred through an original makeup transfer network to obtain a transfer sample image;
step 703: transferring the makeup style of a target object in a reference local sample image to the local sample image to be transferred through an original makeup transfer network to obtain a transferred local sample image, wherein the reference local sample image comprises a local area of the target object in the reference sample image, and the local area included in the local sample image to be transferred is the same as the local area included in the reference local sample image;
step 704: and training the original makeup migration network based on the migration sample image and the migration local sample image to obtain a target makeup migration network.
Details of each step in the above embodiment of the training method are described in the above embodiment of the training process of the target makeup migration network in the makeup migration method, and are not described herein again.
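A minimal sketch of steps 701 through 704, assuming hypothetical first and second sub-network modules and loss helpers (compute_first_loss and compute_second_loss correspond to the first and second loss functions of claims 12 and 13 below); none of these names are from the disclosure.

```python
def train_makeup_migration_step(first_subnet, second_subnet, optimizer, batch):
    # Step 702: migrate the reference makeup onto the sample image
    # containing the complete target object.
    migration_sample = first_subnet(batch["sample"], batch["reference"])

    # Step 703: migrate the reference local makeup onto the local
    # sample image of the same local area.
    migration_local = second_subnet(batch["local_sample"], batch["reference_local"])

    # Step 704: train with a first loss from the global result and a
    # second loss from the local result.
    loss = (compute_first_loss(migration_sample, batch)      # hypothetical
            + compute_second_loss(migration_local, batch))   # hypothetical

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```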
Referring to fig. 8, embodiments of the present disclosure also provide a method for training a makeup migration network, which may include:
step 801: acquiring each video frame in a plurality of video frames, wherein the plurality of video frames comprise target objects with the same makeup style;
step 802: transferring the makeup style of the target object in each video frame to a sample image to be transferred through an original makeup transfer network to obtain a transfer sample image corresponding to each video frame;
step 803: training the original makeup migration network based on each video frame and the migration sample image corresponding to the video frame to obtain a target makeup migration network.
Details of each step in the above embodiment of the training method are described in the above embodiment of the training process of the target makeup migration network in the makeup migration method, and are not described herein again.
The present disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment and detecting or identifying the relevant features, states, and attributes of the target object by means of various vision-related algorithms, an AR effect combining the virtual and the real, matched to the specific application, can be obtained. The target object may involve, for example, a face, limbs, gestures, or actions associated with a human body, or markers associated with objects, or sand tables, display areas, or display items associated with venues or places. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking, and pose or depth detection of objects. Specific applications may involve not only interactive scenarios such as navigation, explanation, reconstruction, and the superimposed display of virtual effects related to real scenes or articles, but also special-effects processing related to people, such as makeup beautification, limb beautification, special-effect display, and virtual model display. The detection or identification of the relevant features, states, and attributes of the target object can be realized by a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or any limitation on implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
As shown in fig. 9, embodiments of the present disclosure also provide a makeup transfer device, the device including:
an obtaining module 901, configured to obtain a target image to be migrated and a local image to be migrated, where the target image to be migrated includes a target object, and the local image to be migrated includes a local area of the target object;
a migration module 902, configured to migrate a preset makeup style to the target image to be migrated and the local image to be migrated through a target makeup migration network, respectively, so as to obtain a migration target image and a migration local image;
a fusion module 903, configured to fuse the migration target image and the migration partial image to obtain a makeup migration result of the target object.
As shown in fig. 10, embodiments of the present disclosure also provide a makeup transfer device, the device including:
an obtaining module 1001, configured to obtain a target image to be migrated;
a migration module 1002, configured to migrate a preset makeup style to the target image to be migrated through a target makeup migration network to obtain a migration target image;
the target makeup migration network is obtained by training an original makeup migration network with each of a plurality of video frames and the migration sample image corresponding to each video frame, where the plurality of video frames contain target objects with the same makeup style, and the migration sample image corresponding to a video frame is an image obtained by migrating, through the original makeup migration network, the makeup style of the target object in that video frame onto a sample image to be migrated.
As shown in fig. 11, an embodiment of the present disclosure also provides a training apparatus of a makeup migration network, the apparatus including:
an obtaining module 1101, configured to obtain a sample image to be migrated and a local sample image to be migrated, where the sample image to be migrated includes a target object, and the local sample image to be migrated includes a local area on the target object;
a first migration module 1102, configured to migrate the makeup style of the target object in the reference sample image to the sample image to be migrated through an original makeup migration network, so as to obtain a migrated sample image;
a second migration module 1103, configured to migrate, through an original makeup migration network, the makeup style of a target object in a reference local sample image onto the local sample image to be migrated to obtain a migration local sample image, where the reference local sample image includes a local area of the target object in the reference sample image, and the local area included in the local sample image to be migrated is the same as the local area included in the reference local sample image;
and the training module 1104 is used for training the original makeup migration network based on the migration sample image and the migration local sample image to obtain a target makeup migration network.
As shown in fig. 12, an embodiment of the present disclosure also provides a training apparatus of a makeup migration network, the apparatus including:
an obtaining module 1201, configured to obtain each video frame of a plurality of video frames, where the plurality of video frames include target objects with the same makeup style;
a migration module 1202, configured to migrate the makeup style of the target object in each video frame to a sample image to be migrated through an original makeup migration network, so as to obtain a migration sample image corresponding to each video frame;
a training module 1203, configured to train the original makeup migration network based on each video frame and the migration sample image corresponding to the video frame, so as to obtain a target makeup migration network.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present specification also provide a computer device, which at least includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to any of the foregoing embodiments when executing the program.
Fig. 13 is a more specific hardware structure diagram of a computing device provided in an embodiment of the present specification, where the device may include: a processor 1301, a memory 1302, an input/output interface 1303, a communication interface 1304, and a bus 1305. Wherein the processor 1301, the memory 1302, the input/output interface 1303 and the communication interface 1304 enable communication connections within the device with each other through the bus 1305.
The processor 1301 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present specification. The processor 1301 may further include a graphics card, such as an Nvidia Titan X or a 1080 Ti.
The memory 1302 may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1302 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present specification are implemented in software or firmware, the relevant program code is stored in the memory 1302 and called and executed by the processor 1301.
The input/output interface 1303 is used for connecting an input/output module to realize information input and output. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 1304 is used for connecting a communication module (not shown in the figure) to implement communication interaction between the present device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 1305 includes a path that transfers information between the various components of the device, such as processor 1301, memory 1302, input/output interface 1303, and communication interface 1304.
It should be noted that although the above-mentioned device only shows the processor 1301, the memory 1302, the input/output interface 1303, the communication interface 1304 and the bus 1305, in a specific implementation process, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method of any of the foregoing embodiments.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
From the above description of the embodiments, it is clear to those skilled in the art that the embodiments of the present specification can be implemented by means of software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the embodiments of the present specification may, in essence or in the parts contributing to the prior art, be embodied in the form of a software product, which may be stored in a storage medium such as ROM/RAM, a magnetic disk, or an optical disc, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or parts of the embodiments of the present specification.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, it is relatively simple to describe, and reference may be made to some descriptions of the method embodiment for relevant points. The above-described apparatus embodiments are merely illustrative, and the modules described as separate components may or may not be physically separate, and the functions of the modules may be implemented in one or more software and/or hardware when implementing the embodiments of the present disclosure. And part or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing are only specific embodiments of the present disclosure. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principles of the embodiments of the present disclosure, and such modifications and refinements shall also fall within the protection scope of the embodiments of the present disclosure.

Claims (19)

1. A method of cosmetic transfer, the method comprising:
acquiring a target image to be migrated and a local image to be migrated, wherein the target image to be migrated comprises a target object, and the local image to be migrated comprises a local area of the target object;
respectively transferring preset makeup styles to the target image to be transferred and the local image to be transferred through a target makeup transfer network to obtain a transfer target image and a transfer local image;
and fusing the migration target image and the migration local image to obtain a makeup migration result of the target object.
2. The method according to claim 1, wherein acquiring the local image to be migrated comprises:
carrying out target detection on an original image of the target object, and determining the position of a key point of the target object in the original image;
and cutting out the local image to be migrated from the original image based on the key point position of the target object.
3. The method of claim 2, wherein the cropping the to-be-migrated partial image from the original image based on the key point position of the target object comprises:
cutting out an image area with a first preset size from the original image based on the key point position of the target object, wherein the first preset size is smaller than the size of the target object and larger than the size of the local area, and the local area is located at a first preset position in the image area with the first preset size;
and determining the image area with the first preset size as the local image to be migrated.
4. The method according to any one of claims 1 to 3, wherein the target image to be migrated is a video frame including the target object acquired in real time, and the local image to be migrated is a local video frame including the local area cropped from the video frame.
5. The method according to any one of claims 1 to 4, wherein the target makeup migration network is obtained based on co-training of a sample image to be migrated, a local sample image to be migrated, a reference sample image, and a reference local sample image;
wherein the reference sample image includes a target object having the preset makeup style, and the sample image to be migrated includes a target object having a makeup style other than the preset makeup style;
the local sample image to be migrated comprises a local region of a target object in the sample image to be migrated, the reference local sample image comprises a local region of the target object in the reference sample image, and the local region in the local sample image to be migrated is the same as the local region in the reference local sample image.
6. The method of claim 5, wherein the reference sample image is selected from a first image set comprising a plurality of images, each image in the first image set comprising the same target object having the preset cosmetic style;
the sample images to be migrated are selected from a second image set, the second image set comprises a plurality of images, each image in the second image set comprises a target object with a makeup style except the preset makeup style, and the target objects in at least two images in the second image set are different.
7. The method according to claim 5 or 6, wherein the reference sample image comprises a plurality of video frames in a sample video, each of the plurality of video frames in the sample video comprising a target object having the preset cosmetic style.
8. The method of any one of claims 1 to 7, wherein the target makeup migration network comprises a first sub-network and a second sub-network;
the first sub-network is used for transferring preset makeup styles to the target image to be transferred;
the second sub-network is used for transferring preset makeup styles to the local image to be transferred.
9. The method according to any one of claims 1 to 8, wherein the fusing the migration target image and the migration partial image to obtain the makeup migration result of the target object includes:
performing semantic segmentation on the migration target image to obtain the position of the local area in the migration target image;
and fusing the migration local image into the migration target image based on the position of the local area in the migration target image to obtain a makeup migration result of the target object.
10. The method according to any one of claims 1 to 9, characterized in that after fusing the migration target image and the migration partial image, the method further comprises:
acquiring the color of the target object after migration;
adjusting the color of the area on the target object where the makeup transfer is not performed based on the color of the target object after the transfer.
11. A method of training a makeup migration network, the method comprising:
acquiring a sample image to be migrated and a local sample image to be migrated, wherein the sample image to be migrated comprises a target object, and the local sample image to be migrated comprises a local area on the target object;
transferring the makeup style of a target object in a reference sample image to the sample image to be transferred through an original makeup transfer network to obtain a transfer sample image;
transferring the makeup style of a target object in a reference local sample image to the local sample image to be transferred through an original makeup transfer network to obtain a transferred local sample image, wherein the reference local sample image comprises a local area of the target object in the reference sample image, and the local area included in the local sample image to be transferred is the same as the local area included in the reference local sample image;
training the original makeup migration network based on the migration sample image and the migration local sample image to obtain a target makeup migration network.
12. The method of claim 11, wherein the training of the original makeup migration network based on the migration sample images and the migration partial sample images comprises:
establishing a first loss function based on the migrated sample image;
establishing a second loss function based on the migrated local sample image;
training an original makeup migration network based on the first loss function and the second loss function to obtain the target makeup migration network.
13. The method according to claim 12, wherein the target makeup migration network includes a first sub-network for migrating a preset makeup style onto the target image to be migrated and a second sub-network for migrating a preset makeup style onto the partial image to be migrated; the training of the original makeup migration network based on the first loss function and the second loss function includes:
training an original first sub-network based on the first loss function to obtain the first sub-network;
and training an original second sub-network based on the second loss function to obtain the second sub-network.
14. The method of claim 13, wherein the loss function used to train a sub-network comprises at least one of:
a loss function for characterizing a loss of realism of an output image of the sub-network;
a loss function for characterizing attribute similarity loss between an output image of the sub-network and an image to be migrated that is input to the sub-network;
a loss function for characterizing a cosmetic similarity loss between an output image of the sub-network and a reference image input to the sub-network;
a loss function for characterizing a loss of similarity between a target sample image and an image to be migrated input to the sub-network; the target sample image is obtained by transferring the makeup style on the image to be transferred which is input into the sub-network to the output image of the sub-network;
the sub-network is the first sub-network or the second sub-network.
15. The method according to any one of claims 11 to 14, wherein the reference sample image comprises a plurality of video frames in a video, each reference local sample image comprises a local area on a target object in one video frame, and the target objects included in the respective video frames are the same and have the same makeup style; and/or
The number of the sample images to be migrated is more than 1, the target objects in at least two sample images to be migrated are different objects, and each local sample image to be migrated comprises a local area on the target object in one sample image to be migrated.
16. A makeup transfer device, characterized in that said device comprises:
the device comprises an acquisition module, a migration module and a migration module, wherein the acquisition module is used for acquiring a target image to be migrated and a local image to be migrated, the target image to be migrated comprises a target object, and the local image to be migrated comprises a local area of the target object;
the transfer module is used for respectively transferring preset makeup styles to the target image to be transferred and the local image to be transferred through a target makeup transfer network to obtain a transfer target image and a transfer local image;
and the fusion module is used for fusing the migration target image and the migration local image to obtain a makeup migration result of the target object.
17. A training device for a makeup migration network, said device comprising:
the device comprises an acquisition module, a migration module and a migration module, wherein the acquisition module is used for acquiring a sample image to be migrated and a local sample image to be migrated, the sample image to be migrated comprises a target object, and the local sample image to be migrated comprises a local area on the target object;
the first transfer module is used for transferring the makeup style of a target object in a reference sample image to the sample image to be transferred through an original makeup transfer network to obtain a transfer sample image;
the second migration module is used for migrating the makeup style of a target object in a reference local sample image to the local sample image to be migrated through an original makeup migration network to obtain a migrated local sample image, wherein the reference local sample image comprises a local area of the target object in the reference sample image, and the local area included in the local sample image to be migrated is the same as the local area included in the reference local sample image;
and the training module is used for training the original makeup migration network based on the migration sample image and the migration local sample image to obtain a target makeup migration network.
18. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 15.
19. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 15 when executing the program.