CN108830820B - Electronic device, image acquisition method, and computer-readable storage medium - Google Patents
Electronic device, image acquisition method, and computer-readable storage medium
- Publication number
- CN108830820B (application CN201810547508.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- target object
- shooting scene
- preset
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses an electronic device, an image acquisition method, and a computer-readable storage medium. While the target object remains in the same pose, the invention acquires, for each preset shooting scene, at least one corresponding first image containing the target object; selects an optimal shooting scene according to a predetermined selection rule and the acquired first images; and, after the optimal shooting scene has been arranged and an instruction to start acquiring images is received, acquires a second image containing the target object. Compared with the prior art, the invention removes the limitation that the shooting scene imposes on the target object, thereby improving the flexibility of the image acquisition method.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an electronic device, an image capturing method, and a computer readable storage medium.
Background
With the development of network technology, live streaming has become a popular form of interaction. To make a live broadcast more engaging and attract more viewers, it is often necessary to replace the actual live background with a virtual background. A typical virtual background replacement method is as follows: the anchor image is extracted from the acquired image by a matting technique and is then composited with a virtual background image, thereby replacing the real background with the virtual one.
At present, to facilitate later matting, the commonly adopted image acquisition method is to build a shooting scene with a green screen and capture images of the anchor against the green background. This facilitates later matting to a certain extent, but it still has a defect: if the anchor wears clothes, shoes, or a hat of the same color as the background, or wears jewelry or accessories of that color, the anchor image cannot be accurately separated from the background image during matting, and part of the anchor image is lost. The current image acquisition method therefore imposes restrictions on the target object and lacks flexibility.
Disclosure of Invention
The main purpose of the present invention is to provide an electronic device, an image acquisition method, and a computer-readable storage medium, so as to solve the problem that the existing image acquisition method restricts the target object and lacks flexibility.
In order to achieve the above object, the present invention provides an electronic device, which includes a memory and a processor, wherein an image acquisition program is stored in the memory, and the image acquisition program when executed by the processor implements the following steps:
a first image acquisition step: while the target object remains in the same pose, acquiring, for each preset shooting scene, at least one corresponding first image containing the target object;
a first selection step: selecting an optimal shooting scene according to a predetermined selection rule and a plurality of acquired first images;
a second image acquisition step: after the arrangement of the optimal shooting scene is completed and an instruction to start acquiring images is received, acquiring a second image containing the target object.
Preferably, the preset shooting scene is a shooting scene with a shooting background of a preset single color, and the first image acquisition step includes:
while the target object remains in the same pose, selecting the preset shooting scenes one by one, and after a preset shooting scene is selected, collecting a first image that corresponds to the selected preset shooting scene and contains the target object;
after the first image corresponding to the selected preset shooting scene has been shot, if any preset shooting scene has not yet been selected, continuing to select the remaining unselected preset shooting scenes, or, if all the preset shooting scenes have been selected, proceeding to the first selection step.
Preferably, the first selecting step includes:
an acquisition step: acquiring the target object image in each of the first images;
a synthesis step: synthesizing all the obtained target object images to generate a standard target object image;
a determination step: determining the similarity between each target object image and the standard target object image;
a second selection step: and selecting a first image with the maximum similarity between the target object image and the standard target object image as an optimal first image, and taking a shooting scene adopted by the optimal first image as an optimal shooting scene.
Preferably, the acquiring step includes:
setting a color threshold according to a background color in the first image;
determining a difference value between a color value and a color threshold value of each pixel in the first image;
and acquiring pixels with the difference value between the color value and the color threshold value smaller than a preset threshold value, and taking an image formed by the acquired pixels as an image of a target object.
Preferably, after the second image acquisition step, the processor executes the image acquisition program, and further implements the steps of:
and acquiring a target object image from the acquired second image, and performing synthesis processing by utilizing the target object image and a preset virtual background to generate a synthesized image.
In addition, in order to achieve the above object, the present invention also provides an image acquisition method, which includes the steps of:
a first image acquisition step: while the target object remains in the same pose, acquiring, for each preset shooting scene, at least one corresponding first image containing the target object;
a first selection step: selecting an optimal shooting scene according to a predetermined selection rule and a plurality of acquired first images;
a second image acquisition step: after the arrangement of the optimal shooting scene is completed and an instruction to start acquiring images is received, acquiring a second image containing the target object.
Preferably, the preset shooting scene is a shooting scene with a shooting background of a preset single color, and the first image acquisition step includes:
while the target object remains in the same pose, selecting the preset shooting scenes one by one, and after a preset shooting scene is selected, collecting a first image that corresponds to the selected preset shooting scene and contains the target object;
after the first image corresponding to the selected preset shooting scene has been shot, if any preset shooting scene has not yet been selected, continuing to select the remaining unselected preset shooting scenes, or, if all the preset shooting scenes have been selected, proceeding to the first selection step.
Preferably, the first selecting step includes:
an acquisition step: acquiring the target object image in each of the first images;
a synthesis step: synthesizing all the obtained target object images to generate a standard target object image;
a determination step: determining the similarity between each target object image and the standard target object image;
a second selection step: and selecting a first image with the maximum similarity between the target object image and the standard target object image as an optimal first image, and taking a shooting scene adopted by the optimal first image as an optimal shooting scene.
Preferably, after the second image acquisition step, the method further comprises:
and acquiring a target object image from the acquired second image, and performing synthesis processing by utilizing the target object image and a preset virtual background to generate a synthesized image.
Furthermore, to achieve the above object, the present invention also proposes a computer-readable storage medium storing an image acquisition program executable by at least one processor to cause the at least one processor to perform the steps of the image acquisition method as set forth in any one of the above.
While the target object remains in the same pose, the invention acquires, for each preset shooting scene, at least one corresponding first image containing the target object; selects an optimal shooting scene according to a predetermined selection rule and the acquired first images; and, after the optimal shooting scene has been arranged and an instruction to start acquiring images is received, acquires a second image containing the target object. Compared with the prior art, by analyzing the plurality of first images of the target object captured in different shooting scenes, the invention can automatically select the optimal shooting scene in which to shoot the target object, eliminating the limitation that the shooting scene imposes on the target object and thereby improving the flexibility of the image acquisition method.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an operating environment of a first and a second embodiment of an image acquisition program according to the present invention;
FIG. 2 is a block diagram of a first embodiment of the image acquisition program according to the present invention;
FIG. 3 is a detailed program block diagram of a selection module in the image acquisition program of the present invention;
FIG. 4 is a block diagram of a second embodiment of the image acquisition program according to the present invention;
FIG. 5 is a flowchart of a first embodiment of an image acquisition method according to the present invention;
FIG. 6 is a detailed flowchart of step S20 in the first embodiment of the image acquisition method of the present invention;
Fig. 7 is a flowchart of a second embodiment of the image acquisition method of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The principles and features of the present invention are described below with reference to the drawings; the examples are provided only to illustrate the invention and are not to be construed as limiting its scope.
The invention provides an image acquisition program.
Referring to fig. 1, a schematic view of an operation environment of a first and a second embodiment of an image acquisition program 10 according to the present invention is shown.
In the present embodiment, the image acquisition program 10 is installed and runs in the electronic device 1. The electronic device 1 may be a computing device such as a desktop computer, a notebook computer, a handheld computer, or a server. The electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13. Fig. 1 shows only an electronic device 1 with components 11-13, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk or a memory of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic apparatus 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic apparatus 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic apparatus 1. The memory 11 is used for storing application software installed in the electronic device 1 and various data, such as program codes of the image acquisition program 10. The memory 11 may also be used to temporarily store data that has been output or is to be output.
The processor 12 may in some embodiments be a central processing unit (Central Processing Unit, CPU), microprocessor or other data processing chip for executing program code or processing data stored in the memory 11, such as executing the image acquisition program 10 or the like.
The display 13 may in some embodiments be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display 13 is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface. The components 11-13 of the electronic device 1 communicate with each other via a program bus.
Referring to fig. 2, a block diagram of a first embodiment of the image acquisition program 10 according to the present invention is shown. In this embodiment, the image acquisition program 10 may be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to carry out the present invention. For example, in fig. 2, the image acquisition program 10 may be divided into a first image acquisition module 101, a selection module 102, and a second image acquisition module 103. A module, as referred to in the present invention, is a series of computer program instruction segments capable of completing a specific function, and is more suitable than a whole program for describing the execution process of the image acquisition program 10 in the electronic device 1, wherein:
the first image acquisition module 101 is configured to acquire, for each preset shooting scene, at least one corresponding first image including the target object when the target object is in the same pose.
The target object may be a person, an object, an animal, or the like; the present invention does not limit what the target object is.
The preset shooting scene is preferably a shooting scene with a shooting background of a single color. For example, five shooting scenes whose shooting backgrounds are red, blue, green, yellow, and purple, respectively, are preset.
Preferably, in this embodiment, the step of collecting at least one corresponding first image including the target object for each preset shooting scene includes:
First, while the target object remains in the same pose, the preset shooting scenes are selected one by one, and after a preset shooting scene is selected, a first image containing the target object corresponding to the selected preset shooting scene is acquired.
Then, after the shooting of the first image corresponding to the selected preset shooting scene is completed, if any preset shooting scene has not yet been selected, the remaining unselected preset shooting scenes continue to be selected; or, if all the preset shooting scenes have been selected, the selection module 102 is invoked.
The selection module 102 is configured to select an optimal shooting scene according to a predetermined selection rule and the acquired plurality of first images.
If the target object is placed against a single-color shooting background and part of the target object has the same or a similar color as that background, then during extraction of the target object image that part cannot be distinguished from the background image, so the extracted target object image is incomplete.
For this reason, an optimal shooting background should be selected before image acquisition, so as to avoid this situation.
In this embodiment, according to a predetermined selection rule and a plurality of acquired first images, an optimal shooting scene is selected as the shooting scene of the second image.
Referring to fig. 3, in the present embodiment, the selection module 102 preferably includes an acquisition unit 1021, a synthesis unit 1022, a determination unit 1023, and a selection unit 1024, where:
an acquiring unit 1021, configured to acquire a target object image of each of the first images.
First, the acquisition unit 1021 sets a color threshold according to the background color in the first image.
Then, a difference value between the color value and a color threshold value of each pixel in the first image is determined.
The difference value can be determined by a color-difference formula, where D denotes the difference value between a first color value C1 (r1, g1, b1) and a second color value C2 (r2, g2, b2). In this embodiment, the first color value is the color threshold, and the second color value is the color value of a pixel in the first image.
Finally, the pixels whose difference value between their color value and the color threshold is smaller than the preset threshold are acquired, and the image formed by the acquired pixels is taken as the target object image.
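The patent text does not reproduce the difference formula itself. As a minimal sketch, assuming a Euclidean RGB distance for D and assuming the color threshold is supplied directly as a background-derived RGB value (the function names and example values below are illustrative only, not part of the patent):

```python
import numpy as np

def color_difference(c1, c2):
    """Difference value D between color values C1(r1, g1, b1) and C2(r2, g2, b2).
    Assumption: Euclidean RGB distance; the patent's exact formula is not given here."""
    return np.sqrt(np.sum((np.asarray(c1, dtype=np.float64)
                           - np.asarray(c2, dtype=np.float64)) ** 2, axis=-1))

def extract_target_image(first_image, color_threshold, preset_threshold):
    """Follow the steps described above: compute, per pixel, the difference value
    between the pixel's color value and the color threshold, keep the pixels whose
    difference value is smaller than the preset threshold, and take the image
    formed by those pixels as the target object image."""
    diff = color_difference(first_image, color_threshold)   # H x W difference values
    mask = diff < preset_threshold                           # pixels kept, as stated in the text
    target = np.zeros_like(first_image)
    target[mask] = first_image[mask]
    return target

# Illustrative call with assumed values: a green background color threshold
# and an arbitrary preset threshold.
# target = extract_target_image(first_image, (0, 255, 0), preset_threshold=100.0)
```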
In other embodiments, the acquiring unit 1021 may instead segment the target object from the background in the first image by a region segmentation method (such as region growing or region splitting and merging) to obtain the target object image; or separate the target object from the background by edge detection (for example, identifying positions where the color value, gray level, or structure changes abruptly as edges); the segmentation of the target object from the background can also be achieved by cluster analysis and similar methods.
A synthesizing unit 1022, configured to perform synthesis processing on the target object images obtained from all the first images, and generate a standard target object image.
The synthesizing unit 1022 treats the target object image of each first image as a layer; the target object images of the plurality of first images thus form a plurality of layers, and these layers are superimposed to generate a composite image, which is the standard target object image.
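A minimal sketch of this layer superposition, assuming each target object image is a full-size array in which non-target pixels are zero (as in the extraction sketch above); the function name is illustrative:

```python
import numpy as np

def synthesize_standard_target(target_images):
    """Treat each target object image (H x W x 3 array) as a layer and superimpose
    the layers: pixels already present in the running result are kept, and pixels
    still missing are filled from the next layer, so that complementary image data
    fragments are stitched into one standard target object image."""
    standard = np.zeros_like(target_images[0])
    for layer in target_images:
        has_pixel = np.any(layer != 0, axis=-1)      # pixels this layer contributes
        missing = ~np.any(standard != 0, axis=-1)    # pixels not yet filled
        fill = has_pixel & missing
        standard[fill] = layer[fill]
    return standard
```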
A determination unit 1023, configured to determine the similarity between the target object image of each of the first images and the standard target object image.
In this embodiment, there are many algorithms by which the determination unit 1023 can determine the similarity between two images, for example histogram matching, matrix decomposition, or the Bhattacharyya coefficient method. A suitable similarity algorithm can be selected according to the specific application scenario to determine the similarity between the target object image of each first image and the standard target object image.
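As one concrete possibility (an assumption, since the patent leaves the algorithm choice open), a histogram-matching similarity can be sketched as follows; the function name and bin count are illustrative:

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=32):
    """Histogram-intersection similarity between two RGB images, averaged over the
    three channels; returns a value in [0, 1], where 1 means identical histograms."""
    scores = []
    for ch in range(3):
        ha, _ = np.histogram(img_a[..., ch], bins=bins, range=(0, 256))
        hb, _ = np.histogram(img_b[..., ch], bins=bins, range=(0, 256))
        ha = ha / max(ha.sum(), 1)   # normalize each histogram to sum to 1
        hb = hb / max(hb.sum(), 1)
        scores.append(np.minimum(ha, hb).sum())
    return float(np.mean(scores))
```

The Bhattacharyya coefficient of the normalized histograms (the sum of element-wise square roots of their products) would be an equally valid drop-in choice.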
And a selecting unit 1024, configured to select a first image with the greatest similarity between the target object image and the standard target object image as an optimal first image, and use a shooting scene adopted by the optimal first image as an optimal shooting scene.
The following is illustrated by way of an example:
An anchor wearing green earrings, blue clothes, and red leather shoes is to be photographed. First, the anchor keeps the same pose while corresponding first images are shot against a blue background, a green background, a red background, and a purple background. Then the target object image in each first image is acquired; at this point, the target object image obtained from the first image with the blue background lacks the image data of the anchor's clothes, the one obtained from the green background lacks the image data of the anchor's earrings, and the one obtained from the red background lacks the image data of the anchor's leather shoes. The target object image obtained from any single first image is thus only a fragment of the image data corresponding to the target object and cannot represent it completely; however, after the target object images of all the first images are synthesized, all the image data fragments corresponding to the target object are stitched together to form a complete target object image.
In addition, if the target object changes (for example, the anchor changes clothing), the electronic device 1 needs to invoke the first image acquisition module 101 and the selection module 102 again to select a new optimal shooting scene.
And the second image acquisition module 103 is used for acquiring a second image containing the target object after the arrangement of the optimal shooting scene is completed and an instruction for starting to acquire the image is received.
The above instruction to start capturing an image may be issued by a user (e.g., the anchor); the user may send the instruction by pressing a capture button, triggering a virtual capture button, voice, a gesture, and so on.
It should be noted that, in the process of acquiring the second image, the second image acquisition module 103 does not restrict the pose of the target object.
In this embodiment, while the target object remains in the same pose, at least one corresponding first image containing the target object is acquired for each preset shooting scene; an optimal shooting scene is selected according to a predetermined selection rule and the acquired first images; and after the arrangement of the optimal shooting scene is completed and an instruction to start acquiring images is received, a second image containing the target object is acquired. Compared with the prior art, by analyzing the plurality of first images of the target object captured in different shooting scenes, this embodiment can automatically select the optimal shooting scene in which to shoot the target object, eliminating the limitation that the shooting scene imposes on the target object and thereby improving the flexibility of the image acquisition method.
As shown in fig. 4, fig. 4 is a block diagram of a second embodiment of the image acquisition program according to the present invention.
The present embodiment further includes, based on the first embodiment, an acquisition module 104 and a synthesis module 105, where:
an acquiring module 104, configured to acquire an image of the target object from the acquired second image.
The method for acquiring the target image from the second image by the acquiring module 104 is the same as the method for acquiring the target image from the first image by the acquiring unit 1021 in the selecting module 102, and will not be described herein.
In this embodiment, after the target object image is acquired, some parameters (for example, smoothness, contrast, brightness, transparency, etc.) of the target object image may also be adjusted.
And a synthesizing module 105, configured to perform synthesis processing with the target object image and preset virtual background data, so as to generate a synthesized image.
When the image of the target object is an image including only the target object, the synthesis module 105 superimposes and merges the image of the target object as a top layer and the preset virtual background image as a bottom layer to generate a synthesized image.
Alternatively, when the target image includes pixel data that constitutes the target, the synthesis module 105 replaces the pixels at the corresponding locations on the virtual background image with the pixels of the target image.
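A minimal sketch of the composition described above (overlaying the target object image on the virtual background, which for a zero-marked target image is equivalent to replacing the corresponding background pixels); the assumption that non-target pixels are zero, and the function name, are illustrative only:

```python
import numpy as np

def composite_with_virtual_background(target_image, virtual_background):
    """Superimpose the target object image (top layer) on the preset virtual
    background image (bottom layer): pixels belonging to the target object replace
    the pixels at the corresponding positions of the virtual background."""
    result = virtual_background.copy()
    target_mask = np.any(target_image != 0, axis=-1)   # pixels that belong to the target
    result[target_mask] = target_image[target_mask]
    return result
```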
Preferably, in the present embodiment, the processor 12 executes the image acquisition program 10, and further implements the following steps:
and encoding and packaging the generated synthesized image to obtain media stream data, and distributing the obtained media stream data to a network.
In this embodiment, the target object image is synthesized with the preset virtual background data, and the synthesized image is encoded, packaged, and distributed to the network in real time, so that the background of the live broadcast picture can be replaced and the personalized requirements of users can be met.
In addition, the invention provides an image acquisition method.
Fig. 5 is a schematic flow chart of a first embodiment of the image acquisition method according to the present invention.
In this embodiment, the method includes:
Step S10: while the target object remains in the same pose, the shooting scene in which the target object is located is changed, and each time the shooting scene is changed, a first image containing the target object corresponding to that shooting scene is collected.
The target object may be a person, an object, an animal, or the like; the present invention does not limit what the target object is.
The preset shooting scene is preferably a shooting scene with a shooting background of a single color. For example, five shooting scenes whose shooting backgrounds are red, blue, green, yellow, and purple, respectively, are preset.
Preferably, in this embodiment, the step of collecting at least one corresponding first image including the target object for each preset shooting scene includes:
First, while the target object remains in the same pose, the preset shooting scenes are selected one by one, and after a preset shooting scene is selected, a first image containing the target object corresponding to the selected preset shooting scene is acquired.
Then, after the shooting of the first image corresponding to the selected preset shooting scene is completed, if any preset shooting scene has not yet been selected, the remaining unselected preset shooting scenes continue to be selected; or, if all the preset shooting scenes have been selected, the process proceeds to step S20.
Step S20, selecting an optimal shooting scene according to a predetermined selection rule and the acquired plurality of first images.
If the target object is placed against a single-color shooting background and part of the target object has the same or a similar color as that background, then during extraction of the target object image that part cannot be distinguished from the background image, so the extracted target object image is incomplete.
For this reason, an optimal shooting background should be selected before image acquisition, so as to avoid this situation.
In this embodiment, according to a predetermined selection rule and a plurality of acquired first images, an optimal shooting scene is selected as the shooting scene of the second image.
Referring to fig. 6, in this embodiment, preferably, the step S20 specifically includes:
step S21, obtaining a target object image of each first image.
The step S21 specifically includes:
first, a color threshold is set according to a background color in the first image.
Then, a difference value between the color value and a color threshold value of each pixel in the first image is determined.
The difference value can be determined by a color-difference formula, where D denotes the difference value between a first color value C1 (r1, g1, b1) and a second color value C2 (r2, g2, b2). In this embodiment, the first color value is the color threshold, and the second color value is the color value of a pixel in the first image.
Finally, the pixels whose difference value between their color value and the color threshold is smaller than the preset threshold are acquired, and the image formed by the acquired pixels is taken as the target object image.
In other embodiments, the target object in the first image may instead be segmented from the background by a region segmentation method (such as region growing or region splitting and merging) to obtain the target object image; or the target object may be separated from the background by edge detection (for example, identifying positions where the color value, gray level, or structure changes abruptly as edges); the segmentation of the target object from the background can also be achieved by cluster analysis and similar methods.
Step S22, performing a synthesis process on all the obtained target object images of the first image, so as to generate a standard target object image.
The step S22 specifically includes: treating the target object image of each first image as a layer, so that the target object images of the plurality of first images form a plurality of layers, and superimposing these layers to generate a composite image, which is the standard target object image.
Step S23, determining a similarity between the target object image of each of the first images and the standard target object image.
In this embodiment, there are many algorithms for determining the similarity between two images, such as histogram matching, matrix decomposition, and the Bhattacharyya coefficient method. A suitable similarity algorithm can be selected according to the specific application scenario to determine the similarity between the target object image of each first image and the standard target object image.
And S24, selecting a first image with the maximum similarity between the target object image and the standard target object image as an optimal first image, and taking a shooting scene adopted by the optimal first image as an optimal shooting scene.
The following is illustrated by way of an example:
An anchor wearing green earrings, blue clothes, and red leather shoes is to be photographed. First, the anchor keeps the same pose while corresponding first images are shot against a blue background, a green background, a red background, and a purple background. Then the target object image in each first image is acquired; at this point, the target object image obtained from the first image with the blue background lacks the image data of the anchor's clothes, the one obtained from the green background lacks the image data of the anchor's earrings, and the one obtained from the red background lacks the image data of the anchor's leather shoes. The target object image obtained from any single first image is thus only a fragment of the image data corresponding to the target object and cannot represent it completely; however, after the target object images of all the first images are synthesized, all the image data fragments corresponding to the target object are stitched together to form a complete target object image.
After step S20, if the target object is changed (for example, the anchor changes the clothing), steps S10 and S20 are re-executed to select the optimal shooting scene.
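Tying steps S21 to S24 together, a hedged end-to-end sketch of the optimal-scene selection, reusing the extraction, synthesis, and similarity sketches from the first embodiment (all names are illustrative):

```python
def select_optimal_scene(first_images, color_thresholds, preset_threshold):
    """first_images maps each preset shooting scene to its captured first image;
    color_thresholds maps each scene to the color threshold set from its background."""
    # Step S21: acquire a target object image from each first image.
    targets = {scene: extract_target_image(img, color_thresholds[scene], preset_threshold)
               for scene, img in first_images.items()}
    # Step S22: synthesize all target object images into the standard target object image.
    standard = synthesize_standard_target(list(targets.values()))
    # Steps S23-S24: the scene whose target object image is most similar to the
    # standard target object image is the optimal shooting scene.
    return max(targets, key=lambda scene: histogram_similarity(targets[scene], standard))
```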
Step S30, after finishing the arrangement of the optimal shooting scene and receiving an instruction for starting to acquire images, acquiring a second image containing a target object.
The above instruction to start capturing an image may be issued by a user (e.g., the anchor); the user may send the instruction by pressing a capture button, triggering a virtual capture button, voice, a gesture, and so on.
Note that in step S30, the pose of the target object is not restricted.
In this embodiment, while the target object remains in the same pose, at least one corresponding first image containing the target object is acquired for each preset shooting scene; an optimal shooting scene is selected according to a predetermined selection rule and the acquired first images; and after the arrangement of the optimal shooting scene is completed and an instruction to start acquiring images is received, a second image containing the target object is acquired. Compared with the prior art, by analyzing the plurality of first images of the target object captured in different shooting scenes, this embodiment can automatically select the optimal shooting scene in which to shoot the target object, eliminating the limitation that the shooting scene imposes on the target object and thereby improving the flexibility of the image acquisition method.
Fig. 7 is a schematic flow chart of a second embodiment of the image acquisition method according to the present invention.
This embodiment is based on the first embodiment, and after step S30, the method further includes:
and step S40, acquiring an object image from the acquired second image.
The method for acquiring the target image from the second image is the same as the method for acquiring the target image from the first image in step S21, and will not be described herein.
In this embodiment, after the target object image is acquired, some parameters (for example, smoothness, contrast, brightness, transparency, etc.) of the target object image may also be adjusted.
Step S50, performing a synthesis process by using the target object image and preset virtual background data, so as to generate a synthesized image.
The step S50 specifically includes:
when the target object image is an image only comprising the target object, the image of the target object is used as a top layer image layer, and a preset virtual background image is used as a bottom layer image layer to be overlapped and combined to generate a synthetic image.
Or when the target object image comprises pixel point data of the target object, replacing the pixel points at the corresponding positions on the virtual background image with the pixel points of the target object image.
Preferably, in this embodiment, after step S50, the method further includes:
and encoding and packaging the generated synthesized image to obtain media stream data, and distributing the obtained media stream data to a network.
In this embodiment, the target object image is synthesized with the preset virtual background data, and the synthesized image is encoded, packaged, and distributed to the network in real time, so that the background of the live broadcast picture can be replaced and the personalized requirements of users can be met.
Further, the present invention also proposes a computer-readable storage medium storing an image acquisition program executable by at least one processor to cause the at least one processor to perform the steps of the image acquisition method in any of the above embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structural changes made by the description of the present invention and the accompanying drawings or direct/indirect application in other related technical fields are included in the scope of the invention.
Claims (8)
1. An electronic device comprising a memory and a processor, wherein an image acquisition program is stored on the memory, the image acquisition program when executed by the processor performing the steps of:
a first image acquisition step: while the target object remains in the same pose, acquiring, for each preset shooting scene, at least one corresponding first image containing the target object;
a first selection step: selecting an optimal shooting scene according to a predetermined selection rule and a plurality of acquired first images, which comprises: obtaining the target object image in each of the first images, synthesizing all the obtained target object images to generate a standard target object image, determining the similarity between each target object image and the standard target object image, selecting the first image whose target object image has the greatest similarity to the standard target object image as an optimal first image, and taking the shooting scene adopted by the optimal first image as the optimal shooting scene;
a second image acquisition step: after the arrangement of the optimal shooting scene is completed and an instruction to start acquiring images is received, acquiring a second image containing the target object.
2. The electronic device of claim 1, wherein the preset shooting scene is a shooting scene with a shooting background of a preset single color, and the first image acquisition step comprises:
while the target object remains in the same pose, selecting the preset shooting scenes one by one, and after a preset shooting scene is selected, collecting a first image that corresponds to the selected preset shooting scene and contains the target object;
after the first image corresponding to the selected preset shooting scene has been shot, if any preset shooting scene has not yet been selected, continuing to select the remaining unselected preset shooting scenes, or, if all the preset shooting scenes have been selected, proceeding to the first selection step.
3. The electronic device of claim 1, wherein the acquiring the target image in each of the first images comprises:
setting a color threshold according to a background color in the first image;
determining a difference value between a color value and a color threshold value of each pixel in the first image;
and acquiring pixels with the difference value between the color value and the color threshold value smaller than a preset threshold value, and taking an image formed by the acquired pixels as an image of a target object.
4. The electronic device of any one of claims 1 to 3, wherein after the second image acquisition step, the processor executes the image acquisition program, further implementing the steps of:
and acquiring a target object image from the acquired second image, and performing synthesis processing by utilizing the target object image and a preset virtual background to generate a synthesized image.
5. An image acquisition method, characterized in that the method comprises the steps of:
a first image acquisition step: while the target object remains in the same pose, acquiring, for each preset shooting scene, at least one corresponding first image containing the target object;
a first selection step: selecting an optimal shooting scene according to a predetermined selection rule and a plurality of acquired first images, which comprises: obtaining the target object image in each of the first images, synthesizing all the obtained target object images to generate a standard target object image, determining the similarity between each target object image and the standard target object image, selecting the first image whose target object image has the greatest similarity to the standard target object image as an optimal first image, and taking the shooting scene adopted by the optimal first image as the optimal shooting scene;
a second image acquisition step: after the arrangement of the optimal shooting scene is completed and an instruction to start acquiring images is received, acquiring a second image containing the target object.
6. The image capturing method according to claim 5, wherein the preset capturing scene is a capturing scene with a capturing background of a preset single color, and the first image capturing step includes:
while the target object remains in the same pose, selecting the preset shooting scenes one by one, and after a preset shooting scene is selected, collecting a first image that corresponds to the selected preset shooting scene and contains the target object;
after the first image corresponding to the selected preset shooting scene has been shot, if any preset shooting scene has not yet been selected, continuing to select the remaining unselected preset shooting scenes, or, if all the preset shooting scenes have been selected, proceeding to the first selection step.
7. The image acquisition method according to claim 5 or 6, characterized in that after the second image acquisition step, the method further comprises:
and acquiring a target object image from the acquired second image, and performing synthesis processing by utilizing the target object image and a preset virtual background to generate a synthesized image.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores an image acquisition program executable by at least one processor to cause the at least one processor to perform the steps of the image acquisition method according to any one of claims 5-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810547508.1A CN108830820B (en) | 2018-05-31 | 2018-05-31 | Electronic device, image acquisition method, and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810547508.1A CN108830820B (en) | 2018-05-31 | 2018-05-31 | Electronic device, image acquisition method, and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108830820A CN108830820A (en) | 2018-11-16 |
CN108830820B true CN108830820B (en) | 2023-06-02 |
Family
ID=64145249
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810547508.1A Active CN108830820B (en) | 2018-05-31 | 2018-05-31 | Electronic device, image acquisition method, and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108830820B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110909368B (en) * | 2019-11-07 | 2023-09-05 | 腾讯科技(深圳)有限公司 | Data encryption method, device and computer readable storage medium |
CN112995491B (en) * | 2019-12-13 | 2022-09-16 | 阿里巴巴集团控股有限公司 | Video generation method and device, electronic equipment and computer storage medium |
CN111083518B (en) * | 2019-12-31 | 2022-09-09 | 安博思华智能科技有限责任公司 | Method, device, medium and electronic equipment for tracking live broadcast target |
US20230316533A1 (en) * | 2022-03-31 | 2023-10-05 | Zoom Video Communications, Inc. | Virtual Background Sharing |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006146730A (en) * | 2004-11-24 | 2006-06-08 | Casio Comput Co Ltd | Image retrieval device, image retrieval method and image retrieval device |
CN107610080B (en) * | 2017-09-11 | 2020-08-07 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic apparatus, and computer-readable storage medium |
CN107734251A (en) * | 2017-09-29 | 2018-02-23 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN107808373A (en) * | 2017-11-15 | 2018-03-16 | 北京奇虎科技有限公司 | Sample image synthetic method, device and computing device based on posture |
- 2018-05-31: CN application CN201810547508.1A granted as patent CN108830820B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN108830820A (en) | 2018-11-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113810587B (en) | Image processing method and device | |
US10049308B1 (en) | Synthesizing training data | |
US10109051B1 (en) | Item recommendation based on feature match | |
CN108830820B (en) | Electronic device, image acquisition method, and computer-readable storage medium | |
US10956784B2 (en) | Neural network-based image manipulation | |
CN108537859B (en) | Image mask using deep learning | |
Matern et al. | Gradient-based illumination description for image forgery detection | |
TWI618409B (en) | Local change detection in video | |
KR101457313B1 (en) | Method, apparatus and computer program product for providing object tracking using template switching and feature adaptation | |
JP2022023887A (en) | Appearance search system and method | |
US9710698B2 (en) | Method, apparatus and computer program product for human-face features extraction | |
CN108805169B (en) | Image processing method, non-transitory computer readable medium and image processing system | |
US9208548B1 (en) | Automatic image enhancement | |
US9721387B2 (en) | Systems and methods for implementing augmented reality | |
US11977981B2 (en) | Device for automatically capturing photo or video about specific moment, and operation method thereof | |
CN110119700B (en) | Avatar control method, avatar control device and electronic equipment | |
CN106203286B (en) | Augmented reality content acquisition method and device and mobile terminal | |
WO2014053837A2 (en) | Image processing | |
US20210374972A1 (en) | Panoramic video data processing method, terminal, and storage medium | |
WO2017148259A1 (en) | Method for image searching and system thereof | |
WO2021258579A1 (en) | Image splicing method and apparatus, computer device, and storage medium | |
CN117032520A (en) | Video playing method and device based on digital person, electronic equipment and storage medium | |
CN105447846B (en) | Image processing method and electronic equipment | |
JP6272071B2 (en) | Image processing apparatus, image processing method, and program | |
CN114926351B (en) | Image processing method, electronic device, and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||