CN112202958B - Screenshot method and device and electronic equipment

Info

Publication number
CN112202958B
Authority
CN
China
Prior art keywords
screenshot
dynamic
image
target
dynamic object
Prior art date
Legal status
Active
Application number
CN202010899636.XA
Other languages
Chinese (zh)
Other versions
CN112202958A (en)
Inventor
沈健春
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010899636.XA
Publication of CN112202958A
Application granted
Publication of CN112202958B

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a screenshot method, a screenshot device and electronic equipment, and belongs to the technical field of communication. The method includes the following steps: starting to capture a screenshot of a target screenshot area when a screenshot control message is received; when the target screenshot area includes a dynamic object, acquiring dynamic characteristic information of the dynamic object while the target screenshot area is being captured; and outputting a screenshot image and storing the screenshot image in association with the dynamic characteristic information of the dynamic object. The method and the device enable the screenshot image to match the dynamic content actually displayed in the screenshot area, so that a dynamic image can be captured.

Description

Screenshot method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a screenshot method, a screenshot device and electronic equipment.
Background
With the continuous development of science and technology, electronic devices (such as mobile phones, tablet computers and the like) have become an indispensable tool in the work and life of people.
At present, the pictures obtained when a user takes a screenshot or a long screenshot on an electronic device are static. If a dynamic element (a gif animation, a video or the like) exists in the screenshot area of the current screen, only a static picture can be captured, and the obtained screenshot image does not match the content actually displayed in the screenshot area.
Disclosure of Invention
The embodiments of the application aim to provide a screenshot method, a screenshot device and electronic equipment, which can solve the prior-art problems that only a static picture can be obtained when an area containing dynamic elements is captured, and that the obtained screenshot image is inconsistent with the content actually displayed in the screenshot area.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a screenshot method, including:
starting to capture a screenshot of the target screenshot area under the condition of receiving the screenshot control message;
under the condition that the target screenshot area comprises a dynamic object, acquiring dynamic characteristic information of the dynamic object in the process of screenshot of the target screenshot area;
and outputting the screenshot image, and storing the screenshot image and the dynamic characteristic information of the dynamic object in an associated manner.
In a second aspect, an embodiment of the present application provides a screenshot device, where the screenshot device includes:
the target area screenshot module is used for starting screenshot of the target screenshot area under the condition of receiving the screenshot control message;
the dynamic characteristic acquisition module is used for acquiring dynamic characteristic information of the dynamic object in the process of screenshot of the target screenshot area under the condition that the target screenshot area comprises the dynamic object;
and the screenshot image output module is used for outputting a screenshot image and storing the screenshot image and the dynamic characteristic information of the dynamic object in a correlation manner.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the screenshot method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a program or instructions are stored, and when the program or instructions are executed by a processor, the program or instructions implement the steps of the screenshot method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the screenshot method according to the first aspect.
In the embodiments of the application, when a screenshot control message is received, capture of the target screenshot area is started; when the target screenshot area includes a dynamic object, dynamic characteristic information of the dynamic object is acquired while the target screenshot area is being captured; and a screenshot image is output and stored in association with the dynamic characteristic information of the dynamic object. Because the dynamic characteristic information of the screenshot area and the screenshot image are stored in association whenever an area containing a dynamic object is captured, the dynamic characteristic information can later be obtained through this association when the user browses the screenshot image, and the corresponding dynamic object file can be displayed on the screenshot image as it is shown. The screenshot image therefore matches the dynamic content actually displayed in the screenshot area: the dynamic elements on the screen are captured along with the screenshot, and the animated effect can be seen when the screenshot is browsed.
Drawings
Fig. 1 is a flowchart illustrating steps of a screenshot method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a preview dynamic screenshot image provided in an embodiment of the present application;
fig. 3 is a schematic diagram of generating a dynamic screenshot image according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a screenshot device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be appreciated that data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The screenshot scheme provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through a specific embodiment and an application scenario thereof.
Referring to fig. 1, a flowchart illustrating steps of a screenshot method provided in an embodiment of the present application is shown, and as shown in fig. 1, the screenshot method may specifically include the following steps:
step 101: and under the condition of receiving the screenshot control message, starting to screenshot in the target screenshot area.
The embodiment of the application can be applied to scenes of capturing dynamic pictures in the screen during screenshot.
The screenshot control message refers to a control message for instructing to screenshot a target screenshot area in a current display screen of the electronic device.
In some examples, the screenshot control message may be a message that automatically triggers the current display screen to be screenshot, e.g., when an application needs to screenshot the current display screen, the application may send the screenshot control message to a processor of the electronic device, etc.
In some examples, the screenshot control message may be a message actively triggered by the user to capture the current display screen; for example, when the user needs to capture the current display screen, the screenshot control message may be generated by the user clicking a screenshot button, and the like.
It can be understood that the above examples are only examples listed for better understanding of the technical solution of the embodiment of the present application, and in a specific implementation, the screenshot control message may also be triggered in other forms, specifically, the screenshot control message may be triggered according to a service requirement, and this embodiment is not limited to this.
After receiving the screenshot control message, screenshot can be started for the target screenshot area, and then step 102 is performed.
Step 102: and under the condition that the target screenshot area comprises the dynamic object, acquiring dynamic characteristic information of the dynamic object in the process of screenshot of the target screenshot area.
The dynamic object is an object that is dynamically displayed within the target screenshot area on the current display screen. For example, if a dynamically displayed person appears in the target screenshot area, that person is regarded as the dynamic object.
In this example, the dynamic characteristic information may include a storage path of the dynamic object file and location information of the dynamic object file within the target screenshot area.
The dynamic object file refers to a file, such as an image, that is dynamically displayed in the target screenshot area.
The location information refers to the position at which the dynamic object file is displayed in the target screenshot area; in this example, it may be information such as the coordinates of the dynamic object file within the target screenshot area.
After capture of the target screenshot area is started, it can be determined whether the target screenshot area contains a dynamic object.
When the target screenshot area does not contain a dynamic object, the target screenshot area is captured directly to generate a screenshot image.
When it is determined that the target screenshot area contains a dynamic object, the dynamic characteristic information of the dynamic object may be acquired while the target screenshot area is being captured. Specifically, the dynamic object file displayed in the target screenshot area may be obtained, together with the storage path of the dynamic object file and its display position in the target screenshot area.
After the dynamic feature information of the dynamic object is acquired, step 103 is executed.
Step 103: and outputting the screenshot image, and storing the screenshot image and the dynamic characteristic information of the dynamic object in an associated manner.
And after acquiring the dynamic characteristic information of the dynamic object in the target screenshot area, outputting a screenshot image, and storing the screenshot image and the acquired dynamic characteristic information of the dynamic object in a correlation manner.
In the embodiments of the application, because the screenshot image and the dynamic characteristic information of the dynamic object are stored in association, the dynamic characteristic information can be obtained through this association when the user later browses the screenshot image, and the dynamic object file in the dynamic characteristic information is displayed at the associated position in the screenshot image as it is displayed. The screenshot image is therefore consistent with the content actually displayed in the screenshot area, achieving the purpose of capturing a dynamic image.
In this embodiment, a scheme for determining whether the target screenshot area contains a dynamic object may be described in detail with reference to the following specific implementation manner.
In a specific implementation manner of the present application, before step 102, the method may further include:
step A1: acquiring at least two area images of the target screenshot area in a preset time period, and determining whether a dynamic object exists in the target screenshot area according to the at least two area images.
Step A2: and acquiring a file corresponding to at least one object in the target capture area, and determining whether a dynamic object exists in the target capture area according to a suffix of a file name of the file corresponding to the at least one object.
In the embodiment of the present application, the provided manners for determining whether a dynamic object exists in the target capture area may include the following two manners:
1. and determining whether a dynamic object exists in the target screenshot area or not according to a comparison result of at least two area images shot in a short time in the target screenshot area.
The preset time period refers to a time period preset by a user for image capturing of the target screenshot area.
When it is necessary to determine whether a dynamic object exists in the target screenshot area, the target screenshot area can be captured within a preset time period to obtain at least two area images, and whether a dynamic object exists can then be determined from those images. For example, after the screenshot is triggered, the target screenshot area is captured within a short time (for example, 100 ms) to obtain at least two area images; the images are then compared, and if any object changes between them, a dynamic object exists in the target screenshot area.
2. And determining whether a dynamic object exists in the target screenshot area according to the file name suffix of the file corresponding to at least one object in the target screenshot area.
When it is necessary to determine whether a dynamic object exists in the target screenshot area, a file corresponding to at least one object in the target screenshot area may be obtained, and whether a dynamic object exists is determined from the suffix of the file name of that file. For example, the target screenshot area is recognized to obtain at least one object, and if the file corresponding to that object has a gif suffix, a dynamic object exists in the target screenshot area.
Of course, in a specific implementation, it may also be determined whether a dynamic object exists in the target screenshot area by using other manners, for example, the target screenshot area is recorded, whether a dynamic object exists in the target screenshot area is determined according to the recorded video, and the like.
In a specific implementation, either of the two manners provided by the embodiments of the application for determining whether a dynamic object exists in the target screenshot area may be selected automatically, or the two manners may be combined, which can improve the accuracy of the determination. Both checks are illustrated in the sketch below.
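As a rough illustration only, the sketch below implements both checks in Python: the first function diffs two captures of the target screenshot area taken a short time apart, and the second inspects a file-name suffix. The Pillow library is used for the image comparison; the capture_region helper, the 100 ms interval and the set of suffixes treated as dynamic are assumptions made for illustration, not details fixed by the patent.

```python
# Minimal sketch of the two checks described above. capture_region() is a
# hypothetical helper returning the target screenshot area as a PIL Image;
# the suffix set is an assumed illustrative list (the text only names gif).
import time
from pathlib import Path
from PIL import ImageChops

DYNAMIC_SUFFIXES = {".gif", ".mp4", ".webm"}

def region_is_dynamic(capture_region, interval_s: float = 0.1) -> bool:
    """Capture the area twice within a short time (about 100 ms) and compare
    the two area images; any pixel change indicates a dynamic object."""
    first = capture_region()
    time.sleep(interval_s)
    second = capture_region()
    diff = ImageChops.difference(first, second)
    return diff.getbbox() is not None   # getbbox() is None for identical images

def file_indicates_dynamic_object(file_name: str) -> bool:
    """Check the file-name suffix of the file corresponding to an object in
    the target screenshot area."""
    return Path(file_name).suffix.lower() in DYNAMIC_SUFFIXES
```

In practice the two results could be combined, for example treating the area as dynamic when either check fires, which matches the note above about selecting one manner or combining both.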
In this embodiment, the acquired dynamic characteristic information may include the storage path of the dynamic object file and the object coordinate information. When the screenshot image and the dynamic characteristic information are stored in association, an association relationship between them may be established after the screenshot image is output and the dynamic characteristic information is acquired. The screenshot image may then be stored in an album, and the dynamic characteristic information (the dynamic object file and the object coordinate information) may be stored in a preset folder. When the screenshot image is displayed and the dynamic object file needs to be shown in it, the dynamic characteristic information associated with the screenshot image can be retrieved from the preset folder through the association relationship. The association relationship itself may be stored in a table or in a database, which may be chosen according to business requirements and is not limited in this embodiment.
The process of associating stores may be described in detail in connection with the specific implementations described below.
In another specific implementation manner of the present application, the step 102 may include:
substep B1: and acquiring a dynamic object file of the dynamic object.
In this embodiment, the dynamic object file refers to a file that can describe the dynamic object in the target screenshot area. For example, if a plurality of moving images are displayed sequentially in the target screenshot area in a fixed playback order, those frames may be used as the dynamic object file. Alternatively, the dynamic object in the target screenshot area may be recorded as video, and the recorded dynamic object video may be used as the dynamic object file, which is described in detail in the following specific implementation.
In another specific implementation manner of the present application, the sub-step B1 may include:
substep C1: extracting a dynamic object file of the dynamic object through the webpage address of the dynamic object; or,
substep C2: and carrying out video recording on the dynamic object to generate a dynamic object video, wherein the dynamic object file is the dynamic object video.
In this embodiment, the dynamic object file may be obtained in the following manner:
1. Obtaining the dynamic object file through the web page address corresponding to the dynamic object
In this embodiment, the web page address corresponding to the dynamic object may be obtained and used to retrieve the dynamic object file. For example, through screen-capture recognition, if the dynamic object is found to have a network address, the corresponding file may be downloaded to the local device through that address and used as the dynamic object file of the dynamic object.
2. Using a recorded video of the dynamic object as the dynamic object file
In this embodiment, the dynamic object in the target screenshot area may be recorded as video to generate a dynamic object video, and the recorded video is used as the dynamic object file. If no network address of the dynamic object in the target screenshot area is detected, the background may record the local screen in the area where the dynamic object is located, and the recording is saved locally on the phone as the dynamic object file once it is complete. For example, when a user captures a display page in which a video clip is playing in the screenshot area and the playing duration of the video clip is 10 s, the playback in the screenshot area may be recorded to obtain a 10 s dynamic video, and the recorded video is then used as the dynamic object file. The recording duration may also be preset, for example 5 s, in which case recording starts from the 0 s playing position in the screenshot area and a 5 s video is recorded as the dynamic object file. Likewise, the playing position at which recording starts may be set, for example starting the recording from the 2 s playing position. A sketch of this selection logic is given after the note below.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
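As a rough sketch of the selection logic described above (download through the detected network address when one exists, otherwise record the region for a preset duration): detect_network_address and record_region below are hypothetical helpers standing in for platform-specific screen-capture recognition and screen-recording facilities, and the output file names are assumptions.

```python
# Sketch only: obtain the dynamic object file either through the web address of
# the dynamic object or by recording it. detect_network_address() and
# record_region() are hypothetical helpers; the file names are assumed.
import urllib.request

def obtain_dynamic_object_file(region, detect_network_address, record_region,
                               record_seconds: float = 5.0) -> str:
    """Return a local path to the dynamic object file for the given region."""
    url = detect_network_address(region)
    if url:
        # A network address was detected: download the file to local storage.
        local_path, _ = urllib.request.urlretrieve(url, "dynamic_object.gif")
        return local_path
    # No network address: record the region in the background for a preset
    # duration and keep the recording locally as the dynamic object file.
    return record_region(region, duration_s=record_seconds,
                         out_path="dynamic_object.mp4")
```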
Substep B2: and acquiring the object coordinate information of the dynamic object in the target capture area.
The object coordinate information refers to coordinates of a position of the dynamic object in the target screenshot area, for example, the position of the dynamic object in the target screenshot area is a square area, and at this time, coordinates of four vertices of the square area may be regarded as the object coordinate information of the dynamic object.
When the dynamic characteristic information of the dynamic object needs to be acquired, the dynamic object file and the object coordinate information of the dynamic object can be acquired by adopting the above mode, and then the dynamic characteristic information is determined by combining the storage path of the dynamic object file and the object coordinate information.
It should be understood that sub-step B1 and sub-step B2 may be executed in any order: sub-step B1 may be executed before sub-step B2, or sub-step B2 before sub-step B1, which is not limited in this embodiment.
After the dynamic object file corresponding to the dynamic object and the object coordinate information of the dynamic object in the target screenshot area are acquired, sub-step C1 is performed.
The step 103 may include:
substep C1: and establishing an incidence relation among the screenshot image, the dynamic object file and the object coordinate information, and storing the screenshot image, the dynamic object file and the object coordinate information to a preset storage path.
The preset storage path refers to a path for storing the screenshot image, the dynamic object file and the object coordinate information.
After the dynamic object file and the object coordinate information are obtained, an association relationship among the screenshot image, the dynamic object file and the object coordinate information can be established, and the screenshot image, the dynamic object file and the object coordinate information are stored to a preset storage path.
In the embodiments of the application, an association relationship among the screenshot image, the dynamic object file and the object coordinate information is established, so that when the screenshot image is displayed later, the dynamic object file can be obtained through this association and displayed at the position in the screenshot image indicated by the object coordinate information. The displayed content of the screenshot image is therefore consistent with the content actually displayed in the screenshot area; the dynamic elements on the screen are captured along with the screenshot, and the animated effect can be seen when the screenshot is browsed.
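One possible shape of this associated storage is sketched below: the screenshot is copied into an album directory, the dynamic object file into a preset folder, and a small JSON table records the association between the screenshot, the storage path of the dynamic object file and the object coordinate information. All directory names, file layout and the JSON format are assumptions made for illustration; the patent leaves the concrete storage form (table or database) open.

```python
# Sketch of associated storage and lookup. Directory names and the JSON record
# layout are assumptions for illustration, not defined by the patent.
import json
import shutil
from pathlib import Path

ALBUM_DIR = Path("album")                # assumed album location
FEATURE_DIR = Path("dynamic_features")   # assumed preset folder
INDEX_FILE = FEATURE_DIR / "associations.json"

def store_with_association(screenshot_path, dynamic_file_path, object_coords):
    """Store the screenshot in the album, the dynamic object file in the preset
    folder, and record the association (paths plus coordinates)."""
    ALBUM_DIR.mkdir(exist_ok=True)
    FEATURE_DIR.mkdir(exist_ok=True)
    stored_shot = shutil.copy(screenshot_path, ALBUM_DIR)
    stored_dyn = shutil.copy(dynamic_file_path, FEATURE_DIR)
    records = json.loads(INDEX_FILE.read_text()) if INDEX_FILE.exists() else []
    records.append({
        "screenshot": str(stored_shot),
        "dynamic_object_file": str(stored_dyn),  # storage path of the file
        "object_coordinates": object_coords,     # position inside the screenshot
    })
    INDEX_FILE.write_text(json.dumps(records, indent=2))

def lookup_dynamic_feature(screenshot_path):
    """Retrieve the dynamic characteristic information associated with a
    screenshot when it is to be displayed."""
    if not INDEX_FILE.exists():
        return None
    for record in json.loads(INDEX_FILE.read_text()):
        if record["screenshot"] == screenshot_path:
            return record
    return None
```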
In this embodiment, when the screenshot image is displayed, the dynamic object file may be automatically displayed in the screenshot image, and specifically, the detailed description may be described in combination with the following specific implementation manner.
In another specific implementation manner of the present application, after the step 103, the method may further include:
step D1: a first input is received from a user.
In this embodiment, the first input refers to an input performed by the user to display the screenshot image.
In some examples, the first input may be an input formed by a click operation performed by a user on the screenshot image, for example, the screenshot image is displayed in an electronic device album, and if the user needs to browse the screenshot image, the screenshot image may be clicked by the user to form the first input.
In some examples, the first input may be an input formed by a specific gesture operation performed by the user, for example, the specific gesture operation for displaying the screenshot image is stored in the electronic device in advance, and when the user needs to browse the screenshot image, the specific gesture operation performed by the user may be received to form the first input of the user.
It is to be understood that the above examples are merely examples listed for better understanding of the technical solution of the embodiment of the present application, and in a specific implementation, the first input may also be an input formed by other operations performed by a user, and in particular, may be determined according to a business requirement, and this embodiment is not limited thereto.
After receiving the first input of the user, step D2 is performed.
Step D2: and responding to the first input, displaying the screenshot image, and displaying the dynamic object file at a target position in the screenshot image.
The target position refers to a position determined in conjunction with object coordinate information of the dynamic object in the screenshot image. For example, the object coordinate information includes four coordinates, and in this case, a square area may be formed from the four coordinates, and the square area may be set as the target position.
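For illustration, the target position can be derived from the stored object coordinate information as a bounding rectangle; the four-corner coordinate format follows the example above and is an assumption about how the coordinates are stored.

```python
# Sketch: derive the target position (a bounding rectangle) from the four
# corner coordinates stored as the object coordinate information.
def target_rect(object_coords):
    """Return (left, top, right, bottom) of the area in the screenshot image
    where the dynamic object file should be displayed."""
    xs = [x for x, _ in object_coords]
    ys = [y for _, y in object_coords]
    return min(xs), min(ys), max(xs), max(ys)

print(target_rect([(40, 80), (240, 80), (40, 230), (240, 230)]))  # (40, 80, 240, 230)
```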
After receiving the first input from the user, the screenshot image may be displayed in response to the first input, and the dynamic object file may be displayed at the target position in the screenshot image. As shown in fig. 2, 11 denotes a screenshot image displayed on a mobile phone screen and 12 denotes the target position where the dynamic object is displayed; when the screenshot image 11 is displayed, the dynamic object file may be displayed at the target position 12.
In the method and the device, the screenshot image is displayed in response to the user's input, and the dynamic object file is automatically displayed, and played, at the target position of the screenshot image. The user does not need to perform an additional operation to display the dynamic object file, which reduces the number of operation steps and improves the user experience.
In this embodiment, after the screenshot image is displayed, the display control of the dynamic object file may be implemented in combination with the input of the target position in the screenshot image by the user, and specifically, the detailed description may be implemented in combination with the following specific implementation manner.
In another specific implementation manner of the present application, after the step 103, the method may further include:
step S1: a second input by the user is received.
In this embodiment, the second input refers to an input performed by the user for displaying the screenshot image.
In some examples, the second input may be an input formed by a click operation performed by the user on the screenshot image, for example, the screenshot image is displayed in an electronic device album, and if the user needs to browse the screenshot image, the screenshot image may be clicked by the user to form the second input.
In some examples, the second input may be an input formed by a specific gesture operation performed by the user, for example, the specific gesture operation for displaying the screenshot image is stored in the electronic device in advance, and when the user needs to browse the screenshot image, the specific gesture operation performed by the user may be received to form the second input of the user.
It is to be understood that the above examples are merely examples listed for better understanding of the technical solution of the embodiment of the present application, and in a specific implementation, the second input may also be an input formed by other operations performed by a user, and in particular, may be determined according to a business requirement, and this embodiment is not limited thereto.
After receiving the second input by the user, step S2 is performed.
Step S2: displaying the screenshot image in response to the second input.
After receiving the second input from the user, the screenshot image may be displayed in response to the second input, and then step S3 is performed.
Step S3: receiving a third input of the target position in the screenshot image by the user.
The third input refers to an input performed by the user on the target position in the screenshot image.
In some examples, the third input may be an input formed by the user clicking on a target location in the screenshot image, for example, the target location may be clicked by the user to form the third input when the user needs to display the associated dynamic object file at the target location of the screenshot image.
In some examples, the third input may be an input formed by a user pressing a target location in the screenshot image, for example, when the user needs to display an associated dynamic object file at the target location of the screenshot image, the target location may be pressed by the user to form the third input.
It is to be understood that the above examples are merely examples listed for better understanding of the technical solution of the embodiment of the present application, and are not limited to the present embodiment only, and in a specific implementation, the third input may also be an input formed by other forms of operations performed by a user on a target position in the screenshot image, and specifically, may be determined according to business requirements, and the present embodiment is not limited to this.
After the screenshot image is displayed, a third input by the user for the target location in the screenshot image may be received, and step S4 is performed.
Step S4: and responding to the third input, acquiring a dynamic object file of the screenshot image corresponding to the target position according to the incidence relation, and displaying the dynamic object file at the target position.
After receiving a third input of the user to the target position in the screenshot image, the third input may be responded to, the dynamic object file of the screenshot image corresponding to the target position is obtained according to the association relationship, and the dynamic object file is displayed at the target position.
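Continuing the illustrative association records sketched earlier, resolving the dynamic object file for the position the user touched could look like the following; the record layout and the rectangle convention are the same assumptions as before.

```python
# Sketch: find the dynamic object file whose target position contains the
# point of the third input (record layout is the assumed one from above).
def find_dynamic_file_at(records, touch_x, touch_y):
    """Return the associated dynamic object file at the touched position,
    or None when the touch falls outside every stored target position."""
    for record in records:
        xs = [x for x, _ in record["object_coordinates"]]
        ys = [y for _, y in record["object_coordinates"]]
        if min(xs) <= touch_x <= max(xs) and min(ys) <= touch_y <= max(ys):
            return record["dynamic_object_file"]
    return None
```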
In the method and the device, the dynamic object file corresponding to the target position is displayed in response to the user's input at that position in the screenshot image. When dynamic object files are associated with multiple positions in the screenshot image, the file at the position selected by the user is the one displayed, which satisfies the personalized needs of the user and improves the user experience.
In this embodiment, the screenshot image and the dynamic object at the target position in the screenshot image may also be fused to generate a dynamic image, and specifically, the detailed description may be described in conjunction with the following specific implementation manner.
In another specific implementation manner of the present application, after the step 103, the method may further include:
step M1: and receiving fourth input of the screenshot image from the user.
In this embodiment, the fourth input refers to an input performed by the user on the screenshot image to obtain an object image of the dynamic object in the screenshot image.
In some examples, the fourth input may be an input formed by a user long-pressing the screenshot image, for example, when a dynamic object image in the screenshot image needs to be acquired, a long-pressing operation may be performed on the screenshot image by the user to form the fourth input.
In some examples, the fourth input may be an input formed by double-clicking the screenshot image by the user, for example, when a dynamic object image in the screenshot image needs to be acquired, then a double-click operation may be performed on the screenshot image by the user to form the fourth input.
It is to be understood that the above examples are merely examples listed for better understanding of the technical solution of the embodiment of the present application, and in a specific implementation, the fourth input may also be an input formed by other forms of operations performed on the screenshot image by the user, and specifically, the fourth input may be determined according to business requirements, and this embodiment is not limited to this.
After receiving the fourth input of the user to the screenshot image, the step M2 is executed.
Step M2: and responding to the fourth input, and acquiring N frames of object images corresponding to the dynamic object.
The object image refers to an image corresponding to a dynamic object in the screenshot image.
In a specific implementation, the area where the dynamic object is located may be recorded as video to obtain a video segment, and the N frames of object images corresponding to the dynamic object may be extracted from that video segment, where N is a positive integer greater than or equal to 1. As shown in fig. 3, five object images are acquired: an image 21, an image 22, an image 23, an image 24 and an image 25.
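A minimal sketch of obtaining the N frames from such a recorded video segment with OpenCV is shown below; the value of N and the even sampling step are assumptions made for illustration.

```python
# Sketch: extract N object images from the recorded video segment of the
# dynamic object using OpenCV. N and the sampling strategy are assumptions.
import cv2

def extract_object_frames(video_path, n=5):
    """Return up to n evenly spaced frames from the recorded video segment."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // n, 1)
    frames = []
    for index in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, index)
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
        if len(frames) == n:
            break
    cap.release()
    return frames
```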
Of course, in practical application, other methods may also be used to obtain the N frames of object images corresponding to the dynamic object, and specifically, the method may be determined according to the service requirement, which is not limited in this embodiment.
After receiving a fourth input of the screenshot image from the user, the fourth input may be responded to obtain N frames of object images corresponding to the dynamic object.
After acquiring the N frames of object images corresponding to the dynamic object, step M3 is executed.
Step M3: and generating a target dynamic image based on the screenshot image and the N frames of object images.
The target dynamic image is a dynamic image generated by fusing the screenshot image and the N frames of object images.
After the N frames of object images corresponding to the dynamic object are acquired, the screenshot image and the N frames of object images may be combined to generate a target dynamic image, and specifically, detailed description may be given in combination with the following specific implementation manner.
In another specific implementation manner of the present application, the step M3 may include:
substep N1: and acquiring the image position of each frame of object image in the screenshot image.
In this embodiment, the image position refers to a position where the object image is located in the screenshot image.
When each frame of object image needs to be fused with the screenshot image, the image position of each frame of object image in the screenshot image can be acquired, and then sub-step N2 is performed.
Substep N2: and carrying out image fusion processing on the N frames of object images and the screenshot image based on the image position of each frame of object image to generate N frames of fusion images.
After the image position of each frame of object image in the screenshot image is acquired, image fusion processing may be performed on the N frames of object images and the screenshot image based on those positions to generate N frames of fused images. For example, as shown in fig. 3, the acquired object images are the image 21, the image 22, the image 23, the image 24 and the image 25. After these 5 frames of object images are acquired, their image positions in the screenshot image are obtained, and image fusion processing is then performed on each object image and the screenshot image at the corresponding position to generate 5 corresponding fused frames: fusing the image 21 with the screenshot image yields the fused image 31, fusing the image 22 yields the fused image 32, fusing the image 23 yields the fused image 33, fusing the image 24 yields the fused image 34, and fusing the image 25 yields the fused image 35.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
After the N frames of object images and the screenshot image are fused based on the image position of each frame of object image to generate the N frames of fused images, sub-step N3 is performed.
Substep N3: and synthesizing the N frames of fused images according to the sequence of the generation time of the N frames of object images to generate the target dynamic image.
After the N frames of fused images are obtained, they may be synthesized into the target dynamic image in the order of the generation times of the N frames of object images. Specifically, a dedicated GIF animation synthesis tool may be used, and the implementation process may be as follows: an animation guide window is opened and the user sets the canvas size; then, in the order of the generation times of the object images, the user clicks an add-image button to import the N frames of fused images in sequence, and the imported frames are stitched together to form the target dynamic image. As shown in fig. 3, after the 5 obtained fused images are subjected to image synthesis processing, one target dynamic image 30 is obtained.
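As an illustrative alternative to the dedicated animation tool described above, sub-steps N1 to N3 can be sketched with Pillow: each object image is pasted onto a copy of the screenshot at its image position to produce the fused frames, and the fused frames are then written out in order as a GIF. File names, positions and the frame duration are assumptions made for illustration.

```python
# Sketch of sub-steps N1-N3 with Pillow. File names, the (x, y) positions and
# the 200 ms frame duration are assumptions for illustration.
from PIL import Image

def build_target_dynamic_image(screenshot_path, object_frames,
                               out_path="target_dynamic.gif"):
    """object_frames: list of (object image path, (x, y) image position),
    ordered by the generation time of the object images."""
    screenshot = Image.open(screenshot_path).convert("RGB")
    fused = []
    for frame_path, position in object_frames:
        frame = Image.open(frame_path).convert("RGB")
        canvas = screenshot.copy()
        canvas.paste(frame, position)   # image fusion at the image position
        fused.append(canvas)
    # Synthesize the fused frames into the target dynamic image (a GIF here).
    fused[0].save(out_path, save_all=True, append_images=fused[1:],
                  duration=200, loop=0)

# Example call with assumed file names (image 21 ... image 25 of fig. 3):
# build_target_dynamic_image("screenshot.png",
#                            [(f"object_{i}.png", (40, 80)) for i in range(21, 26)])
```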
In a specific implementation manner, other manners may also be used to synthesize the N frames of fused images to generate the target dynamic image, and specifically, a specific implementation manner of image synthesis may be set according to a service requirement, which is not limited in this embodiment.
In this embodiment, the target dynamic image generated by synthesizing the N frames of fused images may be a video segment or a gif image, and specifically, may be determined according to a service requirement, which is not limited in this embodiment.
In the method and the device, the dynamic image is generated through fusion processing, so the screenshot image and the dynamic characteristic information no longer need to be stored in association. This saves storage space on the electronic device and improves the utilization of system memory, and once the target dynamic image has been generated the user can share the captured dynamic image.
In the screenshot method provided by the embodiments of the application, capture of the target screenshot area is started when a screenshot control message is received; when the target screenshot area includes a dynamic object, dynamic characteristic information of the dynamic object is acquired while the target screenshot area is being captured; and a screenshot image is output and stored in association with the dynamic characteristic information of the dynamic object. Because the dynamic characteristic information of the screenshot area and the screenshot image are stored in association when an area containing a dynamic object is captured, the dynamic characteristic information can later be obtained through this association when the user browses the screenshot image, and the corresponding dynamic object file can be displayed on the screenshot image as it is shown. The screenshot image therefore matches the dynamic content actually displayed in the screenshot area, achieving the purpose of capturing a dynamic image.
It should be noted that, in the screenshot method provided in the embodiment of the present application, the execution subject may be a screenshot device, or a control module in the screenshot device for executing the screenshot method. In the embodiment of the present application, a screenshot device is taken as an example to execute a screenshot method, which illustrates the screenshot device provided in the embodiment of the present application.
Referring to fig. 4, a schematic structural diagram of a screenshot device provided in an embodiment of the present application is shown, and as shown in fig. 4, the screenshot device 400 may specifically include the following modules:
a target area screenshot module 410, configured to start screenshot for a target screenshot area when a screenshot control message is received;
a dynamic characteristic obtaining module 420, configured to, when the target screenshot area includes a dynamic object, obtain dynamic characteristic information of the dynamic object in a process of screenshot of the target screenshot area;
and a screenshot image output module 430, configured to output a screenshot image, and store the screenshot image and the dynamic feature information of the dynamic object in an associated manner.
Optionally, the dynamic feature obtaining module 420 includes:
a dynamic file acquiring unit configured to acquire a dynamic object file of the dynamic object;
a coordinate information acquiring unit, configured to acquire object coordinate information of the dynamic object in the target capture area;
wherein the dynamic characteristic information includes a storage path of the dynamic object file and the object coordinate information;
the screenshot image output module 430 includes:
and the association relationship establishing unit is used for establishing the association relationship among the screenshot image, the dynamic object file and the object coordinate information, and storing the screenshot image, the dynamic object file and the object coordinate information to a preset storage path.
Optionally, the dynamic file acquiring unit includes:
the dynamic file extracting subunit is used for extracting the dynamic object file of the dynamic object through the webpage address of the dynamic object;
and the dynamic video generation subunit is configured to perform video recording on the dynamic object to generate a dynamic object video, where the dynamic object file is the dynamic object video.
Optionally, the method further comprises:
the first input receiving module is used for receiving a first input of a user;
a dynamic file display module, configured to respond to the first input, display the screenshot image, and display the dynamic object file at a target location in the screenshot image;
wherein the target position is determined based on the object coordinate information.
Optionally, the method further comprises:
the second input receiving module is used for receiving a second input of the user;
a screenshot image display module for displaying the screenshot image in response to the second input;
a third input receiving module, configured to receive a third input of the target position in the screenshot image by the user;
and the dynamic file display module is used for responding to the third input, acquiring the dynamic object file of the screenshot image corresponding to the target position according to the association relationship, and displaying the dynamic object file at the target position.
Optionally, the method further comprises:
a fourth input receiving module, configured to receive a fourth input to the screenshot image by the user;
an object image obtaining module, configured to obtain, in response to the fourth input, N frames of object images corresponding to the dynamic object;
the target image generation module is used for generating a target dynamic image based on the screenshot image and the N frames of object images;
wherein N is a positive integer greater than or equal to 1.
Optionally, the target image generation module includes:
the image position acquisition unit is used for acquiring the image position of each frame of object image in the screenshot image;
a fused image generating unit, configured to perform image fusion processing on the N frames of object images and the screenshot image based on the image position of each frame of object image, and generate N frames of fused images;
and the target image generation unit is used for synthesizing the N frames of fused images according to the sequence of the generation time of the N frames of object images to generate the target dynamic image.
Optionally, the method further comprises:
the first dynamic image determining module is used for acquiring at least two area images of the target screenshot area in a preset time period and determining whether a dynamic object exists in the target screenshot area according to the at least two area images;
and the second dynamic image determining module is used for acquiring a file corresponding to at least one object in the target capture area and determining whether a dynamic object exists in the target capture area according to a suffix of a file name of the file corresponding to the at least one object.
In the screenshot device provided by the embodiments of the application, after a screenshot input is received, it is determined whether a dynamic element exists in the screenshot area. When a dynamic element exists, the dynamic element file corresponding to the dynamic element and the target position of the dynamic element in the screenshot area are obtained; a first input of the user to the screenshot area is received, a screenshot image corresponding to the screenshot area is generated in response to the first input, and an association relationship among the screenshot image, the dynamic element file and the target position is established. By establishing this association between the dynamic element file, the screenshot image and the target position, the dynamic element can be displayed at the target position of the screenshot image according to the association when the user views the screenshot image, so the dynamic elements on the screen are captured along with the screenshot and the animated effect can be seen when the screenshot is browsed.
The screenshot device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The screenshot device in the embodiment of the present application may be a device with an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The screenshot device provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 5, an electronic device 500 is further provided in this embodiment of the present application, and includes a processor 501, a memory 502, and a program or an instruction stored in the memory 502 and executable on the processor 501, where the program or the instruction is executed by the processor 501 to implement each process of the screenshot method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and the like.
Those skilled in the art will appreciate that the electronic device 600 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 610 through a power management system, so as to implement functions such as charging management, discharging management and power consumption management through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, and details are not repeated here.
The processor 610 is configured to start to capture a screenshot of the target screenshot area when receiving a screenshot control message; under the condition that the target screenshot area comprises a dynamic object, acquiring dynamic characteristic information of the dynamic object in the process of intercepting the target screenshot area; and outputting the screenshot image, and storing the screenshot image and the dynamic characteristic information of the dynamic object in an associated manner.
According to the embodiment of the application, the screenshot image can accord with the dynamic content of the screenshot area which is actually displayed, the dynamic elements in the screen can be intercepted during screenshot, and the dynamic image effect can be seen during screenshot browsing.
Optionally, the processor 610 is further configured to obtain a dynamic object file of the dynamic object; acquiring object coordinate information of the dynamic object in the target capture area; wherein the dynamic characteristic information includes a storage path of the dynamic object file and the object coordinate information;
the processor 610 is further configured to establish an association relationship between the screenshot image, the dynamic object file, and the object coordinate information, and store the screenshot image, the dynamic object file, and the object coordinate information to a preset storage path.
Optionally, the processor 610 is further configured to extract a dynamic object file of the dynamic object through a web address of the dynamic object; or, performing video recording on the dynamic object to generate a dynamic object video, wherein the dynamic object file is the dynamic object video.
Optionally, an input unit 604 for receiving a first input of a user;
a processor 610, further configured to display the screenshot image in response to the first input and display the dynamic object file at a target location in the screenshot image; wherein the target position is determined based on the object coordinate information.
Optionally, the input unit 604 is further configured to receive a second input from the user;
a processor 610 further configured to display the screenshot image in response to the second input;
an input unit 604, further configured to receive a third input of the target position in the screenshot image by the user;
the processor 610 is further configured to, in response to the third input, obtain a dynamic object file of the screenshot image corresponding to the target location according to the association relationship, and display the dynamic object file at the target location.
Optionally, the input unit 604 is further configured to receive a fourth input of the screenshot image by the user;
the processor 610 is further configured to, in response to the fourth input, obtain N frames of object images corresponding to the dynamic object; generating a target dynamic image based on the screenshot image and the N frames of object images; wherein N is a positive integer greater than or equal to 1.
Optionally, the processor 610 is further configured to obtain an image position of each frame of the object image in the screenshot image; based on the image position of each frame of object image, carrying out image fusion processing on the N frames of object images and the screenshot image to generate N frames of fusion images; and synthesizing the N frames of fused images according to the sequence of the generation time of the N frames of object images to generate the target dynamic image.
Optionally, the processor 610 is further configured to obtain at least two area images of the target screenshot area within a preset time period, and determine whether a dynamic object exists in the target screenshot area according to the at least two area images; or acquiring a file corresponding to at least one object in the target capture area, and determining whether a dynamic object exists in the target capture area according to a suffix of a file name of the file corresponding to the at least one object.
In this embodiment, the dynamic image is generated through fusion processing, so associated storage of the screenshot image and the dynamic characteristic information is not needed, which saves storage space on the electronic device and improves the utilization of system memory; moreover, once the target dynamic image is generated, the user can share the captured dynamic image.
It is to be understood that, in this embodiment of the application, the input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042; the graphics processing unit 6041 processes image data of a still picture or a video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like. The user input unit 607 includes a touch panel 6071, also referred to as a touch screen, and other input devices 6072. The touch panel 6071 may include two parts: a touch detection device and a touch controller. The other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 609 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 610 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 610.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the foregoing screenshot method embodiment and achieve the same technical effect; to avoid repetition, the details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the foregoing screenshot method embodiment and achieve the same technical effect; to avoid repetition, the details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A screenshot method, comprising:
starting screenshot capture of a target screenshot area in a case that a screenshot control message is received;
in a case that the target screenshot area comprises a dynamic object, acquiring dynamic characteristic information of the dynamic object during the screenshot capture of the target screenshot area;
outputting a screenshot image, and storing the screenshot image and the dynamic characteristic information of the dynamic object in an associated manner;
the acquiring of the dynamic feature information of the dynamic object includes: acquiring a dynamic object file of the dynamic object; acquiring object coordinate information of the dynamic object in the target capture area; wherein the dynamic characteristic information includes a storage path of the dynamic object file and the object coordinate information; the object coordinate information refers to the coordinates of the position of the dynamic object in the target capture area;
the storing of the screenshot image and the dynamic feature information of the dynamic object in an associated manner includes:
and establishing an incidence relation among the screenshot image, the dynamic object file and the object coordinate information, and storing the screenshot image, the dynamic object file and the object coordinate information to a preset storage path.
2. The method according to claim 1, wherein the acquiring of the dynamic object file of the dynamic object comprises:
extracting the dynamic object file of the dynamic object through a web address of the dynamic object; or
performing video recording on the dynamic object to generate a dynamic object video, wherein the dynamic object file is the dynamic object video.
3. The method according to claim 1, further comprising, after the outputting of the screenshot image and the storing of the screenshot image and the dynamic characteristic information of the dynamic object in an associated manner:
receiving a first input of a user;
in response to the first input, displaying the screenshot image and displaying the dynamic object file at a target position in the screenshot image;
wherein the target position is determined based on the object coordinate information.
4. The method according to claim 2, further comprising, after the outputting of the screenshot image and the storing of the screenshot image and the dynamic characteristic information of the dynamic object in an associated manner:
receiving a second input of the user;
displaying the screenshot image in response to the second input;
receiving a third input by the user on a target position in the screenshot image;
in response to the third input, acquiring the dynamic object file corresponding to the target position in the screenshot image according to the association relationship, and displaying the dynamic object file at the target position.
5. The method according to claim 1, further comprising, after the outputting of the screenshot image and the storing of the screenshot image and the dynamic characteristic information of the dynamic object in an associated manner:
receiving a fourth input on the screenshot image from the user;
in response to the fourth input, acquiring N frames of object images corresponding to the dynamic object;
generating a target dynamic image based on the screenshot image and the N frames of object images;
wherein N is a positive integer greater than or equal to 1.
6. The method according to claim 5, wherein the generating of the target dynamic image based on the screenshot image and the N frames of object images comprises:
acquiring the image position of each frame of object image in the screenshot image;
based on the image position of each frame of object image, performing image fusion processing on the N frames of object images and the screenshot image to generate N frames of fused images;
and synthesizing the N frames of fused images according to the sequence of the generation time of the N frames of object images to generate the target dynamic image.
7. The method according to claim 1, further comprising, before the acquiring of the dynamic characteristic information of the dynamic object during the screenshot capture of the target screenshot area:
acquiring at least two area images of the target screenshot area within a preset time period, and determining whether a dynamic object exists in the target screenshot area according to the at least two area images; or
acquiring a file corresponding to at least one object in the target screenshot area, and determining whether a dynamic object exists in the target screenshot area according to a suffix of a file name of the file corresponding to the at least one object.
8. A screenshot device, comprising:
the target area screenshot module is used for starting screenshot capture of a target screenshot area in a case that a screenshot control message is received;
the dynamic characteristic acquisition module is used for acquiring, in a case that the target screenshot area comprises a dynamic object, dynamic characteristic information of the dynamic object during the screenshot capture of the target screenshot area;
the screenshot image output module is used for outputting a screenshot image and storing the screenshot image and the dynamic characteristic information of the dynamic object in a correlation manner;
the dynamic characteristic acquisition module comprises: a dynamic file acquiring unit, configured to acquire a dynamic object file of the dynamic object; and a coordinate information acquiring unit, configured to acquire object coordinate information of the dynamic object in the target screenshot area; wherein the dynamic characteristic information comprises a storage path of the dynamic object file and the object coordinate information, and the object coordinate information refers to coordinates of a position of the dynamic object in the target screenshot area;
the screenshot image output module comprises: an association relationship establishing unit, configured to establish an association relationship among the screenshot image, the dynamic object file, and the object coordinate information, and to store the screenshot image, the dynamic object file, and the object coordinate information under a preset storage path.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the screenshot method as claimed in any one of claims 1-7.
CN202010899636.XA 2020-08-31 2020-08-31 Screenshot method and device and electronic equipment Active CN112202958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010899636.XA CN112202958B (en) 2020-08-31 2020-08-31 Screenshot method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010899636.XA CN112202958B (en) 2020-08-31 2020-08-31 Screenshot method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112202958A CN112202958A (en) 2021-01-08
CN112202958B true CN112202958B (en) 2021-07-23

Family

ID=74005404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010899636.XA Active CN112202958B (en) 2020-08-31 2020-08-31 Screenshot method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112202958B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095234A (en) * 2023-01-31 2023-05-09 维沃移动通信有限公司 Image generation method, device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543347A (en) * 2019-09-02 2019-12-06 联想(北京)有限公司 Method and device for generating screenshot image and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102681829B (en) * 2011-03-16 2016-03-30 阿里巴巴集团控股有限公司 A kind of screenshot method, device and telecommunication customer end
CN105808659B (en) * 2016-02-29 2019-03-05 努比亚技术有限公司 Mobile terminal and its webpage capture method
CN107229402B (en) * 2017-05-22 2021-08-10 努比亚技术有限公司 Dynamic screen capturing method and device of terminal and readable storage medium
CN108881984B (en) * 2018-07-02 2020-11-03 深圳市九洲电器有限公司 Method and system for storing screenshot of digital television equipment
US11039196B2 (en) * 2018-09-27 2021-06-15 Hisense Visual Technology Co., Ltd. Method and device for displaying a screen shot
CN109445894A (en) * 2018-10-26 2019-03-08 维沃移动通信有限公司 A kind of screenshot method and electronic equipment
CN110806913A (en) * 2019-10-30 2020-02-18 支付宝(杭州)信息技术有限公司 Webpage screenshot method, device and equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543347A (en) * 2019-09-02 2019-12-06 联想(北京)有限公司 Method and device for generating screenshot image and electronic equipment

Also Published As

Publication number Publication date
CN112202958A (en) 2021-01-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant