US20110001758A1 - Apparatus and method for manipulating an object inserted to video content - Google Patents
Apparatus and method for manipulating an object inserted to video content
- Publication number
- US20110001758A1 (publication of application US 12/867,253)
- Authority
- US
- United States
- Prior art keywords
- manipulation
- video
- user
- content
- video content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06T19/006—Mixed reality
- H04N21/23418—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/2353—Processing of additional data specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
- H04N21/239—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
- H04N21/24—Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/252—Processing of multiple end-users' preferences to derive collaborative data
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics
- H04N21/25825—Management of client data involving client display capabilities, e.g. screen resolution of a mobile phone
- H04N21/25883—Management of end-user data being end-user demographical data, e.g. age, family status or address
- H04N21/2668—Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
- H04N21/43—Processing of content or additional data; Elementary client operations; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4314—Rendering for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
- H04N21/4316—Rendering for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
- H04N21/4318—Altering the content in the rendering process, e.g. blanking, blurring or masking an image region
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4355—Reformatting operations of additional data, e.g. HTML pages on a television screen
- H04N21/4356—Reformatting by altering the spatial resolution, e.g. to reformat additional data on a handheld device attached to the STB
- H04N21/4358—Reformatting for generating different versions, e.g. for different peripheral devices
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream
- H04N21/44012—Rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth
- H04N21/44213—Monitoring of end-user related data
- H04N21/4524—Management of client data or end-user data involving the geographical location of the client
- H04N21/4725—End-user interface for requesting additional data using interactive regions of the image, e.g. hot spots
- H04N21/812—Monomedia components thereof involving advertisement data
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
- H04N21/8586—Linking data to content by using a URL
- H04N5/2723—Insertion of virtual advertisement; Replacing advertisements physically present in the scene by virtual advertisement
Definitions
- FIG. 1 shows a computerized environment for manipulating objects inserted into video content, according to some exemplary embodiments of the subject matter.
- FIG. 2 shows a computerized module for manipulating objects added to video content, in accordance with some exemplary embodiments of the subject matter.
- FIGS. 3A-3D show objects being manipulated, in accordance with some exemplary embodiments of the subject matter.
- FIG. 4 shows a flow for implementing the method of the disclosed subject matter, in accordance with some exemplary embodiments of the subject matter.
- One technical problem dealt with in the disclosed subject matter is to enable interactivity of objects inserted into video content. Interactivity of such objects increases the attractiveness of the video content, as well as the attractiveness of the objects themselves. As a result, the value of interactive objects within video content is increased, especially when the object or the video content contains commercial content.
- One technical solution discloses a system that comprises a receiving module for receiving information used to determine a manipulation applied on an object inserted to video content.
- the information may be received from a user or from another computerized entity, for example the distributor of the video content.
- the system also comprises a determination module for determining the manipulation applied on the object.
- Such manipulation may be changing the location or size of the object, generating sound feedback to be executed by the object and the like.
- the manipulation may be a function of a content of the video.
- the system of the disclosed subject matter may also comprise a rendering module for determining the display of the object or the display of the entire video content after the manipulation is determined. For example, determining the display takes into consideration the location of the camera in the video content, the location of specific elements in the frames, such as figures and the like.
- the rendering module may redraw the object, determine the shadow casted by the modified object and the like.
- the manipulated object may then be displayed on a display device of the user.
- the disclosed subject matter relates to objects inserted to video content, as well as objects inserted into images, text or other visual entities displayed on a computerized device.
- FIG. 1 shows a computerized environment for manipulating objects inserted into video content, according to some exemplary embodiments of the subject matter.
- Computerized environment 100 comprises a user's device 120 that receives content from a communication server 150 .
- the communication server 150 may transmit the content to the user's device 120 via a network 110 .
- the communication server 150 may be a group of servers or computers providing the content to the user's device 120 .
- the communication server 150 is a web server and the content comprises video data, video metadata, properties related to manipulating and inserting objects into the video data, events within the video and the like.
- the communication server 150 may be a server that handles instant messaging applications or video conferences, such as ICQ, MSN messenger and the like, in which video is transmitted bi-directionally.
- the user's device 120 may be a personal computer, television, or wireless device such as a mobile phone, Personal Digital Assistant (PDA) and the like.
- the user's device 120 communicates with or comprises a display device 115 used for displaying the video transmitted from the communication server 150 .
- the user's device 120 further comprises an input device, such as a pointing device 125 , keyboard 128 , touch screen (not shown) or other input devices desired by a person skilled in the art.
- Such input device enables the user to interact with the video content or with the object inserted to the video content, for example by pointing at the object and pressing a key.
- the user's device 120 incorporates a computerized application used for converting the data received from the communication server 150 into the data displayed on the user's device 120 or on the display device 115 .
- a computerized application may be a media player, such as Windows media player, Adobe media player and the like.
- the video may be displayed on a specific region 130 within the display device 115 .
- user's input that is received at the user's device 120 via the input devices manipulates the overlay object. For example, when the user hovers over or points at the object, the object increases by a predetermined proportion, such as 5 percent. In other examples, the user can change the location of the object, or change display parameters of the object such as color, luminance and the like.
- the user's input is received by a receiver (not shown) within or connected to the user's device 120 .
- Such receiver may be hardware or software module and forwards the user's input to a processing module that manipulates the object according to the user's input and to a predetermined set of rules.
- Such rules may be stored in a storage device within the user's device, or within the communication server 150 .
- the user may click or otherwise select the object and, as a result, the video player may stop, pause, fast-forward, seek or rewind the video and the like. Additionally, clicking on an object may pause the video and display a second object or additional content, such as a window or bubble displaying additional information and/or drawings, figures, images, text, video and the like.
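- To make the interaction model above concrete, the following TypeScript sketch shows how a browser-based player extension might implement the hover-to-enlarge and click-to-pause behaviors. It is an illustration only: OverlayObject and attachOverlayInput are invented names, and the 5 percent growth factor and the information bubble follow the examples given above.

```typescript
// Hypothetical overlay wiring; not the patent's API. Shows the
// hover-to-enlarge and click-to-pause behaviors described above.
interface OverlayObject {
  element: HTMLElement; // the DOM node drawn over the video region
  scale: number;        // current scale factor of the object
}

function attachOverlayInput(video: HTMLVideoElement, overlay: OverlayObject): void {
  // Hovering over the object enlarges it by a predetermined proportion.
  overlay.element.addEventListener("mouseenter", () => {
    overlay.scale *= 1.05; // e.g. grow by 5 percent
    overlay.element.style.transform = `scale(${overlay.scale})`;
  });

  // Clicking pauses the video and displays a second object: a bubble
  // with additional information.
  overlay.element.addEventListener("click", () => {
    video.pause();
    const bubble = document.createElement("div");
    bubble.className = "info-bubble";
    bubble.textContent = "Additional information about this object";
    overlay.element.appendChild(bubble);
  });
}
```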
- manipulation on the object is performed according to the content of the video.
- the manipulation may be determined as a function of metadata of the video content received by the user's device 120 , for example sound volume level of the video content.
- Some of the analysis may be done before transmitting the video content to the user's device 120 , and some analysis may be performed in runtime. For example, volume level can be analyzed in runtime, while detecting specific objects or figures in the video is more likely to be performed before the video content is transmitted to the user's device 120 , for example in the communication server 150 .
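- As an illustration of the runtime side, a browser client could estimate the current sound volume with the Web Audio API and feed the level to whatever module determines manipulations. The sketch below is an assumption (the patent does not name an API), and AudioContext setup details such as autoplay policies are omitted.

```typescript
// Sketch: estimate the playing video's volume level at runtime.
// The computed RMS level could drive a volume-dependent manipulation.
function watchVolume(video: HTMLVideoElement, onLevel: (rms: number) => void): void {
  const ctx = new AudioContext();
  const source = ctx.createMediaElementSource(video);
  const analyser = ctx.createAnalyser();
  source.connect(analyser);
  analyser.connect(ctx.destination); // keep the audio audible

  const buf = new Uint8Array(analyser.fftSize);
  const tick = () => {
    analyser.getByteTimeDomainData(buf);
    // Root-mean-square of the waveform, normalized to 0..1.
    let sum = 0;
    for (const v of buf) sum += (v - 128) ** 2;
    onLevel(Math.sqrt(sum / buf.length) / 128);
    requestAnimationFrame(tick);
  };
  tick();
}
```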
- another server may receive the video content from the video server, and add the objects to the video after analyzing said video content.
- another server may select the object to be added to the video and send an indication to the user's device 120 to add the object.
- a selection may be performed in accordance with predetermined parameters, rules and configurations.
- the selection may be done in accordance with demographical information, user's history such as viewing history, location, video content and the like.
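- A minimal sketch of such rule-based selection is given below; the profile and candidate fields are invented for illustration, since the patent only names the kinds of parameters involved.

```typescript
// Hypothetical targeting data; the patent does not specify these fields.
interface ViewerProfile {
  age: number;
  region: string;
  viewingHistory: string[]; // ids of previously watched content
}

interface CandidateObject {
  id: string;
  minAge: number;
  targetRegions: string[];
}

// Pick the first candidate object whose rules match the viewer.
function selectObject(
  viewer: ViewerProfile,
  candidates: CandidateObject[],
): CandidateObject | undefined {
  return candidates.find(
    (c) => viewer.age >= c.minAge && c.targetRegions.includes(viewer.region),
  );
}
```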
- FIG. 2 shows a computerized module for manipulating objects added to video content, in accordance with some exemplary embodiments of the subject matter.
- Computerized module 200 comprises an I/O module 220 for receiving input from a user that relates to interacting with an object added to video content as overlay. Such input may be hover, pointing, clicking, touching the display device to interact with object, pressing a key on a keyboard, vocal input using a microphone and the like.
- I/O module 220 is likely to reside on the user's device 120 of FIG. 1 , receive the user's input and send said input to a manipulation server 235 that determines the manipulation applied to the object according to the user's input.
- the I/O module 220 may receive manipulations from sources other than the user watching the video content, such as an RSS feed from a website, a computerized clock, an additional application and the like. In some exemplary embodiments, a lack of input from the I/O module 220 may initiate a manipulation by the manipulation server 235 such as illuminating the object, or displaying an additional object calling the user to interact with the object.
- the manipulation server 235 may also be connected to a video event dispatcher 210 that tracks events in the video content transmitted to the user's device.
- the events tracked by the video event dispatcher 210 may also affect the manipulation selected by the manipulation server 235 .
- an object may be manipulated to follow a point of interest in the video, such as a ball bouncing.
- the video event dispatcher 210 may reside in the communication server 150 , or in another server that analyzes the video content before said video content is transmitted to the user's device 120 of FIG. 1 .
- the video event dispatcher 210 may comprise software or hardware applications to detect changes in the video content, such as location of objects in different video frames, shadowing, blocking of view by an obstacle, sound data, new scene, and the like.
- the video event dispatcher 210 may be connected to a process video module 215 or to a storage containing preprocessed data of the video content.
- preprocessed data provides the video event dispatcher with specific information concerning events, for example a specific frame, specific event and the like.
- Such preprocessed data is used when the video event dispatcher 210 dispatches a command to one or more manipulation servers, such as manipulation server 235 , which determines a manipulation to be applied on the inserted object at a specific frame.
- the video event dispatcher 210 is also connected to the timeline of the video data when displayed on the user's device, to provide indications at a precise time segment.
- the video event dispatcher 210 receives the metadata from the preprocessed video content, analyzes the metadata and issues notifications to the manipulation server 235 to provide a manipulation at a predefined time or frame.
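- One plausible shape for such a dispatcher is sketched below in TypeScript; the class, field and event names are assumptions, not the patent's API. The dispatcher holds the preprocessed event list and, as the player's timeline reaches each frame, notifies subscribed manipulation servers.

```typescript
// Hypothetical event shape derived from the preprocessed metadata.
interface VideoEvent {
  frame: number;
  kind: "scene-change" | "sound-peak" | "point-of-interest";
}

type DispatchListener = (e: VideoEvent) => void;

class VideoEventDispatcher {
  private listeners: DispatchListener[] = [];

  constructor(private readonly events: VideoEvent[]) {}

  // A manipulation server subscribes to be notified of events.
  subscribe(listener: DispatchListener): void {
    this.listeners.push(listener);
  }

  // Called as the player's timeline advances; dispatches every event
  // registered for the frame that was just reached.
  onFrame(frame: number): void {
    for (const e of this.events) {
      if (e.frame === frame) this.listeners.forEach((l) => l(e));
    }
  }
}
```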
- the manipulation server 235 receives data according to which a manipulation is determined. Such data may be sent from the I/O module 220 , from the video event dispatcher 210 , from the communication server 150 of FIG. 1 , from another source of video content, from a publisher that wishes to add an object to the video content and the like.
- the manipulation server 235 comprises or communicates with object behavior storage 230 that stores data concerning manipulations. Such data may be manipulation options, sets of rules, technological requirements for performing manipulations, cases in which a manipulation cannot be provided, minimal time for applying a manipulation on an object or video content and the like.
- in some cases, the manipulation server 235 may take into account the processing abilities and other resources of the user's device 120 of FIG. 1 when determining a manipulation.
- the user may wish to change the object's location to an unauthorized location, for example the location of a show presenter that is required to appear on the display device 115 of FIG. 1 .
- Such rules may be stored in the object behavior storage 230 .
- the manipulation server 235 is connected to a rendering module 250 and transmits the determined manipulation to the rendering module 250 .
- the rendering module 250 determines the display of the content once the manipulation is applied on the object. For example, the rendering module 250 determines the angle from which the object is displayed. Further, the rendering module 250 may determine to modify or limit the manipulation determined by the manipulation server 235 . For example, when the user wishes to raise a part of the object beyond a predefined height, and such height is determined by the manipulation server 235 , the rendering module 250 may determine to limit the manipulation to the predefined height. Additionally, the rendering module 250 may define the frame displayed to the user, in terms of either video content, a single image or the like.
- the rendering module 250 may also determine the shadow cast by the manipulated object, for example increasing the shadow when the object's size is increased, or changing the shadow's location.
- the rendering module 250 may further determine the shadows cast on the manipulated object.
- the rendering module 250 may change the transparency or opacity level according to the location of at least a portion of the object after it is manipulated.
- the rendering module 250 may generate or draw at least a portion of the object to execute the manipulation, for example drawing a facial expression of the object determined according to the context of the video content.
- the rendering module 250 may further determine to display only a portion of the manipulated object, for example in case the object's visibility is partially blocked by an obstacle.
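- The TypeScript fragment below sketches two such rendering rules, shadow scaling and scene-dependent opacity, using CSS filters; the names and thresholds are illustrative assumptions, not the patent's implementation.

```typescript
// Illustrative rendering adjustments for a manipulated overlay object.
interface RenderedObject {
  element: HTMLElement;
  scale: number; // scale factor set by the manipulation
}

function applyRenderRules(obj: RenderedObject, meanGray: number): void {
  // Grow the drop shadow together with the object, so an enlarged
  // object casts a correspondingly larger shadow.
  const blur = 4 * obj.scale;
  obj.element.style.filter =
    `drop-shadow(0 ${blur}px ${blur}px rgba(0, 0, 0, 0.5))`;

  // In dark scenes (low average gray level), reduce the object's
  // opacity so it blends with the surrounding content.
  obj.element.style.opacity = meanGray < 64 ? "0.6" : "1.0";
}
```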
- the rendering module 250 may be connected to frame-based metadata (FBM) storage 240 .
- the FBM storage 240 comprises data related to the video content itself, such as the camera angle in a specific frame of the video content, the median or average gray-scale value of a specific frame, appearance of a specific character or entity in the video content, atmosphere, points of interest in the video content, events in a scene and the like. Such data enables the rendering module 250 to display the manipulated object in a more precise manner, which is more attractive to the user and improves the influence of a commercial object within video content.
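- A possible, simplified shape for one frame's entry in the FBM storage is sketched below; all field names are assumptions, since the patent only lists the kinds of data stored.

```typescript
// Hypothetical per-frame record in the frame-based metadata storage.
interface FrameMetadata {
  frame: number;                                // frame index in the video
  cameraAngleDeg: number;                       // camera angle in this frame
  meanGray: number;                             // average gray-scale value, 0-255
  charactersPresent: string[];                  // characters or entities on screen
  pointsOfInterest: { x: number; y: number }[]; // e.g. a bouncing ball
}
```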
- the I/O module 220 may detect the user's behavior actions concerning the object. Such a behavior action may be hovering with a pointing device, such as a mouse, over the specific location on the display device where the object is displayed. Another exemplary behavior action may be pressing a link connected to the object.
- the I/O module 220 may send the detected behavior actions to another entity that analyzes said actions and provides statistical analysis.
- the statistical analysis may also refer to changes in the size and location of the object, interaction with specific portions of the object, preferred manipulations in specific regions, age groups, times of day and the like.
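- As a toy illustration, interactions detected by the I/O module could be aggregated as below before being handed to the statistical analysis; a real analysis would also correlate with region, age and time of day as noted above. The record shape is invented.

```typescript
// Hypothetical interaction record produced by the I/O module.
interface InteractionRecord {
  objectId: string;
  action: "hover" | "click" | "resize" | "move";
  timestampMs: number;
}

// Count interactions per object and per action type.
function summarize(records: InteractionRecord[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of records) {
    const key = `${r.objectId}:${r.action}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```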
- the computerized module 200 and other elements disclosed in the subject matter detect, handle, and analyze manipulations and instructions using applications that preferably comprise software or hardware components.
- Such software components may be written in any programming language such as C, C#, C++, Java, VB, VB.Net, or the like.
- Such components may be developed under any development environment, such as Visual Studio.Net, Eclipse or the like.
- Communication between the elements disclosed above may be performed via the internet, or via another communication medium, such as a telephone network, satellite, physical or wireless channels, and other media desired by a person skilled in the art.
- the elements of the disclosed subject matter may be downloadable or installable on the user's device as an extension to a media player already installed on the user's device.
- the elements comprise an interface to communicate with other portions of the media player already installed on the user's device.
- the elements may be downloaded as part of a new media player, not as an add-on to an existing media player.
- FIGS. 3A-3D show objects being manipulated, in accordance with some exemplary embodiments of the subject matter.
- FIGS. 3A and 3B show a display device displaying an object being manipulated according to the user's input.
- FIG. 3A shows a display device 322 having a display region 324 .
- Said display region 324 may be a region where an image is displayed, or a region used for a media player to provide video content.
- An object is displayed at the display region.
- the object of the exemplary embodiment comprises an ice cream cone 326 and ice cream 328 .
- the object is inserted to an image or to video content provided to a user's device (such as 120 of FIG. 1 ).
- FIG. 3B shows a display device 302 and a display region 304 , generally equivalent to elements 322 and 324 of FIG. 3A .
- the display region 304 displays ice cream 308 and ice cream cone 306 .
- the interaction disclosed in FIG. 3B relates to increasing the size of the ice cream ( 328 of FIG. 3A ).
- the user points at the ice cream 308 using a pointing device (not shown), such as a mouse.
- a pointer 310 related to the pointing device (not shown) points at the ice cream 308 .
- the size of the ice cream 308 increases, for example by 25 percent.
- the I/O module 220 may detect the user's pointing at the ice cream 308 , which is part of the object inserted into the video content or image.
- the manipulation server 235 determines the manipulation performed on the ice cream 308 or on the entire object. For example, it may determine to enlarge the ice cream 308 and not change its location, which would also have been possible according to the user's input.
- FIGS. 3C and 3D show a display device displaying an object manipulated according to the context of the video content or the content of the image to which the object is inserted, according to some exemplary embodiments of the disclosed subject matter.
- FIG. 3C shows a display device 342 , a display region 344 and two objects displayed within the display region 344 .
- the first object 346 is a person
- the second object 348 is a telephone.
- the first object 346 is part of the content provided by the content server (such as 150 of FIG. 1 ) while the second object 348 is added to the original content and can be manipulated.
- FIG. 3D shows the manipulation applied on the second object added to the original content.
- FIG. 3D discloses a display device 362 , a display region 364 , a first object 366 and a second object 368 .
- the first object 366 and the second object 368 are generally equivalent to elements 346 and 348 of FIG. 3C .
- the second object 368 is manipulated according to the context of the video content displayed in the display region 364 . For example, when a specific sound tone is provided at a specific frame or group of frames in the video content, the second object 368 is manipulated so that the phone appears to ring.
- Such manipulation increases the attractiveness of the second object 368 and enables interaction between the user and the video content. Further, such manipulation improves the visibility of the second object 368 to the user, and as a result, increases the value of the content provided along with the second object.
- FIG. 4 shows a flow diagram of a method of the disclosed subject matter, in accordance with some exemplary embodiments of the subject matter.
- the video content is processed before being transmitted to the user's device. Such processing includes identifying events in which a manipulation may be applied on an inserted object, identifying frames in which a scene begins, identifying changes in the audio data and the like. Such preprocessed data is likely to be transmitted to the user's device in addition to the video content.
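- As an illustration of one such preprocessing step, a naive scene-start detector could compare consecutive decoded frames; the sketch below is an assumption about how this might be done, not the patent's method, and frame decoding is assumed to happen elsewhere.

```typescript
// Illustrative preprocessing pass: flag a scene start when the mean
// absolute difference between consecutive grayscale frames exceeds a
// threshold. Each Uint8Array holds one decoded frame's pixel values.
function detectSceneStarts(frames: Uint8Array[], threshold = 30): number[] {
  const starts: number[] = [0];
  for (let i = 1; i < frames.length; i++) {
    let diff = 0;
    for (let p = 0; p < frames[i].length; p++) {
      diff += Math.abs(frames[i][p] - frames[i - 1][p]);
    }
    if (diff / frames[i].length > threshold) starts.push(i);
  }
  return starts;
}
```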
- the user's input is received by the user's device. Such input may be provided using a mouse, keyboard, touch screen and the like.
- the detection in step 405 may be a result of a command or message from the user watching the content, from the source of the content, from a computer engine that generates such indications in a random manner, and the like.
- the computerized entity that detects the indication may send a notification to another module that the input from the user has been detected.
- a computerized entity detects an indication to apply a manipulation on an object inserted into content displayed to a user.
- Such content may be video content, animated image or any other content desired by a person skilled in the art.
- the content may also be text. It will be noted that the object may be seamlessly inserted to the content, such as by being in line with a perspective of the content, such as video content.
- the object may be displayed as an overlay over the content, such as a ticker being presented over a video content.
- when the indication is from the source of the content, it is likely that the indication is sent to an adaptive module in the media player that displays the content, to provide a specific manipulation at a given moment, or when a predefined event takes place at a specific frame or sequence of frames.
- the manipulation to be applied on the object added to the content is determined.
- the determination may be a function of the source of the indication, for example a user or a computerized source.
- the manipulation may be a function of the number of objects inserted into the content, or the number of objects visible to the user in different content units, for example inserting a first object to video content and a second object into an image, while each object is manipulated autonomously.
- Manipulation may be changing the object's parameters, such as size, location, texture, facial expression, level of speech, accent, outfit and the like.
- the manipulation may change the display of the content. For example, the manipulation may pause a video content.
- the manipulation may replace the inserted object with a different object. Determining the manipulation is likely to be performed in the user's device.
- a computerized entity determines the display of the manipulated object. Such determination takes into consideration the content displayed to the user, for example the location of other elements in the frame or the image. Determining the display of the object may require drawing or designing at least part of the manipulated object, for example in case the shape of the object is modified as a result of the manipulation. Determination of the display may also comprise determining other elements of the content into which the object is inserted, or pausing the video, in case the content provided to the user is a sequence of video frames. Determination of the display may comprise determining shadow within the content and/or over the object, transparency level, location and size of elements in the content, limits of the manipulation and the like. Such determination may be performed by the rendering module 250 of FIG. 2 , by an extension to a media player, by an extension to a browser or to an instant messaging application and the like.
- the manipulated object is displayed.
- the object may be injected or otherwise inserted into video content, animated image, text, and the like.
- the computerized module determines the object to apply the manipulation on.
- the system of the disclosed subject matter may comprise a Z-order module, for determining which object to display in front of other objects.
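- A Z-order module can be as simple as sorting objects by a z value before drawing, as in the following sketch (names invented for illustration):

```typescript
// Draw objects back-to-front: higher z is drawn later, i.e. in front.
interface ZOrdered {
  id: string;
  z: number;
}

function drawOrder(objects: ZOrdered[]): ZOrdered[] {
  return [...objects].sort((a, b) => a.z - b.z);
}
```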
- in step 440 the user interacts with the manipulated object.
- Such interaction may be by pressing a key, moving a pointing device, touching the screen, opening a link, clicking on the object, speaking to a microphone and the like.
- the computerized module 200 of FIG. 2 detects such interactions, especially using the I/O module 220 .
- in step 445 the interactions with the objects are analyzed. Such analysis may be performed in the user's device, or by an adaptive server after being transmitted from the user's device.
- the analysis of interaction between the user and the manipulated object allows more than just analysis of links pressed by the user, for example analysis of the time during which the user interacts with the object, preferred manipulations, and the like.
Description
- The present invention claims priority of the filing date of provisional patent application Ser. No. 61/065,703 titled In-video advertising real estate, filed Feb. 13, 2008, the contents of which are hereby incorporated by reference herein.
- 1. Field of the Invention
- The present invention relates to insertion of objects into video content in general, and to manipulation of such objects in particular.
- 2. Discussion of the Related Art
- The use of online video content has significantly developed during the past five years. Such video content may be received from adaptive websites, such as YouTube, or from other web pages, such as websites that provide news or entertainment content, online educational content and the like. Video content may also be received during video conferences, live video stream, web cameras and the like.
- Objects known in the art are inserted into the video content and displayed in a static mode, for example added at the lower end of the frame. Such objects are inserted after the video is processed, and the video content does not change its properties. The objects may provide the user with additional content, such as commercials, related links, news, messages and the like. Since the objects are static, they do not attract enough attention, and the user is likely to ignore them and focus on the content itself. However, the provider of the objects wishes the user to focus on the object's content, not only the video content. Some of these objects are known as pre-rolls, mid-rolls and post-rolls, tickers, overlays and the like. Their main disadvantage is that they are intrusive and do not fit contextually into the video content.
- There is a long-felt need to attract a user watching online video content to the object displayed in addition to the video, in order to increase the visibility of the inserted object and the attractiveness of the video content. By increasing the visibility of the object, the value of the commercial content represented by the object, such as an advertisement, is improved.
- It is an object of the subject matter to disclose a method of manipulating an object inserted into computerized content, comprising: receiving input related to manipulation of the object; determining the manipulation to be applied to the object; determining the display of the object according to the determined manipulation; and displaying the manipulated object.
- In some embodiments, the computerized content is an image. In some embodiments, the computerized content is video. In some embodiments, the input is received from a user. In some embodiments, the input is received as metadata related to the content.
- In some embodiments, the method further comprises a step of detecting interaction between a user provided with the computerized content and the manipulated object. In some embodiments, the method further comprises a step of providing an analysis based on the detected interaction.
- It is another object of the subject matter to disclose a computer program product embodied on one or more computer-usable media for performing a computer process comprising: receiving input related to manipulation of the object; determining the manipulation to be applied to the object; determining the display of the object according to the determined manipulation; and displaying the manipulated object.
- It is another object of the subject matter to disclose a system for manipulating an object inserted into computerized content, comprising a manipulation module for receiving input related to manipulation of the object and determining the manipulation to be applied on the object; a rendering module for determining the display of the computerized content with the manipulated object.
- In some embodiments of the system, the computerized content is video. In some embodiments, the system further comprises a frame-based metadata storage for sending the rendering module metadata related to the display of the object in addition to the video.
- In some embodiments, the system further comprises an input device for receiving user input such that the manipulation is determined as a function of the user input. In some embodiments, the system further comprises a video event dispatcher for tracking an event in the video such that the manipulation is determined as a function of the events.
- Exemplary non-limiting embodiments of the disclosed subject matter will be described, with reference to the following description of the embodiments, in conjunction with the figures. The figures are generally not shown to scale and any sizes are only meant to be exemplary and not necessarily limiting. Corresponding or like elements are designated by the same numerals or letters.
-
FIG. 1 shows a computerized environment for manipulating objects inserted into video content, according to some exemplary embodiments of the subject matter; -
FIG. 2 shows a computerized module for manipulating objects added to video content, in accordance with some exemplary embodiments of the subject matter; -
FIGS. 3A-3D show objects being manipulated, in accordance with some exemplary embodiments of the subject matter; and, -
FIG. 4 shows a flow for implementing the method of the disclosed subject matter, in accordance with some exemplary embodiments of the subject matter.
- One technical problem dealt with in the disclosed subject matter is to enable interactivity of objects inserted into video content. Interactivity of such objects increases the attractiveness of the video content, as well as the attractiveness of the objects themselves. As a result, the value of interactive objects within video content is increased, especially when the object or the video content contains commercial content.
- One technical solution discloses a system that comprises a receiving module for receiving information used to determine a manipulation applied to an object inserted into video content. The information may be received from a user or from another computerized entity, for example the distributor of the video content. The system also comprises a determination module for determining the manipulation applied to the object. Such a manipulation may be changing the location or size of the object, generating sound feedback to be executed by the object and the like. The manipulation may be a function of the content of the video. The system of the disclosed subject matter may also comprise a rendering module for determining the display of the object, or the display of the entire video content, after the manipulation is determined. For example, determining the display takes into consideration the location of the camera in the video content, the location of specific elements in the frames, such as figures, and the like. The rendering module may redraw the object, determine the shadow cast by the modified object and the like. The manipulated object may then be displayed on a display device of the user.
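By way of non-limiting illustration, the flow of these three modules (receive input, determine a manipulation, render the result) could be sketched in TypeScript as follows. All identifiers (ManipulationInput, determineManipulation, render and so on) are assumptions introduced for illustration and do not appear in the disclosure.

```typescript
// Hypothetical types modeling the receiving/determination/rendering pipeline.
interface ManipulationInput {
  source: "user" | "contentProvider";
  kind: "hover" | "click" | "metadata";
  objectId: string;
}

interface Manipulation {
  objectId: string;
  scale?: number;        // relative size change, e.g. 1.05 for +5%
  position?: { x: number; y: number };
  sound?: string;        // identifier of a sound effect to play
}

// Determination module: maps raw input to a concrete manipulation.
function determineManipulation(input: ManipulationInput): Manipulation {
  switch (input.kind) {
    case "hover":
      return { objectId: input.objectId, scale: 1.05 };
    case "click":
      return { objectId: input.objectId, sound: "chime" };
    default:
      return { objectId: input.objectId };
  }
}

// Rendering module: here it only logs; a real renderer would redraw the
// object, recompute its shadow, and composite it over the video frame.
function render(m: Manipulation): void {
  console.log(`render object ${m.objectId}`, m);
}

render(determineManipulation({ source: "user", kind: "hover", objectId: "ad-1" }));
```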
- The disclosed subject matter relates to objects inserted into video content, as well as objects inserted into images, text or other visual entities displayed on a computerized device.
-
FIG. 1 shows a computerized environment for manipulating objects inserted into video content, according to some exemplary embodiments of the subject matter. Computerized environment 100 comprises a user's device 120 that receives content from a communication server 150. The communication server 150 may transmit the content to the user's device 120 via a network 110. The communication server 150 may be a group of servers or computers providing the content to the user's device 120. In some cases, the communication server 150 is a web server and the content comprises video data, video metadata, properties related to manipulating and inserting objects into the video data, events within the video and the like. In other embodiments, the communication server 150 may be a server that handles instant messaging applications or video conferences, such as ICQ, MSN Messenger and the like, in which video is transmitted bi-directionally. The user's device 120 may be a personal computer, television, or a wireless device such as a mobile phone, Personal Digital Assistant (PDA) and the like. The user's device 120 communicates with or comprises a display device 115 used for displaying the video transmitted from the communication server 150. The user's device 120 further comprises an input device, such as a pointing device 125, keyboard 128, touch screen (not shown) or other input devices desired by a person skilled in the art. Such an input device enables the user to interact with the video content or with the object inserted into the video content, for example by pointing at the object and pressing a key. In some exemplary embodiments, the user's device 120 incorporates a computerized application used for converting the data received from the communication server 150 into the data displayed on the user's device 120 or on the display device 115. Such a computerized application may be a media player, such as Windows Media Player, Adobe Media Player and the like. The video may be displayed in a specific region 130 within the display device 115.
- In accordance with one exemplary embodiment of the disclosed subject matter, user input received at the user's device 120 via the input devices manipulates the overlay object. For example, when the user hovers over or points at the object, the object grows by a predetermined proportion, such as 5 percent. In other examples, the user can change the location of the object, or change display parameters of the object such as color, luminance and the like. The user's input is received by a receiver (not shown) within or connected to the user's device 120. Such a receiver may be a hardware or software module, and forwards the user's input to a processing module that manipulates the object according to the user's input and to a predetermined set of rules. Such rules may be stored in a storage device within the user's device, or within the communication server 150. In yet other examples, the user may click or otherwise select the object and, as a result, the video player may stop, pause, fast-forward, seek, rewind the video and the like. Additionally, clicking on an object may pause the video and display a second object or additional content, such as a window or bubble displaying additional information and/or drawings, figures, images, text, video and the like.
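A minimal sketch of such a rule set, assuming a simple rectangle geometry for the overlay object. The Rect type, the gesture names and the drag offset are illustrative choices; only the hover-grows-the-object-by-5-percent behavior is taken from the example above.

```typescript
// Illustrative rule table mapping an input gesture to a manipulation.
interface Rect { x: number; y: number; width: number; height: number; }

type Gesture = "hover" | "drag" | "click";

const rules: Record<Gesture, (r: Rect) => Rect> = {
  // Hovering enlarges the object by a predetermined proportion (5% here).
  hover: (r) => ({ ...r, width: r.width * 1.05, height: r.height * 1.05 }),
  // Dragging would relocate the object; a fixed shift stands in for it here.
  drag: (r) => ({ ...r, x: r.x + 10, y: r.y + 10 }),
  // Clicking leaves geometry unchanged; the player itself would pause.
  click: (r) => r,
};

let overlay: Rect = { x: 100, y: 80, width: 60, height: 40 };
overlay = rules["hover"](overlay);
console.log(overlay); // width and height grown by 5 percent
```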
- In accordance with other exemplary embodiments of the disclosed subject matter, manipulation of the object is performed according to the content of the video. The manipulation may be determined as a function of metadata of the video content received by the user's device 120, for example the sound volume level of the video content. Some of the analysis may be done before transmitting the video content to the user's device 120, and some analysis may be performed at runtime. For example, the volume level can be analyzed at runtime, while detecting specific objects or figures in the video is more likely to be performed before the video content is transmitted to the user's device 120, for example in the video server 150. In an alternative embodiment, another server (not shown) may receive the video content from the video server, and add the objects to the video after analyzing said video content. In yet another exemplary embodiment, another server may select the object to be added to the video and send an indication to the user's device 120 to add the object. Such a selection may be performed in accordance with predetermined parameters, rules and configurations. The selection may be done in accordance with demographic information, the user's history such as viewing history, location, video content and the like.
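The volume-driven case above could be sketched as follows. The FrameMetadata shape, the 0-to-1 volume scale and the 0.8 threshold are all assumptions made for illustration.

```typescript
// A minimal sketch of content-driven manipulation: per-frame metadata
// (here only a volume level) decides whether the object should "ring".
interface FrameMetadata { frame: number; volume: number; } // volume assumed in 0..1

function manipulationForFrame(meta: FrameMetadata): string | null {
  // Trigger a manipulation only when the soundtrack is loud enough;
  // the 0.8 threshold is an arbitrary illustrative value.
  return meta.volume > 0.8 ? "ring" : null;
}

console.log(manipulationForFrame({ frame: 1200, volume: 0.9 })); // "ring"
console.log(manipulationForFrame({ frame: 1201, volume: 0.2 })); // null
```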
- FIG. 2 shows a computerized module for manipulating objects added to video content, in accordance with some exemplary embodiments of the subject matter. Computerized module 200 comprises an I/O module 220 for receiving input from a user that relates to interacting with an object added to video content as an overlay. Such input may be hovering, pointing, clicking, touching the display device to interact with the object, pressing a key on a keyboard, vocal input using a microphone and the like. Such an I/O module 220 is likely to reside on the user's device 120 of FIG. 1, receive the user's input and send said input to a manipulation server 235 to determine the manipulation applied to the object according to the user's input. The I/O module 220 may receive manipulations from sources other than the user watching the video content, such as an RSS feed from a website, a computerized clock, an additional application and the like. In some exemplary embodiments, a lack of input from the I/O module 220 may initiate a manipulation by the manipulation server 235, such as illuminating the object, or displaying an additional object calling the user to interact with the object.
manipulation server 235 may also be connected to avideo event tracker 210 that tracks events in the video content transmitted to the user's device. The events tracked by thevideo event dispatcher 210 may also affect the manipulation selected by themanipulation server 235. For example, an object may be manipulated to follow a point of interest in the video, such as a ball bouncing. Thevideo event dispatcher 210 may reside in thecommunication server 150, or in another server that analyzes the video content before said video content is transmitted to the user'sdevice 120 ofFIG. 1 . Thevideo event dispatcher 210 may comprise software or hardware applications to detect changes in the video content, such as location of objects in different video frames, shadowing, blocking of view by an obstacle, sound data, new scene, and the like. Thevideo event dispatcher 210 may be connected to aprocess video module 215 or to a storage containing preprocessed data of the video content. As a result, such preprocessed data provides the video event dispatcher specific information concerning events, for example a specific frame, specific event and the like. Such preprocessed data is used when thevideo event dispatcher 210 dispatches a command to one or more manipulation servers, such asmanipulation server 235, which determines a manipulation to be applied on the inserted object at a specific frame. Thevideo event dispatcher 210 is also connected to the timeline of the video data when displayed on the user's device, to provide indications at a precise time segment. In some exemplary embodiments of the disclosed subject matter, thevideo event dispatcher 210 receives the metadata from the preprocessed video content, analyzes the metadata and issues notifications to themanipulation server 235 to provide a manipulation at a predefined time or frame. - In accordance with some exemplary embodiments of the disclosed subject matter, the
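One plausible shape for such a dispatcher, sketched in TypeScript: preprocessed events are assumed to arrive sorted by frame, and a tick per displayed frame releases every event whose frame has been reached. The class and field names are illustrative, not the patent's.

```typescript
// Sketch of a dispatcher that walks preprocessed events in frame order and
// notifies a manipulation callback when the playhead reaches each event.
interface VideoEvent { frame: number; kind: string; } // e.g. "sceneStart"

class VideoEventDispatcher {
  private next = 0;
  constructor(
    private events: VideoEvent[], // assumed sorted by frame
    private onEvent: (e: VideoEvent) => void,
  ) {}

  // Called once per displayed frame with the current frame index.
  tick(currentFrame: number): void {
    while (this.next < this.events.length &&
           this.events[this.next].frame <= currentFrame) {
      this.onEvent(this.events[this.next++]);
    }
  }
}

const dispatcher = new VideoEventDispatcher(
  [{ frame: 30, kind: "sceneStart" }, { frame: 90, kind: "ballBounce" }],
  (e) => console.log(`manipulate at frame ${e.frame}: ${e.kind}`),
);
for (let f = 0; f <= 120; f++) dispatcher.tick(f);
```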
- In accordance with some exemplary embodiments of the disclosed subject matter, the manipulation server 235 receives data according to which a manipulation is determined. Such data may be sent from the I/O module 220, from the video event dispatcher 210, from the communication server 150 of FIG. 1, from another source of video content, from a publisher that wishes to add an object to the video content and the like. The manipulation server 235 comprises or communicates with an object behavior storage 230 that stores data concerning manipulations. Such data may be manipulation options, sets of rules, technological requirements for performing manipulations, cases in which a manipulation cannot be provided, minimal time for applying a manipulation to an object or video content and the like. In some cases, the user's device 120 of FIG. 1 may be limited in processing abilities, such that some manipulations cannot be performed even if determined by the manipulation server 235. In some exemplary embodiments, the manipulation server 235 may take into account the processing abilities and other resources of the user's device 120 of FIG. 1 when determining a manipulation. In some other cases, the user may wish to change the object's location to an unauthorized location, for example the location of a show presenter that is required to appear on the display device 115 of FIG. 1. Such rules may be stored in the object behavior storage 230.
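A hedged sketch of the kind of rule check the object behavior storage 230 could back. The forbidden-region, maximum-scale and device-capability rules mirror the examples above, while every concrete value and field name is an assumption.

```typescript
// Sketch of rule checks the manipulation server might apply before
// accepting a requested manipulation. All rule values are illustrative.
interface ManipulationRequest { targetX: number; targetY: number; scale: number; }

interface BehaviorRules {
  forbiddenRegion: { x: number; y: number; width: number; height: number };
  maxScale: number;              // upper bound on enlargement
  deviceSupportsScaling: boolean;
}

function isAllowed(req: ManipulationRequest, rules: BehaviorRules): boolean {
  const r = rules.forbiddenRegion;
  const inForbidden =
    req.targetX >= r.x && req.targetX <= r.x + r.width &&
    req.targetY >= r.y && req.targetY <= r.y + r.height;
  // Reject moves into a reserved area (e.g. where a presenter must stay
  // visible), oversize requests, and scaling on devices that cannot do it.
  return !inForbidden &&
         req.scale <= rules.maxScale &&
         (req.scale === 1 || rules.deviceSupportsScaling);
}

console.log(isAllowed(
  { targetX: 500, targetY: 40, scale: 1.2 },
  { forbiddenRegion: { x: 0, y: 0, width: 200, height: 120 },
    maxScale: 1.5, deviceSupportsScaling: true },
)); // true: outside the reserved area and within the size limit
```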
- The manipulation server 235 is connected to a rendering module 250 and transmits the determined manipulation to the rendering module 250. The rendering module 250 determines the display of the content once the manipulation is applied to the object. For example, the rendering module 250 determines the angle from which the object is displayed. Further, the rendering module 250 may determine to modify or limit the manipulation determined by the manipulation module 235. For example, when the user wishes to raise a part of the object beyond a predefined height, and such a height is determined by the manipulation module 235, the rendering module 250 may determine to limit the manipulation to a predefined height. Additionally, the rendering module 250 may define the frame displayed to the user, in terms of either video content, a single image or the like. The rendering module 250 may also determine the shadow cast by the manipulated object, for example increasing the shadow when the object's size is increased, or changing the shadow's location. The rendering module 250 may further determine the shadows cast on the manipulated object. The rendering module 250 may change the transparency or opacity level according to the location of at least a portion of the object after it is manipulated. The rendering module 250 may generate or draw at least a portion of the object to execute the manipulation, for example draw a facial expression of the object, determined according to the context of the video content. The rendering module 250 may further determine to display only a portion of the manipulated object, for example in case the object's visibility is partially blocked by an obstacle.
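The height-limiting example could reduce to a one-line clamp, sketched below; the 40-pixel limit is an arbitrary illustrative value, not one taken from the disclosure.

```typescript
// Sketch of the renderer limiting a manipulation that the manipulation
// module approved: the requested lift is clamped to a predefined height.
const MAX_LIFT_PX = 40; // illustrative limit

function clampLift(requestedLiftPx: number): number {
  return Math.min(requestedLiftPx, MAX_LIFT_PX);
}

console.log(clampLift(25)); // 25: within the limit, applied as-is
console.log(clampLift(90)); // 40: reduced to the predefined height
```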
- The rendering module 250 may be connected to a frame-based metadata (FBM) storage 240. The FBM storage 240 comprises data related to the video content itself, the camera angle provided in a specific frame of the video content, the median or average gray-scale value of a specific frame, the appearance of a specific character or entity in the video content, atmosphere, points of interest in the video content, events in a scene and the like. Indication of such data enables the rendering module 250 to display the manipulated object in a more precise manner, which is more attractive to the user, and improves the influence of a commercial object within video content.
- Once the manipulated object is displayed on the user's device, the I/O module 220 may detect the user's behavior actions concerning the object. Such behavior actions may be hovering with a pointing device, such as a mouse, over the specific location on the display device where the object is displayed. Another exemplary behavior action may be pressing a link connected to the object. The I/O module 220 may send the detected behavior actions to another entity that analyzes said actions and provides statistical analysis. The statistical analysis also refers to changing the size and location of the object, to interaction with specific portions of the object, to preferred manipulations in specific regions, ages, times of day and the like.
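A sketch of the interaction record such a module might forward for analysis. The field names and the reporting-by-logging stand-in are assumptions, as the disclosure does not specify a wire format.

```typescript
// Hypothetical interaction record the I/O module could forward for analysis.
interface InteractionEvent {
  objectId: string;
  action: "hover" | "click" | "resize" | "move";
  videoTimeSec: number;   // position in the video when it happened
  durationMs?: number;    // e.g. how long the user hovered
}

function report(e: InteractionEvent): void {
  // A real system would transmit this to an analytics server; here we log.
  console.log(JSON.stringify(e));
}

report({ objectId: "ad-1", action: "hover", videoTimeSec: 42.5, durationMs: 1800 });
```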
- The computerized module 200 and other elements disclosed in the subject matter detect, handle, and analyze manipulations and instructions using applications that preferably comprise software or hardware components. Such software components may be written in any programming language such as C, C#, C++, Java, VB, VB.Net, or the like. Such components may be developed under any development environment, such as Visual Studio.Net, Eclipse or the like. Communication between the elements disclosed above may be performed via the Internet, or via other communication media, such as a telephone network, satellite, physical or wireless channels, and other media desired by a person skilled in the art.
- The elements of the disclosed subject matter may be downloadable or installable on the user's device as an extension to a media player already installed on the user's device. As such, the elements comprise an interface to communicate with other portions of the media player already installed on the user's device. Alternatively, the elements may be downloaded as part of a new media player, not as an add-on to an existing media player.
-
FIGS. 3A-3D show objects being manipulated, in accordance with some exemplary embodiments of the subject matter. FIGS. 3A and 3B show a display device displaying an object being manipulated according to the user's input. FIG. 3A shows a display device 322 having a display region 324. Said display region 324 may be a region where an image is displayed, or a region used by a media player to provide video content. An object is displayed in the display region. The object of the exemplary embodiment comprises an ice cream cone 326 and ice cream 328. The object is inserted into an image or into video content provided to a user's device (such as 120 of FIG. 1).
- In the example disclosed in FIG. 3B, the user desires to interact with the object. FIG. 3B shows a display device 302 and a display region 304, generally equivalent to elements 322 and 324 of FIG. 3A. The display region 304 displays ice cream 308 and an ice-cream cone 306. The interaction disclosed in FIG. 3B relates to increasing the size of the ice cream (328 of FIG. 3A). In accordance with the example disclosed in FIG. 3B, the user points at the ice cream 308 using a pointing device (not shown), such as a mouse. A pointer 310 related to the pointing device (not shown) points at the ice cream 308. As a result, the size of the ice cream 308 increases, for example by 25 percent. The I/O device 220 may detect the user's pointing at the ice cream 308, which is part of the object inserted into the video content or image. The manipulation server 235 determines the manipulation performed on the ice cream 308 or on the entire object; for example, it determines to enlarge the ice cream 308 and not change its location, which was also possible according to the user's input.
- FIGS. 3C and 3D show a display device displaying an object manipulated according to the context of the video content or the content of the image into which the object is inserted, according to some exemplary embodiments of the disclosed subject matter. FIG. 3C shows a display device 342, a display region 344 and two objects displayed within the display region 344. In the disclosed example, the first object 346 is a person, and the second object 348 is a telephone. The first object 346 is part of the content provided by the content server (such as 150 of FIG. 1), while the second object 348 is added to the original content and can be manipulated.
- FIG. 3D shows the manipulation applied to the second object added to the original content. FIG. 3D discloses a display device 362, a display region 364, a first object 366 and a second object 368. The first object 366 and the second object 368 are generally equivalent to elements 346 and 348 of FIG. 3C. The second object 368 is manipulated according to the context of the video content displayed in the video region 364. For example, when a specific sound tone is provided at a specific frame or group of frames in the video content, the second object 368 is manipulated in a way that makes it seem as if the phone rings. Such manipulation increases the attractiveness of the second object 368 and enables interaction between the user and the video content. Further, such manipulation improves the visibility of the second object 368 to the user and, as a result, increases the value of the content provided along with the second object.
- FIG. 4 shows a flow diagram of a method of the disclosed subject matter, in accordance with some exemplary embodiments of the subject matter. In step 402, the video content is processed before being transmitted to the user's device. Such processing includes identifying events in which a manipulation may be applied to an inserted object, identifying frames in which a scene begins, identifying changes in the audio data and the like. Such preprocessed data is likely to be transmitted to the user's device in addition to the video content. In step 405, the user's input is received by the user's device. Such input may be provided using a mouse, keyboard, touch screen and the like. The detection in step 405 may be a result of a command or message from the user watching the content, from the source of the content, from a computer engine that generates such indications in a random manner, and the like. In case the detection's origin is user input, the computerized entity that detects the indication may send a notification to another module that the input from the user has been detected. In step 410, a computerized entity detects an indication to apply a manipulation to an object inserted into content displayed to a user. Such content may be video content, an animated image or any other content desired by a person skilled in the art. The content may also be text. It will be noted that the object may be seamlessly inserted into the content, such as by being in line with a perspective of the content, such as video content. Additionally, the object may be displayed as an overlay over the content, such as a ticker presented over video content. In case the indication is from the source of the content, it is likely that the indication is sent to an adaptive module in the media player that displays the content, to provide a specific manipulation at a given moment, or that a predefined event takes place at a specific frame or sequence of frames.
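The preprocessed data accompanying the video might be structured as below; the PreprocessedVideoData shape and all frame numbers are illustrative assumptions rather than a format given in the disclosure.

```typescript
// Sketch of preprocessed data that could accompany the video to the user's
// device: frames where scenes begin, where audio changes, and windows in
// which a manipulation may be applied to an inserted object.
interface PreprocessedVideoData {
  sceneStartFrames: number[];
  manipulationWindows: { fromFrame: number; toFrame: number }[];
  audioChangeFrames: number[];
}

const preprocessed: PreprocessedVideoData = {
  sceneStartFrames: [0, 450, 1210],
  manipulationWindows: [{ fromFrame: 480, toFrame: 900 }],
  audioChangeFrames: [455, 1215],
};

// The player can then test whether a manipulation is permitted right now.
const frame = 500;
const allowedNow = preprocessed.manipulationWindows
  .some((w) => frame >= w.fromFrame && frame <= w.toFrame);
console.log(allowedNow); // true
```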
- In step 415, the manipulation to be applied to the object added to the content is determined. The determination may be a function of the source of the indication, for example a user or a computerized source. The manipulation may be a function of the number of objects inserted into the content, or the number of objects visible to the user in different content units, for example inserting a first object into video content and a second object into an image, while each object is manipulated autonomously. A manipulation may change the object's parameters, such as size, location, texture, facial expression, level of speech, accent, outfit and the like. Further, the manipulation may change the display of the content; for example, the manipulation may pause video content. In some exemplary embodiments, the manipulation may replace the inserted object with a different object. Determining the manipulation is likely to be performed in the user's device.
- In step 420, a computerized entity determines the display of the manipulated object. Such determination takes into consideration the content displayed to the user, for example the location of other elements in the frame or the image. Determining the display of the object may require drawing or designing at least part of the manipulated object, for example in case the shape of the object is modified as a result of the manipulation. Determination of the display may also comprise determining other elements of the content into which the object is inserted, or pausing the video, in case the content provided to the user is a sequence of video frames. Determination of the display may comprise determining shadow within the content and/or over the object, transparency level, location and size of elements in the content, limits of the manipulation and the like. Such determination may be performed by the rendering module 250 of FIG. 2, by an extension to a media player, by an extension to a browser or to an instant messaging application and the like.
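Two of the display decisions named in this step, scaling the shadow with the object and fading a partially blocked object, could be sketched as follows; the proportional-shadow rule and the 0.5 opacity are assumed values for illustration.

```typescript
// Sketch of a display determination: the shadow tracks the object's size,
// and the object fades when partially hidden behind an obstacle.
interface DisplayState { objectScale: number; shadowScale: number; opacity: number; }

function determineDisplay(objectScale: number, occluded: boolean): DisplayState {
  return {
    objectScale,
    shadowScale: objectScale,        // shadow grows with the object
    opacity: occluded ? 0.5 : 1.0,   // fade when partially behind an obstacle
  };
}

console.log(determineDisplay(1.25, false)); // enlarged object, full opacity
console.log(determineDisplay(1.0, true));   // occluded object, semi-transparent
```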
- In step 430, the manipulated object is displayed. As noted above, the object may be injected or otherwise inserted into video content, an animated image, text, and the like. When more than one object is inserted into the content, the computerized module determines the object to which the manipulation is applied. Further, the system of the disclosed subject matter may comprise a Z-order module for determining which object to display in front of other objects.
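A Z-order module of the kind mentioned here can be reduced to a back-to-front sort, as in this sketch; the z field and object ids are illustrative names.

```typescript
// Sketch of a Z-order module: objects with a higher z value are drawn
// later and therefore appear in front of objects with lower z values.
interface OverlayObject { id: string; z: number; }

function drawOrder(objects: OverlayObject[]): string[] {
  // Sort back-to-front; the last drawn object ends up on top.
  return [...objects].sort((a, b) => a.z - b.z).map((o) => o.id);
}

console.log(drawOrder([{ id: "phone", z: 2 }, { id: "ticker", z: 1 }]));
// ["ticker", "phone"]: the phone is rendered in front
```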
- In step 440, the user interacts with the manipulated object. Such interaction may be by pressing a key, moving a pointing device, touching the screen, opening a link, clicking on the object, speaking into a microphone and the like. The computerized module 200 of FIG. 2 detects such interactions, especially using the I/O module 220. In step 445, the interactions with the objects are analyzed. Such analysis may be performed in the user's device, or by an adaptive server after being transmitted from the user's device. The analysis of interaction between the user and the manipulated object allows more than just analysis of links pressed by the user, for example the time during which the user interacts with the object, preferred manipulations, and the like.
- While the disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings without departing from the essential scope thereof. Therefore, it is intended that the disclosed subject matter not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but only by the claims that follow.
Claims (15)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/867,253 US20110001758A1 (en) | 2008-02-13 | 2009-02-12 | Apparatus and method for manipulating an object inserted to video content |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US6570308P | 2008-02-13 | 2008-02-13 | |
PCT/IL2009/000168 WO2009101624A2 (en) | 2008-02-13 | 2009-02-12 | Apparatus and method for manipulating an object inserted to video content |
US12/867,253 US20110001758A1 (en) | 2008-02-13 | 2009-02-12 | Apparatus and method for manipulating an object inserted to video content |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110001758A1 true US20110001758A1 (en) | 2011-01-06 |
Family
ID=40957344
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/867,075 Active US8745657B2 (en) | 2008-02-13 | 2009-02-12 | Inserting interactive objects into video content |
US12/867,253 Abandoned US20110001758A1 (en) | 2008-02-13 | 2009-02-12 | Apparatus and method for manipulating an object inserted to video content |
US14/287,247 Active 2029-03-12 US9723335B2 (en) | 2008-02-13 | 2014-05-27 | Serving objects to be inserted to videos and tracking usage statistics thereof |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/867,075 Active US8745657B2 (en) | 2008-02-13 | 2009-02-12 | Inserting interactive objects into video content |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/287,247 Active 2029-03-12 US9723335B2 (en) | 2008-02-13 | 2014-05-27 | Serving objects to be inserted to videos and tracking usage statistics thereof |
Country Status (2)
Country | Link |
---|---|
US (3) | US8745657B2 (en) |
WO (2) | WO2009101624A2 (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110004898A1 (en) * | 2009-07-02 | 2011-01-06 | Huntley Stafford Ritter | Attracting Viewer Attention to Advertisements Embedded in Media |
US20110078202A1 (en) * | 2008-05-28 | 2011-03-31 | Kyocera Corporation | Communication terminal, search server and communication system |
US20110106879A1 (en) * | 2009-10-30 | 2011-05-05 | Samsung Electronics Co., Ltd. | Apparatus and method for reproducing multimedia content |
US20110310227A1 (en) * | 2010-06-17 | 2011-12-22 | Qualcomm Incorporated | Mobile device based content mapping for augmented reality environment |
US20140114919A1 (en) * | 2012-10-19 | 2014-04-24 | United Video Properties, Inc. | Systems and methods for providing synchronized media content |
WO2014120312A1 (en) * | 2013-02-04 | 2014-08-07 | Google Inc. | Systems and methods of creating an animated content item |
US20140235123A1 (en) * | 2013-02-21 | 2014-08-21 | Yi-Jun Lin | Highly conducting and transparent film and process for producing same |
CN104918060A (en) * | 2015-05-29 | 2015-09-16 | 北京奇艺世纪科技有限公司 | Method and device for selecting position to insert point in video advertisement |
US20150262423A1 (en) * | 2014-03-11 | 2015-09-17 | Amazon Technologies, Inc. | Real-time exploration of video content |
US9332302B2 (en) | 2008-01-30 | 2016-05-03 | Cinsay, Inc. | Interactive product placement system and method therefor |
US20180167661A1 (en) * | 2014-03-11 | 2018-06-14 | Amazon Technologies, Inc. | Object discovery and exploration in video content |
US10055768B2 (en) | 2008-01-30 | 2018-08-21 | Cinsay, Inc. | Interactive product placement system and method therefor |
US10506003B1 (en) | 2014-08-08 | 2019-12-10 | Amazon Technologies, Inc. | Repository service for managing digital assets |
US10970843B1 (en) | 2015-06-24 | 2021-04-06 | Amazon Technologies, Inc. | Generating interactive content using a media universe database |
US11222479B2 (en) | 2014-03-11 | 2022-01-11 | Amazon Technologies, Inc. | Object customization and accessorization in video content |
US11227315B2 (en) | 2008-01-30 | 2022-01-18 | Aibuy, Inc. | Interactive product placement system and method therefor |
US20220414820A1 (en) * | 2019-12-20 | 2022-12-29 | Move Ai Ltd | Method of inserting an object into a sequence of images |
Families Citing this family (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2001294655A1 (en) | 2000-09-25 | 2002-04-08 | Richard Fuisz | System for providing access to product data |
US9495386B2 (en) | 2008-03-05 | 2016-11-15 | Ebay Inc. | Identification of items depicted in images |
CN102084391A (en) | 2008-03-05 | 2011-06-01 | 电子湾有限公司 | Method and apparatus for image recognition services |
US20090265212A1 (en) * | 2008-04-17 | 2009-10-22 | David Hyman | Advertising in a streaming media environment |
GB0809631D0 (en) * | 2008-05-28 | 2008-07-02 | Mirriad Ltd | Zonesense |
WO2010082199A1 (en) | 2009-01-14 | 2010-07-22 | Innovid Inc. | Video-associated objects |
US8369686B2 (en) * | 2009-09-30 | 2013-02-05 | Microsoft Corporation | Intelligent overlay for video advertising |
TW201115362A (en) | 2009-10-29 | 2011-05-01 | Ibm | System, method, and program for editing electronic document |
US9838744B2 (en) * | 2009-12-03 | 2017-12-05 | Armin Moehrle | Automated process for segmenting and classifying video objects and auctioning rights to interactive sharable video objects |
US9164577B2 (en) * | 2009-12-22 | 2015-10-20 | Ebay Inc. | Augmented reality system, method, and apparatus for displaying an item image in a contextual environment |
US9443147B2 (en) * | 2010-04-26 | 2016-09-13 | Microsoft Technology Licensing, Llc | Enriching online videos by content detection, searching, and information aggregation |
WO2011153358A2 (en) | 2010-06-02 | 2011-12-08 | Futurity Ventures LLC | Teleprompting system and method |
US8904277B2 (en) | 2010-08-31 | 2014-12-02 | Cbs Interactive Inc. | Platform for serving online content |
US10127606B2 (en) | 2010-10-13 | 2018-11-13 | Ebay Inc. | Augmented reality system and method for visualizing an item |
US9363448B2 (en) | 2011-06-02 | 2016-06-07 | Touchcast, Llc | System and method for providing and interacting with coordinated presentations |
US9449342B2 (en) | 2011-10-27 | 2016-09-20 | Ebay Inc. | System and method for visualization of items in an environment using augmented reality |
US9240059B2 (en) | 2011-12-29 | 2016-01-19 | Ebay Inc. | Personal augmented reality |
US9736520B2 (en) * | 2012-02-01 | 2017-08-15 | Futurewei Technologies, Inc. | System and method for organizing multimedia content |
US10846766B2 (en) | 2012-06-29 | 2020-11-24 | Ebay Inc. | Contextual menus based on image recognition |
US9336541B2 (en) | 2012-09-21 | 2016-05-10 | Paypal, Inc. | Augmented reality product instructions, tutorials and visualizations |
GB2508242B (en) | 2012-11-27 | 2016-08-03 | Mirriad Advertising Ltd | Producing video data |
US10356363B2 (en) | 2013-06-26 | 2019-07-16 | Touchcast LLC | System and method for interactive video conferencing |
US10757365B2 (en) | 2013-06-26 | 2020-08-25 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US9852764B2 (en) | 2013-06-26 | 2017-12-26 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US10075676B2 (en) | 2013-06-26 | 2018-09-11 | Touchcast LLC | Intelligent virtual assistant system and method |
US9661256B2 (en) | 2014-06-26 | 2017-05-23 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US10297284B2 (en) | 2013-06-26 | 2019-05-21 | Touchcast LLC | Audio/visual synching system and method |
US9666231B2 (en) | 2014-06-26 | 2017-05-30 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US10523899B2 (en) | 2013-06-26 | 2019-12-31 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US11488363B2 (en) | 2019-03-15 | 2022-11-01 | Touchcast, Inc. | Augmented reality conferencing system and method |
US11659138B1 (en) | 2013-06-26 | 2023-05-23 | Touchcast, Inc. | System and method for interactive video conferencing |
US11405587B1 (en) | 2013-06-26 | 2022-08-02 | Touchcast LLC | System and method for interactive video conferencing |
US9787945B2 (en) | 2013-06-26 | 2017-10-10 | Touchcast LLC | System and method for interactive video conferencing |
US10084849B1 (en) | 2013-07-10 | 2018-09-25 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US9179096B2 (en) * | 2013-10-11 | 2015-11-03 | Fuji Xerox Co., Ltd. | Systems and methods for real-time efficient navigation of video streams |
US9098369B1 (en) | 2013-11-13 | 2015-08-04 | Google Inc. | Application installation using in-video programming |
US10748206B1 (en) | 2014-02-21 | 2020-08-18 | Painted Dog, Inc. | Dynamic media-product searching platform apparatuses, methods and systems |
US10939175B2 (en) * | 2014-03-11 | 2021-03-02 | Amazon Technologies, Inc. | Generating new video content from pre-recorded video |
US10432986B2 (en) * | 2014-05-30 | 2019-10-01 | Disney Enterprises, Inc. | Recall and triggering system for control of on-air content at remote locations |
US10255251B2 (en) | 2014-06-26 | 2019-04-09 | Touchcast LLC | System and method for providing and interacting with coordinated presentations |
US9865305B2 (en) | 2015-08-21 | 2018-01-09 | Samsung Electronics Co., Ltd. | System and method for interactive 360-degree video creation |
US20170171275A1 (en) * | 2015-12-14 | 2017-06-15 | Jbf Interlude 2009 Ltd. | Object Embedding in Multimedia |
EP3398344A4 (en) * | 2015-12-29 | 2019-07-31 | Impressview Inc. | System and method for presenting video and associated documents and for tracking viewing thereof |
KR102483507B1 (en) | 2016-11-17 | 2022-12-30 | 페인티드 도그, 인크. | Machine-Based Object Recognition of Video Content |
EP3396963B1 (en) * | 2017-04-25 | 2021-04-07 | Accenture Global Solutions Limited | Dynamic media content rendering |
EP3396964B1 (en) * | 2017-04-25 | 2020-07-22 | Accenture Global Solutions Ltd | Dynamic content placement in a still image or a video |
CN111417974A (en) * | 2017-09-13 | 2020-07-14 | 源数码有限公司 | Rule-based assistance data |
EP3528196A1 (en) * | 2018-02-16 | 2019-08-21 | Accenture Global Solutions Limited | Dynamic content generation |
EP3672256A1 (en) * | 2018-12-20 | 2020-06-24 | Accenture Global Solutions Limited | Dynamic media placement in video feed |
US11910034B2 (en) * | 2018-12-21 | 2024-02-20 | Koninklijke Kpn N.V. | Network-based assistance for receiver processing of video data |
GR20200100010A (en) * | 2020-01-13 | 2021-08-13 | Ανδρεας Κωνσταντινου Γεωργιου | Method for inputting and/or changing objects in digital image files |
US11594258B2 (en) | 2021-07-19 | 2023-02-28 | Pes University | System for the automated, context sensitive, and non-intrusive insertion of consumer-adaptive content in video |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5264933A (en) * | 1991-07-19 | 1993-11-23 | Princeton Electronic Billboard, Inc. | Television displays having selected inserted indicia |
US6141060A (en) * | 1996-10-22 | 2000-10-31 | Fox Sports Productions, Inc. | Method and apparatus for adding a graphic indication of a first down to a live video of a football game |
US20020112249A1 (en) * | 1992-12-09 | 2002-08-15 | Hendricks John S. | Method and apparatus for targeting of interactive virtual objects |
US6496981B1 (en) * | 1997-09-19 | 2002-12-17 | Douglass A. Wistendahl | System for converting media content for interactive TV use |
US20040012717A1 (en) * | 2000-10-20 | 2004-01-22 | Wavexpress, Inc. | Broadcast browser including multi-media tool overlay and method of providing a converged multi-media display including user-enhanced data |
US20040021684A1 (en) * | 2002-07-23 | 2004-02-05 | Dominick B. Millner | Method and system for an interactive video system |
US20040075670A1 (en) * | 2000-07-31 | 2004-04-22 | Bezine Eric Camille Pierre | Method and system for receiving interactive dynamic overlays through a data stream and displaying it over a video content |
US6757906B1 (en) * | 1999-03-30 | 2004-06-29 | Tivo, Inc. | Television viewer interface system |
US6847778B1 (en) * | 1999-03-30 | 2005-01-25 | Tivo, Inc. | Multimedia visual progress indication system |
US20070073553A1 (en) * | 2004-05-20 | 2007-03-29 | Manyworlds, Inc. | Adaptive Commerce Systems and Methods |
US20070242066A1 (en) * | 2006-04-14 | 2007-10-18 | Patrick Levy Rosenthal | Virtual video camera device with three-dimensional tracking and virtual object insertion |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7207053B1 (en) * | 1992-12-09 | 2007-04-17 | Sedna Patent Services, Llc | Method and apparatus for locally targeting virtual objects within a terminal |
WO2001031497A1 (en) * | 1999-10-22 | 2001-05-03 | Activesky, Inc. | An object oriented video system |
US20020073167A1 (en) * | 1999-12-08 | 2002-06-13 | Powell Kyle E. | Internet content delivery acceleration system employing a hybrid content selection scheme |
US6850250B2 (en) * | 2000-08-29 | 2005-02-01 | Sony Corporation | Method and apparatus for a declarative representation of distortion correction for add-on graphics in broadcast video |
AU2002237748A1 (en) * | 2000-10-19 | 2002-05-21 | Loudeye Technologies, Inc. | System and method for selective insertion of content into streaming media |
US7203909B1 (en) * | 2002-04-04 | 2007-04-10 | Microsoft Corporation | System and methods for constructing personalized context-sensitive portal pages or views by analyzing patterns of users' information access activities |
US7530084B2 (en) * | 2002-05-28 | 2009-05-05 | Sony Corporation | Method and apparatus for synchronizing dynamic graphics |
US7221775B2 (en) * | 2002-11-12 | 2007-05-22 | Intellivid Corporation | Method and apparatus for computerized image background analysis |
US7979877B2 (en) * | 2003-12-23 | 2011-07-12 | Intellocity Usa Inc. | Advertising methods for advertising time slots and embedded objects |
CA2582649C (en) * | 2004-10-05 | 2015-05-19 | Vectormax Corporation | System and method for identifying and processing data within a data stream |
US7982738B2 (en) * | 2004-12-01 | 2011-07-19 | Microsoft Corporation | Interactive montages of sprites for indexing and summarizing video |
US20070118873A1 (en) * | 2005-11-09 | 2007-05-24 | Bbnt Solutions Llc | Methods and apparatus for merging media content |
US20090083147A1 (en) * | 2007-09-21 | 2009-03-26 | Toni Paila | Separation of advertising content and control |
US20090094555A1 (en) * | 2007-10-05 | 2009-04-09 | Nokia Corporation | Adaptive user interface elements on display devices |
-
2009
- 2009-02-12 WO PCT/IL2009/000168 patent/WO2009101624A2/en active Application Filing
- 2009-02-12 US US12/867,075 patent/US8745657B2/en active Active
- 2009-02-12 WO PCT/IL2009/000167 patent/WO2009101623A2/en active Application Filing
- 2009-02-12 US US12/867,253 patent/US20110001758A1/en not_active Abandoned
-
2014
- 2014-05-27 US US14/287,247 patent/US9723335B2/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5264933A (en) * | 1991-07-19 | 1993-11-23 | Princeton Electronic Billboard, Inc. | Television displays having selected inserted indicia |
US20020112249A1 (en) * | 1992-12-09 | 2002-08-15 | Hendricks John S. | Method and apparatus for targeting of interactive virtual objects |
US6141060A (en) * | 1996-10-22 | 2000-10-31 | Fox Sports Productions, Inc. | Method and apparatus for adding a graphic indication of a first down to a live video of a football game |
US6496981B1 (en) * | 1997-09-19 | 2002-12-17 | Douglass A. Wistendahl | System for converting media content for interactive TV use |
US6757906B1 (en) * | 1999-03-30 | 2004-06-29 | Tivo, Inc. | Television viewer interface system |
US6847778B1 (en) * | 1999-03-30 | 2005-01-25 | Tivo, Inc. | Multimedia visual progress indication system |
US20040075670A1 (en) * | 2000-07-31 | 2004-04-22 | Bezine Eric Camille Pierre | Method and system for receiving interactive dynamic overlays through a data stream and displaying it over a video content |
US20040012717A1 (en) * | 2000-10-20 | 2004-01-22 | Wavexpress, Inc. | Broadcast browser including multi-media tool overlay and method of providing a converged multi-media display including user-enhanced data |
US20040021684A1 (en) * | 2002-07-23 | 2004-02-05 | Dominick B. Millner | Method and system for an interactive video system |
US20070073553A1 (en) * | 2004-05-20 | 2007-03-29 | Manyworlds, Inc. | Adaptive Commerce Systems and Methods |
US20070242066A1 (en) * | 2006-04-14 | 2007-10-18 | Patrick Levy Rosenthal | Virtual video camera device with three-dimensional tracking and virtual object insertion |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9986305B2 (en) | 2008-01-30 | 2018-05-29 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9344754B2 (en) | 2008-01-30 | 2016-05-17 | Cinsay, Inc. | Interactive product placement system and method therefor |
US11227315B2 (en) | 2008-01-30 | 2022-01-18 | Aibuy, Inc. | Interactive product placement system and method therefor |
US10438249B2 (en) | 2008-01-30 | 2019-10-08 | Aibuy, Inc. | Interactive product system and method therefor |
US10425698B2 (en) | 2008-01-30 | 2019-09-24 | Aibuy, Inc. | Interactive product placement system and method therefor |
US10055768B2 (en) | 2008-01-30 | 2018-08-21 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9338499B2 (en) | 2008-01-30 | 2016-05-10 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9674584B2 (en) | 2008-01-30 | 2017-06-06 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9338500B2 (en) | 2008-01-30 | 2016-05-10 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9351032B2 (en) | 2008-01-30 | 2016-05-24 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9332302B2 (en) | 2008-01-30 | 2016-05-03 | Cinsay, Inc. | Interactive product placement system and method therefor |
US9185349B2 (en) * | 2008-05-28 | 2015-11-10 | Kyocera Corporation | Communication terminal, search server and communication system |
US20110078202A1 (en) * | 2008-05-28 | 2011-03-31 | Kyocera Corporation | Communication terminal, search server and communication system |
US20110004898A1 (en) * | 2009-07-02 | 2011-01-06 | Huntley Stafford Ritter | Attracting Viewer Attention to Advertisements Embedded in Media |
US9355682B2 (en) * | 2009-10-30 | 2016-05-31 | Samsung Electronics Co., Ltd | Apparatus and method for separately viewing multimedia content desired by a user |
US20110106879A1 (en) * | 2009-10-30 | 2011-05-05 | Samsung Electronics Co., Ltd. | Apparatus and method for reproducing multimedia content |
US10268760B2 (en) | 2009-10-30 | 2019-04-23 | Samsung Electronics Co., Ltd. | Apparatus and method for reproducing multimedia content successively in a broadcasting system based on one integrated metadata |
US20110310227A1 (en) * | 2010-06-17 | 2011-12-22 | Qualcomm Incorporated | Mobile device based content mapping for augmented reality environment |
US20140114919A1 (en) * | 2012-10-19 | 2014-04-24 | United Video Properties, Inc. | Systems and methods for providing synchronized media content |
WO2014120312A1 (en) * | 2013-02-04 | 2014-08-07 | Google Inc. | Systems and methods of creating an animated content item |
CN105027110A (en) * | 2013-02-04 | 2015-11-04 | 谷歌公司 | Systems and methods of creating an animated content item |
US20140235123A1 (en) * | 2013-02-21 | 2014-08-21 | Yi-Jun Lin | Highly conducting and transparent film and process for producing same |
US20180167661A1 (en) * | 2014-03-11 | 2018-06-14 | Amazon Technologies, Inc. | Object discovery and exploration in video content |
US9892556B2 (en) * | 2014-03-11 | 2018-02-13 | Amazon Technologies, Inc. | Real-time exploration of video content |
US11222479B2 (en) | 2014-03-11 | 2022-01-11 | Amazon Technologies, Inc. | Object customization and accessorization in video content |
US20150262423A1 (en) * | 2014-03-11 | 2015-09-17 | Amazon Technologies, Inc. | Real-time exploration of video content |
US11363329B2 (en) * | 2014-03-11 | 2022-06-14 | Amazon Technologies, Inc. | Object discovery and exploration in video content |
US10506003B1 (en) | 2014-08-08 | 2019-12-10 | Amazon Technologies, Inc. | Repository service for managing digital assets |
US10564820B1 (en) * | 2014-08-08 | 2020-02-18 | Amazon Technologies, Inc. | Active content in digital media within a media universe |
US10719192B1 (en) | 2014-08-08 | 2020-07-21 | Amazon Technologies, Inc. | Client-generated content within a media universe |
CN104918060A (en) * | 2015-05-29 | 2015-09-16 | 北京奇艺世纪科技有限公司 | Method and device for selecting position to insert point in video advertisement |
US10970843B1 (en) | 2015-06-24 | 2021-04-06 | Amazon Technologies, Inc. | Generating interactive content using a media universe database |
US20220414820A1 (en) * | 2019-12-20 | 2022-12-29 | Move Ai Ltd | Method of inserting an object into a sequence of images |
Also Published As
Publication number | Publication date |
---|---|
WO2009101624A3 (en) | 2010-03-11 |
US20110016487A1 (en) | 2011-01-20 |
US20140282724A1 (en) | 2014-09-18 |
US9723335B2 (en) | 2017-08-01 |
WO2009101623A2 (en) | 2009-08-20 |
WO2009101624A2 (en) | 2009-08-20 |
US8745657B2 (en) | 2014-06-03 |
WO2009101623A3 (en) | 2010-03-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20110001758A1 (en) | Apparatus and method for manipulating an object inserted to video content | |
US9532116B2 (en) | Interactive video advertisement in a mobile browser | |
CN111435998B (en) | Video playing control method, device, equipment and storage medium | |
CN108924661B (en) | Data interaction method, device, terminal and storage medium based on live broadcast room | |
US9665965B2 (en) | Video-associated objects | |
CN112584224B (en) | Information display and processing method, device, equipment and medium | |
CN106204168A (en) | Commodity barrage display system, unit and method | |
US9015179B2 (en) | Media content tags | |
CN112764871B (en) | Data processing method, data processing device, computer equipment and readable storage medium | |
US10845892B2 (en) | System for monitoring a video | |
US20140344856A1 (en) | System and method for synchronized interactive layers for media broadcast | |
WO2017168215A1 (en) | A system and methods for dynamically generating animated gif files for delivery via the network | |
US11631114B2 (en) | Augmenting web-based video media with online auction functionality | |
US20220224978A1 (en) | Video content display method, client, and storage medium | |
US11617017B2 (en) | Systems and methods of presenting video overlays | |
CN115039174A (en) | System and method for interactive live video streaming | |
CN113448474B (en) | Method and device for displaying interactive interface of live broadcasting room, medium and electronic equipment | |
US20230007335A1 (en) | Systems and methods of presenting video overlays | |
CN111970563B (en) | Video processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INNOVID INC., NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHALOZIN, TAL;NETTER, IZHAK ZVI;REEL/FRAME:024826/0614 Effective date: 20100811 |
|
AS | Assignment |
Owner name: SILICON VALLEY BANK, MASSACHUSETTS Free format text: SECURITY AGREEMENT;ASSIGNOR:INNOVID INC.;REEL/FRAME:029201/0580 Effective date: 20121025 |
|
AS | Assignment |
Owner name: INNOVID INC., NEW YORK Free format text: RELEASE OF SECURITY INTEREST;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:031485/0027 Effective date: 20131022 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |