US20110202834A1 - Visual motion feedback for user interface - Google Patents
Visual motion feedback for user interface
- Publication number
- US20110202834A1 (application US 12/773,803)
- Authority
- US
- United States
- Prior art keywords
- gesture
- motion
- boundary
- inertia
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/041—Indexing scheme relating to G06F3/041 - G06F3/045
- G06F2203/04104—Multi-touch detection in digitiser, i.e. details about the simultaneous detection of a plurality of touching locations, e.g. multiple fingers or pen and finger
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04808—Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
Definitions
- the design of an effective user interface poses many challenges.
- One challenge is how to provide a user with an optimal amount of visual information or functionality, given the space limitations of a display and the needs of a particular user. This challenge can be especially acute for devices with small displays, such as smartphones or other mobile computing devices. This is because there is often more information available to a user performing a particular activity (e.g., browsing for audio or video files in a library of files) than can fit on the display. A user can easily become lost unless careful attention is paid to how information is presented on the limited amount of available display space.
- Visual cues are useful for indicating, for example, a user's location when browsing a list or other collection of data, since it is often not possible to show an entire collection (e.g., a list of contacts stored in a smartphone) on a small display.
- Another challenge is how to provide a high level of functionality while maintaining a satisfying and consistent user experience.
- as devices have become more complex, and as consumers have become more demanding, it has become increasingly difficult to design user interfaces that are convenient and pleasing for users, without sacrificing reliability, flexibility, functionality or performance.
- boundary effects are presented to provide visual cues to a user to indicate that a boundary in a movable user interface element (e.g., the end of a scrollable list) has been reached.
- parallax effects are presented in which multiple parallel or substantially parallel layers in a multi-layer user interface move at different rates, in response to user input.
- simulated inertia motion of UI elements is used to provide a more natural feel for touch input.
- simulated inertia motion can be used in combination with parallax effects, boundary effects, or other types of visual feedback.
- a user interface (UI) system receives gesture information corresponding to a gesture on a touch input device.
- the UI system calculates simulated inertia motion for a movable user interface element based at least in part on the gesture information, and potentially on other inertia information such as a friction coefficient or a parking speed coefficient. Based at least in part on the gesture information and on the simulated inertia motion, the UI system calculates a post-gesture position of the movable user interface element.
- the UI system determines that the post-gesture position exceeds a gesture boundary of the movable user interface element, and calculates a distortion effect (e.g., a squeeze, compression or squish effect) in the movable user interface element to indicate that the gesture boundary has been exceeded.
- Calculating the distortion effect can include, for example, determining an extent by which the gesture boundary has been exceeded, determining a compressible area of the movable user interface element, determining a scale factor for the distortion effect based at least in part on the compressible area and the extent by which the gesture boundary has been exceeded, and scaling the compressible area according to the scale factor.
- the distortion effect can be calculated based on a distortion point (which, for compression, can be referred to as a compression point or squeeze point), which can indicate the part of the UI element to be distorted.
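- As a hedged illustration of the scale-factor calculation described above, the following TypeScript sketch derives a squeeze scale from the compressible area and the extent by which the gesture boundary has been exceeded; the names (computeSqueezeScale, minScale, etc.) and the clamping rule are assumptions for illustration, not terms from the patent.

```typescript
// Hypothetical sketch of the squeeze/compression calculation; names are illustrative.
interface SqueezeEffect {
  overshootPx: number; // extent by which the gesture boundary has been exceeded
  scaleFactor: number; // factor applied to the compressible area (0 < scale <= 1)
}

function computeSqueezeScale(
  postGesturePos: number,     // post-gesture position along the axis of motion
  boundaryPos: number,        // gesture boundary along the same axis
  compressibleAreaPx: number, // size of the compressible area, in pixels
  minScale = 0.5              // assumed floor so content never collapses entirely
): SqueezeEffect {
  // Extent by which the boundary has been exceeded (zero if within bounds).
  const overshootPx = Math.max(0, postGesturePos - boundaryPos);
  // Shrink the compressible area by the overshoot, clamped to the floor.
  const scaleFactor = Math.max(
    minScale,
    (compressibleAreaPx - overshootPx) / compressibleAreaPx
  );
  return { overshootPx, scaleFactor };
}
```

The compressible area would then be scaled by scaleFactor in the dimension that corresponds to the movement, anchored at the distortion (squeeze) point, e.g., the top of the list.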
- user input indicates movement in a graphical user interface element having plural movable layers.
- a UI system calculates a first motion having a first movement rate in a first layer of the plural movable layers, and calculates a parallax motion in a second layer of the plural movable layers.
- the parallax motion is based at least in part on the first motion (and potentially simulated inertia motion), and the parallax motion comprises a movement of the second layer at a second movement rate that differs from the first movement rate.
- the parallax motion can be calculated based on, for example, a parallax constant for the second layer, or an amount of displayable data in the second layer.
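- As a rough sketch of the parallax calculation described above, the snippet below derives the second layer's movement either from a per-layer parallax constant or from the relative amount of displayable data in the two layers; parallaxConstant, layerWidthPx and contentWidthPx are assumed names.

```typescript
// Illustrative parallax-rate calculation; parameter names are assumptions,
// not terms defined by the patent.
function parallaxDelta(
  contentDeltaPx: number, // movement applied to the first (content) layer
  opts: { parallaxConstant?: number; layerWidthPx?: number; contentWidthPx?: number }
): number {
  if (opts.parallaxConstant !== undefined) {
    // Fixed ratio, e.g., 0.5 moves the second layer at half the content layer's rate.
    return contentDeltaPx * opts.parallaxConstant;
  }
  if (opts.layerWidthPx !== undefined && opts.contentWidthPx !== undefined) {
    // Ratio derived from the amount of displayable data: a shorter layer moves less.
    return contentDeltaPx * (opts.layerWidthPx / opts.contentWidthPx);
  }
  return contentDeltaPx; // fall back to moving in lockstep with the content layer
}
```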
- a UI system receives gesture information corresponding to a gesture on a touch input device, the gesture information indicating a movement of a user interface element having a movement boundary. Based at least in part on the gesture information, the UI system computes a new position of the user interface element. Based at least in part on the new position, the UI system determines that the movement boundary has been exceeded. The UI system determines an extent by which the movement boundary has been exceeded, determines a compressible area of the user interface element, determines a scale factor for a distortion effect based at least in part on the compressible area and the extent by which the movement boundary has been exceeded, and presents a distortion effect in the user interface element.
- the distortion effect comprises a visual compression of content in the compressible area (e.g., text, images, graphics, video or other displayable content) according to the scale factor.
- some parts of the compressible area may not be visible on a display, so the distortion can be virtual (e.g., in areas that are not visible on a display) or the distortion can be actually displayed, or some parts of the distorted content can be displayed while other parts of the distorted content are not displayed.
- the visual compression is in a dimension that corresponds to the movement of the user interface element. For example, a vertical movement in a UI element that exceeds a movement boundary can cause content in the UI element to be vertically compressed or squeezed.
- FIGS. 1A-1C and 2 are flow charts showing example techniques for presenting motion feedback in user interface elements, according to one or more described embodiments.
- FIG. 3 is a diagram showing a boundary effect, according to one or more described embodiments.
- FIGS. 4A-4C are diagrams showing parallax effects, according to one or more described embodiments.
- FIGS. 5 and 6A-6E are diagrams showing parallax effects and boundary effects in a user interface having parallel layers, according to one or more described embodiments.
- FIGS. 7A, 7B, 8A and 8B are diagrams showing gesture boundary areas which can be used to determine whether to present boundary effects, according to one or more described embodiments.
- FIG. 9 is a diagram showing example pinch and stretch gestures, according to one or more described embodiments.
- FIG. 10 is a graph showing changes in position over time of a UI element that exhibits a boundary feedback effect, according to one or more described embodiments.
- FIG. 11 is a system diagram showing a UI system in which described embodiments can be implemented.
- FIG. 12 illustrates a generalized example of a suitable computing environment in which several of the described embodiments may be implemented.
- FIG. 13 illustrates a generalized example of a suitable implementation environment in which one or more described embodiments may be implemented.
- FIG. 14 illustrates a generalized example of a mobile computing device in which one or more described embodiments may be implemented.
- boundary effects are presented to provide visual cues to a user to indicate that a boundary in a movable user interface element (e.g., the end of a scrollable list) has been reached.
- parallax effects are presented in which multiple parallel or substantially parallel layers in a multi-layer user interface move at different rates, in response to user input.
- simulated inertia motion of UI elements is used to provide a more natural feel for touch input.
- a UI system that accepts touch input includes detailed motion rules (e.g., rules for interpreting different kinds of touch input, rules for presenting inertia motion in UI elements in response to touch input, rules for determining boundaries in UI elements, etc.).
- the motion rules can be combined with various combinations of optional motion features such as parallax effects, boundary effects, and other visual feedback.
- the visual feedback that is presented according to motion rules and optional motion features in a UI element can depend on many factors, such as the type of the UI element and the content of the UI element.
- movements in elements are based at least in part on user input (e.g., gestures on a touchscreen) and an inertia model.
- For example, a movement in a UI element can be extended beyond the actual size of a gesture on a touchscreen by applying inertia to the movement.
- Applying inertia to a movement in a UI element typically involves performing one or more calculations using gesture information (e.g., a gesture start position, a gesture end position, gesture velocity and/or other information) and one or more inertia motion values (e.g., friction coefficients) to determine a post-gesture state (e.g., a new position) for the UI element.
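- The patent does not give formulas for the inertia calculation, but a minimal model consistent with the description might decay the end-of-gesture velocity with a friction coefficient until it drops below a parking speed; the sketch below uses assumed names, assumed default values, and an assumed exponential-decay rule.

```typescript
// Assumed exponential-decay inertia model; frictionCoefficient and parkingSpeed
// are illustrative parameter names with illustrative semantics.
function simulateInertia(
  startPos: number,            // position at the end of the gesture, in pixels
  releaseVelocity: number,     // velocity at the end of the gesture, in px/ms
  frictionCoefficient = 0.005, // per-millisecond decay rate (assumed value)
  parkingSpeed = 0.01          // speed (px/ms) below which motion is treated as stopped
): { finalPos: number; durationMs: number } {
  let pos = startPos;
  let velocity = releaseVelocity;
  let elapsedMs = 0;
  const stepMs = 16; // roughly one frame at 60 Hz
  while (Math.abs(velocity) > parkingSpeed) {
    pos += velocity * stepMs;
    velocity *= Math.exp(-frictionCoefficient * stepMs); // friction decays velocity
    elapsedMs += stepMs;
  }
  return { finalPos: pos, durationMs: elapsedMs };
}
```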
- Simulated inertia motion can be used in combination with other effects (e.g., parallax effects, boundary effects, etc.) to provide feedback to a user.
- movements in UI elements can be rendered for display (e.g., depicting calculated distortion, parallax, or other effects, if any).
- Movement in UI elements typically depends to some extent on user interaction. For example, a user that wishes to navigate from one part of a UI element to another (e.g., from the beginning of a scrollable list to the end of the list) provides user input to indicate a desired movement. The user input can then cause movement in the UI element and potentially other elements in the user interface.
- a user causes movement in a display area of a device by interacting with a touchscreen. The interaction can include, for example, contacting the touchscreen with a fingertip, stylus or other object and moving it (e.g., with a flicking or sweeping motion) across the surface of the touchscreen to cause movement in a desired direction.
- a user can interact with a user interface in some other way, such as by pressing buttons (e.g., directional buttons) on a keypad or keyboard, moving a trackball, pointing and clicking with a mouse, making a voice command, etc.
- a user interface system can include a default setting that is used to calculate the amount of motion (e.g., in terms of pixels) as a function of the size or rate of a user movement.
- a user can adjust a touchscreen sensitivity control, such that the same motion of a fingertip or stylus on a touchscreen will produce smaller or larger movements, depending on the setting of the control.
- Gestures can be made in various directions to cause movement in UI elements. For example, upward and downward gestures can cause upward or downward movements, respectively, while rightward and leftward gestures can cause rightward and leftward movements, respectively.
- Upward/downward motion can even be combined with left/right motion for diagonal movements.
- Other kinds of motion such as non-linear motion (e.g., curves) or bi-directional motion (e.g., pinch or stretch motions made with multiple contact points on a touchscreen) also can be used to cause movement in UI elements.
- FIG. 1A is a flow chart showing a general technique 100 for providing motion feedback in a UI.
- a device receives user input indicating motion in a UI element.
- a UI system on a mobile device receives gesture information corresponding to a gesture on a touchscreen on the mobile device.
- the device determines whether inertia will be applied to the motion indicated by the user input.
- a UI system determines based on gesture information (e.g., gesture start position, gesture end position, gesture direction, gesture velocity) whether to apply inertia to the motion in the UI element.
- the device determines whether visual effects (e.g., boundary effects, parallax effects, etc.) will be applied to the motion indicated by the user input. For example, the device determines whether to apply a distortion effect (e.g., a compression or squeeze effect) to indicate that a boundary in the UI element (e.g., a boundary at the end of a scrollable list) has been reached. As another example, the device determines whether to apply a parallax effect (e.g., by moving parallel layers in a multi-layer UI element at different rates). The applied effects also can be based on inertia, where inertia is applied to the motion indicated by the user input.
- if a UI system applies inertia to a movement and calculates, based on the inertia, a new position for a UI element that is outside a boundary for the UI element, the UI system can apply a boundary effect to provide a visual indicator that the boundary has been reached.
- the motion in the UI element is rendered for display.
- FIG. 1B is a flow chart showing a technique 110 for providing boundary effects in combination with inertia motion.
- a UI system receives gesture information corresponding to a gesture. For example, the UI system receives gesture coordinates and velocity information for the gesture.
- the UI system calculates inertia motion based on the gesture information. For example, the UI system determines that inertia motion is applied based on the velocity information, and calculates a duration of inertia motion for the gesture.
- the UI system calculates a post-gesture position based on the gesture information and the inertia motion.
- the UI system calculates the post-gesture position based on the gesture coordinates and the duration of the inertia motion.
- the UI system determines that a boundary for the UI element has been exceeded. For example, the UI system compares one or more coordinates (e.g., vertical or horizontal coordinates) of the post-gesture position with the boundary and determines an extent by which the post-gesture position exceeds the boundary.
- the UI system calculates a distortion effect to indicate that the boundary has been exceeded. For example, the UI system calculates a squeeze or compression effect in the content of the UI element based on the extent to which the post-gesture position exceeds the boundary.
- FIG. 1C is a flow chart showing a technique 120 for providing parallax effects in combination with inertia motion.
- a UI system receives user input indicating motion in a UI element having plural layers. For example, the UI system receives gesture coordinates and velocity information for a gesture on a touchscreen, where the gesture is directed to a content layer in a multi-layer UI.
- the UI system calculates motion in a first layer based on inertia information and the user input. For example, the UI system determines that inertia motion should be applied to movement in the content layer based on the velocity information, and calculates a duration of inertia motion for the movement.
- the UI system calculates a parallax motion in a second layer based on the first motion in the first layer. For example, the UI system calculates the parallax motion in a layer above the content layer based on the motion in the content layer, with the parallax motion having a different movement rate than the motion in the content layer.
- the parallax motion also can include inertia motion, or inertia motion can be omitted in the parallax motion.
- any combination of the inertia, boundary, parallax, distortion, and other effects described herein can be applied.
- processing stages shown in example techniques 100 , 110 , 120 can be rearranged, added, omitted, split into multiple stages, combined with other stages, and/or replaced with like stages.
- FIG. 2 is a flow chart showing a detailed example technique 200 for providing visual feedback in a UI in response to a user gesture.
- a UI system on a device receives touch input information in a touch input stream.
- the touch input stream comprises data corresponding to a gesture on a touchscreen of a mobile device.
- Data received from the touch input stream can include, for example, gesture information such as a gesture start position, a gesture end position, and timestamps for the gesture.
- the touch input stream is typically received from a device operating system, which converts raw data received from a touch input device (e.g., a touchscreen) into gesture information.
- data received from the touch input stream can include other information, or gesture information can be received from some other source.
- filtering is applied to the touch input stream.
- one or more algorithms are applied to the touch input stream coming from the OS to filter out or correct anomalous data.
- the filtering stage can correct misaligned touch data caused by jittering (e.g., values that are not aligned with previous inputs) or filter out spurious touch contact points (e.g., incorrect interpretation of a single touch point as multiple touch points that are close together), etc.
- the filtering stage can convert any multi-touch input into a single-touch input.
- touch input filtering can be performed during generation of the touch input stream (e.g., at the device OS).
- touch input filtering can be performed during a coordinate space transform stage (e.g., coordinate space transform 220 ).
- touch input filtering can be omitted.
- the UI system applies a coordinate space transform to data in the touch input stream corresponding to the gesture.
- a coordinate space transform is applied to the data from the touch input stream to account for possible rotations of the device, scale changes, influence from other animations, etc., so that the original input stream is properly interpreted. For example, if a UI element is rotated 90 degrees such that vertical movement in the UI element becomes horizontal movement (or vice versa), a vertical gesture can be transformed to a horizontal gesture (or vice versa) to account for the rotation of the device. If no adjustments are necessary, the coordinate space transform can leave gesture information unchanged. Alternatively, the coordinate space transform stage can be omitted.
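- A minimal sketch of the 90-degree rotation case mentioned above appears below; a full implementation would apply whatever transform (rotation, scale, animation offsets) the UI framework reports, and the function name is hypothetical.

```typescript
// Illustrative coordinate space transform for quarter-turn device rotations.
// Scale changes and animation influence, also mentioned above, are omitted for brevity.
function transformGestureDelta(
  dx: number,
  dy: number,
  rotationDeg: 0 | 90 | 180 | 270
): { dx: number; dy: number } {
  switch (rotationDeg) {
    case 90:
      return { dx: dy, dy: -dx }; // vertical input maps to horizontal motion
    case 180:
      return { dx: -dx, dy: -dy };
    case 270:
      return { dx: -dy, dy: dx };
    default:
      return { dx, dy }; // no rotation: leave the gesture information unchanged
  }
}
```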
- the UI system calculates the velocity at the end of the gesture. For example, the velocity is calculated by determining a first position near the end of the gesture and an end position of the gesture, and dividing by the time elapsed during the movement from the first position near the end of the gesture to the end position.
- the first position is determined by finding the gesture position at approximately 100 ms prior to the end of the gesture. Measuring velocity near the end of the gesture can help to provide a more accurate motion resulting from the gesture than measuring velocity over the entire course of the gesture. For example, if a gesture starts slowly and ends with a higher velocity, measuring the velocity at the end of the gesture can help to more accurately reflect the user's intended gesture (e.g., a strong flick).
- the velocity is calculated by determining the distance (e.g., in pixel units) between the start position for the gesture and the end position of the gesture, and dividing by the time elapsed during the movement from the start position to the end position.
- the time elapsed can be calculated, for example, by computing the difference between a timestamp associated with the start position and a timestamp associated with the end position.
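- The end-of-gesture velocity estimate described above could be sketched as follows; the GesturePoint shape and the 100 ms window are assumptions drawn from the surrounding text.

```typescript
// Sketch of the end-of-gesture velocity estimate: displacement over roughly the
// last 100 ms of the gesture divided by the elapsed time.
interface GesturePoint {
  x: number;
  y: number;
  timestampMs: number;
}

function endOfGestureVelocity(
  points: GesturePoint[], // samples from the (filtered, transformed) touch input stream
  windowMs = 100
): { vx: number; vy: number } {
  const end = points[points.length - 1];
  // Use the latest sample at least windowMs before the end of the gesture;
  // for very short gestures, fall back to the first sample.
  let start = points[0];
  for (let i = points.length - 1; i >= 0; i--) {
    if (end.timestampMs - points[i].timestampMs >= windowMs) {
      start = points[i];
      break;
    }
  }
  const dt = Math.max(1, end.timestampMs - start.timestampMs); // avoid divide-by-zero
  return { vx: (end.x - start.x) / dt, vy: (end.y - start.y) / dt };
}
```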
- the UI system determines whether the gesture is an inertia gesture.
- an inertia gesture refers to a gesture, such as a flick gesture, capable of causing movement in one or more user interface elements to which inertia can be applied.
- the UI system can distinguish between a non-inertia gesture and an inertia gesture by determining how quickly the user's finger, stylus, etc., was moving when it broke contact with the touchscreen, and whether the velocity exceeds a threshold. If the gesture ends with a velocity above the threshold, the gesture can be interpreted as an inertia gesture.
- a gesture that starts with panning motion at a velocity below the threshold and ends with a velocity above the threshold can be interpreted as ending with a flick that causes movement to which inertia can be applied. If the gesture ends with a velocity below the threshold, the gesture can be interpreted as a non-inertia gesture. Exemplary techniques and tools used in some implementations for gesture interpretation are described in detail below.
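- A flick-versus-pan classification along these lines might simply compare the release speed against a threshold, as in the sketch below; the threshold value is purely an assumed tuning parameter.

```typescript
// Illustrative inertia-gesture test; the threshold is an assumed tuning value.
const FLICK_SPEED_THRESHOLD = 0.5; // px/ms, hypothetical

function isInertiaGesture(vx: number, vy: number): boolean {
  // Gestures released faster than the threshold are treated as flicks,
  // i.e., candidates for simulated inertia motion.
  return Math.hypot(vx, vy) > FLICK_SPEED_THRESHOLD;
}
```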
- the UI system determines whether inertia will be applied to the motion indicated by the gesture. For example, the UI system determines based on gesture information (e.g., end-of-gesture velocity) and/or other information (e.g., user preferences) whether to apply inertia to the motion in the UI element.
- a gesture such as a flick may still not have inertia applied to its resulting movements, such as when a flick gesture is received for a UI element that does not support inertia movements, or for a UI element for which inertia movement has been deactivated (e.g., according to user preference).
- if inertia is not to be applied, the UI system computes a new position for the UI element based on gesture information (e.g., end-of-gesture position coordinates). If inertia is to be applied, at 252 the UI system computes a new position based on the gesture information (e.g., end-of-gesture position coordinates) and simulated inertia.
- the simulated inertia involves treating a UI element, or part of a UI element, as a physical object of non-zero mass that moves according to an approximation of Newtonian physics.
- the approximation can include, for example, a friction coefficient and/or other parameters that control how the movement is calculated and/or rendered.
- the UI system determines at 260 whether boundary feedback will be presented. Determining whether boundary feedback will be presented involves determining whether the new position is within boundaries (if any) of the UI element. For example, in a scrollable list, the UI system can determine whether the new position is calculated to be outside the boundaries of the scrollable list (e.g., below the end of a vertically scrollable list). Some UI elements may not have boundaries that can be exceeded by any permitted motion. For example, a UI element may take the form of a wrappable list, which may have a default entry position but no beginning or end.
- if the wrappable list is axis-locked (e.g., if movement is only permitted along a vertical axis for a vertically scrolling list), the list may have no boundaries that can be exceeded by any permitted motion.
- the determination of whether the new position is within boundaries can be skipped. Axis locking is described in more detail below.
- the UI system applies a boundary effect to the UI element.
- For example, the UI system can apply a boundary effect such as a “squish” or compression of text, images or other visual information in the UI element, to provide a visual cue that a boundary of the UI element has been reached. Boundary effects are described in more detail below.
- the UI system determines at 270 whether parallax feedback will be presented. Determining whether parallax feedback will be presented involves determining whether the UI element has multiple parallel layers or substantially parallel layers that can be moved at different rates based on the same gesture. If parallax feedback is to be presented, at 272 the UI system applies a parallax effect to the UI element. In general, a parallax effect involves movement of multiple parallel layers, or substantially parallel layers, at different rates. Example parallax effects are described in more detail below.
- processing stages in example technique 200 indicate example flows of information in a UI system. Depending on implementation and the type of processing desired, processing stages can be rearranged, added, omitted, split into multiple stages, combined with other stages, and/or replaced with like stages.
- Although example technique 200 shows stages of receiving data from a touch input stream, applying touch input filtering, applying a coordinate space transform, calculating a velocity at the end of a gesture, and determining whether the gesture is an inertia gesture, gesture information (e.g., gesture velocity, position, whether the gesture is a candidate for simulated inertia, etc.) can be obtained in other ways.
- For example, a module that determines whether to apply inertia motion and determines whether to apply boundary feedback or parallax effects can obtain gesture data from another source, such as another module that accepts touch input and makes calculations to obtain gesture information (e.g., gesture velocity, end-of-gesture position).
- Although example technique 200 shows a determination of whether to present boundary feedback occurring before a determination of whether to present parallax feedback, these determinations can be performed in other ways.
- determinations of whether to present boundary feedback and/or parallax feedback can occur in parallel, or the determination of whether to present a parallax effect can occur before the determination of whether to present a boundary effect.
- Such arrangements can be useful, for example, where a gesture may cause movements in multiple parallel layers of a UI element prior to reaching a boundary of the element.
- a UI system also can determine (e.g., based on characteristics of a current UI element) whether boundary effects and/or parallax effects are not available (e.g., for UI elements that do not have multiple layers or boundaries), and skip processing stages that are not relevant.
- Boundary feedback can be used to provide visual cues to a user to indicate that a boundary (e.g., a boundary at the end, beginning, or other location) in a UI element (e.g., a data collection such as a list) has been reached.
- a UI system presents a boundary effect in a UI element (or a portion of a UI element) by causing the UI element to be displayed in a visually distorted state, such as a squeezed or compressed state (i.e., a state in which text, images or other content is shown to be smaller than normal in one or more dimensions), to indicate that a boundary of the UI element has been reached.
- Described techniques and tools for presenting boundary feedback can be applied to any UI element with one or more boundaries that can be manipulated by moving the element.
- described techniques and tools can be used in an email viewer, such that text in a scrollable email message is distorted (e.g., squeezed or compressed) to indicate that the end of the email message has been reached.
- Boundary effects can be presented in different ways.
- a boundary effect can be held in place for different lengths of time depending on user input and/or design choice.
- a boundary effect can end, for example, by returning the UI element to a normal (e.g., undistorted) state when a user lifts a finger, stylus or other object to end an interaction with a touchscreen after reaching a boundary, or when an inertia motion has completed.
- distortion effects other than a squish, squeeze or compression can be used.
- One alternative distortion effect is a visual stretch.
- a stretch effect can be used, for example, in combination with a snap-back animation to indicate that boundary has been reached.
- Boundary effects can be presented even when it is possible to continue a movement beyond a boundary. For example, if a user scrolls to the end of a vertically-oriented list, causing a distortion of text or images at the end of the list, further motion can cause the list to wrap past the boundary and back to the beginning of the list.
- the UI also can show an element (or part of an element) at the top of the list to indicate that further movement can allow the user to wrap back to the beginning of the list.
- FIG. 3 is a diagram showing a graphical user interface (GUI) presented by a UI system that uses a distortion effect to indicate that a boundary of a UI element has been reached.
- a user 302 (represented by the hand icon) interacts with a list comprising list elements (“Contact1,” “Contact2,” etc.).
- distortion effects depend at least in part on the location of a squeeze point 396 .
- Some list elements with distortion effects are shown as being outside display area 300 .
- FIG. 3 shows example states 390 - 394 .
- user 302 interacts with a touchscreen by making an upward motion, indicated by an initial gesture position 350 and an end-of-gesture touch position 352 .
- the interaction can include, for example, contacting the touchscreen with a fingertip, stylus or other object and moving it (e.g., with a flicking or sweeping motion) along the surface of the touchscreen.
- Although FIG. 3 shows user 302 interacting with the touchscreen at particular locations in the display area 300, the UI system allows interaction with other parts of the touchscreen to cause movement in the list.
- user 302 also can make other motions (e.g., downward motions to scroll towards the beginning of the list).
- the UI system can interpret different kinds of upward or downward user movements, even diagonal movements extending to the right or left of the vertical plane, as a valid upward or downward motion.
- the upward motion causes a distortion effect shown in state 392 .
- the upward motion is finger-tracking motion caused by a drag gesture, but distortion effects also can be caused by other motion resulting from other kinds of gestures, such as inertia motion caused by a flick gesture.
- the distortion effect indicates that a boundary in the list has been reached.
- the entire list is treated as a single surface, as indicated by the single dimension line to the right of the list in states 390 , 392 and 394 , respectively.
- the list has been squeezed or compressed in a vertical dimension, as shown by the reduced length of the dimension to the right of the list.
- the text of each list element has been squeezed or compressed in a vertical dimension.
- the elements are distorted proportionally.
- the effect in state 392 is as if all the list elements are being compressed against a barrier at the squeeze point 396 .
- the squeeze point 396 is indicated at the top of a list, outside the display area 300 .
- Other squeeze points are also possible.
- the squeeze point could be at the center of a list (e.g., at item 50 in a 100 item list) or at the top of a visible portion of a list.
- the list can be considered as having two parts—one part above the squeeze point, and one part below the squeeze point—where only one part of the list is squeezed.
- the squeeze point can change dynamically, depending on the state of the list and/or display.
- a squeeze point can move up or down (e.g., in response to where the center of the list is) as elements are added to or removed from the list, or a squeeze point can update automatically (e.g., when the end of the list has been reached) to be at the top of a visible portion of the list.
- a squeeze point can be placed outside of a list. This can be useful to provide more consistent visual feedback, such as when a UI element does not fill the visible area.
- the list has returned to the undistorted state shown in state 390 .
- the list can return to the undistorted state after the gesture shown in state 390 is ended (e.g., when the user breaks contact with the touchscreen).
- the upward motion shown in FIG. 3 is only an example of a possible user interaction.
- the same motion and/or other user interactions can cause different effects, different display states, different transitions between display states, etc.
- For example, a motion that causes a distortion effect in a UI element (e.g., at the end of a vertically scrollable list) also can cause another portion of the UI element (e.g., a list item at the beginning of a vertically scrollable list) to be displayed.
- Further movement can then cause wrapping in the UI element (e.g., from the end back to the beginning of a vertically scrollable list).
- States 390 - 394 are only examples of possible states.
- a UI element can exist in any number of states (e.g., in intermediate states between example states 390 - 394 , etc.) in addition to, or as alternatives to, the example states 390 - 394 .
- Intermediate states such as states that may occur between state 390 and state 392 , or between state 392 and state 394 can show gradually increasing or decreasing degrees of distortion, as appropriate.
- a UI system can present parallel, or substantially parallel, movable layers.
- the UI system can present a parallax effect, in which layers move at different speeds relative to one another.
- the effect is referred to as a parallax effect because, in a typical example, a layer that is of interest to a user moves at a faster rate than other layers, as though the layer of interest were closer to the user than the other, slower-moving layers.
- the term “parallax effect” as used herein refers more generally to effects in which layers move at different rates relative to one another.
- the rate of movement in each layer can depend on several factors, including the amount of data to be presented visually (e.g., text or graphics) in the layers, or the arrangement of the layers relative to one another.
- the amount of data to be presented visually in a layer can be measured by, for example, determining the length, as measured in a horizontal direction, of the data as rendered on a display or as laid out for possible rendering on the display. Length can be measured in pixels or by some other suitable measure (e.g., the number of characters in a string of text).
- a layer with a larger amount of data and moving at a faster rate can advance by a number of pixels that is greater than a layer with a smaller amount of data moving at a slower rate.
- Layer movement rates can be determined in different ways. For example, movement rates in slower layers can be derived from movement rates in faster layers, or vice versa. Or, layer movement rates can be determined independently of one another. Layers that exhibit parallax effects can be overlapping layers or non-overlapping layers.
- the movement of the layers is typically a function of the length of the layers and the size and direction of the motion made by the user. For example, a leftward flicking motion on a touchscreen produces a leftward movement of the layers relative to the display area.
- user input can be interpreted in different ways to produce different kinds of movement in the layers. For example, a UI system can interpret any movement to the left or right, even diagonal movements extending well above or below the horizontal plane, as a valid leftward or rightward motion of a layer, or the system can require more precise movements.
- a UI system can require that a user interact with a part of a touchscreen corresponding to the display area occupied by a layer before moving that layer, or the system can allow interaction with other parts of the touchscreen to cause movement in a layer.
- a user can use an upward or downward motion to scroll up or down in a part of the content layer that does not appear on the display all at once.
- lock points indicate corresponding positions in layers with which a display area of a device will be aligned. For example, when a user navigates to a position on a content layer such that the left edge of the display area is at a left-edge lock point “A,” the left edge of display area will also be aligned at a corresponding left-edge lock point “A” in each of the other layers.
- Lock points also can indicate alignment of a right edge of a display area (right-edge lock points), or other types of alignment (e.g., center lock points).
- corresponding lock points in each layer are positioned to account for the fact that layers will move at different speeds.
- the background layer moves at half the rate of the content layer when transitioning between the two lock points.
- lock points can exhibit other behavior.
- lock points can indicate positions in a content layer to which the layer will move when the part of the layer corresponding to the lock point comes into view on the display. This can be useful, for example, when an image, list or other content element comes partially into view near an edge of the display area—the content layer can automatically bring the content element completely into view by moving the layer such that an edge of the display area aligns with an appropriate lock point.
- a lock animation can be performed at the end of a gesture, such as a flick or pan gesture, to align the layers with a particular lock point.
- a lock animation can be performed at the end of a gesture that causes movement of a content layer to a position between two elements in a content layer (e.g., where portions of two images in a content layer are visible in a display area).
- a UI system can select an element (e.g., by checking which element occupies more space in the display area) and transition to focus on that element using the lock animations. This can improve the overall look of the layers and can be effective in bringing information or functional elements into view in a display area.
- a lock animation also can be used together with simulated inertia motion.
- a lock animation can be presented after inertia motion stops, or a lock animation can be blended with inertia motion (such as by extending inertia motion to a lock point, or ending inertia motion early by gradually coming to a stop at a lock point) to present a smooth transition to a lock point.
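- One way to blend inertia motion with a lock animation, sketched here under assumed names, is to project the inertia end position and then retarget the motion to come to rest at the nearest lock point.

```typescript
// Assumed sketch: snap a projected inertia end position to the nearest lock point,
// so the inertia motion is extended or shortened to end aligned with it.
// lockPoints must be non-empty.
function snapToNearestLockPoint(projectedEndPos: number, lockPoints: number[]): number {
  return lockPoints.reduce((nearest, point) =>
    Math.abs(point - projectedEndPos) < Math.abs(nearest - projectedEndPos)
      ? point
      : nearest
  );
}

// Example: with lock points at [0, 320, 640] and a projected end position of 500,
// the inertia motion would be retargeted to come to rest at 640.
```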
- parallax effects can be calculated and presented in different ways.
- equations are described for calculating parallax effect movements in which a parallax constant is used to determine a new position for a layer after a gesture.
- motion in layers and/or other elements, such as lists, can be calculated based on motion ratios.
- a UI system can calculate motion ratios for a background layer and a title layer by dividing the width of the background layer and the width of the title layer, respectively, by a maximum width of the content layer. Taking into account the widths of the background layer and the title layer, a system can map locations of lock points in the background layer and the title layer, respectively, based on the locations of corresponding lock points in the content layer.
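- The motion-ratio idea could be sketched as follows; the function names and the lock-point mapping rule are assumptions consistent with the description above.

```typescript
// Illustrative motion ratios: a secondary layer's ratio is its width divided by the
// maximum width of the content layer, and its lock points are mapped through that ratio.
function motionRatio(layerWidthPx: number, contentMaxWidthPx: number): number {
  return layerWidthPx / contentMaxWidthPx;
}

function mapLockPoint(contentLockPointPx: number, ratio: number): number {
  // A lock point at position x in the content layer corresponds to x * ratio in the
  // slower layer, so both layers align with the display edge at the same time.
  return contentLockPointPx * ratio;
}

// Example: a 480 px title layer over a 1920 px content layer has a ratio of 0.25,
// so it advances a quarter as far as the content layer for the same gesture.
```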
- Movement of various layers can differ depending on context. For example, a user can navigate left from the beginning of a content layer to reach the end of a content layer, and can navigate right from the end of the content layer to reach the beginning of a content layer.
- This wrapping feature provides more flexibility when navigating through the content layer.
- Wrapping can be handled by the UI system in different ways. For example, wrapping can be handled by producing an animation that shows a rapid transition from the end of layers such as title layers or background layers back to the beginning of such layers, or vice-versa. Such animations can be combined with ordinary panning movements in the content layer, or with other animations in the content layer, such as locking animations. However, wrapping functionality is not required.
- FIGS. 4A-4C are diagrams showing a GUI presented by a UI system with three layers 410 , 412 , 414 and a background layer 450 .
- a user 302 (represented by the hand icon) interacts with content layer 414 by interacting with a touchscreen having a display area 300 .
- Background layer 450 floats behind the other layers. Data to be presented visually in background layer 450 can include, for example, an image that extends beyond the boundaries of display area 300 .
- the content layer 414 includes content elements (e.g., images) 430 A-H.
- Layers 410 , 412 include text information (“Category” and “Selected Subcategory,” respectively).
- the length of content layer 414 is indicated to be approximately twice the length of layer 412 , which is in turn indicated to be approximately twice the length of layer 410 .
- the length of background layer 450 is indicated to be slightly less than the length of layer 412 .
- In FIGS. 4A-4C, the direction of motion that can be caused in the layers 410, 412, 414, 450 by user 302 is indicated by a left-pointing arrow and a right-pointing arrow. These arrows indicate possible movements (left or right horizontal movements) of layers 410, 412, 414, 450 in response to user movements. In this example, the system interprets user movements to the left or right, even diagonal movements extending above or below the horizontal plane, as a valid leftward or rightward motion of a layer.
- Although FIGS. 4A-4C show user 302 interacting with a portion of display area 300 that corresponds to content layer 414, the system also allows interaction with other parts of the touchscreen (e.g., parts that correspond to portions of display area 300 occupied by other layers) to cause movement in layers 410, 412, 414, 450.
- When user input indicates a motion to the right or left, the system produces a rightward or leftward movement of the layers 410, 412, 414, 450 relative to display area 300.
- the amount of movement of layers 410 , 412 , 414 , 450 is a function of the data in the layers and the size or rate of the motion made by the user.
- example left-edge lock points “A,” “B” and “C” are indicated for layers 410, 412, 414, 450.
- the left-edge lock points indicate the corresponding position of the left edge of the display area 300 on each layer. For example, when a user navigates to a position on content layer 414 such that the left edge of display area 300 is at lock point “A,” the left edge of display area 300 will also be aligned at lock point “A” of the other layers 410 , 412 , 450 , as shown in FIG. 4A .
- the left edge of display area 300 is at lock point “B” in each of the layers 410 , 412 , 414 , 450 .
- the left edge of the display area 300 is at lock point “C” in each of the layers 410 , 412 , 414 , 450 .
- lock points shown in FIGS. 4A-4C are not generally representative of a complete set of lock points, and are limited to lock points “A,” “B” and “C” only for brevity.
- left-edge lock points can be set for each of the content elements 430 A- 430 H.
- fewer lock points can be used, or lock points can be omitted.
- lock points can indicate other kinds of alignment.
- right-edge lock points can indicate alignment with the right edge of display area 300
- center lock points can indicate alignment with the center of display area 300 .
- layers 410 , 412 , 414 , 450 move according to the following rules, except during wrapping animations:
- Movement of layers 410 , 412 , 414 , 450 may differ from the rules described above in some circumstances.
- wrapping is permitted.
- User 302 can navigate left from the beginning of content layer 414 (the position shown in FIG. 4A ), and can navigate right from the end of content layer 414 (the position shown in FIG. 4C ).
- some layers may move faster or slower than during other kinds of movements.
- the image in background layer 450 and the text in layers 410 and 412 move faster when user input causes wrapping back to the beginning of content layer 414.
- In FIG. 4C, display area 300 shows portions of one and two letters, respectively, in layers 410 and 412, at the end of the respective text strings.
- Display area 300 also shows the rightmost portion of the image in background layer 450 .
- a wrapping animation to return to the state shown in FIG. 4A can include bringing the leftmost portion of the image in background layer 450 and the beginning of the text in layers 410 , 412 into view from the right. This results in a more rapid movement in layers 410 , 412 and 450 than in other contexts, such as the transition from the state shown FIG. 4A to the state shown in FIG. 4B .
- Parallax effects can be used in combination with boundary effects and inertia motion.
- boundary effects can be used to indicate when a user has reached a boundary of a layer, or a boundary of an element within a layer.
- inertia motion can be used to extend motion of UI elements caused by some gestures (e.g., flick gestures). If inertia motion causes movement of a UI element (e.g., a layer) to extend beyond a boundary, a UI system can present a boundary effect.
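- Pulling these pieces together, the hedged end-to-end sketch below projects the content layer's inertia travel, clamps it at a boundary while computing a squeeze factor, and moves a header layer at a parallax ratio; every name, default value and mapping here is illustrative rather than taken from the patent.

```typescript
// Illustrative combination of inertia motion, boundary feedback and parallax.
// Travel distance assumes velocity decays as v0 * exp(-k * t), so total travel
// approaches v0 / k. Only the far boundary is handled, for brevity.
function applyFlick(
  contentPos: number,      // current content-layer position, in pixels
  releaseVelocity: number, // end-of-gesture velocity, in px/ms
  maxContentPos: number,   // boundary: maximum scroll position of the content layer
  headerRatio: number,     // parallax ratio for the header layer, e.g., 0.25
  friction = 0.005         // decay constant k, per millisecond (assumed)
): { contentPos: number; headerPos: number; squeezeScale: number } {
  const projected = contentPos + releaseVelocity / friction; // inertia end position
  const overshoot = Math.max(0, projected - maxContentPos);
  const clamped = Math.min(projected, maxContentPos);
  // Squeeze scale shrinks toward 0.8 as the overshoot grows (assumed mapping).
  const squeezeScale = overshoot > 0 ? Math.max(0.8, 1 - overshoot / 1000) : 1;
  return {
    contentPos: clamped,
    headerPos: clamped * headerRatio, // header layer moves at the parallax ratio
    squeezeScale,
  };
}
```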
- FIG. 5 is a diagram showing two layers 530 , 532 .
- Display area 300 is indicated by a dashed line and has dimensions typical of displays on smartphones or similar mobile computing devices.
- the content layer 532 includes content elements 540 - 544 .
- each content element 540 - 544 comprises an image representing a music album, and text indicating the title of the respective album.
- the list header layer 530 includes a text string (“Albums”).
- a user 302 (represented by the hand icon) interacts with content layer 532 by interacting with a touchscreen having the display area 300 .
- the interaction can include, for example, contacting the touchscreen with a fingertip, stylus or other object and moving it (e.g., with a flicking or sweeping motion) across the surface of the touchscreen.
- FIG. 5 shows example display states 590 - 594 .
- user 302 interacts with a touchscreen by making a flick gesture 510 , which is indicated by a leftward-pointing arrow.
- the flick gesture 510 causes an inertia motion in content layer 532 , which continues to move after the gesture 510 has ended.
- Although FIG. 5 shows user 302 interacting with the touchscreen at a particular location in the display area 300, the UI system allows interaction with other parts of the touchscreen to cause movement.
- Although the example shown in FIG. 5 shows user 302 making a leftward flick gesture, user 302 also can make other motions (e.g., rightward motions to scroll towards the beginning of the list).
- the UI system can interpret different kinds of leftward or rightward user movements, even diagonal movements extending below or above the horizontal plane, as a valid leftward or rightward motion.
- In response to the flick gesture 510, the UI system produces leftward movement of the layers 530, 532 relative to the display area 300.
- the flick gesture 510 causes a leftward movement in the layers and leads to display state 592 , in which element 540 is no longer visible, and elements 542 and 544 have moved to the left.
- the text string (“Albums”) in the list header layer 530 also has moved to the left, but at a slower rate (in terms of pixels) than the content layer 532 .
- the movement of the layers 530 , 532 is a function of the data in the layers and the velocity of the flick gesture 510 .
- the inertia motion causes continued leftward movement of the layers 530 , 532 without further input from the user 302 , and leads to display state 594 in which element 542 is no longer visible.
- the inertia motion causes the content layer to extend beyond a boundary (not shown) to the right of the element 544 in the content layer 532 , which results in a distortion effect in which an image and text in element 544 is squeezed or compressed in a horizontal dimension.
- the compression is indicated by the reduced length of the dimension lines above the image and text (“Rock & Roll Part in”) of element 544 , respectively.
- the text string (“Albums”) in the list header layer 530 also has moved to the left, but at a slower rate (in terms of pixels) than the content layer 532 .
- the text in list header layer 530 is uncompressed.
- the distortion effect gives user 302 an indication that the end of the content layer 532 has been reached.
- the boundary need not prevent further movement in the direction of the motion. For example, if wrapping functionality is available, further movement beyond the boundary can cause the content layer 532 to wrap back to the beginning (e.g., back to display state 590). In state 594, element 540 at the beginning of the collection is partially visible, indicating that wrapping is available.
- the display can return from display state 594 to display state 592 , transitioning from a display state with a distortion effect to an undistorted display state. This can occur, for example, without any additional input by the user.
- the length of time that it takes to transition between states can vary depending on implementation.
- Flick gesture 510 is only an example of a possible user interaction.
- Depending on implementation, the same gesture 510 and/or other user interactions (e.g., motions having different sizes, directions, or velocities) can cause different effects, different display states, or different transitions between display states. For example, some display states (e.g., display state 594) can be presented only temporarily, as noted above.
- Display states 590 - 594 are only examples of possible display states.
- a display can exist in any number of states (e.g., in intermediate states between example states 590 - 594 , in states with different visible UI elements, etc.) in addition to, or as alternatives to, the example display states 590 - 594 .
- Intermediate states such as states that may occur between state 592 and state 594 , can show gradually increasing or decreasing degrees of distortion, as appropriate.
- a UI system can provide a boundary effect by compressing the elements 542 and 544 shown in display state 592 without moving the elements 542 and 544 to the left in the display area 300 .
- Described techniques and tools can be used on display screens in different orientations, such as landscape orientation. Changes in display orientation can occur, for example, where a UI has been configured (e.g., by user preference) to be oriented in landscape fashion, or where a user has physically rotated a device.
- One or more sensors (e.g., an accelerometer) in the device can be used to detect when a device has been rotated, and adjust the display orientation accordingly.
- the display area 300 is oriented in landscape fashion.
- Content (e.g., data collection elements 540-544 in content layer 532) and other user interface features in the display area 300 can be dynamically adjusted to take into account effects of a reorientation (e.g., a new effective width of the display area 300, interpreting directions of user interactions differently, etc.).
- distortion effects can be adjusted, such as by compressing data collection elements in a horizontal dimension instead of a vertical dimension, to account for display reorientation.
- FIGS. 6A-6E are diagrams showing a content layer 614 that moves in tandem with layer 612 above it.
- a user 302 (represented by the hand icon) interacts with a touchscreen having the display area 300.
- the interaction can include, for example, contacting the touchscreen with a fingertip, stylus or other object and moving it (e.g., with a flicking or sweeping motion) across or along the surface of the touchscreen.
- the content layer 614 includes game icons 640 , 642 , 644 , lists 650 , 652 , 654 , and avatar 630 (which is described in more detail below in Example 8).
- the other layers 610, 612 include text information (“Games” in layer 610; “Spotlight,” “Xbox Live,” “Requests” and “Collection” in layer 612).
- the direction of motion that can be caused by user 302 is indicated by a left-pointing arrow and a right-pointing arrow in FIGS. 6A-6E , along with additional up- and down-pointing arrows in FIGS. 6A and 6E .
- the right-pointing and left-pointing arrows indicate possible movements (left or right horizontal movements) of the layers 610 , 612 , 614 in response to user movements.
- a user also can cause movements in elements or parts of layers, depending on the data in the layer and how the layer is arranged.
- a user can cause movements (e.g., vertical movements) in layer elements (e.g., lists in a content layer) that are orthogonal to movements (e.g., horizontal movements) that can be caused in a layer as a whole.
- Such movements can include, for example, scrolling vertically in a list embedded in a content layer that moves horizontally.
- a system that presents layers that move vertically can allow horizontal movements in layer elements.
- the up-pointing and down-pointing arrows indicate possible movements of the list 650 in response to user movements.
- the amount of movement of list 650 can be a function of the size or rate of the motion made by user 302 , and the data in list 650 .
- scrolling of the list 650 can be element-by-element, page-by-page of elements, or something in between that depends on size or rate of the motion.
- list 650 includes only one element that is not visible in the display area 300 , as shown in FIG. 6A , so a range of small or large downward movements may be enough to scroll to the end of list 650 .
- an upward user movement has caused a boundary effect in list 650 , in which the text of elements in the list is squeezed or compressed in a vertical dimension. This effect gives user 302 an indication that the end of the list has been reached.
- the amount of movement in layers 610 , 612 , 614 is a function of the data in the layers and the size or rate of the motion made by the user.
- Horizontal movement in layers 610 , 612 , 614 proceeds according to the following rules, except during wrapping animations:
- Movement in the layers 610 , 612 , 614 may differ from the rules described above in some circumstances.
- In this example, wrapping is permitted: the arrows indicate that a user can navigate left from the beginning of the content layer 614 (the position shown in FIG. 6A and FIG. 6E ), and can navigate right from the end of the content layer 614 (the position shown in FIG. 6D ).
- During wrapping animations, some layers may move faster or slower than during other kinds of movements.
- the text in layer 610 can move faster when wrapping back to the beginning of content layer 614 .
- display area 300 shows portions of two letters in layer 610 , at the end of the “Games” text string.
- a wrapping animation to return to the state shown in FIG. 6A can include bringing the data in layers 610 , 612 , 614 (including the text of layer 610 ) into view from the right, resulting in a more rapid movement in layer 610 than in other contexts, such as a transition from the state shown in FIG. 6A to the state shown in FIG. 6B .
- example lock points “A,” “B,” “C” and “D” are indicated for layers 610 and 612 .
- content layer 614 is locked to layer 612 ; the lock points indicated for layer 612 also apply to layer 614 .
- the lock points for each layer indicate the corresponding position of the left edge of the display area 300 on each layer. For example, when a user navigates to a position on content layer 614 such that the left edge of the display area 300 is at lock point “A,” the left edge of display area 300 also is aligned at lock point “A” of the other layers 610 , 612 , as shown in FIGS. 6A and 6E .
- the left edge of the display area 300 is at lock point “B” in each of the layers 610 , 612 , 614 .
- the left edge of the display area 300 is at lock point “C” in each of the layers 610 , 612 , 614 .
- the left edge of the display area 300 is at lock point “D” in each of the layers 610 , 612 , 614 .
- the lock points shown in FIGS. 6A-6E are not generally representative of a complete set of lock points, and are limited to lock points “A,” “B,” “C” and “D” only for brevity.
- right-edge lock points can be added to obtain alignment with the right edge of display area 300
- center lock points can be added to obtain alignment with the center of display area 300 .
- fewer lock points can be used, more lock points can be used, or lock points can be omitted.
- User 302 can move left or right in content layer 614 after making an up or down movement in list 650 .
- the current position of list 650 can be saved, or the system can revert to a default position (e.g., the top-of-list position indicated in FIG. 6A ) when navigating left or right in content layer 614 from list 650 .
- the display area 300 can itself display graphical indicators (such as arrows or chevrons) of possible movements for the layers and/or list.
- the system can interpret user movements to the left or right, even diagonal movements extending above or below the horizontal plane, as a valid leftward or rightward motion. Similarly, the system can interpret upward or downward movements, even diagonal movement extending to the left or right of the vertical plane, as a valid upward or downward motion.
- Although FIGS. 6A-6E show the user 302 interacting with a portion of the display area 300 that corresponds to the content layer 614 , the system also allows interaction with other parts of the touchscreen (e.g., those that correspond to display area occupied by other layers) to cause movement in the layers 610 , 612 , 614 , list 650 , or other UI elements.
- avatar 630 can provide a visual cue to indicate a relationship between or draw attention to parts of the content layer 614 .
- avatar 630 is positioned between list 652 and list 654 .
- avatar 630 floats behind the text of list 654 , but remains completely within display area 300 .
- avatar 630 is only partially within display area 300 ; the part that is within the display area floats behind game icons 640 , 642 , 644 .
- the positioning of avatar 630 at the left edge of display area 300 can indicate to the user 302 that information associated with avatar 630 is available if the user 302 navigates in the direction of avatar 630 .
- Avatar 630 can move at varying speeds. For example, avatar 630 moves faster in the transition between FIGS. 6B and 6C than it does in the transition between FIGS. 6C and 6D .
- avatar 630 can move in different ways, or exhibit other functionality.
- a UI system can present a distortion effect in avatar 630 to indicate a user's location in a data collection with which the avatar is associated.
- Avatar 630 also can be locked to a particular position (e.g., a lock point) in content layer 614 or in some other layer, such that avatar 630 moves at the same horizontal rate as the layer to which it is locked.
- avatar 630 can be associated with a list that can be scrolled up or down, such as list 650 , and move up or down as the associated list is scrolled up or down.
- a set of equations, coefficients, and rules is described below that can allow a UI system (e.g., a UI system provided as part of a mobile device operating system) to interpret user input such as touch gestures (including multi-touch gestures with more than one touch contact point) and generate motion feedback in response to user input.
- features described in this detailed example include inertia movement, panning and zooming operations, boundary effects, parallax effects, and combinations thereof. Described features can help to provide natural-looking, smooth motion in response to user input (e.g., touch gestures).
- processing tasks can be handled by different software modules.
- a module called “ITouchSession” provides coefficients, gesture positions, and gesture velocity information
- a dynamic motion module in a mobile device operating system uses information provided by ITouchSession to generate motion feedback (e.g., parallax effects, boundary effects, etc.).
- gesture information provided to the dynamic motion module is accurate (e.g., with minimal jitter in position information), detailed (e.g., with time stamps on touch input), and low-latency (e.g., under 30 ms).
- the information (e.g., motion feedback information) generated by the dynamic motion module can be used by other modules, as well. For example, web browsers or other applications that run on the mobile device operating system can use information generated by the dynamic motion module.
- the dynamic motion resulting from user interaction is defined by a set of motion rules.
- the motion rules define how different visual elements react on screen in response to different gestures. For example, some rules apply to finger-tracking gestures such as panning or dragging gestures, some rules apply to flick or toss gestures, and some rules apply to pinch or stretch gestures. Additionally, some rules, such as inertia rules, may apply to more than one type of gesture.
- the specific motion rules that apply to different UI elements are determined by factors such as the control type and control content; not all motion rules will apply to all UI elements. For example, rules for pinch and stretch gestures do not apply to UI elements where pinch and stretch gestures are not recognized.
- the motion resulting from the application of motion rules to the input stream generated by the user can be further refined by an optional set of modifiers, which are collectively called “optional motion features.”
- UI elements can apply zero or more optional motion features, which can be determined by factors such as the desired motion, control type and control content. For example, a list control may opt to enhance motion feedback with boundary effects, while a panorama control may apply a parallax feature to some of its layers.
- When a user interacts with a UI element, it can be helpful to provide some immediate (or substantially immediate) visual feedback to the user (e.g., a change in movement in the UI element, or some other effect such as a tilt or highlight). Immediate or substantially immediate feedback helps the user to know that the user interface is responsive to the user's actions.
- the following motion rules apply in UI elements where the rules (e.g., rules relating to finger-tracking gestures, inertia, boundaries, pinch/stretch gestures) are relevant to the types of motion that are permitted in the respective UI elements.
- the motion rules can be modified for some UI elements, such as where optional motion features apply to a UI element.
- the content at the initial gesture point moves in direct correspondence to the gesture.
- content under the user's finger at an initial touch point moves with the user's finger during the gesture.
- the current position of a visual element is given by the following equation: p = p_0 + (q - q_0), where:
- p is the (x, y) vector that represents the current position of the visual element
- p 0 is the (x 0 , y 0 ) vector that represents the visual element position at the beginning of the gesture
- q is the (x, y) vector that represents the current touch contact position
- q 0 is the (x 0 , y 0 ) vector that represents the touch contact position at the beginning of the gesture.
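- The finger-tracking rule above can be summarized in a short sketch. The Vec2 type and function name below are illustrative and not part of the described system; the computation itself follows the definitions of p, p_0, q and q_0 given above.

```cpp
struct Vec2 { float x; float y; };   // illustrative 2D vector type

// Finger-tracking rule: the content at the initial gesture point moves in
// direct correspondence with the gesture, i.e. p = p0 + (q - q0).
Vec2 TrackPosition(const Vec2& p0,   // element position at the start of the gesture
                   const Vec2& q0,   // touch contact position at the start of the gesture
                   const Vec2& q)    // current touch contact position
{
    return Vec2{ p0.x + (q.x - q0.x), p0.y + (q.y - q0.y) };
}
```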
- When the user completes a gesture (e.g., by lifting a finger or other object to end the interaction with the touchscreen) that has caused movement in a UI element that allows inertia movement (e.g., a scrolling list), a velocity and direction for that movement is identified, and the motion initially continues in the same direction and speed as the gesture, as if the visual element were a real, physical object with a non-zero mass.
- Unless the motion is stopped for some other reason (e.g., where the UI element reaches a boundary or is stopped by another user gesture), the motion gradually decelerates over time, eventually coming to a stop.
- the deceleration proceeds according to a combination of equations and coefficients, which can vary depending on implementation. Default system-wide coefficient values can be made available. Default system-wide coefficients can help to maintain a consistent feeling across all controls. Alternatively, different equations or coefficients can be used, such as where a particular control has its own friction coefficient for modeling different kinds of motion.
- the velocity (e.g., in pixels/second) at the end of the gesture is computed by the following equation: v = (q - q_0) / (t - t_0), where:
- v is the (v x , v y ) velocity vector that represents the inertia velocity at the end of the gesture
- q is the (x, y) vector that represents the touch contact position at the end of the gesture
- q 0 is the (x 0 , y 0 ) vector that represents the touch contact position at the time t 0
- t is the timestamp of the last touch input of the gesture
- t 0 is the timestamp of the least recent touch input that happened within some fixed period of time from the last touch input.
- the velocity can be calculated in another way. For example, a weighted sum of velocities at different time instances can be calculated, with greater weighting for velocities at the end of the gesture. In this detailed example, calculating the velocity is the responsibility of ITouchSession. However, velocity calculations can be handled by other modules.
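- As a sketch of the velocity computation described above (a simple displacement over elapsed time; the weighted-sum alternative is not shown), assuming timestamps in seconds and positions in pixels:

```cpp
struct Vec2 { float x; float y; };   // illustrative 2D vector type

// Inertia velocity at the end of a gesture, in pixels/second:
// v = (q - q0) / (t - t0), using the last touch input (q, t) and the least
// recent touch input (q0, t0) within a fixed window before the gesture ended.
Vec2 GestureVelocity(const Vec2& q0, float t0, const Vec2& q, float t)
{
    float dt = t - t0;
    if (dt <= 0.0f) return Vec2{ 0.0f, 0.0f };   // degenerate gesture: treat as no inertia
    return Vec2{ (q.x - q0.x) / dt, (q.y - q0.y) / dt };
}
```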
- the duration of the inertia motion can be computed according to the following equation:
- t max is the duration of the inertia motion
- |v| is the magnitude of the initial velocity vector (i.e., √(v_x² + v_y²))
- λ is a friction coefficient (e.g., MotionParameter_Friction, 0 < λ < 1)
- γ is a parking speed coefficient (e.g., MotionParameter_ParkingSpeed, γ > 0)
- the friction coefficient is 0.4
- the parking speed coefficient is 60.0.
- the duration is computed at the start of the inertia motion, and need not be computed again.
- the new position p′ for the visual element can be computed based on its last known position p and the time elapsed since the last position update (Δt), as shown in the following equation:
- a new gesture begins while a UI element is in inertia motion, the inertia motion is immediately interrupted. Depending on the new gesture, the motion in the UI element may be stopped, or a new motion may start. If the new gesture causes a new motion in the UI element, the new gesture controls the UI element's motion. The previous gesture and any consequent inertia do not affect the motion generated by the new gesture. Handling of new gestures during inertia motion can be different depending on implementation. For example, new gestures can be ignored during inertia motion or can have different effects on inertia motion.
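- The inertia equations themselves are not reproduced above, so the following sketch is an assumption: it models the described behavior (deceleration governed by the friction coefficient, motion parked once the speed falls below the parking speed coefficient) with a simple exponential decay, using the documented default coefficient values.

```cpp
#include <cmath>

struct Vec2 { float x; float y; };   // illustrative 2D vector type

// Assumed inertia model: velocity decays by a factor of lambda per second and
// the motion is considered finished once the speed drops below gamma.
struct InertiaMotion {
    Vec2  v;                  // inertia velocity at the end of the gesture (px/s)
    float lambda = 0.4f;      // friction coefficient (MotionParameter_Friction default)
    float gamma  = 60.0f;     // parking speed (MotionParameter_ParkingSpeed default)

    // Duration t_max until |v| * lambda^t falls below gamma (computed once,
    // at the start of the inertia motion).
    float Duration() const {
        float speed = std::hypot(v.x, v.y);
        if (speed <= gamma) return 0.0f;
        return std::log(gamma / speed) / std::log(lambda);
    }

    // New position p' from the last known position p, given the total time
    // elapsed since the gesture ended and the time since the last update (dt).
    Vec2 Update(const Vec2& p, float elapsed, float dt) const {
        float decay = std::pow(lambda, elapsed);
        return Vec2{ p.x + v.x * decay * dt, p.y + v.y * decay * dt };
    }
};
```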
- The dimensions of gesture boundaries and the effects of exceeding gesture boundaries can differ depending on several factors, such as the content of a UI element and/or a minimum visible area of the UI element. For example, lists that do not wrap around indefinitely may only be able to scroll a certain distance based on the number of items in the list and a minimum amount of visible items (e.g., an amount of items that occupies most or all of a display area).
- the minimum visible area indicates a minimum visible amount of the control (e.g., a minimum number of list items in a scrollable list), but does not require any particular part of the control to be visible. Therefore, the content of the minimum visible area for a particular control can vary depending on, for example, the control's current state (e.g., whether the end or beginning of a scrollable list is currently visible).
- the position p T of the gesture boundary area can be defined according to the following equation:
- the x and y coordinates of the position p T of the gesture boundary area are defined according to the following equations:
- the dimensions of the gesture boundary area are defined according to the following equation:
- FIG. 7A shows an example boundary diagram for a control having a position 710 and area 720 .
- the control has a minimum visible area 740 (at position 730 ).
- the position 730 of the minimum visible area can be located at the top left of a display area.
- a gesture boundary at position 770 and having area 780 is calculated.
- example post-gesture positions 752 , 754 are shown.
- Post-gesture position 752 is outside the gesture boundary area 780 , and causes boundary feedback.
- Post-gesture position 754 is inside the gesture boundary 780 , and does not cause boundary feedback.
- FIG. 8A shows an example boundary diagram for a control corresponding to the scrollable list shown in FIG. 3 .
- the control at position 810 has a control area 820 (width w_A, height h_A).
- the coordinates of the control position are considered to be (0, 0).
- the control has a minimum visible area 840 (at position 830 ).
- the position 830 of the minimum visible area 840 can be at the top left of a display area.
- a gesture boundary at position 850 (the same position as the initial gesture position) is calculated.
- the gesture boundary 880 has a height of h A ⁇ h Vmin , and a width of 0.
- the gesture boundary 880 is actually a vertical line.
- boundary feedback can be enabled or disabled on an axis basis (e.g., permitting boundary feedback for vertical movements but not for horizontal movements).
- Such a control also can be a candidate for axis locking, to allow only vertical movements and remove any need for boundary feedback for horizontal movements. Axis locking is explained in more detail below.
- example post-gesture positions 852 , 854 are shown.
- Post-gesture position 852 is outside the gesture boundary area 880 , and causes boundary feedback.
- a UI system can present a squeeze or compression effect to indicate that the post-gesture position is outside the gesture boundary area, as shown in state 392 .
- Post-gesture position 854 is inside the gesture boundary area 880 , and does not cause boundary feedback.
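- A sketch of the boundary test illustrated in FIGS. 7A and 8A follows. The exact equations for the boundary position and dimensions are not reproduced above, so the anchoring of the boundary at the initial gesture position and the names used here are assumptions based on the figure descriptions; for the FIG. 8A list, the computed area degenerates to a vertical line of height h_A − h_Vmin.

```cpp
#include <algorithm>

struct Vec2 { float x; float y; };   // illustrative 2D vector type

struct BoundaryArea {
    Vec2  origin;   // assumed: the initial gesture position
    float width;    // e.g., wA - wVmin (0 for a vertically scrolling list)
    float height;   // e.g., hA - hVmin
};

BoundaryArea ComputeBoundary(const Vec2& gestureStart,
                             float wA, float hA,        // control area
                             float wVmin, float hVmin)  // minimum visible area
{
    return BoundaryArea{ gestureStart,
                         std::max(0.0f, wA - wVmin),
                         std::max(0.0f, hA - hVmin) };
}

// A post-gesture position outside the gesture boundary area triggers
// boundary feedback (e.g., a squeeze or compression effect).
bool CausesBoundaryFeedback(const BoundaryArea& b, const Vec2& postGesturePos)
{
    return postGesturePos.x < b.origin.x || postGesturePos.x > b.origin.x + b.width
        || postGesturePos.y < b.origin.y || postGesturePos.y > b.origin.y + b.height;
}
```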
- a boundary can indicate a position at which a boundary effect will be presented (e.g., to indicate that the end of a list has been reached) without preventing further movement beyond the boundary (e.g., wrapping movement from the end of the list back to the beginning of the list).
- Pinch gestures and stretch gestures are gestures that can change the scale (zoom) of the subject area of a control (e.g., a map or image with zoom capability). Pinch gestures and stretch gestures are considered to be multi-touch gestures because they typically have multiple points of interaction. In a typical pinch or stretch gesture scenario, a user places two fingers some distance apart from each other on a touchscreen, and either increases (for a stretch gesture) or decreases (for a pinch gesture) the distance between them.
- FIG. 9 is a diagram showing example pinch and stretch gestures.
- In FIG. 9 , a user 302 (represented by a hand icon) interacts with a control (e.g., a map with zoom features) on a touchscreen.
- the user 302 performs a pinch gesture beginning at touch points 950 , 960 and ending at touch points 952 , 962 .
- the user 302 performs a stretch gesture beginning at touch points 970 , 980 and ending at touch points 972 , 982 .
- a pinch or stretch gesture can begin or end at other touch points (e.g., with a greater or lesser distance between beginning and ending touch points) or can use a different orientation of touch points (e.g., horizontal or diagonal).
- the distance d 0 includes a horizontal component x d0 and a vertical component y d0 .
- the distance d also includes a horizontal component x d and a vertical component y d .
- the scale factor s zoom to apply to the UI element can be calculated according to the following equation:
- In Equation 11, the scale s is not isometric; i.e., the X and Y axes will be scaled differently.
- For isometric scaling, the following equation can be used instead:
- s zoom is a scalar, so the same factor is applied to both X and Y components.
- a scale factor can be calculated in different ways. For example, inertia can be applied to a pinch or stretch gesture (such as when the gesture ends with a velocity above a threshold), and the scale factor can be based at least in part on the inertia of the gesture (e.g., increasing the scale of the zoom when a stretch gesture ends with a velocity above a threshold).
- the scale factor can be applied to a zooming point (e.g., a center point between touch points qA and qB).
- the zooming point c_z (x_cz, y_cz) can be calculated by averaging the two touch contact positions, as shown in the following equation: c_z = (q_A + q_B) / 2.
- a zooming point can be calculated in a different way, or a calculation of a zooming point can be omitted.
- a pinch/stretch gesture can also produce position changes (panning) in addition to scale changes. Panning position changes can occur simultaneously with scale changes.
- a panning offset can be calculated in a different way, or a panning offset can be omitted.
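- Because the scale, zooming point and panning equations are not all reproduced above, the following sketch is an assumption consistent with the surrounding definitions: the non-isometric scale is taken as the component-wise ratio of the current and initial touch-point distances, the isometric scale as the ratio of their magnitudes, the zooming point as the average of the two contacts, and the panning offset as the displacement of that center point.

```cpp
#include <cmath>

struct Vec2 { float x; float y; };   // illustrative 2D vector type

struct PinchResult {
    Vec2  scaleXY;     // non-isometric scale (X and Y scaled independently)
    float scaleZoom;   // isometric scale factor applied to both axes
    Vec2  zoomPoint;   // point about which the scale factor is applied
    Vec2  panOffset;   // assumed panning produced by moving both contacts together
};

PinchResult EvaluatePinch(const Vec2& qA0, const Vec2& qB0,   // contacts at gesture start
                          const Vec2& qA,  const Vec2& qB)    // current contacts
{
    Vec2 d0{ qB0.x - qA0.x, qB0.y - qA0.y };
    Vec2 d { qB.x  - qA.x,  qB.y  - qA.y  };

    PinchResult r;
    r.scaleXY   = Vec2{ d.x / d0.x, d.y / d0.y };   // caller should guard zero components
    r.scaleZoom = std::hypot(d.x, d.y) / std::hypot(d0.x, d0.y);
    r.zoomPoint = Vec2{ (qA.x + qB.x) * 0.5f, (qA.y + qB.y) * 0.5f };
    Vec2 c0{ (qA0.x + qB0.x) * 0.5f, (qA0.y + qB0.y) * 0.5f };
    r.panOffset = Vec2{ r.zoomPoint.x - c0.x, r.zoomPoint.y - c0.y };
    return r;
}
```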
- Optional motion features can be used (e.g., when requested by a control) to refine or add visual feedback to motion generated by gestures.
- Optional motion features can depend on control type and content; for example, some controls (e.g., a scrolling list) may use different optional motion features than other controls.
- Optional motion features can be used in combination with each other and with various motion rules.
- a vertically scrolling list can use an axis locking feature and a boundary effect feature, while following rules for inertia motion and finger tracking motion.
- Different UI elements can use different combinations of rules and optional motion features, even when the different UI elements are visible at the same time.
- a movable layer can use parallax effects but omit boundary effects, while a vertically scrolling list in the movable layer can use boundary effects but omit parallax effects.
- UI elements of the same basic type can use different sets of optional motion features.
- a first pair of movable layers can use parallax effects and move at different rates relative to one another, while a third layer parallel to the first pair remains stationary.
- optional motion features act like filters, modifying the values generated according to other motion rules, such as the motion rules described above.
- axis locking can be used as an optional motion feature.
- axis locking is applied to a UI element by using the relevant equations in the motion rules described above, but only applying an X or Y component (as appropriate) to the motion of the axis-locked UI element. Changes to the other component are ignored and not applied to the UI element's motion.
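- A minimal sketch of axis locking as just described (names are illustrative): only the component along the permitted axis is applied to the UI element's motion, and the other component is ignored.

```cpp
struct Vec2 { float x; float y; };   // illustrative 2D vector type

enum class LockAxis { Horizontal, Vertical };

// Apply only the X or Y component of a motion delta to an axis-locked element.
Vec2 ApplyAxisLock(const Vec2& delta, LockAxis axis)
{
    return (axis == LockAxis::Horizontal) ? Vec2{ delta.x, 0.0f }
                                          : Vec2{ 0.0f, delta.y };
}
```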
- axis locking can be performed in another way.
- For a UI element that moves about an axis (such as a wheel element that rotates about a Z axis), axis locking can be used to permit only rotational motion about the axis.
- axis locking can be omitted.
- Parallax effects can be applied to controls that present multiple layers of content.
- multiple layers are animated differently (e.g., moving at different speeds), but the movements of the layers are based on the same input stream generated by the user.
- layers that are animated in response to a gesture move at different speeds relative to one another.
- The layer that the user is interacting with directly (e.g., a content layer) is considered to be the top layer on a Z axis, that is, the layer that is closest to the user.
- Other layers are considered to be lower layers on a Z axis, that is, further away from the user. Examples of a parallax effects can be seen in FIG. 5 and in FIGS. 6A-6D .
- a top layer reacts directly to the gesture, and the other layers move at increasingly lower speeds the further they are from the top layer along the Z axis.
- This can be accomplished by applying a scaling factor to the delta between an initial gesture position and an updated gesture position.
- the updated gesture position can be obtained directly from user interaction (e.g., in a finger tracking gesture such as a panning gesture) or from a gesture with simulated inertia (e.g., a flick gesture).
- q is the (x, y) vector that represents the current, post-gesture position (e.g., after the gesture and application of any simulated inertia)
- q 0 is the (x 0 , y 0 ) vector that represents the touch contact position at the beginning of the gesture.
- the parallax constant k P can vary depending on the application, scenario and/or content of the control. For example, layers with different lengths can have different parallax constants.
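- The parallax equation itself (Equation 18 as referenced below) is not reproduced, so the following sketch is an assumption consistent with the description: each layer scales the gesture delta (q − q_0) by its own parallax constant k_P, with the content layer using k_P = 1 so that it tracks the gesture directly and lower layers using smaller constants.

```cpp
struct Vec2 { float x; float y; };   // illustrative 2D vector type

// Position of a parallax layer: the gesture delta is scaled by the layer's
// parallax constant kP (smaller kP => slower layer, further down the Z axis).
Vec2 ParallaxPosition(const Vec2& layerStart,   // layer position at the start of the gesture
                      const Vec2& q0,           // touch position at the start of the gesture
                      const Vec2& q,            // current or post-inertia gesture position
                      float kP)                 // parallax constant for this layer
{
    return Vec2{ layerStart.x + kP * (q.x - q0.x),
                 layerStart.y + kP * (q.y - q0.y) };
}
```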
- parallax effects can be presented in different ways.
- parallel layers can move according to the model shown in Equation 18 for some movements or parts of a movement and move according to other models in other movements or parts of a movement.
- parallel layers that exhibit parallax effects can move according to the model shown in Equation 18 in transitions from FIG. 4A to FIG. 4B , and from FIG. 4B to 4C , and then move according to a specialized wrapping animation if a gesture to the right from the state shown in FIG. 4C , or inertia motion from an earlier gesture, causes a wrap back to the state shown in FIG. 4A .
- parallax effects can be omitted.
- a boundary feedback effect can be applied whenever a gesture would move the UI element past a boundary, either directly (e.g., by a dragging or panning gesture) or indirectly (e.g., by inertia motion generated by a flick gesture).
- the content is compressed in the direction of the motion (e.g., a vertical compression for a vertical motion) up to a certain threshold. If the compression is caused by inertia, the content compresses up to a certain amount based on the velocity at the time the boundary is hit, then decompresses to the original size. If the compression is caused directly (e.g., by dragging), the compression can be held as long as the last touch contact point is held and decompress when the user breaks contact, or decompress after a fixed length of time.
- the compression effect is achieved by applying a scale factor and dynamically placing a compression point to ensure that the effect looks the same regardless of the size of the list.
- the first step is to identify that a boundary has been crossed and by how much.
- the boundary motion rule described above illustrates how to compute a boundary position in this first example boundary feedback model, and in the second example boundary feedback model described below.
- r may be calculated in only a vertical or horizontal dimension, as appropriate, while omitting a calculation of the other dimension of r.
- the compressible area can be calculated according to the following equation:
- S_A is the control area with dimensions (w_A, h_A), and S_V is the visible area with dimensions (w_V, h_V).
- the compression percentage coefficient is 0.0 and the compression offset coefficient is 0.5*S V .
- s_comp = (S_c - k_s * r) / S_c (Eq. 19)
- s_compx = (w_c - k_s * r_x) / w_c (Eq. 20)
- s_compy = (h_c - k_s * r_y) / h_c (Eq. 21)
- k_s is the compression factor coefficient (e.g., MotionParameter_CompressFactor, 0 ≤ k_s ≤ 1), and r ≤ S_V.
- the compression factor coefficient is 0.2.
- the scale factor and/or the compressible area can be calculated in different ways. For example, different ranges of compression coefficients can be used.
- a UI system can then place a distortion point (which can also be referred to as a “squeeze point” or “compression point” when applying compression effects) at the other side of the compressible area (i.e., the side of the compressible area opposite the side where the gesture is being made) and apply that scale factor, resulting in a compression effect.
- r I can come either from inertia or from an active drag, such as when a user drags the content into a compressed state, then flicks, generating inertia.
- s_inertiacomp = (S_c - r) / S_c (Eq. 25)
- s_inertiacompx = (w_c - r_x) / w_c (Eq. 26)
- s_inertiacompy = (h_c - r_y) / h_c (Eq. 27)
- Equation 22 the compression factor coefficient
- the scale factor can be calculated in a different way.
- constants such as the compression factor coefficient k s or the value 0.001 in Equation 23 can be replaced with other constants depending on implementation.
- a compression point C comp (c compx , c compy ) is calculated in order to generate the expected visual effect.
- a compression point can be at different positions in a UI element.
- a compression point can be located at or near the center of a UI element, such that half (or approximately half) of the content in the UI element will be compressed.
- a compression point can be located at or near a border of UI element, such that all (or approximately all) of the content in the UI element will be compressed.
- the compression point can vary for different UI elements. Using different compression points can be helpful for providing a consistent amount of distortion in the content of UI elements of different sizes.
- the compression point position can be computed according to the following equations:
- c_compx = 1 - w_c/w_A if the left boundary is exceeded, w_c/w_A if the right boundary is exceeded, or 0.5 if neither (Eq. 28)
- c_compy = 1 - h_c/h_A if the top boundary is exceeded, h_c/h_A if the bottom boundary is exceeded, or 0.5 if neither (Eq. 29)
- compression points can be calculated in a different way, or the calculation of compression points can be omitted.
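- The following sketch pulls together the first boundary feedback model for a vertically scrolling list, using Equations 21 and 29 as reconstructed above. The compressible-area calculation (Equation 18) is not reproduced, so its height h_c is passed in directly, and the clamp of the overshoot is an assumption; names are illustrative.

```cpp
#include <algorithm>

struct CompressionEffect {
    float scaleY;          // vertical scale factor applied to the compressible area
    float compressPointY;  // normalized compression point (0 = top, 1 = bottom)
};

CompressionEffect VerticalBoundaryCompression(
    float ry,                    // how far the post-gesture position exceeds the boundary
    float hC,                    // height of the compressible area S_c (from Eq. 18)
    float hA,                    // height of the control area
    bool  topBoundaryExceeded,   // true if the top boundary was exceeded, false if the bottom
    float ks = 0.2f)             // compression factor coefficient
{
    ry = std::min(ry, hC);                // keep the overshoot within the compressible area
    float scaleY = (hC - ks * ry) / hC;   // Eq. 21
    // Eq. 29: place the compression point on the side opposite the gesture.
    float cY = topBoundaryExceeded ? 1.0f - hC / hA : hC / hA;
    return CompressionEffect{ scaleY, cY };
}
```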
- In a second example boundary feedback model, the appearance of the boundary feedback can be controlled in finer detail by using more coefficients. Also, regardless of whether the compression is caused directly (e.g., by dragging) or by inertia, the same calculations are used for the compression effects.
- r = (w_r, h_r) represents how far the post-gesture position exceeds the boundaries with respect to {x_L, x_R, y_T, y_B}:
- r may be calculated in only a vertical or horizontal dimension, as appropriate, while omitting a calculation of the other dimension of r.
- S c is the compressible area with dimensions (w c , h c ), calculated as shown in Equation 18.
- k s is a spring factor coefficient (e.g., MotionParameter_SpringFactor (k s >0))
- k e is a spring power coefficient (e.g., MotionParameter_SpringPower (k e >0))
- k d is a damper factor coefficient (e.g., MotionParameter_DamperFactor (0 ≤ k d ≤ 1))
- k L is a compression limit coefficient (e.g., MotionParameter_CompressionLimit (k L >0))
- Δt is the time interval since the last iteration of the simulation (Δt > 0).
- the spring factor coefficient k s is a number that specifies how much resistance will counteract the inertia force
- the spring power coefficient k e shapes the curve of the resistance.
- a spring power coefficient of 1 indicates linear resistance, where resistance increases at a constant rate as compression increases.
- a spring power coefficient greater than 1 means that the resistance will increase at an increasing rate at higher compression, and less than 1 means that the resistance will increase, but at a decreasing rate, at higher compression.
- the damper factor coefficient k d represents a percentage of energy absorbed by the system and taken away from the inertia. The damper factor coefficient can be used to smooth out the boundary effect and avoid a repeated cycle of compression and decompression.
- the time interval ⁇ t can vary depending on the number of frames per second in the animation of the boundary feedback, hardware speed, and other factors. In one implementation, the time interval is about 16 ms between each update. Varying the time interval can alter the effect of the boundary effect. For example, a smaller time interval can result in more fluid motion.
- the scale factor and/or the compressible area can be calculated in different ways. For example, different ranges or values of coefficients can be used.
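- The intermediate equations of the second model are not reproduced above, so the following sketch only illustrates, under assumed forms, how the named coefficients could interact: a resistance that grows with compression and is shaped by the spring power, damping that absorbs a fraction of the energy, and a compression limit.

```cpp
#include <algorithm>
#include <cmath>

// Assumed spring/damper step for boundary feedback (one simulation iteration).
struct SpringBoundaryState {
    float compression = 0.0f;   // current overshoot past the boundary (pixels)
    float velocity    = 0.0f;   // current velocity into the boundary (px/s)
};

void StepSpringBoundary(SpringBoundaryState& s, float dt,
                        float kSpring = 48.0f,    // MotionParameter_SpringFactor default
                        float kPower  = 0.75f,    // MotionParameter_SpringPower default
                        float kDamper = 0.09f,    // MotionParameter_DamperFactor default
                        float kLimit  = 300.0f)   // MotionParameter_CompressLimit default
{
    // Resistance counteracting the inertia force; kPower shapes the curve.
    float springForce = kSpring * std::pow(std::max(s.compression, 0.0f), kPower);
    // Damping removes a percentage of the energy to avoid repeated compression cycles.
    s.velocity = (s.velocity - springForce * dt) * (1.0f - kDamper);
    s.compression = std::clamp(s.compression + s.velocity * dt, 0.0f, kLimit);
}
```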
- FIG. 10 is a graph of position changes in a UI element over time according to the second example boundary effects model. According to the graph shown in FIG. 10 , a compression effect occurs during the time that the position of the UI element exceeds the boundary position (indicated by the dashed line 1010 in FIG. 10 ).
- the compression line can indicate the position of a boundary in a UI element.
- the shape of the position curve 1020 can be modified in different ways, such as by adjusting coefficients.
- the uppermost tip of the boundary effect curve 1020 can be made to go higher (e.g., up to a configurable limit) or lower for a particular initial velocity.
- a higher tip of the curve can indicate a greater compression effect, and a lower tip can indicate a lesser compression effect.
- the duration of the compression can be adjusted to be shorter or longer. In FIG. 10 , the duration is represented by the distance between the points at which the line 1010 is crossed by the curve 1020 .
- the damper factor coefficient can be adjusted in combination or independently, and other values besides those indicated can be adjusted as well, to cause changes in position. Different combinations of adjustments can be used to obtain specific shapes in the position curve 1020 .
- a current inertia velocity v c and a current touch contact position q c can be updated to reflect the physics interaction of the boundary effect.
- the updated velocity v′ c and updated touch contact position q′ c are calculated according to the following equations:
- v_n = max(0, v_c - (F_s * k_d + max(0, r' - k_L * Δt))) (Eq. 39)
- v'_c = v_n if r'' > 0, or 0 if r'' ≤ 0 (Eq. 40)
- q'_c = q_c - r' if r'' > 0, or q_c - d if r'' ≤ 0 (Eq. 41)
- Other boundary feedback models can permit movement beyond a boundary (e.g., wrapping back to the beginning of a list after the end of the list has been reached). For example, where the compression is caused by dragging, the list can wrap around once a threshold compression has been reached.
- boundary effects can be omitted.
- a UI system can provide programmatic access to system-wide values (e.g., inertia values, boundary effect values).
- system-wide values can help in maintaining consistent UI behavior across components and frameworks, and can allow adjustments to the behavior in multiple UI elements at once. For example, inertia effects in multiple UI elements can be changed by adjusting system-wide inertia values.
- an API is included in the ITouchSession module: HRESULT GetMotionParameterValue(IN MotionParameter ID, OUT float* value).
- the identifiers and default values for the coefficients whose values are accessible through the ITouchSession::GetMotionParameterValue( ) API are as follows:
```cpp
MotionParameter_Friction,        // default: 0.4f
MotionParameter_ParkingSpeed,    // default: 60.0f
MotionParameter_MaximumSpeed,    // default: 20000.0f
MotionParameter_SpringFactor,    // default: 48.0f
MotionParameter_SpringPower,     // default: 0.75f
MotionParameter_DamperFactor,    // default: 0.09f
MotionParameter_CompressLimit,   // default: 300.0f
MotionParameter_CompressPercent, // default: 0.0f
MotionParameter_CompressOffsetX, // default: 720.0f
MotionParameter_CompressOffsetY, // default: 1200.0f
```
- the values that are accessible through the API can vary depending on implementation.
- a UI system that uses the first example boundary effects model described above can omit values such as spring factor, spring power, and damper factor values.
- a UI system can use additional values or replace the listed default values with other default values. Values can be fixed or adjustable, and can be updated during operation of the system (e.g., based on system settings or user preferences).
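- A usage sketch for the GetMotionParameterValue( ) API described above follows. The minimal ITouchSession and MotionParameter declarations are repeated here only so the sketch is self-contained (their real layout is not shown in this example), and how an ITouchSession instance is obtained is not described.

```cpp
#include <windows.h>   // HRESULT, SUCCEEDED

// Minimal declarations matching the documented signature (assumed layout).
enum MotionParameter { MotionParameter_Friction /* , ... */ };

struct ITouchSession {
    virtual HRESULT GetMotionParameterValue(MotionParameter id, float* value) = 0;
};

// Read the system-wide friction coefficient, falling back to the documented
// default (0.4f) if no session is available or the call fails.
float GetFrictionOrDefault(ITouchSession* session)
{
    float value = 0.0f;
    if (session && SUCCEEDED(session->GetMotionParameterValue(MotionParameter_Friction, &value)))
        return value;
    return 0.4f;
}
```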
- FIG. 11 is a system diagram showing an example UI system 1100 that presents a UI on a device (e.g., a smartphone or other mobile computing device).
- the UI system 1100 is a multi-layer UI system that presents motion feedback (e.g., parallax effects, boundary effects, etc.).
- Alternatively, the system 1100 can present motion feedback in UIs that do not have multiple UI layers.
- the system 1100 can be used to implement functionality described in other examples, or other functionality.
- the system 1100 includes a hub module 1110 that provides a declarative description of a hub page to UI control 1120 , which controls display of UI layers.
- UI control 1120 also can be referred to as a “panorama” or “pano” control in a multi-layer UI system. Such a designation can be used, for example, when the UI layers move in a panoramic, or horizontal, fashion. Alternatively, UI control 1120 controls UI layers that move vertically, or in some other fashion.
- UI control 1120 includes markup generator 1130 and motion module 1140 .
- the declarative description of the hub page includes information that defines UI elements.
- UI elements can include multiple layers, such as a background layer, a title layer, a section header layer, and a content layer.
- the declarative description of the hub page is provided to markup generator 1130 , along with other information such as style information and/or configuration properties.
- Markup generator 1130 generates markup that can be used to render the UI layers.
- Motion module 1140 accepts events (e.g., direct UI manipulation events) generated in response to user input and generates motion commands. The motion commands are provided along with the markup to a UI framework 1150 .
- the markup and motion commands are received in layout module 1152 , which generates UI rendering requests to be sent to device operating system (OS) 1160 .
- the device OS 1160 receives the rendering requests and causes a rendered UI to be output to a display on the device.
- System components such as hub module 1110 , UI control 1120 , and UI framework 1150 also can be implemented as part of device OS 1160 .
- the device OS 1160 is a mobile computing device OS.
- a user can generate user input that affects how the UI is presented.
- the UI control 1120 listens for direct UI manipulation events generated by UI framework 1150 .
- direct UI manipulation events are generated by interaction module 1154 , which receives gesture messages (e.g., messages generated in response to panning or flick gestures by a user interacting with a touchscreen on the device) from device OS 1160 .
- Interaction module 1154 also can accept and generate direct UI manipulation events for navigation messages generated in response to other kinds of user input, such as voice commands, directional buttons on a keypad or keyboard, trackball motions, etc.
- Device OS 1160 includes functionality for recognizing user gestures and creating messages that can be used by UI framework 1150 .
- UI framework 1150 translates gesture messages into direct UI manipulation events to be sent to UI control 1120 .
- the system 1100 can distinguish between different gestures on the touchscreen, such as drag gestures, pan gestures and flick gestures.
- the system 1100 can also detect a tap or touch gesture, such as where the user touches the touchscreen in a particular location, but does not move the finger, stylus, etc. before breaking contact with the touchscreen. As an alternative, some movement is permitted, within a small threshold, before breaking contact with the touchscreen in a tap or touch gesture.
- the system 1100 interprets an interaction as a particular gesture depending on the nature of the interaction with the touchscreen.
- the system 1100 obtains one or more discrete inputs from a user's interaction.
- a gesture can be determined from a series of inputs. For example, when the user touches the touchscreen and begins a movement in a UI element in a horizontal direction while maintaining contact with the touchscreen, the system 1100 can fire a pan input and begin a horizontal movement in the UI element.
- the system 1100 can continue to fire pan inputs while the user maintains contact with the touchscreen and continues moving. For example, the system 1100 can fire a new pan input each time the user moves N pixels while maintaining contact with the touchscreen.
- a continuous physical gesture on a touchscreen can be interpreted by the system 1100 as a series of pan inputs.
- the system 1100 can continuously update the contact position and rate of movement.
- the system 1100 can determine whether to interpret the motion at the end as a flick by determining how quickly the user's finger, stylus, etc., was moving when it broke contact with the touchscreen, and whether the rate of movement exceeds a threshold.
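- A sketch of this classification step follows. The threshold value is an assumption (the text notes only that a threshold is used and, below, that it can vary by implementation).

```cpp
#include <cmath>

struct Vec2 { float x; float y; };   // illustrative 2D vector type

enum class GestureEnd { Pan, Flick };

// Classify the end of a touch interaction: a flick if the speed at the moment
// contact was broken exceeds a threshold, otherwise a pan.
GestureEnd ClassifyGestureEnd(const Vec2& releaseVelocity,      // px/s at contact break
                              float flickThreshold = 1000.0f)   // assumed threshold value
{
    float speed = std::hypot(releaseVelocity.x, releaseVelocity.y);
    return (speed > flickThreshold) ? GestureEnd::Flick : GestureEnd::Pan;
}
```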
- the system 1100 can render motion (e.g., motion in a layer, list, or other UI element) on the display differently depending on the type of gesture. For example, in the case of a horizontal drag gesture (in which the user is currently maintaining contact with the touchscreen) on a content layer in a multi-layer UI system, the system 1100 moves the content layer in a horizontal direction by the same distance as the horizontal distance of the drag. In a parallax effect, the title layer and background layer also move in response to the drag. As another example, in the case of a pan gesture (in which the user has ended the gesture) on the content layer, the system 1100 can move the content layer in the amount of the pan, and determine whether to perform an additional movement in the content layer.
- the system 1100 can perform a locking animation (i.e., an animation of a movement in the content layer to snap to a lock point) and move the content layer to a left or right lock point associated with an item in the content layer.
- the system 1100 can determine which lock point associated with the current pane is closer, and transition to the closer lock point.
- the system 1100 can move the content layer in order to bring an item in the content layer that is in partial view on the display area into full view.
- the system 1100 can maintain the current position of the content layer.
- the system 1100 can use simulated inertia to determine a post-gesture position for the content layer.
- the system 1100 can present some other kind of motion, such as a wrapping animation or other transition animation.
- the threshold velocity for a flick to be detected (i.e., to distinguish a flick gesture from a pan gesture) can vary depending on implementation.
- the system 1100 also can implement edge tap functionality.
- a user can tap within a given margin of edges of the display area to cause a transition (e.g., to a next or previous item in a content layer, a next or previous list element, etc.). This can be useful, for example, where an element is partially in view in the display area. The user can tap near the element to cause the system to bring that element completely into the display area.
- described examples show different positions of UI elements (e.g., layers, lists, etc.) that may be of interest to a user.
- a user can begin navigation of an element at the beginning of an element, or use different entry points.
- a user can begin interacting in the middle of a content layer, at the end of a content layer, etc. This can be useful, for example, where a user has previously exited at a position other than the beginning of a layer (e.g., the end of a layer), so that the user can return to the prior location (e.g., before and after a user uses an application (such as an audio player) invoked by actuating a content image).
- controls can share global parameters, such as a global friction coefficient for inertia motion
- parameters can be customized.
- friction coefficients can be customized for specific controls or content, such as friction coefficients that result in more rapid deceleration of inertia motion for photos or photo slide shows.
- boundary feedback can be applied to pinch and stretch gestures. Such boundary feedback can be useful, for example, to indicate that a border of the UI element has been reached.
- additional feedback on gestures can be used.
- visual feedback such as a distortion effect can be used to alert a user that a UI element with zoom capability (e.g., a map or image) has reached a maximum or minimum zoom level.
- boundary effects such as compression effects can themselves produce inertia movement.
- the decompression can be combined with a spring or rebound effect, causing the list to scroll in the opposite direction of the motion that originally caused the compression.
- the spring effect could provide boundary feedback to indicate that the end of list had been reached, while also providing an alternative technique for navigating the list.
- the spring effect could be used to cause a movement in the list similar to a flick in the opposite direction. Inertia motion can be applied to motion caused by the spring effect.
- FIG. 12 illustrates a generalized example of a suitable computing environment 1200 in which several of the described embodiments may be implemented.
- the computing environment 1200 is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools described herein may be implemented in diverse general-purpose or special-purpose computing environments.
- the computing environment 1200 includes at least one CPU 1210 and associated memory 1220 .
- the processing unit 1210 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power.
- FIG. 12 shows a second processing unit 1215 (e.g., a GPU or other co-processing unit) and associated memory 1225 , which can be used for video acceleration or other processing.
- the memory 1220 , 1225 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two.
- The memory 1220 , 1225 stores software 1280 for implementing a system with one or more of the described techniques and tools.
- a computing environment may have additional features.
- the computing environment 1200 includes storage 1240 , one or more input devices 1250 , one or more output devices 1260 , and one or more communication connections 1270 .
- An interconnection mechanism such as a bus, controller, or network interconnects the components of the computing environment 1200 .
- operating system software provides an operating environment for other software executing in the computing environment 1200 , and coordinates activities of the components of the computing environment 1200 .
- the storage 1240 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, memory cards, or any other medium which can be used to store information and which can be accessed within the computing environment 1200 .
- the storage 1240 stores instructions for the software 1280 implementing described techniques and tools.
- the input device(s) 1250 may be a touch input device such as a keyboard, mouse, pen, trackball or touchscreen, an audio input device such as a microphone, a scanning device, a digital camera, or another device that provides input to the computing environment 1200 .
- the input device(s) 1250 may be a video card, TV tuner card, or similar device that accepts video input in analog or digital form, or a CD-ROM or CD-RW that reads video samples into the computing environment 1200 .
- the output device(s) 1260 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 1200 .
- the communication connection(s) 1270 enable communication over a communication medium to another computing entity.
- the communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal.
- a modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
- Computer-readable media are any available media that can be accessed within a computing environment.
- Computer-readable media include memory 1220 , 1225 , storage 1240 , and combinations thereof.
- program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- the functionality of the program modules may be combined or split between program modules as desired in various embodiments.
- Computer-executable instructions for program modules may be executed within a local or distributed computing environment. Any of the methods described herein can be implemented by computer-executable instructions encoded on one or more computer-readable media (e.g., computer-readable storage media or other tangible media).
- FIG. 13 illustrates a generalized example of a suitable implementation environment 1300 in which described embodiments, techniques, and technologies may be implemented.
- various types of services are provided by a cloud 1310 .
- the cloud 1310 can comprise a collection of computing devices, which may be located centrally or distributed, that provide cloud-based services to various types of users and devices connected via a network such as the Internet.
- the cloud computing environment 1300 can be used in different ways to accomplish computing tasks. For example, with reference to described techniques and tools, some tasks, such as processing user input and presenting a user interface, can be performed on a local computing device, while other tasks, such as storage of data to be used in subsequent processing, can be performed elsewhere in the cloud.
- the cloud 1310 provides services for connected devices with a variety of screen capabilities 1320 A-N.
- Connected device 1320 A represents a device with a mid-sized screen.
- connected device 1320 A could be a personal computer such as a desktop computer, laptop, notebook, netbook, or the like.
- Connected device 1320 B represents a device with a small-sized screen.
- connected device 1320 B could be a mobile phone, smart phone, personal digital assistant, tablet computer, and the like.
- Connected device 1320 N represents a device with a large screen.
- connected device 1320 N could be a television (e.g., a smart television) or another device connected to a television or projector screen (e.g., a set-top box or gaming console).
- a variety of services can be provided by the cloud 1310 through one or more service providers (not shown).
- the cloud 1310 can provide services related to mobile computing to one or more of the various connected devices 1320 A-N.
- Cloud services can be customized to the screen size, display capability, or other functionality of the particular connected device (e.g., connected devices 1320 A-N).
- cloud services can be customized for mobile devices by taking into account the screen size, input devices, and communication bandwidth limitations typically associated with mobile devices.
- FIG. 14 is a system diagram depicting an exemplary mobile device 1400 including a variety of optional hardware and software components, shown generally at 1402 . Any components 1402 in the mobile device can communicate with any other component, although not all connections are shown, for ease of illustration.
- the mobile device can be any of a variety of computing devices (e.g., cell phone, smartphone, handheld computer, personal digital assistant (PDA), etc.) and can allow wireless two-way communications with one or more mobile communications networks 1404 , such as a cellular or satellite network.
- the illustrated mobile device can include a controller or processor 1410 (e.g., signal processor, microprocessor, ASIC, or other control and processing logic circuitry) for performing such tasks as signal coding, data processing, input/output processing, power control, and/or other functions.
- An operating system 1412 can control the allocation and usage of the components 1402 and support for one or more application programs 1414 .
- the application programs can include common mobile computing applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications), or any other computing application.
- the illustrated mobile device can include memory 1420 .
- Memory 1420 can include non-removable memory 1422 and/or removable memory 1424 .
- the non-removable memory 1422 can include RAM, ROM, flash memory, a disk drive, or other well-known memory storage technologies.
- the removable memory 1424 can include flash memory or a Subscriber Identity Module (SIM) card, which is well known in GSM communication systems, or other well-known memory storage technologies, such as smart cards.
- the memory 1420 can be used for storing data and/or code for running the operating system 1412 and the applications 1414 .
- Example data can include web pages, text, images, sound files, video data, or other data sets to be sent to and/or received from one or more network servers or other mobile devices via one or more wired or wireless networks.
- the memory 1420 can be used to store a subscriber identifier, such as an International Mobile Subscriber Identity (IMSI), and an equipment identifier, such as an International Mobile Equipment Identifier (IMEI). Such identifiers can be transmitted to a network server to identify users and equipment.
- the mobile device can support one or more input devices 1430 , such as a touchscreen 1432 , microphone 1434 , camera 1436 , physical keyboard 1438 and/or trackball 1440 and one or more output devices 1450 , such as a speaker 1452 and a display 1454 .
- Other possible output devices can include a piezoelectric or other haptic output device. Some devices can serve more than one input/output function.
- touchscreen 1432 and display 1454 can be combined in a single input/output device.
- Touchscreen 1432 can accept input in different ways. For example, capacitive touchscreens detect touch input when an object (e.g., a fingertip or stylus) distorts or interrupts an electrical current running across the surface. As another example, touchscreens can use optical sensors to detect touch input when beams from the optical sensors are interrupted. Physical contact with the surface of the screen is not necessary for input to be detected by some touchscreens.
- a wireless modem 1460 can be coupled to an antenna (not shown) and can support two-way communications between the processor 1410 and external devices, as is well understood in the art.
- the modem 1460 is shown generically and can include a cellular modem for communicating with the mobile communication network 1404 and/or other radio-based modems (e.g., Bluetooth or Wi-Fi).
- the wireless modem 1460 is typically configured for communication with one or more cellular networks, such as a GSM (Global System for Mobile communications) network for data and voice communications within a single cellular network, between cellular networks, or between the mobile device and a public switched telephone network (PSTN).
- the mobile device can further include at least one input/output port 1480 , a power supply 1482 , a satellite navigation system receiver 1484 , such as a Global Positioning System (GPS) receiver, an accelerometer 1486 , a transceiver 1488 (for wirelessly transmitting analog or digital signals) and/or a physical connector 1490 , which can be a USB port, IEEE 1394 (FireWire) port, and/or RS-232 port.
- the illustrated components 1402 are not required or all-inclusive, as components can be deleted and other components can be added.
Abstract
Description
- This application claims the benefit of U.S. Provisional Patent Application No. 61/304,004, filed on Feb. 12, 2010, entitled “MULTI-LAYER USER INTERFACE WITH FLEXIBLE MOVEMENT,” which is incorporated herein by reference.
- The design of an effective user interface poses many challenges. One challenge is how to provide a user with an optimal amount of visual information or functionality, given the space limitations of a display and the needs of a particular user. This challenge can be especially acute for devices with small displays, such as smartphones or other mobile computing devices. This is because there is often more information available to a user performing a particular activity (e.g., browsing for audio or video files in a library of files) than can fit on the display. A user can easily become lost unless careful attention is paid to how information is presented on the limited amount of available display space. Visual cues are useful for indicating, for example, a user's location when browsing a list or other collection of data, since it is often not possible to show an entire collection (e.g., a list of contacts stored in a smartphone) on a small display.
- Another challenge is how to provide a high level of functionality while maintaining a satisfying and consistent user experience. As devices have become more complex, and as consumers have become more demanding, it has become increasingly difficult to design user interfaces that are convenient and pleasing for users, without sacrificing reliability, flexibility, functionality or performance.
- Whatever the benefits of previous techniques, they do not have the advantages of the techniques and tools presented below.
- Techniques and tools are described that relate to different aspects of a user interface that provides visual feedback in response to user input. For example, boundary effects are presented to provide visual cues to a user to indicate that a boundary in a movable user interface element (e.g., the end of a scrollable list) has been reached. As another example, parallax effects are presented in which multiple parallel or substantially parallel layers in a multi-layer user interface move at different rates, in response to user input. As another example, simulated inertia motion of UI elements is used to provide a more natural feel for touch input. Various combinations of features are described. For example, simulated inertia motion can be used in combination with parallax effects, boundary effects, or other types of visual feedback.
- In one aspect, a user interface (UI) system receives gesture information corresponding to a gesture on a touch input device. The UI system calculates simulated inertia motion for a movable user interface element based at least in part on the gesture information, and potentially on other inertia information such as a friction coefficient or a parking speed coefficient. Based at least in part on the gesture information and on the simulated inertia motion, the UI system calculates a post-gesture position of the movable user interface element. The UI system determines that the post-gesture position exceeds a gesture boundary of the movable user interface element, and calculates a distortion effect (e.g., a squeeze, compression or squish effect) in the movable user interface element to indicate that the gesture boundary has been exceeded. Calculating the distortion effect can include, for example, determining an extent by which the gesture boundary has been exceeded, determining a compressible area of the movable user interface element, determining a scale factor for the distortion effect based at least in part on the compressible area and the extent by which the gesture boundary has been exceeded, and scaling the compressible area according to the scale factor. The distortion effect can be calculated based on a distortion point (which, for compression, can be referred to as a compression point or squeeze point), which can indicate the part of the UI element to be distorted.
- In another aspect, user input (e.g., a gesture on a touch screen) indicates movement in a graphical user interface element having plural movable layers. Based at least in part on inertia information and the user input, a UI system calculates a first motion having a first movement rate in a first layer of the plural movable layers, and calculates a parallax motion in a second layer of the plural movable layers. The parallax motion is based at least in part on the first motion (and potentially simulated inertia motion), and the parallax motion comprises a movement of the second layer at a second movement rate that differs from the first movement rate. The parallax motion can be calculated based on, for example, a parallax constant for the second layer, or an amount of displayable data in the second layer.
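- As an illustration of how a second layer's movement rate might be derived from the first layer's motion, the following sketch (in Python) scales a content-layer offset either by a fixed parallax constant or by the ratio of the layers' scrollable widths. The function name, parameter names and the specific formulas are assumptions chosen for illustration; they are a minimal sketch rather than a prescribed implementation.

```python
# A minimal sketch of deriving a slower, "parallax" offset for a second layer
# from the content layer's offset. Names and formulas are illustrative assumptions.

def parallax_offset(content_offset, content_width, layer_width,
                    display_width, parallax_constant=None):
    """Map a content-layer offset to a slower offset for a background/title layer."""
    if parallax_constant is not None:
        # Fixed ratio: the second layer always moves at a constant fraction
        # of the content layer's rate.
        return content_offset * parallax_constant

    # Otherwise scale by the amount of displayable data: a layer with less
    # content than the content layer pans proportionally less, so both layers
    # reach their ends at the same time.
    content_range = max(content_width - display_width, 1.0)
    layer_range = max(layer_width - display_width, 0.0)
    return content_offset * (layer_range / content_range)

# Example: a 480-px-wide display, a 2000-px content layer and a 1000-px title layer.
print(parallax_offset(300.0, 2000.0, 1000.0, 480.0))        # ratio derived from widths
print(parallax_offset(300.0, 2000.0, 1000.0, 480.0, 0.5))   # fixed parallax constant
```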
- In another aspect, a UI system receives gesture information corresponding to a gesture on a touch input device, the gesture information indicating a movement of a user interface element having a movement boundary. Based at least in part on the gesture information, the UI system computes a new position of the user interface element. Based at least in part on the new position, the UI system determines that the movement boundary has been exceeded. The UI system determines an extent by which the movement boundary has been exceeded, determines a compressible area of the user interface element, determines a scale factor for a distortion effect based at least in part on the compressible area and the extent by which the movement boundary has been exceeded, and presents a distortion effect in the user interface element. The distortion effect comprises a visual compression of content in the compressible area (e.g., text, images, graphics, video or other displayable content) according to the scale factor. Depending, for example, on the size of the compressible area and the size of the display area, some parts of the compressible area may not be visible on a display, so the distortion can be virtual (e.g., in areas that are not visible on a display) or the distortion can be actually displayed, or some parts of the distorted content can be displayed while other parts of the distorted content are not displayed. The visual compression is in a dimension that corresponds to the movement of the user interface element. For example, a vertical movement in a UI element that exceeds a movement boundary can cause content in the UI element to be vertically compressed or squeezed.
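- The following sketch illustrates one way the scale factor described above could be computed for a vertically scrolling element. The function names (squeeze_scale_factor, apply_squeeze) and the particular formula relating the compressible area to the overshoot are assumptions for illustration only.

```python
# A minimal sketch of a distortion (squeeze) scale factor, assuming a formula
# of the form compressible / (compressible + overshoot); the exact relationship
# is an illustrative assumption.

def squeeze_scale_factor(post_gesture_pos, boundary, compressible_height):
    """Return a 0..1 factor applied to the compressible area in the dimension
    of movement; 1.0 means no distortion."""
    overshoot = max(0.0, post_gesture_pos - boundary)  # extent the boundary is exceeded
    if overshoot == 0.0 or compressible_height <= 0.0:
        return 1.0
    # The farther past the boundary the motion goes, the more the compressible
    # area is squeezed, approaching (but never reaching) fully flat.
    return compressible_height / (compressible_height + overshoot)

def apply_squeeze(item_heights, scale):
    """Scale each item in the compressible area proportionally (a 'squish')."""
    return [h * scale for h in item_heights]

scale = squeeze_scale_factor(post_gesture_pos=1250.0, boundary=1200.0,
                             compressible_height=400.0)
print(scale, apply_squeeze([40.0, 40.0, 40.0], scale))
```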
- The foregoing and other objects, features, and advantages of the invention will become more apparent from the following detailed description, which proceeds with reference to the accompanying figures.
- FIGS. 1A-1C and 2 are flow charts showing example techniques for presenting motion feedback in user interface elements, according to one or more described embodiments.
- FIG. 3 is a diagram showing a boundary effect, according to one or more described embodiments.
- FIGS. 4A-4C are diagrams showing parallax effects, according to one or more described embodiments.
- FIGS. 5 and 6A-6E are diagrams showing parallax effects and boundary effects in a user interface having parallel layers, according to one or more described embodiments.
- FIGS. 7A, 7B, 8A and 8B are diagrams showing gesture boundary areas which can be used to determine whether to present boundary effects, according to one or more described embodiments.
- FIG. 9 is a diagram showing example pinch and stretch gestures, according to one or more described embodiments.
- FIG. 10 is a graph showing changes in position over time of a UI element that exhibits a boundary feedback effect, according to one or more described embodiments.
- FIG. 11 is a system diagram showing a UI system in which described embodiments can be implemented.
- FIG. 12 illustrates a generalized example of a suitable computing environment in which several of the described embodiments may be implemented.
- FIG. 13 illustrates a generalized example of a suitable implementation environment in which one or more described embodiments may be implemented.
- FIG. 14 illustrates a generalized example of a mobile computing device in which one or more described embodiments may be implemented.
- Techniques and tools are described that relate to different aspects of a user interface that provides visual feedback in response to user input. For example, boundary effects are presented to provide visual cues to a user to indicate that a boundary in a movable user interface element (e.g., the end of a scrollable list) has been reached. As another example, parallax effects are presented in which multiple parallel or substantially parallel layers in a multi-layer user interface move at different rates, in response to user input. As another example, simulated inertia motion of UI elements is used to provide a more natural feel for touch input. Various combinations of features are described. In one implementation, a UI system that accepts touch input includes detailed motion rules (e.g., rules for interpreting different kinds of touch input, rules for presenting inertia motion in UI elements in response to touch input, rules for determining boundaries in UI elements, etc.). The motion rules can be combined with various combinations of optional motion features such as parallax effects, boundary effects, and other visual feedback. The visual feedback that is presented according to motion rules and optional motion features in a UI element can depend on many factors, such as the type of the UI element and the content of the UI element.
- Various alternatives to the implementations described herein are possible. For example, techniques described with reference to flowchart diagrams can be altered by changing the ordering of stages shown in the flowcharts, by repeating or omitting certain stages, etc. As another example, systems described with reference to system diagrams can be altered by changing the ordering of processing stages shown in the diagrams, by repeating or omitting certain stages, etc. As another example, user interfaces described with reference to diagrams can be altered by changing the content or arrangement of user interface features shown in the diagrams, by omitting certain features, etc. As another example, although some implementations are described with reference to specific devices and user input mechanisms (e.g., mobile devices with a touchscreen interface), described techniques and tools can be used with other devices and/or user input mechanisms.
- The various techniques and tools can be used in combination or independently. Different embodiments implement one or more of the described techniques and tools.
- As devices have become more complex, and as consumers have become more demanding, it has become increasingly difficult to design user interfaces that are convenient and pleasing for users, without sacrificing reliability, flexibility, functionality or performance. The feel of a user interface (UI) is becoming increasingly important to distinguish the underlying product from its competitors. An important contributor to the feel of a UI is how it reacts when a user interacts with it. This is especially true for touch-based interfaces.
- Accordingly, techniques and tools are described for providing feedback (e.g., visual cues such as parallax effects, boundary effects, etc.) to users in response to user input (e.g., touch input). In some embodiments, movements in elements (also referred to as "controls") are based at least in part on user input (e.g., gestures on a touchscreen) and an inertia model. For example, a movement in a UI element can be extended beyond the actual size of a gesture on a touchscreen by applying inertia to the movement. Applying inertia to a movement in a UI element typically involves performing one or more calculations using gesture information (e.g., a gesture start position, a gesture end position, gesture velocity and/or other information) and one or more inertia motion values (e.g., friction coefficients) to determine a post-gesture state (e.g., a new position) for the UI element. Simulated inertia motion can be used in combination with other effects (e.g., parallax effects, boundary effects, etc.) to provide feedback to a user. In any of the examples herein, movements in UI elements can be rendered for display (e.g., depicting calculated distortion, parallax, or other effects, if any).
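- One common way to realize such simulated inertia is to decay the end-of-gesture velocity with a friction coefficient each frame until it falls below a small parking speed, as sketched below. The coefficient values and the frame-based integration are assumptions chosen for illustration, not values taken from this description.

```python
# A minimal sketch of extending a gesture with simulated inertia, assuming a
# per-frame friction decay and a "parking" speed at which motion stops.

def inertia_positions(start_pos, release_velocity,
                      friction=0.95,       # per-frame decay factor (assumed)
                      parking_speed=20.0,  # stop when slower than this, px/s (assumed)
                      frame_dt=1.0 / 60.0):
    """Yield successive positions of the UI element after the finger lifts."""
    pos, vel = start_pos, release_velocity
    while abs(vel) > parking_speed:
        pos += vel * frame_dt       # advance by the current velocity
        vel *= friction             # friction bleeds off speed every frame
        yield pos

# Example: a flick released at 1500 px/s keeps the element moving for a short while.
final = None
for final in inertia_positions(start_pos=0.0, release_velocity=1500.0):
    pass
print(final)
```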
- Movement in UI elements typically depends to some extent on user interaction. For example, a user that wishes to navigate from one part of a UI element to another (e.g., from the beginning of a scrollable list to the end of the list) provides user input to indicate a desired movement. The user input can then cause movement in the UI element and potentially other elements in the user interface. In some embodiments, a user causes movement in a display area of a device by interacting with a touchscreen. The interaction can include, for example, contacting the touchscreen with a fingertip, stylus or other object and moving it (e.g., with a flicking or sweeping motion) across the surface of the touchscreen to cause movement in a desired direction. Alternatively, a user can interact with a user interface in some other way, such as by pressing buttons (e.g., directional buttons) on a keypad or keyboard, moving a trackball, pointing and clicking with a mouse, making a voice command, etc.
- The actual amount and direction of the user's motion that is necessary to produce particular movements can vary depending on implementation or user preferences. For example, a user interface system can include a default setting that is used to calculate the amount of motion (e.g., in terms of pixels) as a function of the size or rate of a user movement. As another example, a user can adjust a touchscreen sensitivity control, such that the same motion of a fingertip or stylus on a touchscreen will produce smaller or larger movements, depending on the setting of the control. Gestures can be made in various directions to cause movement in UI elements. For example, upward and downward gestures can cause upward or downward movements, respectively, while rightward and leftward gestures can cause rightward and leftward movements, respectively. Upward/downward motion can even be combined with left/right motion for diagonal movements. Other kinds of motion, such as non-linear motion (e.g., curves) or bi-directional motion (e.g., pinch or stretch motions made with multiple contact points on a touchscreen) also can be used to cause movement in UI elements.
- FIG. 1A is a flow chart showing a general technique 100 for providing motion feedback in a UI. At 101, a device receives user input indicating motion in a UI element. For example, a UI system on a mobile device receives gesture information corresponding to a gesture on a touchscreen on the mobile device. At 102, the device determines whether inertia will be applied to the motion indicated by the user input. For example, a UI system determines based on gesture information (e.g., gesture start position, gesture end position, gesture direction, gesture velocity) whether to apply inertia to the motion in the UI element. At 103, the device determines whether visual effects (e.g., boundary effects, parallax effects, etc.) will be applied to the motion indicated by the user input. For example, the device determines whether to apply a distortion effect (e.g., a compression or squeeze effect) to indicate that a boundary in the UI element (e.g., a boundary at the end of a scrollable list) has been reached. As another example, the device determines whether to apply a parallax effect (e.g., by moving parallel layers in a multi-layer UI element at different rates). The applied effects also can be based on inertia, where inertia is applied to the motion indicated by the user input. For example, if a UI system applies inertia to a movement and calculates, based on the inertia, a new position for a UI element that is outside a boundary for the UI element, the UI system can apply a boundary effect to provide a visual indicator that the boundary has been reached. At 104, the motion in the UI element is rendered for display.
- FIG. 1B is a flow chart showing a technique 110 for providing boundary effects in combination with inertia motion. At 111, a UI system receives gesture information corresponding to a gesture. For example, the UI system receives gesture coordinates and velocity information for the gesture. At 112, the UI system calculates inertia motion based on the gesture information. For example, the UI system determines that inertia motion is applied based on the velocity information, and calculates a duration of inertia motion for the gesture. At 113, the UI system calculates a post-gesture position based on the gesture information and the inertia motion. For example, the UI system calculates the post-gesture position based on the gesture coordinates and the duration of the inertia motion. At 114, the UI system determines that a boundary for the UI element has been exceeded. For example, the UI system compares one or more coordinates (e.g., vertical or horizontal coordinates) of the post-gesture position and determines an extent by which the post-gesture position exceeds the boundary. At 115, the UI system calculates a distortion effect to indicate that the boundary has been exceeded. For example, the UI system calculates a squeeze or compression effect in the content of the UI element based on the extent to which the post-gesture position exceeds the boundary.
- FIG. 1C is a flow chart showing a technique 120 for providing parallax effects in combination with inertia motion. At 121, a UI system receives user input indicating motion in a UI element having plural layers. For example, the UI system receives gesture coordinates and velocity information for a gesture on a touch screen, where the gesture is directed to a content layer in a multi-layer UI. At 122, the UI system calculates motion in a first layer based on inertia information and the user input. For example, the UI system determines that inertia motion should be applied to movement in the content layer based on the velocity information, and calculates a duration of inertia motion for the movement. At 123, the UI system calculates a parallax motion in a second layer based on the first motion in the first layer. For example, the UI system calculates the parallax motion in a layer above the content layer based on the motion in the content layer, with the parallax motion having a different movement rate than the motion in the content layer. The parallax motion also can include inertia motion, or inertia motion can be omitted in the parallax motion.
- In any of the above techniques, any combination of the inertia, boundary, parallax, distortion, and other effects described herein can be applied. Depending on implementation and the type of processing desired, processing stages shown in example techniques 100, 110 and 120 can be rearranged, added, omitted, split into multiple stages, combined with other stages, and/or replaced with like stages.
- FIG. 2 is a flow chart showing a detailed example technique 200 for providing visual feedback in a UI in response to a user gesture.
- At 210, a UI system on a device receives touch input information in a touch input stream. For example, the touch input stream comprises data corresponding to a gesture on a touchscreen of a mobile device. Data received from the touch input stream can include, for example, gesture information such as a gesture start position, a gesture end position, and timestamps for the gesture. The touch input stream is typically received from a device operating system, which converts raw data received from a touch input device (e.g., a touchscreen) into gesture information. Alternatively, data received from the touch input stream can include other information, or gesture information can be received from some other source.
- At 212, filtering is applied to the touch input stream. In the filtering stage, one or more algorithms are applied to the touch input stream coming from the OS to filter out or correct anomalous data. For example, the filtering stage can correct misaligned touch data caused by jittering (e.g., values that are not aligned with previous inputs) or filter out spurious touch contact points (e.g., incorrect interpretation of a single touch point as multiple touch points that are close together), etc. As another example, if only single-touch-point gestures are allowed, the filtering stage can convert any multi-touch input into a single-touch input. Alternatively, touch input filtering can be performed during generating of the touch input stream (e.g., at the device OS). As another alternative, touch input filtering can be performed during a coordinate space transform stage (e.g., coordinate space transform 220). As another alternative, touch input filtering can be omitted.
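- The sketch below illustrates the kind of clean-up such a filtering stage might perform: suppressing sub-threshold jitter and collapsing spurious extra contacts to a single touch point. The jitter threshold and the keep-the-first-contact rule are illustrative assumptions, not specifics of the filtering described here.

```python
# A minimal sketch of touch input filtering, assuming samples arrive from the OS
# as (timestamp, contact_id, x, y) tuples; the heuristics are illustrative only.

def filter_touch_stream(samples, jitter_px=2.0):
    """Return a filtered single-contact stream with small jitter suppressed."""
    filtered = []
    last_xy = {}
    primary_id = None
    for t, cid, x, y in samples:
        if primary_id is None:
            primary_id = cid
        if cid != primary_id:
            continue                      # collapse multi-touch to a single contact
        if cid in last_xy:
            lx, ly = last_xy[cid]
            if abs(x - lx) < jitter_px and abs(y - ly) < jitter_px:
                x, y = lx, ly             # suppress sub-threshold jitter
        last_xy[cid] = (x, y)
        filtered.append((t, cid, x, y))
    return filtered

print(filter_touch_stream([(0.00, 1, 100.0, 200.0),
                           (0.01, 2, 300.0, 305.0),   # spurious second contact
                           (0.02, 1, 101.0, 201.0)])) # within jitter threshold
```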
- At 220, the UI system applies a coordinate space transform to data in the touch input stream corresponding to the gesture. For example, a coordinate space transform is applied to the data from the touch input stream in order to account for possible rotations of the device, scale changes, influence from other animations, etc., in order to properly interpret the original input stream. For example, if a UI element is rotated 90 degrees such that vertical movement in the UI element becomes horizontal movement (or vice versa), a vertical gesture can be transformed to a horizontal gesture (or vice versa) to account for the rotation of the device. If no adjustments are necessary, the coordinate space transform can leave gesture information unchanged. Alternatively the coordinate space transform state can be omitted.
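- A minimal sketch of such a coordinate space transform is shown below, handling only a 90-degree device rotation; the supported angle set and the coordinate convention are assumptions for illustration.

```python
# A minimal sketch of a coordinate space transform for a rotated element,
# assuming only 0- and 90-degree rotations need to be handled.

def transform_gesture(x, y, rotation_deg, width):
    """Map touchscreen coordinates into the element's own coordinate space."""
    if rotation_deg == 0:
        return x, y
    if rotation_deg == 90:
        # A vertical drag on the screen becomes a horizontal drag in the element.
        return y, width - x
    raise ValueError("unsupported rotation")

print(transform_gesture(10.0, 250.0, 90, width=480.0))
```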
- At 230, the UI system calculates the velocity at the end of the gesture. For example, the velocity is calculated by determining a first position near the end of the gesture and an end position of the gesture, and dividing by the time elapsed during the movement from the first position near the end of the gesture to the end position. In one implementation, the first position is determined by finding the gesture position at approximately 100 ms prior to the end of the gesture. Measuring velocity near the end of the gesture can help to provide a more accurate motion resulting from the gesture than measuring velocity over the entire course of the gesture. For example, if a gesture starts slowly and ends with a higher velocity, measuring the velocity at the end of the gesture can help to more accurately reflect the user's intended gesture (e.g., a strong flick). Alternatively, the velocity is calculated by determining the distance (e.g., in pixel units) between the start position for the gesture and the end position of the gesture, and dividing by the time elapsed during the movement from the start position to the end position. The time elapsed can be calculated, for example, by computing the difference between a timestamp associated with the start position and a timestamp associated with the end position.
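- The following sketch illustrates the end-of-gesture velocity measurement, using the newest sample at least about 100 ms older than lift-off; the sample format and the exact window handling are assumptions.

```python
# A minimal sketch of measuring velocity over roughly the last 100 ms of a gesture,
# assuming samples are (timestamp_seconds, x, y) tuples.

def end_velocity(samples, window_s=0.1):
    """Return (vx, vy) in pixels per second near the end of the gesture."""
    t_end, x_end, y_end = samples[-1]
    t0, x0, y0 = samples[0]
    # Walk back to the newest sample that is at least `window_s` older than the end.
    for t, x, y in reversed(samples[:-1]):
        if t_end - t >= window_s:
            t0, x0, y0 = t, x, y
            break
    dt = max(t_end - t0, 1e-6)
    return (x_end - x0) / dt, (y_end - y0) / dt

print(end_velocity([(0.00, 0, 0), (0.15, 0, 60), (0.20, 0, 120), (0.25, 0, 200)]))
```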
- At 240, the UI system determines whether the gesture is an inertia gesture. As used herein, an inertia gesture refers to a gesture, such as a flick gesture, capable of causing movement in one or more user interface elements to which inertia can be applied. The UI system can distinguish between a non-inertia gesture and an inertia gesture by determining how quickly the user's finger, stylus, etc., was moving when it broke contact with the touchscreen, and whether the velocity exceeds a threshold. If the gesture ends with a velocity above the threshold, the gesture can be interpreted as an inertia gesture. For example, a gesture that starts with panning motion at a velocity below the threshold and ends with a velocity above the threshold can be interpreted as ending with a flick that causes movement to which inertia can be applied. If the gesture ends with a velocity below the threshold, the gesture can be interpreted as a non-inertia gesture. Exemplary techniques and tools used in some implementations for gesture interpretation are described in detail below.
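- A minimal sketch of this velocity-threshold test is shown below; the 500 pixels-per-second threshold is an arbitrary assumption chosen only to make the example concrete.

```python
# A minimal sketch of classifying a gesture as an inertia gesture (flick) or not,
# assuming a simple speed threshold at lift-off.

FLICK_THRESHOLD_PX_S = 500.0  # assumed value for illustration

def is_inertia_gesture(vx, vy, threshold=FLICK_THRESHOLD_PX_S):
    """True if the gesture ended fast enough to be treated as a flick."""
    speed = (vx * vx + vy * vy) ** 0.5
    return speed > threshold

print(is_inertia_gesture(0.0, 1400.0))   # fast release -> flick, inertia may apply
print(is_inertia_gesture(0.0, 120.0))    # slow release -> plain pan, no inertia
```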
- If the gesture is an inertia gesture (e.g., a flick gesture), at 250 the UI system determines whether inertia will be applied to the motion indicated by the gesture. For example, the UI system determines based on gesture information (e.g., end-of-gesture velocity) and/or other information (e.g., user preferences) whether to apply inertia to the motion in the UI element. Despite being considered an inertia gesture, a gesture such as a flick may still not have inertia applied to its resulting movements, such as when a flick gesture is received for a UI element that does not support inertia movements, or for a UI element for which inertia movement has been deactivated (e.g., according to user preference).
- If inertia is not to be applied (e.g., when the gesture is not an inertia gesture), at 254 the UI system computes a new position for the UI element based on gesture information (e.g., end-of-gesture position coordinates). If inertia is to be applied, at 252 the UI system computes a new position based on the gesture information (e.g., end-of-gesture position coordinates) and simulated inertia. For example, the simulated inertia involves treating a UI element, or part of a UI element, as a physical object of non-zero mass that moves according to an approximation of Newtonian physics. The approximation can include, for example, a friction coefficient and/or other parameters that control how the movement is calculated and/or rendered.
- When the new position of the UI element has been computed (with or without simulated inertia), the UI system determines at 260 whether boundary feedback will be presented. Determining whether boundary feedback will be presented involves determining whether the new position is within boundaries (if any) of the UI element. For example, in a scrollable list, the UI system can determine whether the new position is calculated to be outside the boundaries of the scrollable list (e.g., below the end of a vertically scrollable list). Some UI elements may not have boundaries that can be exceeded by any permitted motion. For example, a UI element may take the form of a wrappable list, which may have a default entry position but no beginning or end. If the wrappable list is axis-locked (e.g., if movement is only permitted along a vertical axis for a vertically scrolling list), the list may have no boundaries that can be exceeded by any permitted motion. For UI elements without any boundaries, or without boundaries that can be exceeded by permitted motion, the determination of whether the new position is within boundaries can be skipped. Axis locking is described in more detail below.
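- One way the boundary check could be expressed for an axis-locked, vertically scrolling element is sketched below; the signed-overshoot return convention and the treatment of boundary-less (e.g., wrappable) elements are assumptions for illustration.

```python
# A minimal sketch of checking a computed position against element boundaries,
# assuming None means "no boundary in this direction" (e.g., a wrappable list).

def boundary_overshoot(new_pos, min_pos, max_pos):
    """Return how far new_pos lies outside [min_pos, max_pos], or 0.0 if inside."""
    if min_pos is not None and new_pos < min_pos:
        return new_pos - min_pos          # negative: past the start of the content
    if max_pos is not None and new_pos > max_pos:
        return new_pos - max_pos          # positive: past the end of the content
    return 0.0

print(boundary_overshoot(1250.0, 0.0, 1200.0))   # 50 px past the end -> boundary effect
print(boundary_overshoot(600.0, None, None))     # no boundaries: never exceeded
```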
- If boundary feedback is to be presented, at 262 the UI system applies a boundary effect to the UI element. For example, the UI system can apply a visual distortion effect such as a “squish” or compression of text, images or other visual information in the UI element, to provide a visual cue that a boundary of the UI element has been reached. Boundary effects are described in more detail below.
- The UI system determines at 270 whether parallax feedback will be presented. Determining whether parallax feedback will be presented involves determining whether the UI element has multiple parallel layers or substantially parallel layers that can be moved at different rates based on the same gesture. If parallax feedback is to be presented, at 272 the UI system applies a parallax effect to the UI element. In general, a parallax effect involves movement of multiple parallel layers, or substantially parallel layers, at different rates. Example parallax effects are described in more detail below.
- The processing stages in
example technique 200 indicate example flows of information in a UI system. Depending on implementation and the type of processing desired, processing stages can be rearranged, added, omitted, split into multiple stages, combined with other stages, and/or replaced with like stages. - For example, although
example technique 200 shows stages of receiving data from a touch input stream, applying touch input filtering, applying a coordinate space transform, calculating a velocity at the end of a gesture, and determining whether the gesture is an inertia gesture, such processing stages are only exemplary. Gesture information (e.g., gesture velocity, position, whether the gesture is a candidate for simulated inertia, etc.) can be obtained in other ways. As an example, a module that determines whether to apply inertia motion and determines whether to apply boundary feedback or parallax effects can obtain gesture data from another source, such as another module that accepts touch input and makes calculations to obtain gesture information (e.g., gesture velocity, end-of-gesture position). - As another example, although
example technique 200 shows a determination of whether to present boundary feedback occurring before a determination of whether to present parallax feedback, such an arrangement is only exemplary. A determination of whether to present boundary feedback and/or parallax feedback can be performed in other ways. As examples, once a new position has been calculated, determinations of whether to present boundary feedback and/or parallax feedback can occur in parallel, or the determination of whether to present a parallax effect can occur before the determination of whether to present a boundary effect. Such arrangements can be useful, for example, where a gesture may cause movements in multiple parallel layers of a UI element prior to reaching a boundary of the element. A UI system also can determine (e.g., based on characteristics of a current UI element) whether boundary effects and/or parallax effects are not available (e.g., for UI elements that do not have multiple layers or boundaries), and skip processing stages that are not relevant. - Boundary feedback can be used to provide visual cues to a user to indicate that a boundary (e.g., a boundary at the end, beginning, or other location) in a UI element (e.g., a data collection such as a list) has been reached. In described implementations, a UI system presents a boundary effect in a UI element (or a portion of a UI element) by causing the UI element to be displayed in a visually distorted state, such as a squeezed or compressed state (i.e., a state in which text, images or other content is shown to be smaller than normal in one or more dimensions), to indicate that a boundary of the UI element has been reached.
- Described techniques and tools for presenting boundary feedback can be applied to any UI element with one or more boundaries that can be manipulated by moving the element. For example, described techniques and tools can be used in an email viewer, such that text in a scrollable email message is distorted (e.g., squeezed or compressed) to indicate that the end of the email message has been reached.
- Boundary effects (e.g., distortion effects) can be presented in different ways. For example, a boundary effect can be held in place for different lengths of time depending on user input and/or design choice. A boundary effect can end, for example, by returning the UI element to a normal (e.g., undistorted) state when a user lifts a finger, stylus or other object to end an interaction with a touchscreen after reaching a boundary, or when an inertia motion has completed. As another example, distortion effects other than a squish, squeeze or compression can be used. One alternative distortion effect is a visual stretch. A stretch effect can be used, for example, in combination with a snap-back animation to indicate that boundary has been reached.
- Boundary effects can be presented even when it is possible to continue a movement beyond a boundary. For example, if a user scrolls to the end of a vertically-oriented list, causing a distortion of text or images at the end of the list, further motion can cause the list to wrap past the boundary and back to the beginning of the list. The UI also can show an element (or part of an element) at the top of the list to indicate that further movement can allow the user to wrap back to the beginning of the list.
-
FIG. 3 is a diagram showing a graphical user interface (GUI) presented by a UI system that uses a distortion effect to indicate that a boundary of UI element has been reached. According to the example shown inFIG. 3 , a user 302 (represented by the hand icon) interacts with a list comprising list elements (“Contact1,” “Contact2,” etc.). In this example, distortion effects depend at least in part on the location of asqueeze point 396. Some list elements with distortion effects are shown as being outsidedisplay area 300. -
FIG. 3 shows example states 390-394. Instate 390,user 302 interacts with a touchscreen by making an upward motion, indicated by aninitial gesture position 350 and an end-of-gesture touch position 352. The interaction can include, for example, contacting the touchscreen with a fingertip, stylus or other object and moving it (e.g., with a flicking or sweeping motion) along the surface of the touchscreen. AlthoughFIG. 3 showsuser 302 interacting with the touchscreen at particular locations in thedisplay area 300, the UI system allows interaction with other parts of the touchscreen to cause movement in the list. Furthermore, although the example shown inFIG. 3 showsuser 302 making an upward motion to scroll towards the end of the list,user 302 also can make other motions (e.g., downward motions to scroll towards the beginning of the list). The UI system can interpret different kinds of upward or downward user movements, even diagonal movements extending to the right or left of the vertical plane, as a valid upward or downward motion. - From
state 390, the upward motion causes a distortion effect shown instate 392. In this example, the upward motion is finger-tracking motion caused by a drag gesture, but distortion effects also can be caused by other motion resulting from other kinds of gestures, such as inertia motion caused by a flick gesture. The distortion effect indicates that a boundary in the list has been reached. In the example shown in stateFIG. 3 , the entire list is treated as a single surface, as indicated by the single dimension line to the right of the list instates state 392, the list has been squeezed or compressed in a vertical dimension, as shown by the reduced length of the dimension to the right of the list. The text of each list element has been squeezed or compressed in a vertical dimension. The elements are distorted proportionally. The effect instate 392 is as if all the list elements are being compressed against a barrier at thesqueeze point 396. - In the example shown in
state 392, thesqueeze point 396 is indicated at the top of a list, outside thedisplay area 300. Other squeeze points are also possible. For example, the squeeze point could be at the center of a list (e.g., at item 50 in a 100 item list) or at the top of a visible portion of a list. In this example, the list can be considered as having two parts—one part above the squeeze point, and one part below the squeeze point—where only one part of the list is squeezed. The squeeze point can change dynamically, depending on the state of the list and/or display. For example, a squeeze point can move up or down (e.g., in response to where the center of the list is) as elements are added to or removed from the list, or a squeeze point can update automatically (e.g., when the end of the list has been reached) to be at the top of a visible portion of the list. As another example, a squeeze point can be placed outside of a list. This can be useful to provide more consistent visual feedback, such as when a UI element does not fill the visible area. - In
state 394, the list has returned to the undistorted state shown instate 390. For example, the list can return to the undistorted state after the gesture shown instate 390 is ended (e.g., when the user breaks contact with the touchscreen). - The upward motion shown in
FIG. 3 is only an example of a possible user interaction. The same motion and/or other user interactions (e.g., motions having different sizes, directions, or velocities) can cause different effects, different display states, different transitions between display states, etc. For example, a motion that causes a distortion effect in a UI element (e.g., at the end of a vertically scrollable list) also can cause another portion of the UI element (e.g., a list item at the beginning of a vertically scrollable list) to be displayed to indicate availability of a wrapping feature in the list. Further movement can then cause wrapping in the UI element (e.g., from the end back to the beginning of a vertically scrollable list). - States 390-394 are only examples of possible states. In practice, a UI element can exist in any number of states (e.g., in intermediate states between example states 390-394, etc.) in addition to, or as alternatives to, the example states 390-394. For example, it is preferable to show a gradual transition between an undistorted state (e.g., state 390) and a distorted state (e.g., state 392), or from a distorted state to an undistorted state, to provide a more natural feel and avoid the appearance of abrupt changes in the display. Intermediate states, such as states that may occur between
state 390 andstate 392, or betweenstate 392 andstate 394 can show gradually increasing or decreasing degrees of distortion, as appropriate. - In described embodiments, a UI system can present parallel, or substantially parallel, movable layers. The UI system can present a parallax effect, in which layers move at different speeds relative to one another. The effect is referred to as a parallax effect because, in a typical example, a layer that is of interest to a user moves at a faster rate than other layers, as though the layer of interest were closer to the user than the other, slower-moving layers. However, the term “parallax effect” as used herein refers more generally to effects in which layers move at different rates relative to one another.
- The rate of movement in each layer can depend on several factors, including the amount of data to be presented visually (e.g., text or graphics) in the layers, or the arrangement of the layers relative to one another. The amount of data to be presented visually in a layer can measured by, for example, determining the length as measured in a horizontal direction of the data as rendered on a display or as laid out for possible rendering on the display. Length can be measured in pixels or by some other suitable measure (e.g., the number of characters in a string of text). A layer with a larger amount of data and moving at a faster rate can advance by a number of pixels that is greater than a layer with a smaller amount of data moving at a slower rate. Layer movement rates can be determined in different ways. For example, movement rates in slower layers can be derived from movement rates in faster layers, or vice versa. Or, layer movement rates can be determined independently of one another. Layers that exhibit parallax effects can be overlapping layers or non-overlapping layers.
- When user interaction causes movement in layers, the movement of the layers is a typically a function of the length of the layers and the size and direction of the motion made by the user. For example, a leftward flicking motion on a touchscreen produces a leftward movement of the layers relative to the display area. Depending on implementation and/or user preferences, user input can be interpreted in different ways to produce different kinds of movement in the layers. For example, a UI system can interpret any movement to the left or right, even diagonal movements extending well above or below the horizontal plane, as a valid leftward or rightward motion of a layer, or the system can require more precise movements. As another example, a UI system can require that a user interact with a part of a touchscreen corresponding to the display area occupied by a layer before moving that layer, or the system can allow interaction with other parts of the touchscreen to cause movement in a layer. As another example, a user can use an upward or downward motion to scroll up or down in a part of the content layer that does not appear on the display all at once.
- In some embodiments, lock points indicate corresponding positions in layers with which a display area of a device will be aligned. For example, when a user navigates to a position on a content layer such that the left edge of the display area is at a left-edge lock point “A,” the left edge of display area will also be aligned at a corresponding left-edge lock point “A” in each of the other layers. Lock points also can indicate alignment of a right edge of a display area (right-edge lock points), or other types of alignment (e.g., center lock points). Typically, corresponding lock points in each layer are positioned to account for the fact that layers will move at different speeds. For example, if the distance between a first lock point and a second lock point in a content layer is twice as great as the distance between corresponding first and second lock points in a background layer, the background layer moves at half the rate of the content layer when transitioning between the two lock points.
- In addition to indicating corresponding positions in layers, lock points can exhibit other behavior. For example, lock points can indicate positions in a content layer to which the layer will move when the part of the layer corresponding to the lock point comes into view on the display. This can be useful, for example, when an image, list or other content element comes partially into view near an edge of the display area—the content layer can automatically bring the content element completely into view by moving the layer such that an edge of the display area aligns with an appropriate lock point. A lock animation can be performed at the end of a gesture, such as a flick or pan gesture, to align the layers with a particular lock point. As an example, a lock animation can be performed at the end of a gesture that causes movement of a content layer to a position between two elements in a content layer (e.g., where portions of two images in a content layer are visible in a display area). A UI system can select an element (e.g., by checking which element occupies more space in the display area) and transition to focus on that element using the lock animations. This can improve the overall look of the layers and can be effective in bringing information or functional elements into view in a display area. A lock animation also can be used together with simulated inertia motion. For example, a lock animation can be presented after inertia motion stops, or a lock animation can be blended with inertia motion (such as by extending inertia motion to a lock point, or ending inertia motion early by gradually coming to a stop at a lock point) to present a smooth transition to a lock point.
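- As an illustration of selecting a lock point at the end of a gesture, the sketch below snaps a projected inertia destination to the nearest lock point; representing lock points as left-edge offsets of the layer is an assumption made for this example.

```python
# A minimal sketch of choosing a lock point for a lock animation, assuming lock
# points are stored as left-edge offsets along the layer.

def snap_to_lock_point(projected_pos, lock_points):
    """Return the lock point nearest to where inertia would have stopped."""
    return min(lock_points, key=lambda p: abs(p - projected_pos))

# Example: inertia is projected to stop at offset 430; the layer animates to 400.
print(snap_to_lock_point(430.0, [0.0, 200.0, 400.0, 600.0]))
```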
- The amounts and rates of movements presented in parallax effects can be calculated and presented in different ways. In a detailed example described below, equations are described for calculating parallax effect movements in which a parallax constant is used to determine anew position for a layer after a gesture. As another example, motion in layers and/or other elements, such as lists, can be calculated based on motion ratios. For example, a UI system can calculate motion ratios for a background layer and a title layer by dividing the width of the background layer and the width of the title layer, respectively, by a maximum width of the content layer. Taking into account the widths of the background layer and the title layer, a system can map locations of lock points in the background layer and the title layer, respectively, based on the locations of corresponding lock points in the content layer.
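- The motion-ratio calculation described above can be sketched as follows; the layer names and widths used in the example are hypothetical.

```python
# A minimal sketch of motion ratios: each slower layer's offset is the content
# layer's offset multiplied by the ratio of that layer's width to the content width.

def motion_ratio(layer_width, content_width):
    return layer_width / content_width

def layer_offsets(content_offset, content_width, layer_widths):
    """Map one content-layer offset to per-layer offsets for a parallax effect."""
    return {name: content_offset * motion_ratio(w, content_width)
            for name, w in layer_widths.items()}

print(layer_offsets(800.0, content_width=4000.0,
                    layer_widths={"title": 2000.0, "background": 1800.0}))
```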
- Movement of various layers can differ depending on context. For example, a user can navigate left from the beginning of a content layer to reach the end of a content layer, and can navigate right from the end of the content layer to reach the beginning of a content layer. This wrapping feature provides more flexibility when navigating through the content layer. Wrapping can be handled by the UI system in different ways. For example, wrapping can be handled by producing an animation that shows a rapid transition from the end of layers such as title layers or background layers back to the beginning of such layers, or vice-versa. Such animations can be combined with ordinary panning movements in the content layer, or with other animations in the content layer, such as locking animations. However, wrapping functionality is not required.
-
FIGS. 4A-4C are diagrams showing a GUI presented by a UI system with threelayers background layer 450. In this example, a user 302 (represented by the hand icon) interacts withcontent layer 414 by interacting with a touchscreen having adisplay area 300. -
Background layer 450 floats behind the other layers. Data to be presented visually inbackground layer 450 can include, for example, an image that extends beyond the boundaries ofdisplay area 300. Thecontent layer 414 includes content elements (e.g., images) 430A-H. Layers 410, 412 include text information (“Category” and “Selected Subcategory,” respectively). The length ofcontent layer 414 is indicated to be approximately twice the length oflayer 412, which is in turn indicated to be approximately twice the length oflayer 410. The length ofbackground layer 450 is indicated to be slightly less than the length oflayer 412. - In
FIGS. 4A-4C , the direction of motion that can be caused in thelayers user 302 is indicated by a left-pointing arrow and a right-pointing arrow. These arrows indicate possible movements (left or right horizontal movements) oflayers FIGS. 4A- 4C show user 302 interacting with a portion ofdisplay area 300 that corresponds tocontent layer 414, the system also allows interaction with other parts of the touchscreen (e.g., parts that correspond to portions ofdisplay area 300 occupied by other layers) to cause movement inlayers - When user input indicates a motion to the right or left, the system produces a rightward or leftward movement of the
layers area 300. The amount of movement oflayers - In
FIGS. 4A-4C , example left-edge lock points “A,” “B” and “C” are indicated forlayers display area 300 on each layer. For example, when a user navigates to a position oncontent layer 414 such that the left edge ofdisplay area 300 is at lock point “A,” the left edge ofdisplay area 300 will also be aligned at lock point “A” of theother layers FIG. 4A . InFIG. 4B , the left edge ofdisplay area 300 is at lock point “B” in each of thelayers FIG. 4C , the left edge of thedisplay area 300 is at lock point “C” in each of thelayers - The lock points shown in
FIGS. 4A-4C are not generally representative of a complete set of lock points, and are limited to lock points “A,” “B” and “C” only for brevity. For example, left-edge lock points can be set for each of thecontent elements 430A-430H. Alternatively, fewer lock points can be used, or lock points can be omitted. As another alternative, lock points can indicate other kinds of alignment. For example, right-edge lock points can indicate alignment with the right edge ofdisplay area 300, or center lock points can indicate alignment with the center ofdisplay area 300. - In this example, layers 410, 412, 414, 450 move according to the following rules, except during wrapping animations:
-
- 1.
Content layer 414 will move at approximately twice the rate oflayer 412, which is approximately half the length oflayer 414. - 2.
Layer 412 will move at approximately twice the rate oflayer 410, which is approximately half the length oflayer 412. - 3.
Content layer 414 will move at approximately four times the rate oflayer 410, which is approximately ¼ the length oflayer 414. - 4.
Background layer 450 will move slower thanlayer 410. Althoughbackground layer 450 is longer thanlayer 410, the distance to be moved between neighboring lock points (e.g., lock points “A” and “B”) inlayer 410 is greater than the distance between the corresponding lock points inbackground layer 450.
- 1.
- Movement of
layers User 302 can navigate left from the beginning of content layer 414 (the position shown inFIG. 4A ), and can navigate right from the end of content layer 414 (the position shown inFIG. 4C ). During a wrapping animation, some layers may move faster or slower than during other kinds of movements. In this example, the image inbackground layer 450 and the text inlayers content layer 414. InFIG. 4C ,display area 300 shows portions of one and two letters, respectively, inlayers Display area 300 also shows the rightmost portion of the image inbackground layer 450. A wrapping animation to return to the state shown inFIG. 4A can include bringing the leftmost portion of the image inbackground layer 450 and the beginning of the text inlayers layers FIG. 4A to the state shown inFIG. 4B . - Parallax effects can be used in combination with boundary effects and inertia motion. For example, boundary effects can be used to indicate when a user has reached a boundary of a layer, or a boundary of an element within a layer. As another example, inertia motion can be used to extend motion of UI elements caused by some gestures (e.g., flick gestures). If inertia motion causes movement of a UI element (e.g., a layer) to extend beyond a boundary, a UI system can present a boundary effect.
-
FIG. 5 is a diagram showing twolayers Display area 300 is indicated by a dashed line and has dimensions typical of displays on smartphones or similar mobile computing devices. Thecontent layer 532 includes content elements 540-544. In this example, each content element 540-544 comprises an image representing a music album, and text indicating the title of the respective album. Thelist header layer 530 includes a text string (“Albums”). According to the example shown inFIG. 5 , a user 302 (represented by the hand icon) interacts withcontent layer 532 by interacting with a touchscreen having thedisplay area 300. The interaction can include, for example, contacting the touchscreen with a fingertip, stylus or other object and moving it (e.g., with a flicking or sweeping motion) across the surface of the touchscreen. -
FIG. 5 shows example display states 590-594. Indisplay state 590,user 302 interacts with a touchscreen by making aflick gesture 510, which is indicated by a leftward-pointing arrow. Theflick gesture 510 causes an inertia motion incontent layer 532, which continues to move after thegesture 510 has ended. AlthoughFIG. 5 showsuser 302 interacting with the touchscreen at a particular location in thedisplay area 300, the UI system allows interaction with other parts of the touchscreen to cause movement. Furthermore, although the example shown inFIG. 5 showsuser 302 making a leftward flick gesture,user 302 also can make other motions (e.g., rightward motions to scroll towards the beginning of the list). The UI system can interpret different kinds of leftward or rightward user movements, even diagonal movements extending below or above the horizontal plane, as a valid leftward or rightward motion. - In response to the
flick gesture 510, the UI system produces leftward movement of thelayers display area 300. For example, fromdisplay state 590, theflick gesture 510 causes a leftward movement in the layers and leads to displaystate 592, in whichelement 540 is no longer visible, andelements list header layer 530 also has moved to the left, but at a slower rate (in terms of pixels) than thecontent layer 532. The movement of thelayers flick gesture 510. - From
display state 592, the inertia motion causes continued leftward movement of thelayers user 302, and leads to displaystate 594 in whichelement 542 is no longer visible. The inertia motion causes the content layer to extend beyond a boundary (not shown) to the right of theelement 544 in thecontent layer 532, which results in a distortion effect in which an image and text inelement 544 is squeezed or compressed in a horizontal dimension. The compression is indicated by the reduced length of the dimension lines above the image and text (“Rock & Roll Part in”) ofelement 544, respectively. The text string (“Albums”) in thelist header layer 530 also has moved to the left, but at a slower rate (in terms of pixels) than thecontent layer 532. The text inlist header layer 530 is uncompressed. The distortion effect givesuser 302 an indication that the end of thecontent layer 532 has been reached. - Although a motion that is calculated to extend beyond a boundary may result in a distortion effect, the boundary need not prevent further movement in the direction of the motion. For example, if wrapping functionality is available, further movement beyond the boundary can cause the
content layer 530 to wrap back to the beginning (e.g., back to display state 590). Instate 594,element 540 at the beginning of the collection is partially visible, indicating that wrapping is available. - The display can return from
display state 594 to displaystate 592, transitioning from a display state with a distortion effect to an undistorted display state. This can occur, for example, without any additional input by the user. The length of time that it takes to transition between states can vary depending on implementation. -
Flick gesture 510 is only an example of a possible user interaction. Thesame gesture 510 and/or other user interactions (e.g., motions having different sizes, directions, or velocities) can cause different effects, different display states, different transitions between display states, etc. Some display states (e.g., display state 594) may occur only if a gesture results in a post-gesture position that is calculated to go beyond a boundary for the layer. - Display states 590-594 are only examples of possible display states. In practice, a display can exist in any number of states (e.g., in intermediate states between example states 590-594, in states with different visible UI elements, etc.) in addition to, or as alternatives to, the example display states 590-594. For example, it is preferable to show a gradual transition between an undistorted state (e.g., state 592) and a distorted state (e.g., state 494), or from a distorted state to an undistorted state, to provide a more natural feel and avoid the appearance of abrupt changes in the display. Intermediate states, such as states that may occur between
state 592 andstate 594, can show gradually increasing or decreasing degrees of distortion, as appropriate. As another example, a UI system can provide a boundary effect by compressing theelements display state 592 without moving theelements display area 300. - Described techniques and tools can be used on display screens in different orientations, such as landscape orientation. Changes in display orientation can occur, for example, where a UI has been configured (e.g., by user preference) to be oriented in landscape fashion, or where a user has physically rotated a device. One or more sensors (e.g., an accelerometer) in the device can be used to detect when a device has been rotated, and adjust the display orientation accordingly.
- In the example shown in
FIG. 5 , thedisplay area 300 is oriented in landscape fashion. Content (e.g., data collection elements 540-544 in content layer 532) and/or other user interface features in thedisplay area 300 can be dynamically adjusted to take into account effects of a reorientation (e.g., a new effective width of thedisplay area 300, interpreting directions of user interactions differently, etc.). For example, distortion effects can be adjusted, such as by compressing data collection elements in a horizontal dimension instead of a vertical dimension, to account for display reorientation. - However, such adjustments are not required. For example, if a display area has equal height and width, reorientation of the display area to a landscape orientation will not change the effective width of the display area.
-
FIGS. 6A-6E are diagrams showing acontent layer 614 that moves in tandem withlayer 612 above it. In this example, a user 302 (represented by the hand icon) navigates throughcontent layer 614 by interacting with a touchscreen having thedisplay area 300. The interaction can include, for example, contacting the touchscreen with a fingertip, stylus or other object and moving it (e.g., with a flicking or sweeping motion) across or along the surface of the touchscreen. Thecontent layer 614 includesgame icons other layers layer 610; “Spotlight,” “Xbox Live, “Requests” and “Collection” in layer 612). - The direction of motion that can be caused by
user 302 is indicated by a left-pointing arrow and a right-pointing arrow in FIGS. 6A-6E , along with additional up- and down-pointing arrows in FIGS. 6A and 6E . The right-pointing and left-pointing arrows indicate possible movements (left or right horizontal movements) of the layers in response to user movements. - The up-pointing and down-pointing arrows indicate possible movements of the
list 650 in response to user movements. The amount of movement of list 650 can be a function of the size or rate of the motion made by user 302, and the data in list 650. Thus, scrolling of the list 650 can be element-by-element, page-by-page of elements, or something in between that depends on the size or rate of the motion. In this example, list 650 includes only one element that is not visible in the display area 300, as shown in FIG. 6A , so a range of small or large downward movements may be enough to scroll to the end of list 650. In the example shown in FIG. 6E , an upward user movement has caused a boundary effect in list 650, in which the text of elements in the list is squeezed or compressed in a vertical dimension. This effect gives user 302 an indication that the end of the list has been reached. - In this example, the amount of movement in
layers 610, 612 and 614 differs from layer to layer, according to the following rules: -
- 1. The horizontal movement of
content layer 614 is locked to layer 612. - 2.
Layers layer 610, which is approximately ⅓ the length oflayers
- 1. The horizontal movement of
- Movement in the
layers 610, 612 and 614 can also wrap. In the example shown in FIGS. 6A-6E , wrapping is permitted. The arrows indicate that a user can navigate left from the beginning of the content layer 614 (the position shown in FIG. 6A and FIG. 6E ), and can navigate right from the end of the content layer 614 (the position shown in FIG. 6D ). During a wrapping animation, some layers may move faster or slower than during other kinds of movements. For example, the text in layer 610 can move faster when wrapping back to the beginning of content layer 614. In FIG. 6D , display area 300 shows portions of two letters in layer 610, at the end of the "Games" text string. A wrapping animation to return to the state shown in FIG. 6A can include bringing the data in the layers back to the positions shown in FIG. 6A , with faster movement in layer 610 than in other contexts, such as a transition from the state shown in FIG. 6A to the state shown in FIG. 6B . - In
FIGS. 6A-6E , example lock points "A," "B," "C" and "D" are indicated for the layers. In this example, content layer 614 is locked to layer 612; the lock points indicated for layer 612 also apply to layer 614. The lock points for each layer indicate the corresponding position of the left edge of the display area 300 on each layer. For example, when a user navigates to a position on content layer 614 such that the left edge of the display area 300 is at lock point "A," the left edge of display area 300 also is aligned at lock point "A" of the other layers, as shown in FIGS. 6A and 6E . In FIG. 6B , the left edge of the display area 300 is at lock point "B" in each of the layers. In FIG. 6C , the left edge of the display area 300 is at lock point "C" in each of the layers. In FIG. 6D , the left edge of the display area 300 is at lock point "D" in each of the layers. - The lock points shown in
FIGS. 6A-6E are not generally representative of a complete set of lock points; they are limited to lock points "A," "B," "C" and "D" only for brevity. For example, right-edge lock points can be added to obtain alignment with the right edge of display area 300, or center lock points can be added to obtain alignment with the center of display area 300. Alternatively, fewer or more lock points can be used, or lock points can be omitted. -
User 302 can move left or right in content layer 614 after making an up or down movement in list 650. The current position of list 650 can be saved, or the system can revert to a default position (e.g., the top-of-list position indicated in FIG. 6A ) when navigating left or right in content layer 614 from list 650. Although the arrows in FIGS. 6A-6E (and other figures) that indicate possible movements are shown for purposes of explanation, the display area 300 can itself display graphical indicators (such as arrows or chevrons) of possible movements for the layers and/or list. - The system can interpret user movements to the left or right, even diagonal movements extending above or below the horizontal plane, as a valid leftward or rightward motion. Similarly, the system can interpret upward or downward movements, even diagonal movements extending to the left or right of the vertical plane, as a valid upward or downward motion. Although
FIGS. 6A-6E show the user 302 interacting with a portion of the display area 300 that corresponds to the content layer 614, the system also allows interaction with other parts of the touchscreen (e.g., those that correspond to display area occupied by other layers) to cause movement in the layers, list 650, or other UI elements. - In
FIGS. 6A-6E , avatar 630 can provide a visual cue to indicate a relationship between, or draw attention to, parts of the content layer 614. - In
FIG. 6B , avatar 630 is positioned between list 652 and list 654. In FIG. 6C , avatar 630 floats behind the text of list 654, but remains completely within display area 300. In FIG. 6D , avatar 630 is only partially within display area 300; the part that is within the display area floats behind game icons. The partial visibility of avatar 630 at the left edge of display area 300 can indicate to the user 302 that information associated with avatar 630 is available if the user 302 navigates in the direction of avatar 630. Avatar 630 can move at varying speeds. For example, avatar 630 moves faster in the transition between FIGS. 6B and 6C than it does in the transition between FIGS. 6C and 6D . - Alternatively,
avatar 630 can move in different ways, or exhibit other functionality. For example, a UI system can present a distortion effect inavatar 630 to indicate a user's location in a data collection with which the avatar is associated.Avatar 630 also can be locked to particular position (e.g., a lock point) incontent layer 614 or in some other layer, such thatavatar 630 moves at the same horizontal rate as the layer to which it is locked. As another alternative,avatar 630 can be associated with a list that can be scrolled up or down, such aslist 650, and move up or down as the associated list is scrolled up or down. - In this section, a detailed implementation is described comprising aspects of motion feedback including boundary effects and parallax effects, with reference to the following detailed example.
- In this detailed example, a set of equations, coefficients and rules are described that can allow a UI system (e.g., a UI system provided as part of a mobile device operating system) to interpret user input such as touch gestures (including multi-touch gestures with more than one touch contact point) and generate motion feedback in response to user input. Features described in this detailed example include inertia movement, panning and zooming operations, boundary effects, parallax effects, and combinations thereof. Described features can help to provide natural-looking, smooth motion in response to user input (e.g., touch gestures).
- In this detailed example, processing tasks can be handled by different software modules. For example, a module called “ITouchSession” provides coefficients, gesture positions, and gesture velocity information, and a dynamic motion module in a mobile device operating system uses information provided by ITouchSession to generate motion feedback (e.g., parallax effects, boundary effects, etc.). Preferably, gesture information provided to the dynamic motion module is accurate (e.g., with minimal jitter in position information), detailed (e.g., with time stamps on touch input), and low-latency (e.g., under 30 ms). The information (e.g., motion feedback information) generated by the dynamic motion module can be used by other modules, as well. For example, web browsers or other applications that run on the mobile device operating system can use information generated by the dynamic motion module.
- In this detailed example, the dynamic motion resulting from user interaction is defined by a set of motion rules. The motion rules define how different visual elements react on screen in response to different gestures. For example, some rules apply to finger-tracking gestures such as panning or dragging gestures, some rules apply to flick or toss gestures, and some rules apply to pinch or stretch gestures. Additionally, some rules, such as inertia rules, may apply to more than one type of gesture. The specific motion rules that apply to different UI elements (or “controls”) are determined by factors such as the control type and control content; not all motion rules will apply to all UI elements. For example, rules for pinch and stretch gestures do not apply to UI elements where pinch and stretch gestures are not recognized. The motion resulting from the application of motion rules to the input stream generated by the user can be further refined by an optional set of modifiers, which are collectively called “optional motion features.”
- In this detailed example, touch input interactions that result in dynamic motion comply with the motion rules. Additionally, different UI elements (or “controls”) can apply zero or more optional motion features, which can be determined by factors such as the desired motion, control type and control content. For example, a list control may opt to enhance motion feedback with boundary effects, while a panorama control may apply a parallax feature to some of its layers.
- In addition, when a user interacts with a UI element, it can be helpful to provide some immediate (or substantially immediate) visual feedback to the user (e.g., a change in movement in the UI element, or some other effect such as a tilt or highlight). Immediate or substantially immediate feedback helps the user to know that the user interface is responsive to the user's actions.
- In this detailed example, the following motion rules apply in UI elements where the rules (e.g., rules relating to finger-tracking gestures, inertia, boundaries, pinch/stretch gestures) are relevant to the types of motion that are permitted in the respective UI elements. The motion rules can be modified for some UI elements, such as where optional motion features apply to a UI element.
- For finger tracking movements (e.g., movements caused by dragging or panning gestures), the content at the initial gesture point moves in direct correspondence to the gesture. For example, content under the user's finger at an initial touch point moves with the user's finger during the gesture. The current position of a visual element is given by the following equation:
-
p = p0 + (q − q0) (Eq. 1)
- In a UI element that allows inertia movement (e.g., a scrolling list), when the user finishes a gesture (e.g., by lifting a finger or other object to end the interaction with the touchscreen), a velocity and direction for that movement is identified, and the motion initially continues in the same direction and speed as the gesture, as if the visual element was a real, physical object with a non-zero mass. If the motion is not stopped for some other, permissible reason (e.g., where the UI element reaches a boundary or is stopped by another user gesture), the motion gradually decelerates over time, eventually coming to a stop. The deceleration proceeds according to a combination of equations and coefficients, which can vary depending on implementation. Default system-wide coefficient values can be made available. Default system-wide coefficients can help to maintain a consistent feeling across all controls. Alternatively, different equations or coefficients can be used, such as where a particular control has its own friction coefficient for modeling different kinds of motion.
- The velocity (e.g., in pixels/second) at the end of the gesture is computed by the following equation:
-
v = (q − q0)/(t − t0), (Eq. 2)
- The duration of the inertia motion can be computed according to the following equation:
-
tmax = logμ(γ/|v0|) (Eq. 3)
- where tmax is the duration of the inertia motion, |v0| is the magnitude of the initial velocity vector (|v0|>γ), μ is a friction coefficient (e.g., MotionParameter_Friction, 0<μ<1), and γ is a parking speed coefficient (e.g., MotionParameter_ParkingSpeed, 0<γ<|v0|) that is used to indicate a threshold velocity, below which inertia motion will stop. In one implementation, the friction coefficient is 0.4, and the parking speed coefficient is 60.0. The duration is computed at the start of the inertia motion, and need not be computed again.
- The following equation will compute the current velocity vector v at any given time t:
-
v = v0·μ^t (Eq. 4).
-
p′ = p + v·Δt (Eq. 5).
- The actual calculation of values relating to inertia motion (e.g., velocity, etc.) can differ depending on implementation.
- Motion Rule: Interacting with an Element in Inertia Motion
- If a new gesture begins while a UI element is in inertia motion, the inertia motion is immediately interrupted. Depending on the new gesture, the motion in the UI element may be stopped, or a new motion may start. If the new gesture causes a new motion in the UI element, the new gesture controls the UI element's motion. The previous gesture and any consequent inertia do not affect the motion generated by the new gesture. Handling of new gestures during inertia motion can be different depending on implementation. For example, new gestures can be ignored during inertia motion or can have different effects on inertia motion.
- The motion of some UI elements is limited by gesture boundaries. The dimensions of gesture boundaries and the effects of exceeding gesture boundaries can differ depending on several factors, such as the content of a UI element and/or a minimum visible area of the UI element. For example, lists which don't wrap around indefinitely may only be able to scroll a certain distance based on the number of items in the list and a minimum amount of visible items (e.g., an amount of items that occupies most or all of a display area).
- In this detailed example, for an element A having a width wA, height hA, total area SA and position pA (xA, yA), with a minimum visible area SVmin (width wVmin, height hVmin) currently at position pVmin (xVmin, yVmin), a gesture that begins at an initial position q (xq, yq) has a rectangular gesture boundary area ST (width wT, height hT) at position pT=(xT, yT). The minimum visible area indicates a minimum visible amount of the control (e.g., a minimum number of list items in a scrollable list), but does not require any particular part of the control to be visible. Therefore, the content of the minimum visible area for a particular control can vary depending on, for example, the control's current state (e.g., whether the end or beginning of a scrollable list is currently visible).
- Conceptually, the position pT of the gesture boundary area can be defined according to the following equation:
-
pT = q + (pVmin + SVmin) − (pA + SA) (Eq. 6).
-
xT = xq + (xVmin + wVmin) − (xA + wA) (Eq. 7)
yT = yq + (yVmin + hVmin) − (yA + hA) (Eq. 8).
-
ST = SA − SVmin = (hA − hVmin, wA − wVmin) (Eq. 9).
-
FIG. 7A shows an example boundary diagram for a control having a position 710 and area 720. The control has a minimum visible area 740 (at position 730). For example, the position 730 of the minimum visible area can be located at the top left of a display area. Based on an initial gesture position 750, a gesture boundary at position 770 and having area 780 is calculated. - In
FIG. 7B , example post-gesture positions 752, 754 are shown. Post-gesture position 752 is outside the gesture boundary area 780, and causes boundary feedback. Post-gesture position 754 is inside the gesture boundary 780, and does not cause boundary feedback. -
FIG. 8A shows an example boundary diagram for a control corresponding to the scrollable list shown in FIG. 3 . In FIG. 8A , the control at position 810 has a control area 820 (width wA, height hA). In this detailed example, the coordinates of the control position are considered to be (0, 0). The control has a minimum visible area 840 (at position 830). For example, the position 830 of the minimum visible area 840 can be at the top left of a display area. Based on an initial gesture position 850, a gesture boundary at position 850 (the same position as the initial gesture position) is calculated. In this detailed example, the gesture boundary 880 has a height of hA−hVmin, and a width of 0. (Due to space limitations, the boundary 880 as shown in FIG. 8A is not to scale.) Therefore, the gesture boundary 880 is actually a vertical line. Although a control having a gesture boundary area with no width could cause a boundary feedback effect with any horizontal movement, boundary feedback can be enabled or disabled on an axis basis (e.g., permitting boundary feedback for vertical movements but not for horizontal movements). Such a control also can be a candidate for axis locking, to allow only vertical movements and remove any need for boundary feedback for horizontal movements. Axis locking is explained in more detail below. - In
FIG. 8B , example post-gesture positions 852 and 854 are shown. Post-gesture position 852 is outside the gesture boundary area 880, and causes boundary feedback. For example, referring again to FIG. 3 , a UI system can present a squeeze or compression effect to indicate that the post-gesture position is outside the gesture boundary area, as shown in state 392. Post-gesture position 854 is inside the gesture boundary area 880, and does not cause boundary feedback.
- Pinch gestures and stretch gestures are gestures that can change the scale (zoom) of the subject area of a control (e.g., a map or image with zoom capability). Pinch gestures and stretch gestures are considered to be multi-touch gestures because they typically have multiple points of interaction. In a typical pinch or stretch gesture scenario, a user places two fingers some distance apart from each other on a touchscreen, and either increases (for a stretch gesture) or decreases (for a pinch gesture) the distance between them.
-
FIG. 9 is a diagram showing example pinch and stretch gestures. On a device having a display area 300, a user 302 (represented by a hand icon) interacts with a control (e.g., a map with zoom features) having a control area 910. From display state 990, the user 302 performs a pinch gesture, bringing two touch points closer together and reducing the scale of the content in the control area 910 in state 992. From display state 992, the user 302 performs a stretch gesture, moving two touch points farther apart and increasing the scale of the content in the control area 910 in state 994. Alternatively, a pinch or stretch gesture can begin or end at other touch points (e.g., with a greater or lesser distance between beginning and ending touch points) or can use a different orientation of touch points (e.g., horizontal or diagonal).
-
d0 = |qA0 − qB0| (Eq. 10).
- Let qA and qB be updated positions for touch points A and B, and let d=(xd, yd) be the distance between them, calculated in a similar manner. The distance d also includes a horizontal component xd and a vertical component yd. The scale factor szoom to apply to the UI element can be calculated according to the following equation:
-
- Note that in Equation 11, the scale s is not isometric, i.e., the X and Y axes will be scaled differently. For isometric scaling, the following equation can be used instead:
-
- In this case, szoom is a scalar, so the same factor is applied to both X and Y components.
- Alternatively, a scale factor can be calculated in different ways. For example, inertia can be applied to a pinch or stretch gesture (such as when the gesture ends with a velocity above a threshold), and the scale factor can be based at least in part on the inertia of the gesture (e.g., increasing the scale of the zoom when a stretch gesture ends with a velocity above a threshold).
- To make zooming feel natural, the scale factor can be applied to a zooming point (e.g., a center point between touch points qA and qB). The zooming point cz=(xcz, ycz) can be calculated by averaging the two touch contact positions, as shown in the following equation:
-
- Alternatively, a zooming point can be calculated in a different way, or a calculation of a zooming point can be omitted.
- A pinch/stretch gesture can also produce position changes (panning) in addition to scale changes. Panning position changes can occur simultaneously with scale changes. The zooming point calculation in Equation 13 is used when simultaneous panning is not allowed. If simultaneous panning is allowed, the zooming point is calculated using the initial touch contact positions qA0 and qB0 rather than the updated touch contact positions qA and qB. If cz0 is the initial zooming point and cz is the updated zooming point, the distance dpan=(xdpan, ydpan) between the two zooming points represents a panning offset to be applied to the UI element, as shown in the following equation:
-
dpan = cz − cz0 = (xcz − xcz0, ycz − ycz0) (Eq. 14).
- Optional motion features can be used (e.g., when requested by a control) to refine or add visual feedback to motion generated by gestures. Optional motion features can depend on control type and content. For example, some controls (e.g., a scrolling list) may use an optional axis locking feature that is appropriate for the orientation of the control (e.g., allowing only vertical movements in a vertically scrolling list). Optional motion features can be used in combination with each other and with various motion rules. For example, a vertically scrolling list can use an axis locking feature and a boundary effect feature, while following rules for inertia motion and finger tracking motion. Different UI elements can use different combinations of rules and optional motion features, even when the different UI elements are visible at the same time. For example, a movable layer can use parallax effects but omit boundary effects, while a vertically scrolling list in the movable layer can use boundary effects but omit parallax effects. UI elements of the same basic type can use different sets of optional motion features. For example, a first pair of movable layers can use parallax effects and move at different rates relative to one another, while a third layer parallel to the first pair remains stationary.
- When present, optional motion features act like filters, modifying the values generated according to other motion rules, such as the motion rules described above.
- For some controls, it may make sense to permit movement only along a particular axis. For example, it can be useful to restrict movement of a movable, horizontal UI layer (sometimes referred to as a panorama control) to movements along the X axis, or to restrict movement of a vertically scrolling list to movements along the Y axis. In such cases, axis locking can be used as an optional motion feature.
- In this detailed example, axis locking is applied to a UI element by using the relevant equations in the motion rules described above, but only applying an X or Y component (as appropriate) to the motion of the axis-locked UI element. Changes to the other component are ignored and not applied to the UI element's motion.
- Alternatively, axis locking can be performed in another way. For example, in a UI element such as a wheel element that moves about an axis such as a Z axis, axis locking can be used to permit only rotational motion about the axis. As another alternative, axis locking can be omitted.
- Parallax effects can be applied to controls that present multiple layers of content. In a parallax effect, multiple layers are animated differently (e.g., moving at different speeds), but the movements of the layers are based on the same input stream generated by the user.
- In a parallax effect, layers that are animated in response to a gesture move at different speeds relative to one another. The layer that the user is interacting with directly (e.g., a content layer) is considered to be the top layer on a Z axis, that is, the layer that is closest to the user. Other layers are considered to be lower layers on a Z axis, that is, further away from the user. Examples of a parallax effects can be seen in
FIG. 5 and inFIGS. 6A-6D . - In this detailed example, a top layer reacts directly to the gesture, and the other layers move at increasingly lower speeds the further they are from the top layer along the Z axis. Mathematically speaking, that can be accomplished by applying a scaling factor to the delta between an initial gesture position and an updated gesture position. The updated gesture position can be obtained directly from user interaction (e.g., in a finger tracking gesture such as a panning gesture) or from a gesture with simulated inertia (e.g., a flick gesture). If kP is the constant parallax scaling factor to be applied for a particular layer L at initial position pL=(xL, yL), then the parallaxed position pP=(xP, yP) can be computed according to the following equation:
-
pP = pL + kP·(q − q0) (Eq. 15)
- Alternatively, parallax effects can be presented in different ways. For example, parallel layers can move according to the model shown in Equation 18 for some movements or parts of a movement and move according to other models in other movements or parts of a movement. Referring again to
FIGS. 4A-4C , parallel layers that exhibit parallax effects can move according to the model shown in Equation 18 in transitions fromFIG. 4A toFIG. 4B , and fromFIG. 4B to 4C , and then move according to a specialized wrapping animation if a gesture to the right from the state shown inFIG. 4C , or inertia motion from an earlier gesture, causes a wrap back to the state shown inFIG. 4A . As another alternative, parallax effects can be omitted. - When the boundary feedback motion feature is applied, a boundary feedback effect can be applied whenever a gesture would move the UI element past a boundary, either directly (e.g., by a dragging or panning gesture) or indirectly (e.g., by inertia motion generated by a flick gesture). In this first example boundary feedback model, once the UI element hits a boundary the content is compressed in the direction of the motion (e.g., a vertical compression for a vertical motion) up to a certain threshold. If the compression is caused by inertia, the content compresses up to a certain amount based on the velocity at the time the boundary is hit, then decompresses to the original size. If the compression is caused directly (e.g., by dragging), the compression can be held as long as the last touch contact point is held and decompress when the user breaks contact, or decompress after a fixed length of time.
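- In code, the parallax rule of Equation 15 is a one-line scaling of the gesture delta per layer. The following sketch and its parallax constants are illustrative assumptions; in practice the constant for each layer can be chosen based on factors such as the layer's length, as noted above.
struct Vec2 { float x, y; };
struct Layer {
    Vec2 initialPos; // pL, the layer position at the beginning of the gesture
    float parallax;  // kP, the constant parallax scaling factor for this layer
};
// Eq. 15: parallaxed layer position for the current post-gesture position q,
// where q0 is the touch contact position at the beginning of the gesture.
Vec2 ParallaxedPosition(const Layer& layer, Vec2 q, Vec2 q0) {
    return Vec2{ layer.initialPos.x + layer.parallax * (q.x - q0.x),
                 layer.initialPos.y + layer.parallax * (q.y - q0.y) };
}
// Illustrative layer set: the content layer tracks the gesture directly,
// while a shorter title layer and a background layer move more slowly.
// Layer content{ {0, 0}, 1.0f }, title{ {0, 0}, 0.33f }, background{ {0, 0}, 0.1f };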
- In this first example boundary feedback model, the compression effect is achieved by applying a scale factor and dynamically placing a compression point to ensure that the effect looks the same regardless of the size of the list. In order to properly compute the motion and scale for the boundary effect, the first step is to identify that a boundary has been crossed and by how much. The boundary motion rule described above illustrates how to compute a boundary position in this first example boundary feedback model, and in the second example boundary feedback model described below.
- Let q=(xq, yq) be the unmodified, post-gesture position resulting from an active finger tracking gesture (e.g., a dragging gesture) or from simulated inertia (e.g., from a flick gesture), let xL be the left boundary, let xR be the right boundary, let yT be the top boundary, and let yB be the bottom boundary. Let r=(rx, ry) represent how far the post-gesture position exceeds the boundaries with respect to {xL, xR, yT, yB}:
-
rx = max(xL − xq, 0, xq − xR) (Eq. 16)
ry = max(yT − yq, 0, yq − yB) (Eq. 17)
- Let Sc be the compressible area with dimensions (wc, hc), which is some area equal to or greater than the visible area, depending on the value of coefficients k % and k+, where k% is the compression percentage coefficient (e.g., MotionParameter_CompressPercent (k%≧0)), and k+ is the compression offset coefficient (e.g., MotionParameter_CompressOffset{X,Y} (k+≧0)). If k%=0, then the compressible area matches the size of the visible area and the visual result is that only the visible part of the control is being compressed. If k%=1, the compressible area matches the entire control area. k+=(w+, h+) allows an increase in the compressible area by a fixed amount, regardless of the control area size. In this detailed example, the compressible area can be calculated according to the following equation:
-
Sc = SV + k%·(SA − SV) + k+ (Eq. 18),
- If the user is actively dragging the content, the compression scale factor scomp=(scompx, scompy) to apply to the target UI element can be computed according to the following equations:
-
- where ks is the compression factor coefficient (e.g., MotionParameter_CompressFactor (0<ks≦1)), and r≦Sv. In one implementation, the compression factor coefficient is 0.2. Alternatively, the scale factor and/or the compressible area can be calculated in different ways. For example, different ranges of compression coefficients can be used.
- In words, what is being done here is to find the difference between the compressible area (e.g., in the horizontal or vertical dimensions) and the amount by which the gesture is compressing the compressible area, then calculating the scale factor based on that difference. The compression factor ks, if it is less than 1, limits how much the value of r (the amount by which the post-gesture position has exceeded the boundary) will cause the compressible area to be compressed. A UI system can then place a distortion point (which can also be referred to as a “squeeze point” or “compression point” when applying compression effects) at the other side of the compressible area (i.e., the side of the compressible area opposite the side where the gesture is being made) and apply that scale factor, resulting in a compression effect.
- Once the user ends the dragging gesture (e.g., by lifting a finger from the touchscreen), and if no wrap-around functionality is available or if the threshold for wrap-around hasn't been reached, the content in the compressible area returns to a decompressed state. In this first example boundary effects model, decompression proceeds according to the appropriate equations set forth below.
- In this first example boundary effects model, if a boundary is exceeded during inertia motion, the following equations are used to compute how far off the boundaries the current position is (r) over time, based on the velocity at the time the boundary was crossed (vh) and how far off the boundary the position is (rI) when the following equations are applied:
-
- If r<0, the motion is complete. Note that rI can come either from inertia or from an active drag, such as when a user drags the content into a compressed state, then flicks, generating inertia.
- The compression scale factor sinertiacomp=(sinertiacompx, sinertiacompy) to apply during inertia compression can be computed according to the following equations:
-
- Note that these equations are similar to the case when dragging the content (see Equations 19-21, above), except that the coefficient ks (the compression factor coefficient) has already been applied in this case in Equations 22 and 23. Alternatively, the scale factor can be calculated in a different way. For example, constants such as the compression factor coefficient ks or the value 0.001 in Equation 23 can be replaced with other constants depending on implementation.
- In this first example boundary effects model, in addition to computing the scale factor to apply to the target UI element, a compression point Ccomp=(ccompx, ccompy) is calculated in order to generate the expected visual effect. In practice, a compression point can be at different positions in a UI element. For example, a compression point can be located at or near the center of a UI element, such that half (or approximately half) of the content in the UI element will be compressed. As another example, a compression point can be located at or near a border of UI element, such that all (or approximately all) of the content in the UI element will be compressed. The compression point can vary for different UI elements. Using different compression points can be helpful for providing a consistent amount of distortion in the content of UI elements of different sizes. The compression point position can be computed according to the following equations:
-
- Alternatively, compression points can be calculated in a different way, or the calculation of compression points can be omitted.
- In this second example boundary feedback model, the appearance of the boundary feedback can be controlled in finer detail by using more coefficients. Also, regardless of whether the compression is caused directly (e.g., by dragging) or by inertia, the same calculations are used for the compression effects.
- Let q=(xq, yq) be the unmodified, post-gesture position resulting from an active finger tracking gesture (e.g., a dragging gesture) or from simulated inertia (e.g., from a flick gesture), let xL be the left boundary, let xR be the right boundary, let yT be the top boundary, and let yB be the bottom boundary. Let r=(wr, hr) represent how far the post-gesture position exceeds the boundaries with respect to {xL, xR, yT, yB}:
-
wr = max(xL − xq, 0, xq − xR) (Eq. 30)
hr = max(yT − yq, 0, yq − yB) (Eq. 31)
- As in the first example boundary effects model, Sc is the compressible area with dimensions (wc, hc), calculated as shown in Equation 18. However, in this second example boundary effects model, given r=(wr, hr) and a compressible area Sc=(wc, hc), the compression scale factor scomp=(scompx, scompy) to apply to the target UI element is computed according to the following equations:
-
- where ks is a spring factor coefficient (e.g., MotionParameter_SpringFactor (ks>0)), ke is a spring power coefficient (e.g., MotionParameter_SpringPower (ke>0)), kd is a damper factor coefficient (e.g., MotionParameter_DamperFactor (0≦kd≦1)), kL is a compression limit coefficient (e.g., MotionParameter_CompressionLimit (kL>0)), and Δt is the time interval since the last iteration of the simulation (Δt≧0). The equation for r″ imposes limits on the movement in the UI element during boundary feedback. If r″=0, the motion is considered to be complete.
- In this second example boundary effects model, the spring factor coefficient ks is a number that specifies how much resistance will counteract the inertia force, and the spring power coefficient ke shapes the curve of the resistance. For example, a spring power coefficient of 1 indicates linear resistance, where resistance increases at a constant rate as compression increases. A spring power coefficient greater than 1 means that the resistance will increase at an increasing rate at higher compression, and less than 1 means that the resistance will increase, but at a decreasing rate, at higher compression. The damper factor coefficient kd represents a percentage of energy absorbed by the system and taken away from the inertia. The damper factor coefficient can be used to smooth out the boundary effect and avoid a repeated cycle of compression and decompression. The time interval Δt can vary depending on the number of frames per second in the animation of the boundary feedback, hardware speed, and other factors. In one implementation, the time interval is about 16 ms between each update. Varying the time interval can alter the effect of the boundary effect. For example, a smaller time interval can result in more fluid motion.
- Alternatively, the scale factor and/or the compressible area can be calculated in different ways. For example, different ranges or values of coefficients can be used.
-
FIG. 10 is a graph of position changes in a UI element over time according to the second example boundary effects model. According to the graph shown in FIG. 10 , a compression effect occurs during the time that the position of the UI element exceeds the boundary position (indicated by the dashed line 1010 in FIG. 10 ). The compression line can indicate the position of a boundary in a UI element. - The shape of the
position curve 1020 can be modified in different ways, such as by adjusting coefficients. For example, by adjusting the spring power coefficient, the uppermost tip of the boundary effect curve 1020 can be made to go higher (e.g., up to a configurable limit) or lower for a particular initial velocity. A higher tip of the curve can indicate a greater compression effect, and a lower tip can indicate a lesser compression effect. As another example, by adjusting the spring factor coefficient, the duration of the compression can be adjusted to be shorter or longer. In FIG. 10 , the duration is represented by the distance between the points at which the line 1010 is crossed by the curve 1020. As another example, by adjusting the damper factor coefficient, the right-hand tail of the curve (e.g., the part of the curve 1020 after the boundary position line 1010 is crossed for the second time) can be moved up or down, resulting in a more gradual or more abrupt end to the compression effect. Coefficients can be adjusted in combination or independently, and other values besides those indicated can be adjusted as well, to cause changes in position. Different combinations of adjustments can be used to obtain specific shapes in the position curve 1020.
-
- Various alternatives to the boundary feedback models described above are possible. For example, if wrapping beyond a boundary (e.g., wrapping back to the beginning of a list after the end of the list has been reached) is permitted, if the compression is caused by dragging, the list can wrap around once a threshold compression has been reached. As another alternative, boundary effects can be omitted.
- A UI system can provide programmatic access to system-wide values e.g., (inertia values, boundary effect values). Using system-wide values can help in maintaining consistent UI behavior across components and frameworks, and can allow adjustments to the behavior in multiple UI elements at once. For example, inertia effects in multiple UI elements can be changed by adjusting system-wide inertia values.
- In one implementation, in order to provide frameworks with access to the reference values of each coefficient, an API is included the ITouchSession module (HRESULT GetMotionParameterValue(IN MotionParameter ID, OUT float*value)). In one implementation, the identifiers and default values for the coefficients whose values are accessible through the ITouchSession::GetMotionParameterValue( ) API are as follows:
-
enum MotionParameter { MotionParameter_Friction, // default: 0.4f MotionParameter_ParkingSpeed, // default: 60.0f MotionParameter_MaximumSpeed, // default: 20000.0f MotionParameter_SpringFactor, // default: 48.0f MotionParameter_SpringPower, // default: 0.75f MotionParameter_DamperFactor, // default: 0.09f MotionParameter_CompressLimit, // default: 300.0f MotionParameter_CompressPercent, // default: 0.0f MotionParameter_CompressOffsetX, // default: 720.0f MotionParameter_CompressOffsetY, // default: 1200.0f };
The values that are accessible through the API can vary depending on implementation. For example, a UI system that uses the first example boundary effects model described above can omit values such as spring factor, spring power, and damper factor values. Or, a UI system can use additional values or replace the listed default values with other default values. Values can be fixed or adjustable, and can be updated during operation of the system (e.g., based on system settings or user preferences). -
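- A short usage sketch of the ITouchSession::GetMotionParameterValue( ) API follows. It assumes that the ITouchSession interface and the MotionParameter enumeration shown above are available from the platform headers; the helper function itself is illustrative.
// Read the system-wide friction and parking speed coefficients so that a control's
// inertia animation stays consistent with the rest of the UI.
HRESULT ReadInertiaCoefficients(ITouchSession* pSession, float* pFriction, float* pParkingSpeed)
{
    if (pSession == nullptr || pFriction == nullptr || pParkingSpeed == nullptr)
        return E_POINTER;
    HRESULT hr = pSession->GetMotionParameterValue(MotionParameter_Friction, pFriction);
    if (SUCCEEDED(hr))
        hr = pSession->GetMotionParameterValue(MotionParameter_ParkingSpeed, pParkingSpeed);
    return hr;
}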
FIG. 11 is a system diagram showing an example UI system 1100 that presents a UI on a device (e.g., a smartphone or other mobile computing device). In this example, the UI system 1100 is a multi-layer UI system that presents motion feedback (e.g., parallax effects, boundary effects, etc.). Alternatively, the system 1100 presents motion feedback in UIs that do not have multiple UI layers. The system 1100 can be used to implement functionality described in other examples, or other functionality. - In this example, the
system 1100 includes ahub module 1110 that provides a declarative description of a hub page toUI control 1120, which controls display of UI layers.UI control 1120 also can be referred to as a “panorama” or “pano” control in a multi-layer UI system. Such a description can be used, for example, when the UI layers move in a panoramic, or horizontal, fashion. Alternatively,UI control 1120 controls UI layers that move vertically, or in some other fashion.UI control 1120 includesmarkup generator 1130 andmotion module 1140. - The declarative description of the hub page includes information that defines UI elements. In a multi-layer UI system, UI elements can include multiple layers, such as a background layer, a title layer, a section header layer, and a content layer. The declarative description of the hub page is provided to
markup generator 1130, along with other information such as style information and/or configuration properties.Markup generator 1130 generates markup that can be used to render the UI layers.Motion module 1140 accepts events (e.g., direct UI manipulation events) generated in response to user input and generates motion commands. The motion commands are provided along with the markup to aUI framework 1150. In theUI framework 1150, the markup and motion commands are received inlayout module 1152, which generates UI rendering requests to be sent to device operating system (OS) 1160. Thedevice OS 1160 receives the rendering requests and causes a rendered UI to be output to a display on the device. System components such ashub module 1110,UI control 1120, andUI framework 1150 also can be implemented as part ofdevice OS 1160. In one implementation, thedevice OS 1160 is a mobile computing device OS. - A user (not shown) can generate user input that affects how the UI is presented. In the example shown in
FIG. 11 , the UI control 1120 listens for direct UI manipulation events generated by UI framework 1150. In UI framework 1150, direct UI manipulation events are generated by interaction module 1154, which receives gesture messages (e.g., messages generated in response to panning or flick gestures by a user interacting with a touchscreen on the device) from device OS 1160. Interaction module 1154 also can accept and generate direct UI manipulation events for navigation messages generated in response to other kinds of user input, such as voice commands, directional buttons on a keypad or keyboard, trackball motions, etc. Device OS 1160 includes functionality for recognizing user gestures and creating messages that can be used by UI framework 1150. UI framework 1150 translates gesture messages into direct UI manipulation events to be sent to UI control 1120. - The
system 1100 can distinguish between different gestures on the touchscreen, such as drag gestures, pan gestures and flick gestures. The system 1100 can also detect a tap or touch gesture, such as where the user touches the touchscreen in a particular location, but does not move the finger, stylus, etc. before breaking contact with the touchscreen. As an alternative, some movement is permitted, within a small threshold, before breaking contact with the touchscreen in a tap or touch gesture. - The
system 1100 interprets an interaction as a particular gesture depending on the nature of the interaction with the touchscreen. The system 1100 obtains one or more discrete inputs from a user's interaction. A gesture can be determined from a series of inputs. For example, when the user touches the touchscreen and begins a movement in a UI element in a horizontal direction while maintaining contact with the touchscreen, the system 1100 can fire a pan input and begin a horizontal movement in the UI element. The system 1100 can continue to fire pan inputs while the user maintains contact with the touchscreen and continues moving. For example, the system 1100 can fire a new pan input each time the user moves N pixels while maintaining contact with the touchscreen. In this way, a continuous physical gesture on a touchscreen can be interpreted by the system 1100 as a series of pan inputs. The system 1100 can continuously update the contact position and rate of movement. When the physical gesture ends (e.g., when the user breaks contact with the touchscreen), the system 1100 can determine whether to interpret the motion at the end as a flick by determining how quickly the user's finger, stylus, etc., was moving when it broke contact with the touchscreen, and whether the rate of movement exceeds a threshold. - The
system 1100 can render motion (e.g., motion in a layer, list, or other UI element) on the display differently depending on the type of gesture. For example, in the case of a horizontal drag gesture (in which the user is currently maintaining contact with the touchscreen) on a content layer in a multi-layer UI system, thesystem 1100 moves the content layer in a horizontal direction by the same distance as the horizontal distance of the drag. In a parallax effect, the title layer and background layer also move in response to the drag. As another example, in the case of a pan gesture (in which the user has ended the gesture) on the content layer, thesystem 1100 can move the content layer in the amount of the pan, and determine whether to perform an additional movement in the content layer. For example, thesystem 1100 can perform a locking animation (i.e., an animation of a movement in the content layer to snap to a lock point) and move the content layer to a left or right lock point associated with an item in the content layer. Thesystem 1100 can determine which lock point associated with the current pane is closer, and transition to the closer lock point. As another example, thesystem 1100 can move the content layer in order to bring an item in the content layer that is in partial view on the display area into full view. Alternatively, thesystem 1100 can maintain the current position of the content layer. As another example, in the case of a flick gesture (e.g., where the user was moving more rapidly when the user broke contact with the touchscreen) on the content layer, thesystem 1100 can use simulated inertia to determine a post-gesture position for the content layer. Alternatively, thesystem 1100 can present some other kind of motion, such as a wrapping animation or other transition animation. The threshold velocity for a flick to be detected (i.e., to distinguish a flick gesture from a pan gesture) can vary depending on implementation. - The
system 1100 also can implement edge tap functionality. In an edge tap, a user can tap within a given margin of edges of the display area to cause a transition (e.g., to a next or previous item in a content layer, a next or previous list element, etc.). This can be useful, for example, where an element is partially in view in the display area. The user can tap near the element to cause the system to bring that element completely into the display area. - Various extensions and alternatives to the embodiments described herein are possible.
- For example, described examples show different positions of UI elements (e.g., layers, lists, etc.) that may be of interest to a user. A user can begin navigation of an element at the beginning of an element, or use different entry points. For example, a user can begin interacting in the middle of a content layer, at the end of a content layer, etc. This can be useful, for example, where a user has previously exited at a position other than the beginning of a layer (e.g., the end of a layer), so that the user can return to the prior location (e.g., before and after a user uses an application (such as an audio player) invoked by actuating a content image).
- As another example, other models can be used to model inertia and movement. For example, although some equations are provided in some examples that approximate motion according to Newtonian physics, other equations can be used that model other kinds of motion (e.g., non-Newtonian physics).
- As another example, although controls can share global parameters, such as a global friction coefficient for inertia motion, parameters can be customized. For example, friction coefficients can be customized for specific controls or content, such as friction coefficients that result in more rapid deceleration of inertia motion for photos or photo slide shows.
- As another example, boundary feedback can be applied to pinch and stretch gestures. Such boundary feedback can be useful, for example, to indicate that a border of the UI element has been reached.
- As another example, additional feedback on gestures can be used. For example, visual feedback such as a distortion effect can be used to alert a user that a UI element with zoom capability (e.g., a map or image) has reached a maximum or minimum zoom level.
- As another example, boundary effects such as compression effects can themselves produce inertia movement. For example, when a vertically scrolling list is compressed upon reaching the end of the list, and breaking contact with the touchscreen causes the list to decompress, the decompression can be combined with a spring or rebound effect, causing the list to scroll in the opposite direction of the motion that originally caused the compression. In this way, the spring effect could provide boundary feedback to indicate that the end of the list had been reached, while also providing an alternative technique for navigating the list. The spring effect could be used to cause a movement in the list similar to a flick in the opposite direction. Inertia motion can be applied to motion caused by the spring effect.
-
FIG. 12 illustrates a generalized example of a suitable computing environment 1200 in which several of the described embodiments may be implemented. The computing environment 1200 is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools described herein may be implemented in diverse general-purpose or special-purpose computing environments. - With reference to
FIG. 12 , thecomputing environment 1200 includes at least oneCPU 1210 and associatedmemory 1220. InFIG. 12 , this mostbasic configuration 1230 is included within a dashed line. Theprocessing unit 1210 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power.FIG. 12 shows a second processing unit 1215 (e.g., a GPU or other co-processing unit) and associatedmemory 1225, which can be used for video acceleration or other processing. Thememory memory stores software 1280 for implementing a system with one or more of the described techniques and tools. - A computing environment may have additional features. For example, the
computing environment 1200 includesstorage 1240, one ormore input devices 1250, one ormore output devices 1260, and one ormore communication connections 1270. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of thecomputing environment 1200. Typically, operating system software (not shown) provides an operating environment for other software executing in thecomputing environment 1200, and coordinates activities of the components of thecomputing environment 1200. - The
storage 1240 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, memory cards, or any other medium which can be used to store information and which can be accessed within thecomputing environment 1200. Thestorage 1240 stores instructions for thesoftware 1280 implementing described techniques and tools. - The input device(s) 1250 may be a touch input device such as a keyboard, mouse, pen, trackball or touchscreen, an audio input device such as a microphone, a scanning device, a digital camera, or another device that provides input to the
computing environment 1200. For video, the input device(s) 1250 may be a video card, TV tuner card, or similar device that accepts video input in analog or digital form, or a CD-ROM or CD-RW that reads video samples into thecomputing environment 1200. The output device(s) 1260 may be a display, printer, speaker, CD-writer, or another device that provides output from thecomputing environment 1200. - The communication connection(s) 1270 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
- The techniques and tools can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the
computing environment 1200, computer-readable media include memory, storage 1240, and combinations thereof. - The techniques and tools can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment. Any of the methods described herein can be implemented by computer-executable instructions encoded on one or more computer-readable media (e.g., computer-readable storage media or other tangible media).
- For the sake of presentation, the detailed description uses terms like “interpret” and “squeeze” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
-
FIG. 13 illustrates a generalized example of a suitable implementation environment 1300 in which described embodiments, techniques, and technologies may be implemented. - In
example environment 1300, various types of services (e.g., computing services 1312) are provided by a cloud 1310. For example, the cloud 1310 can comprise a collection of computing devices, which may be located centrally or distributed, that provide cloud-based services to various types of users and devices connected via a network such as the Internet. The cloud computing environment 1300 can be used in different ways to accomplish computing tasks. For example, with reference to described techniques and tools, some tasks, such as processing user input and presenting a user interface, can be performed on a local computing device, while other tasks, such as storage of data to be used in subsequent processing, can be performed elsewhere in the cloud.
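As a rough illustration of this local/cloud division of tasks, the sketch below processes gesture input and updates the user interface locally while persisting state through a cloud storage service; the `CloudStorage` interface and all other names here are hypothetical and not part of the disclosure.

```typescript
// Hypothetical split of tasks between a local device and a cloud service.

interface CloudStorage {
  // Persists a value under a key; the actual cloud API is an assumption.
  save(key: string, value: string): Promise<void>;
}

interface Viewport {
  x: number;
  y: number;
}

// Local task: interpret user input and update the presented user interface immediately.
function applyPanGesture(viewport: Viewport, dx: number, dy: number): Viewport {
  return { x: viewport.x + dx, y: viewport.y + dy };
}

// Cloud task: store data to be used in subsequent processing elsewhere in the cloud.
async function persistViewport(storage: CloudStorage, viewport: Viewport): Promise<void> {
  await storage.save("viewport", JSON.stringify(viewport));
}
```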
- In example environment 1300, the cloud 1310 provides services for connected devices with a variety of screen capabilities 1320A-N. Connected device 1320A represents a device with a mid-sized screen. For example, connected device 1320A could be a personal computer such as a desktop computer, laptop, notebook, netbook, or the like. Connected device 1320B represents a device with a small-sized screen. For example, connected device 1320B could be a mobile phone, smart phone, personal digital assistant, tablet computer, or the like. Connected device 1320N represents a device with a large screen. For example, connected device 1320N could be a television (e.g., a smart television) or another device connected to a television or projector screen (e.g., a set-top box or gaming console). - A variety of services can be provided by the
cloud 1310 through one or more service providers (not shown). For example, the cloud 1310 can provide services related to mobile computing to one or more of the various connected devices 1320A-N. Cloud services can be customized to the screen size, display capability, or other functionality of the particular connected device (e.g., connected devices 1320A-N). For example, cloud services can be customized for mobile devices by taking into account the screen size, input devices, and communication bandwidth limitations typically associated with mobile devices.
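One way such customization could be expressed in code is sketched below: the service classifies the connected device by screen size (in the spirit of devices 1320A, 1320B, and 1320N) and selects a response accordingly. The thresholds and the `selectImageQuality` policy are illustrative assumptions, not taken from the patent.

```typescript
// Hypothetical customization of a cloud service by screen class and bandwidth.

type ScreenClass = "small" | "mid" | "large";

// Classify a connected device by its screen diagonal; thresholds are assumptions.
function classifyScreen(diagonalInches: number): ScreenClass {
  if (diagonalInches < 7) return "small";   // e.g., mobile phones and PDAs (cf. 1320B)
  if (diagonalInches < 30) return "mid";    // e.g., laptops and desktops (cf. 1320A)
  return "large";                           // e.g., televisions and projectors (cf. 1320N)
}

// Select how much image data to send, taking screen size and bandwidth into account.
function selectImageQuality(screen: ScreenClass, lowBandwidth: boolean): "thumbnail" | "standard" | "high" {
  if (lowBandwidth || screen === "small") return "thumbnail";
  return screen === "large" ? "high" : "standard";
}
```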
- FIG. 14 is a system diagram depicting an exemplary mobile device 1400 including a variety of optional hardware and software components, shown generally at 1402. Any components 1402 in the mobile device can communicate with any other component, although not all connections are shown, for ease of illustration. The mobile device can be any of a variety of computing devices (e.g., cell phone, smartphone, handheld computer, personal digital assistant (PDA), etc.) and can allow wireless two-way communications with one or more mobile communications networks 1404, such as a cellular or satellite network. - The illustrated mobile device can include a controller or processor 1410 (e.g., signal processor, microprocessor, ASIC, or other control and processing logic circuitry) for performing such tasks as signal coding, data processing, input/output processing, power control, and/or other functions. An
operating system 1412 can control the allocation and usage of the components 1402 and support for one or more application programs 1414. The application programs can include common mobile computing applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications), or any other computing application. - The illustrated mobile device can include
memory 1420. Memory 1420 can include non-removable memory 1422 and/or removable memory 1424. The non-removable memory 1422 can include RAM, ROM, flash memory, a disk drive, or other well-known memory storage technologies. The removable memory 1424 can include flash memory or a Subscriber Identity Module (SIM) card, which is well known in GSM communication systems, or other well-known memory storage technologies, such as smart cards. The memory 1420 can be used for storing data and/or code for running the operating system 1412 and the applications 1414. Example data can include web pages, text, images, sound files, video data, or other data sets to be sent to and/or received from one or more network servers or other mobile devices via one or more wired or wireless networks. The memory 1420 can be used to store a subscriber identifier, such as an International Mobile Subscriber Identity (IMSI), and an equipment identifier, such as an International Mobile Equipment Identifier (IMEI). Such identifiers can be transmitted to a network server to identify users and equipment. - The mobile device can support one or
more input devices 1430, such as a touchscreen 1432, microphone 1434, camera 1436, physical keyboard 1438 and/or trackball 1440, and one or more output devices 1450, such as a speaker 1452 and a display 1454. Other possible output devices (not shown) can include a piezoelectric or other haptic output device. Some devices can serve more than one input/output function. For example, touchscreen 1432 and display 1454 can be combined in a single input/output device. -
Touchscreen 1432 can accept input in different ways. For example, capacitive touchscreens detect touch input when an object (e.g., a fingertip or stylus) distorts or interrupts an electrical current running across the surface. As another example, touchscreens can use optical sensors to detect touch input when beams from the optical sensors are interrupted. Physical contact with the surface of the screen is not necessary for input to be detected by some touchscreens. - A
wireless modem 1460 can be coupled to an antenna (not shown) and can support two-way communications between the processor 1410 and external devices, as is well understood in the art. The modem 1460 is shown generically and can include a cellular modem for communicating with the mobile communication network 1404 and/or other radio-based modems (e.g., Bluetooth or Wi-Fi). The wireless modem 1460 is typically configured for communication with one or more cellular networks, such as a GSM network for data and voice communications within a single cellular network, between cellular networks, or between the mobile device and a public switched telephone network (PSTN). - The mobile device can further include at least one input/
output port 1480, a power supply 1482, a satellite navigation system receiver 1484, such as a Global Positioning System (GPS) receiver, an accelerometer 1486, a transceiver 1488 (for wirelessly transmitting analog or digital signals) and/or a physical connector 1490, which can be a USB port, IEEE 1394 (FireWire) port, and/or RS-232 port. The illustrated components 1402 are not required or all-inclusive, as components can be deleted and other components can be added. - In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/773,803 US20110202834A1 (en) | 2010-02-12 | 2010-05-04 | Visual motion feedback for user interface |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US30400410P | 2010-02-12 | 2010-02-12 | |
US12/773,803 US20110202834A1 (en) | 2010-02-12 | 2010-05-04 | Visual motion feedback for user interface |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110202834A1 true US20110202834A1 (en) | 2011-08-18 |
Family
ID=44370492
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/721,419 Active 2033-07-08 US9417787B2 (en) | 2010-02-12 | 2010-03-10 | Distortion effects to indicate location in a movable data collection |
US12/773,803 Abandoned US20110202834A1 (en) | 2010-02-12 | 2010-05-04 | Visual motion feedback for user interface |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/721,419 Active 2033-07-08 US9417787B2 (en) | 2010-02-12 | 2010-03-10 | Distortion effects to indicate location in a movable data collection |
Country Status (1)
Country | Link |
---|---|
US (2) | US9417787B2 (en) |
Cited By (192)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110107264A1 (en) * | 2009-10-30 | 2011-05-05 | Motorola, Inc. | Method and Device for Enhancing Scrolling Operations in a Display Device |
US20110202837A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Multi-layer user interface with flexible parallel and orthogonal movement |
US20110199318A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Multi-layer user interface with flexible parallel movement |
US20110202859A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Distortion effects to indicate location in a movable data collection |
US20110246916A1 (en) * | 2010-04-02 | 2011-10-06 | Nokia Corporation | Methods and apparatuses for providing an enhanced user interface |
US20120019453A1 (en) * | 2010-07-26 | 2012-01-26 | Wayne Carl Westerman | Motion continuation of touch input |
US20120026181A1 (en) * | 2010-07-30 | 2012-02-02 | Google Inc. | Viewable boundary feedback |
US20120056889A1 (en) * | 2010-09-07 | 2012-03-08 | Microsoft Corporation | Alternate source for controlling an animation |
US20120066644A1 (en) * | 2010-09-14 | 2012-03-15 | Hal Laboratory Inc. | Computer-readable storage medium having stored thereon display control program, display control system, display control apparatus, and display control method |
US20120066627A1 (en) * | 2010-09-14 | 2012-03-15 | Nintendo Co., Ltd. | Computer-readable storage medium having stored thereon display control program, display control system, display control apparatus, and display control method |
US20120066621A1 (en) * | 2010-09-14 | 2012-03-15 | Nintendo Co., Ltd. | Computer-readable storage medium having stored thereon display control program, display control system, display control apparatus, and display control method |
US20120072863A1 (en) * | 2010-09-21 | 2012-03-22 | Nintendo Co., Ltd. | Computer-readable storage medium, display control apparatus, display control system, and display control method |
US20120081271A1 (en) * | 2010-10-01 | 2012-04-05 | Imerj LLC | Application display transitions between single and multiple displays |
US20120084685A1 (en) * | 2010-10-01 | 2012-04-05 | Heynen Patrick O | Method and apparatus for designing layout for user interfaces |
US20120084670A1 (en) * | 2010-10-05 | 2012-04-05 | Citrix Systems, Inc. | Gesture support for shared sessions |
US20120098769A1 (en) * | 2010-10-26 | 2012-04-26 | Aisin Aw Co., Ltd. | Display device, display method, and display program |
US20120202187A1 (en) * | 2011-02-03 | 2012-08-09 | Shadowbox Comics, Llc | Method for distribution and display of sequential graphic art |
US20120274550A1 (en) * | 2010-03-24 | 2012-11-01 | Robert Campbell | Gesture mapping for display device |
US20130055150A1 (en) * | 2011-08-24 | 2013-02-28 | Primesense Ltd. | Visual feedback for tactile and non-tactile user interfaces |
US20130061170A1 (en) * | 2011-09-01 | 2013-03-07 | Sony Corporation | User interface element |
KR20130031762A (en) * | 2011-09-21 | 2013-03-29 | 엘지전자 주식회사 | Mobile terminal and control method for mobile terminal |
WO2013049406A1 (en) * | 2011-10-01 | 2013-04-04 | Oracle International Corporation | Moving an object about a display frame by combining classical mechanics of motion |
US20130097551A1 (en) * | 2011-10-14 | 2013-04-18 | Edward P.A. Hogan | Device, Method, and Graphical User Interface for Data Input Using Virtual Sliders |
US20130169649A1 (en) * | 2012-01-04 | 2013-07-04 | Microsoft Corporation | Movement endpoint exposure |
US20130176316A1 (en) * | 2012-01-06 | 2013-07-11 | Microsoft Corporation | Panning animations |
US20130198663A1 (en) * | 2012-02-01 | 2013-08-01 | Michael Matas | Hierarchical User Interface |
US8514252B1 (en) | 2010-09-22 | 2013-08-20 | Google Inc. | Feedback during crossing of zoom levels |
US20130222340A1 (en) * | 2012-02-28 | 2013-08-29 | Canon Kabushiki Kaisha | Information processing apparatus, control method thereof, and storage medium |
US20130246936A1 (en) * | 2010-08-31 | 2013-09-19 | Anders Nancke-Krogh | System and method for unlimited multi-user computer desktop environment |
US20130268883A1 (en) * | 2012-04-05 | 2013-10-10 | Lg Electronics Inc. | Mobile terminal and control method thereof |
WO2013158750A2 (en) * | 2012-04-17 | 2013-10-24 | Wittich David | System and method for providing recursive feedback during an assembly operation |
US20130290868A1 (en) * | 2012-04-30 | 2013-10-31 | Anders Nancke-Krogh | System and method for unlimited multi-user computer desktop environment |
US20130332843A1 (en) * | 2012-06-08 | 2013-12-12 | Jesse William Boettcher | Simulating physical materials and light interaction in a user interface of a resource-constrained device |
GB2503654A (en) * | 2012-06-27 | 2014-01-08 | Samsung Electronics Co Ltd | Methods of outputting a manipulation of a graphic upon a boundary condition being met |
US20140033116A1 (en) * | 2012-07-25 | 2014-01-30 | Daniel Jakobs | Dynamic layering user interface |
EP2696269A1 (en) * | 2012-08-10 | 2014-02-12 | BlackBerry Limited | Method of momentum based zoom of content on an electronic device |
CN103576859A (en) * | 2013-10-09 | 2014-02-12 | 深迪半导体(上海)有限公司 | Man-machine interaction method for mobile terminal browsing |
US20140115533A1 (en) * | 2012-10-23 | 2014-04-24 | Nintendo Co., Ltd. | Information-processing device, storage medium, information-processing method, and information-processing system |
US20140129979A1 (en) * | 2012-11-02 | 2014-05-08 | Samsung Electronics Co., Ltd. | Display device and list display method thereof |
CN103809763A (en) * | 2012-11-15 | 2014-05-21 | 技嘉科技股份有限公司 | Keyboard device |
US20140195979A1 (en) * | 2013-01-10 | 2014-07-10 | Appsense Limited | Interactive user interface |
US20140215383A1 (en) * | 2013-01-31 | 2014-07-31 | Disney Enterprises, Inc. | Parallax scrolling user interface |
US8830190B2 (en) | 2010-10-25 | 2014-09-09 | Aisin Aw Co., Ltd. | Display device, display method, and display program |
US20140258904A1 (en) * | 2013-03-08 | 2014-09-11 | Samsung Display Co., Ltd. | Terminal and method of controlling the same |
US20140285507A1 (en) * | 2013-03-19 | 2014-09-25 | Canon Kabushiki Kaisha | Display control device, display control method, and computer-readable storage medium |
US20140289665A1 (en) * | 2013-03-25 | 2014-09-25 | Konica Minolta, Inc. | Device and method for determining gesture, and computer-readable storage medium for computer program |
US8850350B2 (en) | 2012-10-16 | 2014-09-30 | Google Inc. | Partial gesture text entry |
US20140298221A1 (en) * | 2010-12-22 | 2014-10-02 | Thomson Licensing | Method and apparatus for restricting user operations when applied to cards or windows |
US20140298258A1 (en) * | 2013-03-28 | 2014-10-02 | Microsoft Corporation | Switch List Interactions |
US8863039B2 (en) | 2011-04-18 | 2014-10-14 | Microsoft Corporation | Multi-dimensional boundary effects |
US20140310661A1 (en) * | 2013-04-15 | 2014-10-16 | Microsoft Corporation | Dynamic management of edge inputs by users on a touch device |
US20140317538A1 (en) * | 2013-04-22 | 2014-10-23 | Microsoft Corporation | User interface response to an asynchronous manipulation |
US8887103B1 (en) * | 2013-04-22 | 2014-11-11 | Google Inc. | Dynamically-positioned character string suggestions for gesture typing |
US20140351698A1 (en) * | 2013-05-23 | 2014-11-27 | Canon Kabushiki Kaisha | Display control apparatus and control method for the same |
US20140365882A1 (en) * | 2013-06-09 | 2014-12-11 | Apple Inc. | Device, method, and graphical user interface for transitioning between user interfaces |
US20140375572A1 (en) * | 2013-06-20 | 2014-12-25 | Microsoft Corporation | Parametric motion curves and manipulable content |
US20150040034A1 (en) * | 2013-08-01 | 2015-02-05 | Nintendo Co., Ltd. | Information-processing device, information-processing system, storage medium, and information-processing method |
US20150070360A1 (en) * | 2012-04-09 | 2015-03-12 | Tencent Technology (Shenzhen) Company Limited | Method and mobile terminal for drawing sliding trace |
US20150074614A1 (en) * | 2012-01-25 | 2015-03-12 | Thomson Licensing | Directional control using a touch sensitive device |
US20150074597A1 (en) * | 2013-09-11 | 2015-03-12 | Nvidia Corporation | Separate smoothing filter for pinch-zooming touchscreen gesture response |
US9001149B2 (en) | 2010-10-01 | 2015-04-07 | Z124 | Max mode |
WO2014200676A3 (en) * | 2013-06-09 | 2015-04-16 | Apple Inc. | Device, method, and graphical user interface for moving user interface objects |
US9013264B2 (en) | 2011-03-12 | 2015-04-21 | Perceptive Devices, Llc | Multipurpose controller for electronic devices, facial expressions management and drowsiness detection |
US9021380B2 (en) | 2012-10-05 | 2015-04-28 | Google Inc. | Incremental multi-touch gesture recognition |
EP2864860A2 (en) * | 2012-06-22 | 2015-04-29 | Microsoft Technology Licensing, LLC | Wrap-around navigation |
US20150143286A1 (en) * | 2013-11-20 | 2015-05-21 | Xiaomi Inc. | Method and terminal for responding to sliding operation |
US9043706B2 (en) | 2010-08-31 | 2015-05-26 | Anders Nancke-Krogh | System and method for using state replication between application instances to provide a collaborative desktop environment |
US9075460B2 (en) | 2012-08-10 | 2015-07-07 | Blackberry Limited | Method of momentum based zoom of content on an electronic device |
US20150227292A1 (en) * | 2012-07-16 | 2015-08-13 | Samsung Electronics Co., Ltd. | Method and apparatus for moving object in mobile terminal |
US9134906B2 (en) | 2012-10-16 | 2015-09-15 | Google Inc. | Incremental multi-word recognition |
US9158494B2 (en) | 2011-09-27 | 2015-10-13 | Z124 | Minimizing and maximizing between portrait dual display and portrait single display |
EP2871561A4 (en) * | 2012-08-14 | 2015-12-30 | Xiaomi Inc | Desktop system of mobile terminal and interface interaction method and device |
US9229918B2 (en) | 2010-12-23 | 2016-01-05 | Microsoft Technology Licensing, Llc | Presenting an application change through a tile |
WO2016040205A1 (en) * | 2014-09-09 | 2016-03-17 | Microsoft Technology Licensing, Llc | Parametric inertia and apis |
EP2573666A3 (en) * | 2011-09-21 | 2016-06-01 | LG Electronics Inc. | Mobile terminal and control method thereof |
US20160196033A1 (en) * | 2013-09-27 | 2016-07-07 | Huawei Technologies Co., Ltd. | Method for Displaying Interface Content and User Equipment |
US20160224226A1 (en) * | 2010-12-01 | 2016-08-04 | Sony Corporation | Display processing apparatus for performing image magnification based on face detection |
US9423951B2 (en) | 2010-12-31 | 2016-08-23 | Microsoft Technology Licensing, Llc | Content-based snap point |
US20160286123A1 (en) * | 2015-03-27 | 2016-09-29 | National Taipei University Of Technology | Method of image conversion operation for panorama dynamic ip camera |
US9477393B2 (en) | 2013-06-09 | 2016-10-25 | Apple Inc. | Device, method, and graphical user interface for displaying application status information |
US9489111B2 (en) | 2010-01-06 | 2016-11-08 | Apple Inc. | Device, method, and graphical user interface for navigating through a range of values |
WO2016200586A1 (en) * | 2015-06-07 | 2016-12-15 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9552080B2 (en) | 2012-10-05 | 2017-01-24 | Google Inc. | Incremental feature-based gesture-keyboard decoding |
US9557876B2 (en) | 2012-02-01 | 2017-01-31 | Facebook, Inc. | Hierarchical user interface |
US9600120B2 (en) | 2013-03-15 | 2017-03-21 | Apple Inc. | Device, method, and graphical user interface for orientation-based parallax display |
US9602729B2 (en) | 2015-06-07 | 2017-03-21 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
WO2017048187A1 (en) * | 2015-09-16 | 2017-03-23 | Adssets AB | Method for movement on the display of a device |
US9612741B2 (en) | 2012-05-09 | 2017-04-04 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US9619076B2 (en) | 2012-05-09 | 2017-04-11 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US9632664B2 (en) | 2015-03-08 | 2017-04-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9639184B2 (en) | 2015-03-19 | 2017-05-02 | Apple Inc. | Touch input cursor manipulation |
US9645724B2 (en) | 2012-02-01 | 2017-05-09 | Facebook, Inc. | Timeline based content organization |
US9645732B2 (en) | 2015-03-08 | 2017-05-09 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US9674426B2 (en) | 2015-06-07 | 2017-06-06 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US9696888B2 (en) | 2010-12-20 | 2017-07-04 | Microsoft Technology Licensing, Llc | Application-launching interface for multiple modes |
US9710453B2 (en) | 2012-10-16 | 2017-07-18 | Google Inc. | Multi-gesture text input prediction |
US9712577B2 (en) | 2013-06-09 | 2017-07-18 | Apple Inc. | Device, method, and graphical user interface for sharing content from a respective application |
US9753639B2 (en) | 2012-05-09 | 2017-09-05 | Apple Inc. | Device, method, and graphical user interface for displaying content associated with a corresponding affordance |
US9778771B2 (en) | 2012-12-29 | 2017-10-03 | Apple Inc. | Device, method, and graphical user interface for transitioning between touch input to display output relationships |
US9785338B2 (en) | 2012-07-02 | 2017-10-10 | Mosaiqq, Inc. | System and method for providing a user interaction interface using a multi-touch gesture recognition engine |
US9785305B2 (en) | 2015-03-19 | 2017-10-10 | Apple Inc. | Touch input cursor manipulation |
US9830311B2 (en) | 2013-01-15 | 2017-11-28 | Google Llc | Touch keyboard using language and spatial models |
US9830048B2 (en) | 2015-06-07 | 2017-11-28 | Apple Inc. | Devices and methods for processing touch inputs with instructions in a web page |
US9841895B2 (en) | 2013-05-03 | 2017-12-12 | Google Llc | Alternative hypothesis error correction for gesture typing |
US9880735B2 (en) | 2015-08-10 | 2018-01-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9886184B2 (en) | 2012-05-09 | 2018-02-06 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
CN107728914A (en) * | 2017-08-21 | 2018-02-23 | 莱诺斯科技(北京)股份有限公司 | A kind of satellite power supply and distribution software touch-control man-machine interactive system |
US9959025B2 (en) | 2012-12-29 | 2018-05-01 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US9990121B2 (en) | 2012-05-09 | 2018-06-05 | Apple Inc. | Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input |
US9990107B2 (en) | 2015-03-08 | 2018-06-05 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US9996231B2 (en) | 2012-05-09 | 2018-06-12 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
RU2660642C2 (en) * | 2012-06-20 | 2018-07-06 | Самсунг Электроникс Ко., Лтд. | Information display apparatus and method of user device |
US10019435B2 (en) | 2012-10-22 | 2018-07-10 | Google Llc | Space prediction for text input |
US10037138B2 (en) | 2012-12-29 | 2018-07-31 | Apple Inc. | Device, method, and graphical user interface for switching between user interfaces |
US10042542B2 (en) | 2012-05-09 | 2018-08-07 | Apple Inc. | Device, method, and graphical user interface for moving and dropping a user interface object |
US10048757B2 (en) | 2015-03-08 | 2018-08-14 | Apple Inc. | Devices and methods for controlling media presentation |
US10061759B2 (en) | 2012-06-07 | 2018-08-28 | Microsoft Technology Licensing, Llc | Progressive loading for web-based spreadsheet applications |
US10067653B2 (en) | 2015-04-01 | 2018-09-04 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10073615B2 (en) | 2012-05-09 | 2018-09-11 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10078442B2 (en) | 2012-12-29 | 2018-09-18 | Apple Inc. | Device, method, and graphical user interface for determining whether to scroll or select content based on an intensity theshold |
US10095391B2 (en) | 2012-05-09 | 2018-10-09 | Apple Inc. | Device, method, and graphical user interface for selecting user interface objects |
US10095396B2 (en) | 2015-03-08 | 2018-10-09 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US10120541B2 (en) | 2013-06-09 | 2018-11-06 | Apple Inc. | Device, method, and graphical user interface for sharing content from a respective application |
US10126930B2 (en) | 2012-05-09 | 2018-11-13 | Apple Inc. | Device, method, and graphical user interface for scrolling nested regions |
US10140013B2 (en) | 2015-02-13 | 2018-11-27 | Here Global B.V. | Method, apparatus and computer program product for calculating a virtual touch position |
US10162452B2 (en) | 2015-08-10 | 2018-12-25 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10163245B2 (en) | 2016-03-25 | 2018-12-25 | Microsoft Technology Licensing, Llc | Multi-mode animation system |
US10175864B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for selecting object within a group of objects in accordance with contact intensity |
US10175757B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for touch-based operations performed and reversed in a user interface |
US10191634B2 (en) * | 2015-01-30 | 2019-01-29 | Xiaomi Inc. | Methods and devices for displaying document on touch screen display |
US10200598B2 (en) | 2015-06-07 | 2019-02-05 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10235035B2 (en) | 2015-08-10 | 2019-03-19 | Apple Inc. | Devices, methods, and graphical user interfaces for content navigation and manipulation |
US10248308B2 (en) | 2015-08-10 | 2019-04-02 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interfaces with physical gestures |
US10254955B2 (en) | 2011-09-10 | 2019-04-09 | Microsoft Technology Licensing, Llc | Progressively indicating new content in an application-selectable user interface |
US10275087B1 (en) | 2011-08-05 | 2019-04-30 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10282085B2 (en) | 2013-08-27 | 2019-05-07 | Samsung Electronics Co., Ltd | Method for displaying data and electronic device thereof |
US10303325B2 (en) | 2011-05-27 | 2019-05-28 | Microsoft Technology Licensing, Llc | Multi-application environment |
US10346030B2 (en) | 2015-06-07 | 2019-07-09 | Apple Inc. | Devices and methods for navigating between user interfaces |
US20190221047A1 (en) * | 2013-06-01 | 2019-07-18 | Apple Inc. | Intelligently placing labels |
US10416800B2 (en) | 2015-08-10 | 2019-09-17 | Apple Inc. | Devices, methods, and graphical user interfaces for adjusting user interface objects |
US10437333B2 (en) | 2012-12-29 | 2019-10-08 | Apple Inc. | Device, method, and graphical user interface for forgoing generation of tactile output for a multi-contact gesture |
US10456082B2 (en) | 2014-11-28 | 2019-10-29 | Nokia Technologies Oy | Method and apparatus for contacting skin with sensor equipment |
US10496260B2 (en) | 2012-05-09 | 2019-12-03 | Apple Inc. | Device, method, and graphical user interface for pressure-based alteration of controls in a user interface |
US10579250B2 (en) | 2011-09-01 | 2020-03-03 | Microsoft Technology Licensing, Llc | Arranging tiles |
US10592070B2 (en) | 2015-10-12 | 2020-03-17 | Microsoft Technology Licensing, Llc | User interface directional navigation using focus maps |
US10620781B2 (en) | 2012-12-29 | 2020-04-14 | Apple Inc. | Device, method, and graphical user interface for moving a cursor according to a change in an appearance of a control icon with simulated three-dimensional characteristics |
US10810241B2 (en) | 2016-06-12 | 2020-10-20 | Apple, Inc. | Arrangements of documents in a document feed |
US10838570B2 (en) * | 2015-02-10 | 2020-11-17 | Etter Studio Ltd. | Multi-touch GUI featuring directional compression and expansion of graphical content |
US10884592B2 (en) | 2015-03-02 | 2021-01-05 | Apple Inc. | Control of system zoom magnification using a rotatable input mechanism |
US10921976B2 (en) * | 2013-09-03 | 2021-02-16 | Apple Inc. | User interface for manipulating user interface objects |
US10928907B2 (en) | 2018-09-11 | 2021-02-23 | Apple Inc. | Content-based tactile outputs |
US10969944B2 (en) | 2010-12-23 | 2021-04-06 | Microsoft Technology Licensing, Llc | Application reporting in an application-selectable user interface |
US11010038B2 (en) * | 2010-10-08 | 2021-05-18 | Sony Corporation | Information processing apparatus, information processing method and program for displaying an image during overdrag |
US11029942B1 (en) | 2011-12-19 | 2021-06-08 | Majen Tech, LLC | System, method, and computer program product for device coordination |
US11068083B2 (en) | 2014-09-02 | 2021-07-20 | Apple Inc. | Button functionality |
US11068128B2 (en) | 2013-09-03 | 2021-07-20 | Apple Inc. | User interface object manipulations in a user interface |
US11073799B2 (en) | 2016-06-11 | 2021-07-27 | Apple Inc. | Configuring context-specific user interfaces |
US11089134B1 (en) | 2011-12-19 | 2021-08-10 | Majen Tech, LLC | System, method, and computer program product for coordination among multiple devices |
US11157135B2 (en) | 2014-09-02 | 2021-10-26 | Apple Inc. | Multi-dimensional object rearrangement |
US11157143B2 (en) | 2014-09-02 | 2021-10-26 | Apple Inc. | Music user interface |
US11250385B2 (en) | 2014-06-27 | 2022-02-15 | Apple Inc. | Reduced size user interface |
US11272017B2 (en) | 2011-05-27 | 2022-03-08 | Microsoft Technology Licensing, Llc | Application notifications manifest |
US11343370B1 (en) | 2012-11-02 | 2022-05-24 | Majen Tech, LLC | Screen interface for a mobile device apparatus |
US11402968B2 (en) | 2014-09-02 | 2022-08-02 | Apple Inc. | Reduced size user in interface |
US11431834B1 (en) | 2013-01-10 | 2022-08-30 | Majen Tech, LLC | Screen interface for a mobile device apparatus |
US11435830B2 (en) | 2018-09-11 | 2022-09-06 | Apple Inc. | Content-based tactile outputs |
US11461984B2 (en) * | 2018-08-27 | 2022-10-04 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for multi-user collaborative creation, and storage medium |
US11463576B1 (en) | 2013-01-10 | 2022-10-04 | Majen Tech, LLC | Screen interface for a mobile device apparatus |
US11513675B2 (en) | 2012-12-29 | 2022-11-29 | Apple Inc. | User interface for manipulating user interface objects |
US11537259B2 (en) | 2010-10-01 | 2022-12-27 | Z124 | Displayed image transition indicator |
US20230012482A1 (en) * | 2019-03-24 | 2023-01-19 | Apple Inc. | Stacked media elements with selective parallax effects |
US20230035532A1 (en) * | 2021-05-14 | 2023-02-02 | Apple Inc. | User interfaces related to time |
US11656751B2 (en) | 2013-09-03 | 2023-05-23 | Apple Inc. | User interface for manipulating user interface objects with magnetic properties |
US11694590B2 (en) | 2020-12-21 | 2023-07-04 | Apple Inc. | Dynamic user interface with time indicator |
US11720239B2 (en) | 2021-01-07 | 2023-08-08 | Apple Inc. | Techniques for user interfaces related to an event |
US11743221B2 (en) | 2014-09-02 | 2023-08-29 | Apple Inc. | Electronic message user interface |
US11740776B2 (en) | 2014-08-02 | 2023-08-29 | Apple Inc. | Context-specific user interfaces |
US11775141B2 (en) | 2017-05-12 | 2023-10-03 | Apple Inc. | Context-specific user interfaces |
US11797968B2 (en) | 2017-05-16 | 2023-10-24 | Apple Inc. | User interfaces for peer-to-peer transfers |
US11822778B2 (en) | 2020-05-11 | 2023-11-21 | Apple Inc. | User interfaces related to time |
US11842032B2 (en) | 2020-05-11 | 2023-12-12 | Apple Inc. | User interfaces for managing user interface sharing |
CN117421087A (en) * | 2021-05-14 | 2024-01-19 | 苹果公司 | Time-dependent user interface |
US11893212B2 (en) | 2021-06-06 | 2024-02-06 | Apple Inc. | User interfaces for managing application widgets |
US11908343B2 (en) | 2015-08-20 | 2024-02-20 | Apple Inc. | Exercised-based watch face and complications |
US11922004B2 (en) | 2014-08-15 | 2024-03-05 | Apple Inc. | Weather user interface |
US11960701B2 (en) | 2019-05-06 | 2024-04-16 | Apple Inc. | Using an illustration to show the passing of time |
US11977411B2 (en) | 2018-05-07 | 2024-05-07 | Apple Inc. | Methods and systems for adding respective complications on a user interface |
US11983702B2 (en) | 2021-02-01 | 2024-05-14 | Apple Inc. | Displaying a representation of a card with a layered structure |
US12019862B2 (en) | 2015-03-08 | 2024-06-25 | Apple Inc. | Sharing user-configurable graphical constructs |
US12045014B2 (en) | 2022-01-24 | 2024-07-23 | Apple Inc. | User interfaces for indicating time |
US12050766B2 (en) | 2013-09-03 | 2024-07-30 | Apple Inc. | Crown input for a wearable electronic device |
US12147964B2 (en) | 2023-06-13 | 2024-11-19 | Apple Inc. | User interfaces for peer-to-peer transfers |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7509588B2 (en) | 2005-12-30 | 2009-03-24 | Apple Inc. | Portable electronic device with interface reconfiguration mode |
US10313505B2 (en) | 2006-09-06 | 2019-06-04 | Apple Inc. | Portable multifunction device, method, and graphical user interface for configuring and displaying widgets |
US8519964B2 (en) | 2007-01-07 | 2013-08-27 | Apple Inc. | Portable multifunction device, method, and graphical user interface supporting user navigations of graphical objects on a touch screen display |
US8619038B2 (en) | 2007-09-04 | 2013-12-31 | Apple Inc. | Editing interface |
KR101588242B1 (en) * | 2009-07-13 | 2016-01-25 | 삼성전자주식회사 | Apparatus and method for scroll of a portable terminal |
US10788976B2 (en) | 2010-04-07 | 2020-09-29 | Apple Inc. | Device, method, and graphical user interface for managing folders with multiple pages |
US8881060B2 (en) | 2010-04-07 | 2014-11-04 | Apple Inc. | Device, method, and graphical user interface for managing folders |
EP2378406B1 (en) * | 2010-04-13 | 2018-08-22 | LG Electronics Inc. | Mobile terminal and method of controlling operation of the mobile terminal |
CN102270081B (en) * | 2010-06-03 | 2015-09-23 | 腾讯科技(深圳)有限公司 | A kind of method and device adjusting size of list element |
KR20120012115A (en) * | 2010-07-30 | 2012-02-09 | 삼성전자주식회사 | Method for user interface and display apparatus applying the same |
EP2625685B1 (en) | 2010-10-05 | 2020-04-22 | Citrix Systems, Inc. | Display management for native user experiences |
EP2676178B1 (en) * | 2011-01-26 | 2020-04-22 | Novodigit Sarl | Breath-sensitive digital interface |
US9182897B2 (en) * | 2011-04-22 | 2015-11-10 | Qualcomm Incorporated | Method and apparatus for intuitive wrapping of lists in a user interface |
US9003318B2 (en) * | 2011-05-26 | 2015-04-07 | Linden Research, Inc. | Method and apparatus for providing graphical interfaces for declarative specifications |
US9035967B2 (en) * | 2011-06-30 | 2015-05-19 | Google Technology Holdings LLC | Method and device for enhancing scrolling and other operations on a display |
US20130042208A1 (en) * | 2011-08-10 | 2013-02-14 | International Business Machines Coporation | Cursor for enhanced interaction with user interface controls |
JP5935267B2 (en) * | 2011-09-01 | 2016-06-15 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
TW201319921A (en) * | 2011-11-07 | 2013-05-16 | Benq Corp | Method for screen control and method for screen display on a touch screen |
US9313290B2 (en) * | 2012-05-17 | 2016-04-12 | Ncr Corporation | Data transfer between devices |
US20130321306A1 (en) * | 2012-05-21 | 2013-12-05 | Door Number 3 | Common drawing model |
US8607156B1 (en) * | 2012-08-16 | 2013-12-10 | Google Inc. | System and method for indicating overscrolling in a mobile device |
US9535566B2 (en) * | 2012-08-24 | 2017-01-03 | Intel Corporation | Method, apparatus and system of displaying a file |
JP5995637B2 (en) * | 2012-10-04 | 2016-09-21 | キヤノン株式会社 | IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM |
JP2015537299A (en) * | 2012-10-31 | 2015-12-24 | サムスン エレクトロニクス カンパニー リミテッド | Display device and display method thereof |
US9082348B2 (en) * | 2012-12-07 | 2015-07-14 | Blackberry Limited | Methods and devices for scrolling a display page |
US9329764B2 (en) * | 2013-03-15 | 2016-05-03 | Google Inc. | Overscroll visual effects |
US9310988B2 (en) * | 2013-09-10 | 2016-04-12 | Google Inc. | Scroll end effects for websites and content |
US10250735B2 (en) | 2013-10-30 | 2019-04-02 | Apple Inc. | Displaying relevant user interface objects |
USD801993S1 (en) * | 2014-03-14 | 2017-11-07 | Microsoft Corporation | Display screen with animated graphical user interface |
US9841870B2 (en) * | 2014-08-21 | 2017-12-12 | The Boeing Company | Integrated visualization and analysis of a complex system |
CN106201237A (en) * | 2015-05-05 | 2016-12-07 | 阿里巴巴集团控股有限公司 | A kind of information collection method and device |
US11816325B2 (en) | 2016-06-12 | 2023-11-14 | Apple Inc. | Application shortcuts for carplay |
US10283082B1 (en) | 2016-10-29 | 2019-05-07 | Dvir Gassner | Differential opacity position indicator |
US11675476B2 (en) | 2019-05-05 | 2023-06-13 | Apple Inc. | User interfaces for widgets |
KR20230004838A (en) * | 2020-04-30 | 2023-01-06 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Information sharing methods, information display methods, devices, electronic equipment and storage media |
CN112035202B (en) * | 2020-08-25 | 2021-11-23 | 北京字节跳动网络技术有限公司 | Method and device for displaying friend activity information, electronic equipment and storage medium |
Citations (79)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5581670A (en) * | 1993-07-21 | 1996-12-03 | Xerox Corporation | User interface having movable sheet with click-through tools |
US5860073A (en) * | 1995-07-17 | 1999-01-12 | Microsoft Corporation | Style sheets for publishing system |
US5874961A (en) * | 1997-03-19 | 1999-02-23 | International Business Machines Corporation | Scroll bar amplification apparatus and method |
US6028593A (en) * | 1995-12-01 | 2000-02-22 | Immersion Corporation | Method and apparatus for providing simulated physical interactions within computer generated environments |
US6157381A (en) * | 1997-11-18 | 2000-12-05 | International Business Machines Corporation | Computer system, user interface component and method utilizing non-linear scroll bar |
US6246406B1 (en) * | 1998-02-06 | 2001-06-12 | Sun Microsystems, Inc. | Techniques for navigating layers of a user interface |
US6366302B1 (en) * | 1998-12-22 | 2002-04-02 | Motorola, Inc. | Enhanced graphic user interface for mobile radiotelephones |
US20020135602A1 (en) * | 2001-03-20 | 2002-09-26 | Jeffery Davis | Scrolling method using screen pointing device |
US6469718B1 (en) * | 1997-08-22 | 2002-10-22 | Sony Corporation | Recording medium retaining data for menu control, menu control method and apparatus |
US20030095135A1 (en) * | 2001-05-02 | 2003-05-22 | Kaasila Sampo J. | Methods, systems, and programming for computer display of images, text, and/or digital content |
US6714213B1 (en) * | 1999-10-08 | 2004-03-30 | General Electric Company | System and method for providing interactive haptic collision detection |
US20050149551A1 (en) * | 2004-01-05 | 2005-07-07 | Jeffrey Fong | Systems and methods for co-axial navigation of a user interface |
US6985149B2 (en) * | 2002-07-31 | 2006-01-10 | Silicon Graphics, Inc. | System and method for decoupling the user interface and application window in a graphics application |
US20060053048A1 (en) * | 2004-09-03 | 2006-03-09 | Whenu.Com | Techniques for remotely delivering shaped display presentations such as advertisements to computing platforms over information communications networks |
US7032181B1 (en) * | 2002-06-18 | 2006-04-18 | Good Technology, Inc. | Optimized user interface for small screen devices |
US20060095360A1 (en) * | 1996-01-16 | 2006-05-04 | The Nasdaq Stock Market, Inc., A Delaware Corporation | Media wall for displaying financial information |
US20060143577A1 (en) * | 2004-12-24 | 2006-06-29 | Kuan-Hong Hsieh | Graphical user interface for manipulating graphic images and method thereof |
US20060161863A1 (en) * | 2004-11-16 | 2006-07-20 | Gallo Anthony C | Cellular user interface |
US20060174214A1 (en) * | 2003-08-13 | 2006-08-03 | Mckee Timothy P | System and method for navigation of content in multiple display regions |
US20060210958A1 (en) * | 2005-03-21 | 2006-09-21 | Microsoft Corporation | Gesture training |
US20060277469A1 (en) * | 2004-06-25 | 2006-12-07 | Chaudhri Imran A | Preview and installation of user interface elements in a display environment |
US20070079246A1 (en) * | 2005-09-08 | 2007-04-05 | Gilles Morillon | Method of selection of a button in a graphical bar, and receiver implementing the method |
US7203901B2 (en) * | 2002-11-27 | 2007-04-10 | Microsoft Corporation | Small form factor web browsing |
US20070132789A1 (en) * | 2005-12-08 | 2007-06-14 | Bas Ording | List scrolling in response to moving contact over list of index symbols |
US20070150830A1 (en) * | 2005-12-23 | 2007-06-28 | Bas Ording | Scrolling list with floating adjacent index symbols |
US20070188444A1 (en) * | 2006-02-10 | 2007-08-16 | Microsoft Corporation | Physical-virtual interpolation |
US20070245260A1 (en) * | 2006-04-12 | 2007-10-18 | Laas & Sonder Pty Ltd | Method and system for organizing and displaying data |
US20080016471A1 (en) * | 2006-07-14 | 2008-01-17 | Samsung Electronics Co., Ltd. | Electronic device for providing 3D user interface and method of providing a 3D user interface |
US7337392B2 (en) * | 2003-01-27 | 2008-02-26 | Vincent Wen-Jeng Lue | Method and apparatus for adapting web contents to different display area dimensions |
US20080165210A1 (en) * | 2007-01-07 | 2008-07-10 | Andrew Platzer | Animations |
US20080168349A1 (en) * | 2007-01-07 | 2008-07-10 | Lamiraux Henri C | Portable Electronic Device, Method, and Graphical User Interface for Displaying Electronic Documents and Lists |
US20080178126A1 (en) * | 2007-01-24 | 2008-07-24 | Microsoft Corporation | Gesture recognition interactive feedback |
US20080215995A1 (en) * | 2007-01-17 | 2008-09-04 | Heiner Wolf | Model based avatars for virtual presence |
US7428709B2 (en) * | 2005-04-13 | 2008-09-23 | Apple Inc. | Multiple-panel scrolling |
US7430712B2 (en) * | 2005-03-16 | 2008-09-30 | Ameriprise Financial, Inc. | System and method for dynamically resizing embeded web page content |
US7461353B2 (en) * | 2000-06-12 | 2008-12-02 | Gary Rohrabaugh | Scalable display of internet content on mobile devices |
US20080307361A1 (en) * | 2007-06-08 | 2008-12-11 | Apple Inc. | Selection user interface |
US7469381B2 (en) * | 2007-01-07 | 2008-12-23 | Apple Inc. | List scrolling and document translation, scaling, and rotation on a touch-screen display |
US20090007017A1 (en) * | 2007-06-29 | 2009-01-01 | Freddy Allen Anzures | Portable multifunction device with animated user interface transitions |
US20090070711A1 (en) * | 2007-09-04 | 2009-03-12 | Lg Electronics Inc. | Scrolling method of mobile terminal |
US20090125836A1 (en) * | 2006-04-20 | 2009-05-14 | Akihiro Yamamoto | Image output device |
US20090125824A1 (en) * | 2007-11-12 | 2009-05-14 | Microsoft Corporation | User interface with physics engine for natural gestural control |
US20090138815A1 (en) * | 2007-11-26 | 2009-05-28 | Palm, Inc. | Enhancing visual continuity in scrolling operations |
US20090204928A1 (en) * | 2008-02-11 | 2009-08-13 | Idean Enterprise Oy | Layer-based user interface |
US20090231271A1 (en) * | 2008-03-12 | 2009-09-17 | Immersion Corporation | Haptically Enabled User Interface |
US20090284478A1 (en) * | 2008-05-15 | 2009-11-19 | Microsoft Corporation | Multi-Contact and Single-Contact Input |
US20090292989A1 (en) * | 2008-05-23 | 2009-11-26 | Microsoft Corporation | Panning content utilizing a drag operation |
US7634789B2 (en) * | 2000-08-14 | 2009-12-15 | Corporate Media Partners | System and method for displaying advertising in an interactive program guide |
US7636755B2 (en) * | 2002-11-21 | 2009-12-22 | Aol Llc | Multiple avatar personalities |
US20090315839A1 (en) * | 2008-06-24 | 2009-12-24 | Microsoft Corporation | Physics simulation-based interaction for surface computing |
US20090327938A1 (en) * | 2001-04-09 | 2009-12-31 | Microsoft Corporation | Animation on object user interface |
US20100009747A1 (en) * | 2008-07-14 | 2010-01-14 | Microsoft Corporation | Programming APIS for an Extensible Avatar System |
US20100011316A1 (en) * | 2008-01-17 | 2010-01-14 | Can Sar | System for intelligent automated layout and management of interactive windows |
US20100026698A1 (en) * | 2008-08-01 | 2010-02-04 | Microsoft Corporation | Avatar items and animations |
US7663620B2 (en) * | 2005-12-05 | 2010-02-16 | Microsoft Corporation | Accessing 2D graphic content using axonometric layer views |
US20100039447A1 (en) * | 2008-08-18 | 2010-02-18 | Sony Corporation | Image processing apparatus, image processing method, and program |
US20100073380A1 (en) * | 2008-09-19 | 2010-03-25 | Pure Digital Technologies, Inc. | Method of operating a design generator for personalization of electronic devices |
US20100083165A1 (en) * | 2008-09-29 | 2010-04-01 | Microsoft Corporation | Panoramic graphical user interface |
US7690997B2 (en) * | 2005-10-14 | 2010-04-06 | Leviathan Entertainment, Llc | Virtual environment with formalized inter-character relationships |
US7698658B2 (en) * | 2004-03-19 | 2010-04-13 | Sony Corporation | Display controlling apparatus, display controlling method, and recording medium |
US7707494B2 (en) * | 2004-08-06 | 2010-04-27 | Canon Kabushiki Kaisha | Information processing apparatus, control method therefor, and program |
US20100107068A1 (en) * | 2008-10-23 | 2010-04-29 | Butcher Larry R | User Interface with Parallax Animation |
US7724242B2 (en) * | 2004-08-06 | 2010-05-25 | Touchtable, Inc. | Touch driven method and apparatus to integrate and display multiple image layers forming alternate depictions of same subject matter |
US20100137031A1 (en) * | 2008-12-01 | 2010-06-03 | Research In Motion Limited | Portable electronic device and method of controlling same |
US20100134425A1 (en) * | 2008-12-03 | 2010-06-03 | Microsoft Corporation | Manipulation of list on a multi-touch display |
US7735004B2 (en) * | 2004-01-30 | 2010-06-08 | Canon Kabushiki Kaisha | Layout control method, layout control apparatus, and layout control program |
US20100175027A1 (en) * | 2009-01-06 | 2010-07-08 | Microsoft Corporation | Non-uniform scrolling |
US7779360B1 (en) * | 2007-04-10 | 2010-08-17 | Google Inc. | Map user interface |
US20110055752A1 (en) * | 2009-06-04 | 2011-03-03 | Rubinstein Jonathan J | Method and Apparatus for Displaying and Auto-Correcting an Over-Scroll State on a Computing Device |
US20110080351A1 (en) * | 2009-10-07 | 2011-04-07 | Research In Motion Limited | method of controlling touch input on a touch-sensitive display when a display element is active and a portable electronic device configured for the same |
US20110093812A1 (en) * | 2009-10-21 | 2011-04-21 | Microsoft Corporation | Displaying lists as reacting against barriers |
US20110090255A1 (en) * | 2009-10-16 | 2011-04-21 | Wilson Diego A | Content boundary signaling techniques |
US20110093778A1 (en) * | 2009-10-20 | 2011-04-21 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
US20110107264A1 (en) * | 2009-10-30 | 2011-05-05 | Motorola, Inc. | Method and Device for Enhancing Scrolling Operations in a Display Device |
US20110202859A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Distortion effects to indicate location in a movable data collection |
US20110199318A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Multi-layer user interface with flexible parallel movement |
US8113951B2 (en) * | 2006-11-15 | 2012-02-14 | Microsoft Corporation | Achievement incentives within a console-based gaming environment |
US8127246B2 (en) * | 2007-10-01 | 2012-02-28 | Apple Inc. | Varying user interface element based on movement |
US20120144322A1 (en) * | 2010-12-07 | 2012-06-07 | Samsung Electronics Co., Ltd. | Apparatus and method for navigating mostly viewed web pages |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100400208B1 (en) | 1996-10-31 | 2003-12-31 | 삼성전자주식회사 | Apparatus for generating multi-step background picture in video game with real image background |
JP2001351125A (en) | 2000-03-30 | 2001-12-21 | Sega Corp | Method for displaying image |
JP2002099484A (en) | 2000-09-25 | 2002-04-05 | Sanyo Electric Co Ltd | Message display device, message display method and record medium |
JP2002244641A (en) | 2001-02-20 | 2002-08-30 | Canon Inc | Information processor, scrolling control method, and storage medium |
WO2007017784A2 (en) | 2005-08-09 | 2007-02-15 | Koninklijke Philips Electronics N.V. | Scroll method with contextual scroll rate and feedback |
KR100792295B1 (en) | 2005-12-29 | 2008-01-07 | 삼성전자주식회사 | Contents navigation method and the contents navigation apparatus thereof |
US8296684B2 (en) | 2008-05-23 | 2012-10-23 | Hewlett-Packard Development Company, L.P. | Navigating among activities in a computing device |
US20070294635A1 (en) | 2006-06-15 | 2007-12-20 | Microsoft Corporation | Linked scrolling of side-by-side content |
JP4775179B2 (en) | 2006-08-28 | 2011-09-21 | ソニー株式会社 | Display scroll method, display device, and display program |
KR101185634B1 (en) | 2007-10-02 | 2012-09-24 | 가부시키가이샤 아쿠세스 | Terminal device, link selection method, and computer-readable recording medium stored thereon display program |
US8245155B2 (en) | 2007-11-29 | 2012-08-14 | Sony Corporation | Computer implemented display, graphical user interface, design and method including scrolling features |
US20100269038A1 (en) * | 2009-04-17 | 2010-10-21 | Sony Ericsson Mobile Communications Ab | Variable Rate Scrolling |
TWI412963B (en) * | 2009-07-01 | 2013-10-21 | Htc Corp | Data display and movement methods and systems, and computer program products thereof |
US8438500B2 (en) | 2009-09-25 | 2013-05-07 | Apple Inc. | Device, method, and graphical user interface for manipulation of user interface objects with activation regions |
US20110161892A1 (en) * | 2009-12-29 | 2011-06-30 | Motorola-Mobility, Inc. | Display Interface and Method for Presenting Visual Feedback of a User Interaction |
US8612884B2 (en) | 2010-01-26 | 2013-12-17 | Apple Inc. | Device, method, and graphical user interface for resizing objects |
-
2010
- 2010-03-10 US US12/721,419 patent/US9417787B2/en active Active
- 2010-05-04 US US12/773,803 patent/US20110202834A1/en not_active Abandoned
Patent Citations (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5581670A (en) * | 1993-07-21 | 1996-12-03 | Xerox Corporation | User interface having movable sheet with click-through tools |
US5860073A (en) * | 1995-07-17 | 1999-01-12 | Microsoft Corporation | Style sheets for publishing system |
US6028593A (en) * | 1995-12-01 | 2000-02-22 | Immersion Corporation | Method and apparatus for providing simulated physical interactions within computer generated environments |
US20060095360A1 (en) * | 1996-01-16 | 2006-05-04 | The Nasdaq Stock Market, Inc., A Delaware Corporation | Media wall for displaying financial information |
US5874961A (en) * | 1997-03-19 | 1999-02-23 | International Business Machines Corporation | Scroll bar amplification apparatus and method |
US6469718B1 (en) * | 1997-08-22 | 2002-10-22 | Sony Corporation | Recording medium retaining data for menu control, menu control method and apparatus |
US6157381A (en) * | 1997-11-18 | 2000-12-05 | International Business Machines Corporation | Computer system, user interface component and method utilizing non-linear scroll bar |
US6246406B1 (en) * | 1998-02-06 | 2001-06-12 | Sun Microsystems, Inc. | Techniques for navigating layers of a user interface |
US6366302B1 (en) * | 1998-12-22 | 2002-04-02 | Motorola, Inc. | Enhanced graphic user interface for mobile radiotelephones |
US6714213B1 (en) * | 1999-10-08 | 2004-03-30 | General Electric Company | System and method for providing interactive haptic collision detection |
US7461353B2 (en) * | 2000-06-12 | 2008-12-02 | Gary Rohrabaugh | Scalable display of internet content on mobile devices |
US7634789B2 (en) * | 2000-08-14 | 2009-12-15 | Corporate Media Partners | System and method for displaying advertising in an interactive program guide |
US20020135602A1 (en) * | 2001-03-20 | 2002-09-26 | Jeffery Davis | Scrolling method using screen pointing device |
US20090327938A1 (en) * | 2001-04-09 | 2009-12-31 | Microsoft Corporation | Animation on object user interface |
US20030095135A1 (en) * | 2001-05-02 | 2003-05-22 | Kaasila Sampo J. | Methods, systems, and programming for computer display of images, text, and/or digital content |
US7032181B1 (en) * | 2002-06-18 | 2006-04-18 | Good Technology, Inc. | Optimized user interface for small screen devices |
US6985149B2 (en) * | 2002-07-31 | 2006-01-10 | Silicon Graphics, Inc. | System and method for decoupling the user interface and application window in a graphics application |
US7636755B2 (en) * | 2002-11-21 | 2009-12-22 | Aol Llc | Multiple avatar personalities |
US7203901B2 (en) * | 2002-11-27 | 2007-04-10 | Microsoft Corporation | Small form factor web browsing |
US7337392B2 (en) * | 2003-01-27 | 2008-02-26 | Vincent Wen-Jeng Lue | Method and apparatus for adapting web contents to different display area dimensions |
US20060174214A1 (en) * | 2003-08-13 | 2006-08-03 | Mckee Timothy P | System and method for navigation of content in multiple display regions |
US20050149551A1 (en) * | 2004-01-05 | 2005-07-07 | Jeffrey Fong | Systems and methods for co-axial navigation of a user interface |
US7698654B2 (en) * | 2004-01-05 | 2010-04-13 | Microsoft Corporation | Systems and methods for co-axial navigation of a user interface |
US7735004B2 (en) * | 2004-01-30 | 2010-06-08 | Canon Kabushiki Kaisha | Layout control method, layout control apparatus, and layout control program |
US7698658B2 (en) * | 2004-03-19 | 2010-04-13 | Sony Corporation | Display controlling apparatus, display controlling method, and recording medium |
US20060277469A1 (en) * | 2004-06-25 | 2006-12-07 | Chaudhri Imran A | Preview and installation of user interface elements in a display environment |
US7707494B2 (en) * | 2004-08-06 | 2010-04-27 | Canon Kabushiki Kaisha | Information processing apparatus, control method therefor, and program |
US7724242B2 (en) * | 2004-08-06 | 2010-05-25 | Touchtable, Inc. | Touch driven method and apparatus to integrate and display multiple image layers forming alternate depictions of same subject matter |
US20060053048A1 (en) * | 2004-09-03 | 2006-03-09 | Whenu.Com | Techniques for remotely delivering shaped display presentations such as advertisements to computing platforms over information communications networks |
US20060161863A1 (en) * | 2004-11-16 | 2006-07-20 | Gallo Anthony C | Cellular user interface |
US20060143577A1 (en) * | 2004-12-24 | 2006-06-29 | Kuan-Hong Hsieh | Graphical user interface for manipulating graphic images and method thereof |
US7430712B2 (en) * | 2005-03-16 | 2008-09-30 | Ameriprise Financial, Inc. | System and method for dynamically resizing embedded web page content |
US20060210958A1 (en) * | 2005-03-21 | 2006-09-21 | Microsoft Corporation | Gesture training |
US7428709B2 (en) * | 2005-04-13 | 2008-09-23 | Apple Inc. | Multiple-panel scrolling |
US20070079246A1 (en) * | 2005-09-08 | 2007-04-05 | Gilles Morillon | Method of selection of a button in a graphical bar, and receiver implementing the method |
US7690997B2 (en) * | 2005-10-14 | 2010-04-06 | Leviathan Entertainment, Llc | Virtual environment with formalized inter-character relationships |
US7663620B2 (en) * | 2005-12-05 | 2010-02-16 | Microsoft Corporation | Accessing 2D graphic content using axonometric layer views |
US20070132789A1 (en) * | 2005-12-08 | 2007-06-14 | Bas Ording | List scrolling in response to moving contact over list of index symbols |
US20070150830A1 (en) * | 2005-12-23 | 2007-06-28 | Bas Ording | Scrolling list with floating adjacent index symbols |
US20110022985A1 (en) * | 2005-12-23 | 2011-01-27 | Bas Ording | Scrolling List with Floating Adjacent Index Symbols |
US7958456B2 (en) * | 2005-12-23 | 2011-06-07 | Apple Inc. | Scrolling list with floating adjacent index symbols |
US20070188444A1 (en) * | 2006-02-10 | 2007-08-16 | Microsoft Corporation | Physical-virtual interpolation |
US20070245260A1 (en) * | 2006-04-12 | 2007-10-18 | Laas & Sonder Pty Ltd | Method and system for organizing and displaying data |
US20090125836A1 (en) * | 2006-04-20 | 2009-05-14 | Akihiro Yamamoto | Image output device |
US20080016471A1 (en) * | 2006-07-14 | 2008-01-17 | Samsung Electronics Co., Ltd. | Electronic device for providing 3D user interface and method of providing a 3D user interface |
US8113951B2 (en) * | 2006-11-15 | 2012-02-14 | Microsoft Corporation | Achievement incentives within a console-based gaming environment |
US20090073194A1 (en) * | 2007-01-07 | 2009-03-19 | Bas Ording | Device, Method, and Graphical User Interface for List Scrolling on a Touch-Screen Display |
US20080168349A1 (en) * | 2007-01-07 | 2008-07-10 | Lamiraux Henri C | Portable Electronic Device, Method, and Graphical User Interface for Displaying Electronic Documents and Lists |
US20080165210A1 (en) * | 2007-01-07 | 2008-07-10 | Andrew Platzer | Animations |
US7469381B2 (en) * | 2007-01-07 | 2008-12-23 | Apple Inc. | List scrolling and document translation, scaling, and rotation on a touch-screen display |
US20080215995A1 (en) * | 2007-01-17 | 2008-09-04 | Heiner Wolf | Model based avatars for virtual presence |
US20080178126A1 (en) * | 2007-01-24 | 2008-07-24 | Microsoft Corporation | Gesture recognition interactive feedback |
US7779360B1 (en) * | 2007-04-10 | 2010-08-17 | Google Inc. | Map user interface |
US20080307361A1 (en) * | 2007-06-08 | 2008-12-11 | Apple Inc. | Selection user interface |
US20090007017A1 (en) * | 2007-06-29 | 2009-01-01 | Freddy Allen Anzures | Portable multifunction device with animated user interface transitions |
US20090070711A1 (en) * | 2007-09-04 | 2009-03-12 | Lg Electronics Inc. | Scrolling method of mobile terminal |
US8127246B2 (en) * | 2007-10-01 | 2012-02-28 | Apple Inc. | Varying user interface element based on movement |
US20090125824A1 (en) * | 2007-11-12 | 2009-05-14 | Microsoft Corporation | User interface with physics engine for natural gestural control |
US20090138815A1 (en) * | 2007-11-26 | 2009-05-28 | Palm, Inc. | Enhancing visual continuity in scrolling operations |
US20100011316A1 (en) * | 2008-01-17 | 2010-01-14 | Can Sar | System for intelligent automated layout and management of interactive windows |
US20090204928A1 (en) * | 2008-02-11 | 2009-08-13 | Idean Enterprise Oy | Layer-based user interface |
US20090231271A1 (en) * | 2008-03-12 | 2009-09-17 | Immersion Corporation | Haptically Enabled User Interface |
US20090284478A1 (en) * | 2008-05-15 | 2009-11-19 | Microsoft Corporation | Multi-Contact and Single-Contact Input |
US20090292989A1 (en) * | 2008-05-23 | 2009-11-26 | Microsoft Corporation | Panning content utilizing a drag operation |
US20090315839A1 (en) * | 2008-06-24 | 2009-12-24 | Microsoft Corporation | Physics simulation-based interaction for surface computing |
US20100009747A1 (en) * | 2008-07-14 | 2010-01-14 | Microsoft Corporation | Programming APIs for an Extensible Avatar System |
US20100026698A1 (en) * | 2008-08-01 | 2010-02-04 | Microsoft Corporation | Avatar items and animations |
US20100039447A1 (en) * | 2008-08-18 | 2010-02-18 | Sony Corporation | Image processing apparatus, image processing method, and program |
US20100073380A1 (en) * | 2008-09-19 | 2010-03-25 | Pure Digital Technologies, Inc. | Method of operating a design generator for personalization of electronic devices |
US20100083165A1 (en) * | 2008-09-29 | 2010-04-01 | Microsoft Corporation | Panoramic graphical user interface |
US20100107068A1 (en) * | 2008-10-23 | 2010-04-29 | Butcher Larry R | User Interface with Parallax Animation |
US20100137031A1 (en) * | 2008-12-01 | 2010-06-03 | Research In Motion Limited | Portable electronic device and method of controlling same |
US20100134425A1 (en) * | 2008-12-03 | 2010-06-03 | Microsoft Corporation | Manipulation of list on a multi-touch display |
US20100175027A1 (en) * | 2009-01-06 | 2010-07-08 | Microsoft Corporation | Non-uniform scrolling |
US20110055752A1 (en) * | 2009-06-04 | 2011-03-03 | Rubinstein Jonathan J | Method and Apparatus for Displaying and Auto-Correcting an Over-Scroll State on a Computing Device |
US20110080351A1 (en) * | 2009-10-07 | 2011-04-07 | Research In Motion Limited | Method of controlling touch input on a touch-sensitive display when a display element is active and a portable electronic device configured for the same |
US20110090255A1 (en) * | 2009-10-16 | 2011-04-21 | Wilson Diego A | Content boundary signaling techniques |
US20110093778A1 (en) * | 2009-10-20 | 2011-04-21 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
US20110093812A1 (en) * | 2009-10-21 | 2011-04-21 | Microsoft Corporation | Displaying lists as reacting against barriers |
US20110107264A1 (en) * | 2009-10-30 | 2011-05-05 | Motorola, Inc. | Method and Device for Enhancing Scrolling Operations in a Display Device |
US20110202859A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Distortion effects to indicate location in a movable data collection |
US20110199318A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Multi-layer user interface with flexible parallel movement |
US20120144322A1 (en) * | 2010-12-07 | 2012-06-07 | Samsung Electronics Co., Ltd. | Apparatus and method for navigating mostly viewed web pages |
Cited By (404)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8812985B2 (en) * | 2009-10-30 | 2014-08-19 | Motorola Mobility Llc | Method and device for enhancing scrolling operations in a display device |
US20110107264A1 (en) * | 2009-10-30 | 2011-05-05 | Motorola, Inc. | Method and Device for Enhancing Scrolling Operations in a Display Device |
US20140325445A1 (en) * | 2009-10-30 | 2014-10-30 | Motorola Mobility Llc | Visual indication for facilitating scrolling |
US9489111B2 (en) | 2010-01-06 | 2016-11-08 | Apple Inc. | Device, method, and graphical user interface for navigating through a range of values |
US20110202837A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Multi-layer user interface with flexible parallel and orthogonal movement |
US20110199318A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Multi-layer user interface with flexible parallel movement |
US20110202859A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Distortion effects to indicate location in a movable data collection |
US8473860B2 (en) * | 2010-02-12 | 2013-06-25 | Microsoft Corporation | Multi-layer user interface with flexible parallel and orthogonal movement |
US9417787B2 (en) | 2010-02-12 | 2016-08-16 | Microsoft Technology Licensing, Llc | Distortion effects to indicate location in a movable data collection |
US20120274550A1 (en) * | 2010-03-24 | 2012-11-01 | Robert Campbell | Gesture mapping for display device |
US9727226B2 (en) * | 2010-04-02 | 2017-08-08 | Nokia Technologies Oy | Methods and apparatuses for providing an enhanced user interface |
US20110246916A1 (en) * | 2010-04-02 | 2011-10-06 | Nokia Corporation | Methods and apparatuses for providing an enhanced user interface |
US8922499B2 (en) | 2010-07-26 | 2014-12-30 | Apple Inc. | Touch input transitions |
US20120019453A1 (en) * | 2010-07-26 | 2012-01-26 | Wayne Carl Westerman | Motion continuation of touch input |
US20120026194A1 (en) * | 2010-07-30 | 2012-02-02 | Google Inc. | Viewable boundary feedback |
US20120026181A1 (en) * | 2010-07-30 | 2012-02-02 | Google Inc. | Viewable boundary feedback |
US20130246936A1 (en) * | 2010-08-31 | 2013-09-19 | Anders Nancke-Krogh | System and method for unlimited multi-user computer desktop environment |
US9043706B2 (en) | 2010-08-31 | 2015-05-26 | Anders Nancke-Krogh | System and method for using state replication between application instances to provide a collaborative desktop environment |
US10013137B2 (en) * | 2010-08-31 | 2018-07-03 | Datapath Limited | System and method for unlimited multi-user computer desktop environment |
US8866822B2 (en) * | 2010-09-07 | 2014-10-21 | Microsoft Corporation | Alternate source for controlling an animation |
US20120056889A1 (en) * | 2010-09-07 | 2012-03-08 | Microsoft Corporation | Alternate source for controlling an animation |
US20120066644A1 (en) * | 2010-09-14 | 2012-03-15 | Hal Laboratory Inc. | Computer-readable storage medium having stored thereon display control program, display control system, display control apparatus, and display control method |
US20120066627A1 (en) * | 2010-09-14 | 2012-03-15 | Nintendo Co., Ltd. | Computer-readable storage medium having stored thereon display control program, display control system, display control apparatus, and display control method |
US9021385B2 (en) * | 2010-09-14 | 2015-04-28 | Nintendo Co., Ltd. | Computer-readable storage medium having stored thereon display control program, display control system, display control apparatus, and display control method |
US10739985B2 (en) * | 2010-09-14 | 2020-08-11 | Nintendo Co., Ltd. | Computer-readable storage medium having stored thereon display control program, display control system, display control apparatus, and display control method |
US20120066621A1 (en) * | 2010-09-14 | 2012-03-15 | Nintendo Co., Ltd. | Computer-readable storage medium having stored thereon display control program, display control system, display control apparatus, and display control method |
US20120072863A1 (en) * | 2010-09-21 | 2012-03-22 | Nintendo Co., Ltd. | Computer-readable storage medium, display control apparatus, display control system, and display control method |
US8514252B1 (en) | 2010-09-22 | 2013-08-20 | Google Inc. | Feedback during crossing of zoom levels |
US9001149B2 (en) | 2010-10-01 | 2015-04-07 | Z124 | Max mode |
US20120081271A1 (en) * | 2010-10-01 | 2012-04-05 | Imerj LLC | Application display transitions between single and multiple displays |
US9513883B2 (en) * | 2010-10-01 | 2016-12-06 | Apple Inc. | Method and apparatus for designing layout for user interfaces |
US10853013B2 (en) | 2010-10-01 | 2020-12-01 | Z124 | Minimizing and maximizing between landscape dual display and landscape single display |
US9152176B2 (en) * | 2010-10-01 | 2015-10-06 | Z124 | Application display transitions between single and multiple displays |
US11429146B2 (en) | 2010-10-01 | 2022-08-30 | Z124 | Minimizing and maximizing between landscape dual display and landscape single display |
US9952743B2 (en) | 2010-10-01 | 2018-04-24 | Z124 | Max mode |
US9223426B2 (en) | 2010-10-01 | 2015-12-29 | Z124 | Repositioning windows in the pop-up window |
US9141135B2 (en) | 2010-10-01 | 2015-09-22 | Z124 | Full-screen annunciator |
US20120084685A1 (en) * | 2010-10-01 | 2012-04-05 | Heynen Patrick O | Method and apparatus for designing layout for user interfaces |
US10268338B2 (en) * | 2010-10-01 | 2019-04-23 | Z124 | Max mode |
US11537259B2 (en) | 2010-10-01 | 2022-12-27 | Z124 | Displayed image transition indicator |
US10803640B2 (en) | 2010-10-01 | 2020-10-13 | Apple Inc. | Method and apparatus for designing layout for user interfaces |
US9152436B2 (en) * | 2010-10-05 | 2015-10-06 | Citrix Systems, Inc. | Gesture support for shared sessions |
US20120084670A1 (en) * | 2010-10-05 | 2012-04-05 | Citrix Systems, Inc. | Gesture support for shared sessions |
US11010038B2 (en) * | 2010-10-08 | 2021-05-18 | Sony Corporation | Information processing apparatus, information processing method and program for displaying an image during overdrag |
US12032818B2 (en) | 2010-10-08 | 2024-07-09 | Sony Corporation | Information processing apparatus, information processing method, and program |
US11487419B2 (en) | 2010-10-08 | 2022-11-01 | Sony Corporation | Information processing apparatus, information processing method, and program |
US8830190B2 (en) | 2010-10-25 | 2014-09-09 | Aisin Aw Co., Ltd. | Display device, display method, and display program |
US20120098769A1 (en) * | 2010-10-26 | 2012-04-26 | Aisin Aw Co., Ltd. | Display device, display method, and display program |
US10642462B2 (en) * | 2010-12-01 | 2020-05-05 | Sony Corporation | Display processing apparatus for performing image magnification based on touch input and drag input |
US20160224226A1 (en) * | 2010-12-01 | 2016-08-04 | Sony Corporation | Display processing apparatus for performing image magnification based on face detection |
US9696888B2 (en) | 2010-12-20 | 2017-07-04 | Microsoft Technology Licensing, Llc | Application-launching interface for multiple modes |
US9990112B2 (en) | 2010-12-22 | 2018-06-05 | Thomson Licensing | Method and apparatus for locating regions of interest in a user interface |
US10514832B2 (en) | 2010-12-22 | 2019-12-24 | Thomson Licensing | Method for locating regions of interest in a user interface |
US20140298221A1 (en) * | 2010-12-22 | 2014-10-02 | Thomson Licensing | Method and apparatus for restricting user operations when applied to cards or windows |
US9836190B2 (en) * | 2010-12-22 | 2017-12-05 | Jason Douglas Pickersgill | Method and apparatus for restricting user operations when applied to cards or windows |
US11126333B2 (en) | 2010-12-23 | 2021-09-21 | Microsoft Technology Licensing, Llc | Application reporting in an application-selectable user interface |
US10969944B2 (en) | 2010-12-23 | 2021-04-06 | Microsoft Technology Licensing, Llc | Application reporting in an application-selectable user interface |
US9229918B2 (en) | 2010-12-23 | 2016-01-05 | Microsoft Technology Licensing, Llc | Presenting an application change through a tile |
US9423951B2 (en) | 2010-12-31 | 2016-08-23 | Microsoft Technology Licensing, Llc | Content-based snap point |
US20120202187A1 (en) * | 2011-02-03 | 2012-08-09 | Shadowbox Comics, Llc | Method for distribution and display of sequential graphic art |
US9013264B2 (en) | 2011-03-12 | 2015-04-21 | Perceptive Devices, Llc | Multipurpose controller for electronic devices, facial expressions management and drowsiness detection |
US8863039B2 (en) | 2011-04-18 | 2014-10-14 | Microsoft Corporation | Multi-dimensional boundary effects |
US11272017B2 (en) | 2011-05-27 | 2022-03-08 | Microsoft Technology Licensing, Llc | Application notifications manifest |
US10303325B2 (en) | 2011-05-27 | 2019-05-28 | Microsoft Technology Licensing, Llc | Multi-application environment |
US10649571B1 (en) | 2011-08-05 | 2020-05-12 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10386960B1 (en) | 2011-08-05 | 2019-08-20 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10275087B1 (en) | 2011-08-05 | 2019-04-30 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10345961B1 (en) | 2011-08-05 | 2019-07-09 | P4tents1, LLC | Devices and methods for navigating between user interfaces |
US10338736B1 (en) | 2011-08-05 | 2019-07-02 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10664097B1 (en) | 2011-08-05 | 2020-05-26 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10365758B1 (en) | 2011-08-05 | 2019-07-30 | P4tents1, LLC | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10656752B1 (en) | 2011-08-05 | 2020-05-19 | P4tents1, LLC | Gesture-equipped touch screen system, method, and computer program product |
US10540039B1 (en) | 2011-08-05 | 2020-01-21 | P4tents1, LLC | Devices and methods for navigating between user interfaces |
US20130055150A1 (en) * | 2011-08-24 | 2013-02-28 | Primesense Ltd. | Visual feedback for tactile and non-tactile user interfaces |
US9122311B2 (en) * | 2011-08-24 | 2015-09-01 | Apple Inc. | Visual feedback for tactile and non-tactile user interfaces |
US10579250B2 (en) | 2011-09-01 | 2020-03-03 | Microsoft Technology Licensing, Llc | Arranging tiles |
US9229604B2 (en) * | 2011-09-01 | 2016-01-05 | Sony Corporation | User interface element |
US20130061170A1 (en) * | 2011-09-01 | 2013-03-07 | Sony Corporation | User interface element |
US10254955B2 (en) | 2011-09-10 | 2019-04-09 | Microsoft Technology Licensing, Llc | Progressively indicating new content in an application-selectable user interface |
KR20130031762A (en) * | 2011-09-21 | 2013-03-29 | LG Electronics Inc. | Mobile terminal and control method for mobile terminal |
KR101869774B1 (en) | 2011-09-21 | 2018-06-22 | LG Electronics Inc. | Mobile terminal and control method for mobile terminal |
EP2573666A3 (en) * | 2011-09-21 | 2016-06-01 | LG Electronics Inc. | Mobile terminal and control method thereof |
US9158494B2 (en) | 2011-09-27 | 2015-10-13 | Z124 | Minimizing and maximizing between portrait dual display and portrait single display |
US9474021B2 (en) | 2011-09-27 | 2016-10-18 | Z124 | Display clipping on a multiscreen device |
US9639320B2 (en) | 2011-09-27 | 2017-05-02 | Z124 | Display clipping on a multiscreen device |
WO2013049406A1 (en) * | 2011-10-01 | 2013-04-04 | Oracle International Corporation | Moving an object about a display frame by combining classical mechanics of motion |
US9448633B2 (en) | 2011-10-01 | 2016-09-20 | Oracle International Corporation | Moving a display object within a display frame using a discrete gesture |
US9501150B2 (en) | 2011-10-01 | 2016-11-22 | Oracle International Corporation | Moving an object about a display frame by combining classical mechanics of motion |
US9772759B2 (en) * | 2011-10-14 | 2017-09-26 | Apple Inc. | Device, method, and graphical user interface for data input using virtual sliders |
US20130097551A1 (en) * | 2011-10-14 | 2013-04-18 | Edward P.A. Hogan | Device, Method, and Graphical User Interface for Data Input Using Virtual Sliders |
US11029942B1 (en) | 2011-12-19 | 2021-06-08 | Majen Tech, LLC | System, method, and computer program product for device coordination |
US11089134B1 (en) | 2011-12-19 | 2021-08-10 | Majen Tech, LLC | System, method, and computer program product for coordination among multiple devices |
US11637915B1 (en) | 2011-12-19 | 2023-04-25 | W74 Technology, Llc | System, method, and computer program product for coordination among multiple devices |
US20130169649A1 (en) * | 2012-01-04 | 2013-07-04 | Microsoft Corporation | Movement endpoint exposure |
EP2801020A4 (en) * | 2012-01-06 | 2015-10-28 | Microsoft Technology Licensing Llc | Panning animations |
CN104025003A (en) * | 2012-01-06 | 2014-09-03 | Microsoft Corporation | Panning animation |
US10872454B2 (en) * | 2012-01-06 | 2020-12-22 | Microsoft Technology Licensing, Llc | Panning animations |
KR20140116401A (en) * | 2012-01-06 | 2014-10-02 | Microsoft Corporation | Panning animations |
KR102150733B1 | 2012-01-06 | 2020-09-01 | Microsoft Technology Licensing, LLC | Panning animations |
US20130176316A1 (en) * | 2012-01-06 | 2013-07-11 | Microsoft Corporation | Panning animations |
JP2015504219A (en) * | 2012-01-06 | 2015-02-05 | Microsoft Corporation | Pan animation |
US20150074614A1 (en) * | 2012-01-25 | 2015-03-12 | Thomson Licensing | Directional control using a touch sensitive device |
US9235318B2 (en) | 2012-02-01 | 2016-01-12 | Facebook, Inc. | Transitions among hierarchical user-interface layers |
US9557876B2 (en) | 2012-02-01 | 2017-01-31 | Facebook, Inc. | Hierarchical user interface |
US9229613B2 (en) | 2012-02-01 | 2016-01-05 | Facebook, Inc. | Transitions among hierarchical user interface components |
US8976199B2 (en) | 2012-02-01 | 2015-03-10 | Facebook, Inc. | Visual embellishment for objects |
US8984428B2 (en) | 2012-02-01 | 2015-03-17 | Facebook, Inc. | Overlay images and texts in user interface |
US20130198631A1 (en) * | 2012-02-01 | 2013-08-01 | Michael Matas | Spring Motions During Object Animation |
US9235317B2 (en) | 2012-02-01 | 2016-01-12 | Facebook, Inc. | Summary and navigation of hierarchical levels |
US9239662B2 (en) | 2012-02-01 | 2016-01-19 | Facebook, Inc. | User interface editor |
US9606708B2 (en) | 2012-02-01 | 2017-03-28 | Facebook, Inc. | User intent during object scrolling |
US9552147B2 (en) * | 2012-02-01 | 2017-01-24 | Facebook, Inc. | Hierarchical user interface |
US10775991B2 (en) | 2012-02-01 | 2020-09-15 | Facebook, Inc. | Overlay images and texts in user interface |
US9098168B2 (en) * | 2012-02-01 | 2015-08-04 | Facebook, Inc. | Spring motions during object animation |
US8990719B2 (en) | 2012-02-01 | 2015-03-24 | Facebook, Inc. | Preview of objects arranged in a series |
US8990691B2 (en) | 2012-02-01 | 2015-03-24 | Facebook, Inc. | Video object behavior in a user interface |
US9003305B2 (en) | 2012-02-01 | 2015-04-07 | Facebook, Inc. | Folding and unfolding images in a user interface |
US11132118B2 (en) | 2012-02-01 | 2021-09-28 | Facebook, Inc. | User interface editor |
US9645724B2 (en) | 2012-02-01 | 2017-05-09 | Facebook, Inc. | Timeline based content organization |
US20170131889A1 (en) * | 2012-02-01 | 2017-05-11 | Facebook, Inc. | Hierarchical User Interface |
US20130198663A1 (en) * | 2012-02-01 | 2013-08-01 | Michael Matas | Hierarchical User Interface |
US9880673B2 (en) * | 2012-02-28 | 2018-01-30 | Canon Kabushiki Kaisha | Multi-touch input information processing apparatus, method, and storage medium |
US20130222340A1 (en) * | 2012-02-28 | 2013-08-29 | Canon Kabushiki Kaisha | Information processing apparatus, control method thereof, and storage medium |
KR20130113285A (en) * | 2012-04-05 | 2013-10-15 | LG Electronics Inc. | Mobile terminal and control method thereof |
US20130268883A1 (en) * | 2012-04-05 | 2013-10-10 | Lg Electronics Inc. | Mobile terminal and control method thereof |
US9772762B2 (en) * | 2012-04-05 | 2017-09-26 | Lg Electronics Inc. | Variable scale scrolling and resizing of displayed images based upon gesture speed |
KR101886753B1 (en) * | 2012-04-05 | 2018-08-08 | LG Electronics Inc. | Mobile terminal and control method thereof |
US20150070360A1 (en) * | 2012-04-09 | 2015-03-12 | Tencent Technology (Shenzhen) Company Limited | Method and mobile terminal for drawing sliding trace |
WO2013158750A2 (en) * | 2012-04-17 | 2013-10-24 | Wittich David | System and method for providing recursive feedback during an assembly operation |
WO2013158750A3 (en) * | 2012-04-17 | 2013-12-05 | Wittich David | System and method for providing recursive feedback during an assembly operation |
US20130290868A1 (en) * | 2012-04-30 | 2013-10-31 | Anders Nancke-Krogh | System and method for unlimited multi-user computer desktop environment |
US9465509B2 (en) * | 2012-04-30 | 2016-10-11 | Mosaiqq, Inc. | System and method for unlimited multi-user computer desktop environment |
WO2013166047A3 (en) * | 2012-04-30 | 2014-12-11 | Mosaiqq, Inc. | System and method for unlimited multi-user computer desktop environment |
US10996788B2 (en) | 2012-05-09 | 2021-05-04 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US10095391B2 (en) | 2012-05-09 | 2018-10-09 | Apple Inc. | Device, method, and graphical user interface for selecting user interface objects |
US10191627B2 (en) | 2012-05-09 | 2019-01-29 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US11068153B2 (en) | 2012-05-09 | 2021-07-20 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10175757B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for touch-based operations performed and reversed in a user interface |
US10175864B2 (en) | 2012-05-09 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for selecting object within a group of objects in accordance with contact intensity |
US10496260B2 (en) | 2012-05-09 | 2019-12-03 | Apple Inc. | Device, method, and graphical user interface for pressure-based alteration of controls in a user interface |
US11023116B2 (en) | 2012-05-09 | 2021-06-01 | Apple Inc. | Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input |
US9612741B2 (en) | 2012-05-09 | 2017-04-04 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US9619076B2 (en) | 2012-05-09 | 2017-04-11 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US11010027B2 (en) | 2012-05-09 | 2021-05-18 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US12067229B2 (en) | 2012-05-09 | 2024-08-20 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US10168826B2 (en) | 2012-05-09 | 2019-01-01 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US10126930B2 (en) | 2012-05-09 | 2018-11-13 | Apple Inc. | Device, method, and graphical user interface for scrolling nested regions |
US10969945B2 (en) | 2012-05-09 | 2021-04-06 | Apple Inc. | Device, method, and graphical user interface for selecting user interface objects |
US11354033B2 (en) | 2012-05-09 | 2022-06-07 | Apple Inc. | Device, method, and graphical user interface for managing icons in a user interface region |
US12045451B2 (en) | 2012-05-09 | 2024-07-23 | Apple Inc. | Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input |
US10114546B2 (en) | 2012-05-09 | 2018-10-30 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10942570B2 (en) | 2012-05-09 | 2021-03-09 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface |
US9886184B2 (en) | 2012-05-09 | 2018-02-06 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US11221675B2 (en) * | 2012-05-09 | 2022-01-11 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface |
US10592041B2 (en) | 2012-05-09 | 2020-03-17 | Apple Inc. | Device, method, and graphical user interface for transitioning between display states in response to a gesture |
US10073615B2 (en) | 2012-05-09 | 2018-09-11 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10908808B2 (en) | 2012-05-09 | 2021-02-02 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US10775994B2 (en) | 2012-05-09 | 2020-09-15 | Apple Inc. | Device, method, and graphical user interface for moving and dropping a user interface object |
US10042542B2 (en) | 2012-05-09 | 2018-08-07 | Apple Inc. | Device, method, and graphical user interface for moving and dropping a user interface object |
US11947724B2 (en) * | 2012-05-09 | 2024-04-02 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface |
US10775999B2 (en) | 2012-05-09 | 2020-09-15 | Apple Inc. | Device, method, and graphical user interface for displaying user interface objects corresponding to an application |
US10884591B2 (en) | 2012-05-09 | 2021-01-05 | Apple Inc. | Device, method, and graphical user interface for selecting object within a group of objects |
US9996231B2 (en) | 2012-05-09 | 2018-06-12 | Apple Inc. | Device, method, and graphical user interface for manipulating framed graphical objects |
US9753639B2 (en) | 2012-05-09 | 2017-09-05 | Apple Inc. | Device, method, and graphical user interface for displaying content associated with a corresponding affordance |
US20220129076A1 (en) * | 2012-05-09 | 2022-04-28 | Apple Inc. | Device, Method, and Graphical User Interface for Providing Tactile Feedback for Operations Performed in a User Interface |
US9990121B2 (en) | 2012-05-09 | 2018-06-05 | Apple Inc. | Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input |
US9971499B2 (en) | 2012-05-09 | 2018-05-15 | Apple Inc. | Device, method, and graphical user interface for displaying content associated with a corresponding affordance |
US11314407B2 (en) | 2012-05-09 | 2022-04-26 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US10782871B2 (en) | 2012-05-09 | 2020-09-22 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US10481690B2 (en) | 2012-05-09 | 2019-11-19 | Apple Inc. | Device, method, and graphical user interface for providing tactile feedback for media adjustment operations performed in a user interface |
US9823839B2 (en) | 2012-05-09 | 2017-11-21 | Apple Inc. | Device, method, and graphical user interface for displaying additional information in response to a user contact |
US10061759B2 (en) | 2012-06-07 | 2018-08-28 | Microsoft Technology Licensing, Llc | Progressive loading for web-based spreadsheet applications |
US20130332843A1 (en) * | 2012-06-08 | 2013-12-12 | Jesse William Boettcher | Simulating physical materials and light interaction in a user interface of a resource-constrained device |
US11073959B2 (en) * | 2012-06-08 | 2021-07-27 | Apple Inc. | Simulating physical materials and light interaction in a user interface of a resource-constrained device |
US12112008B2 (en) | 2012-06-08 | 2024-10-08 | Apple Inc. | Simulating physical materials and light interaction in a user interface of a resource-constrained device |
RU2660642C2 (en) * | 2012-06-20 | 2018-07-06 | Samsung Electronics Co., Ltd. | Information display apparatus and method of user device |
EP2864860A2 (en) * | 2012-06-22 | 2015-04-29 | Microsoft Technology Licensing, LLC | Wrap-around navigation |
GB2503654A (en) * | 2012-06-27 | 2014-01-08 | Samsung Electronics Co Ltd | Methods of outputting a manipulation of a graphic upon a boundary condition being met |
GB2503654B (en) * | 2012-06-27 | 2015-10-28 | Samsung Electronics Co Ltd | A method and apparatus for outputting graphics to a display |
US9785338B2 (en) | 2012-07-02 | 2017-10-10 | Mosaiqq, Inc. | System and method for providing a user interaction interface using a multi-touch gesture recognition engine |
US20150227292A1 (en) * | 2012-07-16 | 2015-08-13 | Samsung Electronics Co., Ltd. | Method and apparatus for moving object in mobile terminal |
US9594469B2 (en) * | 2012-07-25 | 2017-03-14 | Sap Se | Dynamic layering user interface |
US20140033116A1 (en) * | 2012-07-25 | 2014-01-30 | Daniel Jakobs | Dynamic layering user interface |
US9075460B2 (en) | 2012-08-10 | 2015-07-07 | Blackberry Limited | Method of momentum based zoom of content on an electronic device |
US10489031B2 (en) * | 2012-08-10 | 2019-11-26 | Blackberry Limited | Method of momentum based zoom of content on an electronic device |
US20150286380A1 (en) * | 2012-08-10 | 2015-10-08 | Blackberry Limited | Method of momentum based zoom of content on an electronic device |
EP2696269A1 (en) * | 2012-08-10 | 2014-02-12 | BlackBerry Limited | Method of momentum based zoom of content on an electronic device |
US9542070B2 (en) | 2012-08-14 | 2017-01-10 | Beijing Xiaomi Technology Co., Ltd. | Method and apparatus for providing an interactive user interface |
EP2871561A4 (en) * | 2012-08-14 | 2015-12-30 | Xiaomi Inc | Desktop system of mobile terminal and interface interaction method and device |
US9552080B2 (en) | 2012-10-05 | 2017-01-24 | Google Inc. | Incremental feature-based gesture-keyboard decoding |
US9021380B2 (en) | 2012-10-05 | 2015-04-28 | Google Inc. | Incremental multi-touch gesture recognition |
US9678943B2 (en) | 2012-10-16 | 2017-06-13 | Google Inc. | Partial gesture text entry |
US9134906B2 (en) | 2012-10-16 | 2015-09-15 | Google Inc. | Incremental multi-word recognition |
US9542385B2 (en) | 2012-10-16 | 2017-01-10 | Google Inc. | Incremental multi-word recognition |
US10140284B2 (en) | 2012-10-16 | 2018-11-27 | Google Llc | Partial gesture text entry |
US9710453B2 (en) | 2012-10-16 | 2017-07-18 | Google Inc. | Multi-gesture text input prediction |
US11379663B2 (en) | 2012-10-16 | 2022-07-05 | Google Llc | Multi-gesture text input prediction |
US9798718B2 (en) | 2012-10-16 | 2017-10-24 | Google Inc. | Incremental multi-word recognition |
US10489508B2 (en) | 2012-10-16 | 2019-11-26 | Google Llc | Incremental multi-word recognition |
US10977440B2 (en) | 2012-10-16 | 2021-04-13 | Google Llc | Multi-gesture text input prediction |
US8850350B2 (en) | 2012-10-16 | 2014-09-30 | Google Inc. | Partial gesture text entry |
US10019435B2 (en) | 2012-10-22 | 2018-07-10 | Google Llc | Space prediction for text input |
US9588670B2 (en) * | 2012-10-23 | 2017-03-07 | Nintendo Co., Ltd. | Information-processing device, storage medium, information-processing method, and information-processing system |
US20140115533A1 (en) * | 2012-10-23 | 2014-04-24 | Nintendo Co., Ltd. | Information-processing device, storage medium, information-processing method, and information-processing system |
JP2014085817A (en) * | 2012-10-23 | 2014-05-12 | Nintendo Co Ltd | Program, information processing device, information processing method, and information processing system |
US11343370B1 (en) | 2012-11-02 | 2022-05-24 | Majen Tech, LLC | Screen interface for a mobile device apparatus |
US11652916B1 (en) | 2012-11-02 | 2023-05-16 | W74 Technology, Llc | Screen interface for a mobile device apparatus |
US20140129979A1 (en) * | 2012-11-02 | 2014-05-08 | Samsung Electronics Co., Ltd. | Display device and list display method thereof |
EP2735939A3 (en) * | 2012-11-15 | 2015-03-11 | Giga-Byte Technology Co., Ltd. | Keyboard |
CN103809763A (en) * | 2012-11-15 | 2014-05-21 | Giga-Byte Technology Co., Ltd. | Keyboard device |
US12050761B2 (en) | 2012-12-29 | 2024-07-30 | Apple Inc. | Device, method, and graphical user interface for transitioning from low power mode |
US9857897B2 (en) | 2012-12-29 | 2018-01-02 | Apple Inc. | Device and method for assigning respective portions of an aggregate intensity to a plurality of contacts |
US9965074B2 (en) | 2012-12-29 | 2018-05-08 | Apple Inc. | Device, method, and graphical user interface for transitioning between touch input to display output relationships |
US10101887B2 (en) | 2012-12-29 | 2018-10-16 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US9959025B2 (en) | 2012-12-29 | 2018-05-01 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US10915243B2 (en) | 2012-12-29 | 2021-02-09 | Apple Inc. | Device, method, and graphical user interface for adjusting content selection |
US10078442B2 (en) | 2012-12-29 | 2018-09-18 | Apple Inc. | Device, method, and graphical user interface for determining whether to scroll or select content based on an intensity threshold |
US10620781B2 (en) | 2012-12-29 | 2020-04-14 | Apple Inc. | Device, method, and graphical user interface for moving a cursor according to a change in an appearance of a control icon with simulated three-dimensional characteristics |
US10175879B2 (en) | 2012-12-29 | 2019-01-08 | Apple Inc. | Device, method, and graphical user interface for zooming a user interface while performing a drag operation |
US10037138B2 (en) | 2012-12-29 | 2018-07-31 | Apple Inc. | Device, method, and graphical user interface for switching between user interfaces |
US12135871B2 (en) | 2012-12-29 | 2024-11-05 | Apple Inc. | Device, method, and graphical user interface for switching between user interfaces |
US11513675B2 (en) | 2012-12-29 | 2022-11-29 | Apple Inc. | User interface for manipulating user interface objects |
US10185491B2 (en) | 2012-12-29 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for determining whether to scroll or enlarge content |
US9996233B2 (en) | 2012-12-29 | 2018-06-12 | Apple Inc. | Device, method, and graphical user interface for navigating user interface hierarchies |
US9778771B2 (en) | 2012-12-29 | 2017-10-03 | Apple Inc. | Device, method, and graphical user interface for transitioning between touch input to display output relationships |
US10437333B2 (en) | 2012-12-29 | 2019-10-08 | Apple Inc. | Device, method, and graphical user interface for forgoing generation of tactile output for a multi-contact gesture |
US20140195979A1 (en) * | 2013-01-10 | 2014-07-10 | Appsense Limited | Interactive user interface |
US11463576B1 (en) | 2013-01-10 | 2022-10-04 | Majen Tech, LLC | Screen interface for a mobile device apparatus |
US11431834B1 (en) | 2013-01-10 | 2022-08-30 | Majen Tech, LLC | Screen interface for a mobile device apparatus |
US10528663B2 (en) | 2013-01-15 | 2020-01-07 | Google Llc | Touch keyboard using language and spatial models |
US11334717B2 (en) | 2013-01-15 | 2022-05-17 | Google Llc | Touch keyboard using a trained model |
US9830311B2 (en) | 2013-01-15 | 2017-11-28 | Google Llc | Touch keyboard using language and spatial models |
US11727212B2 (en) | 2013-01-15 | 2023-08-15 | Google Llc | Touch keyboard using a trained model |
US20140215383A1 (en) * | 2013-01-31 | 2014-07-31 | Disney Enterprises, Inc. | Parallax scrolling user interface |
US20140258904A1 (en) * | 2013-03-08 | 2014-09-11 | Samsung Display Co., Ltd. | Terminal and method of controlling the same |
US9600120B2 (en) | 2013-03-15 | 2017-03-21 | Apple Inc. | Device, method, and graphical user interface for orientation-based parallax display |
JP2014182638A (en) * | 2013-03-19 | 2014-09-29 | Canon Inc | Display control unit, display control method and computer program |
US20140285507A1 (en) * | 2013-03-19 | 2014-09-25 | Canon Kabushiki Kaisha | Display control device, display control method, and computer-readable storage medium |
US9685143B2 (en) * | 2013-03-19 | 2017-06-20 | Canon Kabushiki Kaisha | Display control device, display control method, and computer-readable storage medium for changing a representation of content displayed on a display screen |
US10275035B2 (en) * | 2013-03-25 | 2019-04-30 | Konica Minolta, Inc. | Device and method for determining gesture, and computer-readable storage medium for computer program |
US20140289665A1 (en) * | 2013-03-25 | 2014-09-25 | Konica Minolta, Inc. | Device and method for determining gesture, and computer-readable storage medium for computer program |
US20140298258A1 (en) * | 2013-03-28 | 2014-10-02 | Microsoft Corporation | Switch List Interactions |
US9626100B2 (en) * | 2013-04-15 | 2017-04-18 | Microsoft Technology Licensing, Llc | Dynamic management of edge inputs by users on a touch device |
US20140310661A1 (en) * | 2013-04-15 | 2014-10-16 | Microsoft Corporation | Dynamic management of edge inputs by users on a touch device |
US9547439B2 (en) | 2013-04-22 | 2017-01-17 | Google Inc. | Dynamically-positioned character string suggestions for gesture typing |
CN105210019A (en) * | 2013-04-22 | 2015-12-30 | Microsoft Technology Licensing, LLC | User interface response to an asynchronous manipulation |
US20140317538A1 (en) * | 2013-04-22 | 2014-10-23 | Microsoft Corporation | User interface response to an asynchronous manipulation |
US8887103B1 (en) * | 2013-04-22 | 2014-11-11 | Google Inc. | Dynamically-positioned character string suggestions for gesture typing |
EP2989535A1 (en) * | 2013-04-22 | 2016-03-02 | Microsoft Technology Licensing, LLC | User interface response to an asynchronous manipulation |
US10241673B2 (en) | 2013-05-03 | 2019-03-26 | Google Llc | Alternative hypothesis error correction for gesture typing |
US9841895B2 (en) | 2013-05-03 | 2017-12-12 | Google Llc | Alternative hypothesis error correction for gesture typing |
US20140351698A1 (en) * | 2013-05-23 | 2014-11-27 | Canon Kabushiki Kaisha | Display control apparatus and control method for the same |
US9864499B2 (en) * | 2013-05-23 | 2018-01-09 | Canon Kabushiki Kaisha | Display control apparatus and control method for the same |
US11657587B2 (en) * | 2013-06-01 | 2023-05-23 | Apple Inc. | Intelligently placing labels |
US20190221047A1 (en) * | 2013-06-01 | 2019-07-18 | Apple Inc. | Intelligently placing labels |
US11132120B2 (en) * | 2013-06-09 | 2021-09-28 | Apple Inc. | Device, method, and graphical user interface for transitioning between user interfaces |
US11928317B2 (en) | 2013-06-09 | 2024-03-12 | Apple Inc. | Device, method, and graphical user interface for sharing content from a respective application |
US11409414B2 (en) | 2013-06-09 | 2022-08-09 | Apple Inc. | Device, method, and graphical user interface for sharing content from a respective application |
US20140365882A1 (en) * | 2013-06-09 | 2014-12-11 | Apple Inc. | Device, method, and graphical user interface for transitioning between user interfaces |
US9477393B2 (en) | 2013-06-09 | 2016-10-25 | Apple Inc. | Device, method, and graphical user interface for displaying application status information |
US11893233B2 (en) | 2013-06-09 | 2024-02-06 | Apple Inc. | Device, method, and graphical user interface for moving user interface objects |
US11334238B2 (en) | 2013-06-09 | 2022-05-17 | Apple Inc. | Device, method, and graphical user interface for moving user interface objects |
US10282083B2 (en) * | 2013-06-09 | 2019-05-07 | Apple Inc. | Device, method, and graphical user interface for transitioning between user interfaces |
US10120541B2 (en) | 2013-06-09 | 2018-11-06 | Apple Inc. | Device, method, and graphical user interface for sharing content from a respective application |
US9712577B2 (en) | 2013-06-09 | 2017-07-18 | Apple Inc. | Device, method, and graphical user interface for sharing content from a respective application |
WO2014200676A3 (en) * | 2013-06-09 | 2015-04-16 | Apple Inc. | Device, method, and graphical user interface for moving user interface objects |
US11301129B2 (en) | 2013-06-09 | 2022-04-12 | Apple Inc. | Device, method, and graphical user interface for moving user interface objects |
US20140375572A1 (en) * | 2013-06-20 | 2014-12-25 | Microsoft Corporation | Parametric motion curves and manipulable content |
US20150040034A1 (en) * | 2013-08-01 | 2015-02-05 | Nintendo Co., Ltd. | Information-processing device, information-processing system, storage medium, and information-processing method |
US9569075B2 (en) * | 2013-08-01 | 2017-02-14 | Nintendo Co., Ltd. | Information-processing device, information-processing system, storage medium, and information-processing method |
US10282085B2 (en) | 2013-08-27 | 2019-05-07 | Samsung Electronics Co., Ltd | Method for displaying data and electronic device thereof |
US12050766B2 (en) | 2013-09-03 | 2024-07-30 | Apple Inc. | Crown input for a wearable electronic device |
US10921976B2 (en) * | 2013-09-03 | 2021-02-16 | Apple Inc. | User interface for manipulating user interface objects |
US11829576B2 (en) | 2013-09-03 | 2023-11-28 | Apple Inc. | User interface object manipulations in a user interface |
US11068128B2 (en) | 2013-09-03 | 2021-07-20 | Apple Inc. | User interface object manipulations in a user interface |
US11656751B2 (en) | 2013-09-03 | 2023-05-23 | Apple Inc. | User interface for manipulating user interface objects with magnetic properties |
US20150074597A1 (en) * | 2013-09-11 | 2015-03-12 | Nvidia Corporation | Separate smoothing filter for pinch-zooming touchscreen gesture response |
US9678658B2 (en) * | 2013-09-27 | 2017-06-13 | Huawei Technologies Co., Ltd. | Method for displaying interface content and user equipment |
US10430068B2 (en) * | 2013-09-27 | 2019-10-01 | Huawei Technologies Co., Ltd. | Method for displaying interface content and user equipment |
EP3035170A4 (en) * | 2013-09-27 | 2016-08-31 | Huawei Tech Co Ltd | Method for displaying interface content and user equipment |
AU2014328340B2 (en) * | 2013-09-27 | 2017-05-25 | Huawei Technologies Co., Ltd. | Method for displaying interface content and user equipment |
US20160196033A1 (en) * | 2013-09-27 | 2016-07-07 | Huawei Technologies Co., Ltd. | Method for Displaying Interface Content and User Equipment |
CN103576859A (en) * | 2013-10-09 | 2014-02-12 | Senodia Semiconductor (Shanghai) Co., Ltd. | Man-machine interaction method for mobile terminal browsing |
US20150143286A1 (en) * | 2013-11-20 | 2015-05-21 | Xiaomi Inc. | Method and terminal for responding to sliding operation |
US11720861B2 (en) | 2014-06-27 | 2023-08-08 | Apple Inc. | Reduced size user interface |
US11250385B2 (en) | 2014-06-27 | 2022-02-15 | Apple Inc. | Reduced size user interface |
US11740776B2 (en) | 2014-08-02 | 2023-08-29 | Apple Inc. | Context-specific user interfaces |
US11922004B2 (en) | 2014-08-15 | 2024-03-05 | Apple Inc. | Weather user interface |
US11157143B2 (en) | 2014-09-02 | 2021-10-26 | Apple Inc. | Music user interface |
US11402968B2 (en) | 2014-09-02 | 2022-08-02 | Apple Inc. | Reduced size user interface |
US12118181B2 (en) | 2014-09-02 | 2024-10-15 | Apple Inc. | Reduced size user interface |
US11743221B2 (en) | 2014-09-02 | 2023-08-29 | Apple Inc. | Electronic message user interface |
US11941191B2 (en) | 2014-09-02 | 2024-03-26 | Apple Inc. | Button functionality |
US12001650B2 (en) | 2014-09-02 | 2024-06-04 | Apple Inc. | Music user interface |
US11474626B2 (en) | 2014-09-02 | 2022-10-18 | Apple Inc. | Button functionality |
US11068083B2 (en) | 2014-09-02 | 2021-07-20 | Apple Inc. | Button functionality |
US11644911B2 (en) | 2014-09-02 | 2023-05-09 | Apple Inc. | Button functionality |
US11157135B2 (en) | 2014-09-02 | 2021-10-26 | Apple Inc. | Multi-dimensional object rearrangement |
US11747956B2 (en) | 2014-09-02 | 2023-09-05 | Apple Inc. | Multi-dimensional object rearrangement |
KR20170055985A (en) * | 2014-09-09 | 2017-05-22 | Microsoft Technology Licensing, LLC | Parametric inertia and APIs |
RU2701988C2 (en) * | 2014-09-09 | 2019-10-02 | Microsoft Technology Licensing, LLC | Parametric inertia and application programming interfaces |
US10642365B2 (en) | 2014-09-09 | 2020-05-05 | Microsoft Technology Licensing, Llc | Parametric inertia and APIs |
KR102394295B1 | 2014-09-09 | 2022-05-03 | Microsoft Technology Licensing, LLC | Parametric inertia and APIs |
WO2016040205A1 (en) * | 2014-09-09 | 2016-03-17 | Microsoft Technology Licensing, Llc | Parametric inertia and apis |
US10456082B2 (en) | 2014-11-28 | 2019-10-29 | Nokia Technologies Oy | Method and apparatus for contacting skin with sensor equipment |
US10191634B2 (en) * | 2015-01-30 | 2019-01-29 | Xiaomi Inc. | Methods and devices for displaying document on touch screen display |
US10838570B2 (en) * | 2015-02-10 | 2020-11-17 | Etter Studio Ltd. | Multi-touch GUI featuring directional compression and expansion of graphical content |
US10140013B2 (en) | 2015-02-13 | 2018-11-27 | Here Global B.V. | Method, apparatus and computer program product for calculating a virtual touch position |
US10884592B2 (en) | 2015-03-02 | 2021-01-05 | Apple Inc. | Control of system zoom magnification using a rotatable input mechanism |
US10048757B2 (en) | 2015-03-08 | 2018-08-14 | Apple Inc. | Devices and methods for controlling media presentation |
US9990107B2 (en) | 2015-03-08 | 2018-06-05 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US10338772B2 (en) | 2015-03-08 | 2019-07-02 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9632664B2 (en) | 2015-03-08 | 2017-04-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10268341B2 (en) | 2015-03-08 | 2019-04-23 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9645732B2 (en) | 2015-03-08 | 2017-05-09 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US11112957B2 (en) | 2015-03-08 | 2021-09-07 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US10860177B2 (en) | 2015-03-08 | 2020-12-08 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9645709B2 (en) | 2015-03-08 | 2017-05-09 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US12019862B2 (en) | 2015-03-08 | 2024-06-25 | Apple Inc. | Sharing user-configurable graphical constructs |
US11977726B2 (en) | 2015-03-08 | 2024-05-07 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US10387029B2 (en) | 2015-03-08 | 2019-08-20 | Apple Inc. | Devices, methods, and graphical user interfaces for displaying and using menus |
US10402073B2 (en) | 2015-03-08 | 2019-09-03 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US10268342B2 (en) | 2015-03-08 | 2019-04-23 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10067645B2 (en) | 2015-03-08 | 2018-09-04 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10180772B2 (en) | 2015-03-08 | 2019-01-15 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10613634B2 (en) | 2015-03-08 | 2020-04-07 | Apple Inc. | Devices and methods for controlling media presentation |
US10095396B2 (en) | 2015-03-08 | 2018-10-09 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object |
US10599331B2 (en) | 2015-03-19 | 2020-03-24 | Apple Inc. | Touch input cursor manipulation |
US11550471B2 (en) | 2015-03-19 | 2023-01-10 | Apple Inc. | Touch input cursor manipulation |
US11054990B2 (en) | 2015-03-19 | 2021-07-06 | Apple Inc. | Touch input cursor manipulation |
US9785305B2 (en) | 2015-03-19 | 2017-10-10 | Apple Inc. | Touch input cursor manipulation |
US10222980B2 (en) | 2015-03-19 | 2019-03-05 | Apple Inc. | Touch input cursor manipulation |
US9639184B2 (en) | 2015-03-19 | 2017-05-02 | Apple Inc. | Touch input cursor manipulation |
US20160286123A1 (en) * | 2015-03-27 | 2016-09-29 | National Taipei University Of Technology | Method of image conversion operation for panorama dynamic ip camera |
US9609211B2 (en) * | 2015-03-27 | 2017-03-28 | National Taipei University Of Technology | Method of image conversion operation for panorama dynamic IP camera |
US10067653B2 (en) | 2015-04-01 | 2018-09-04 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10152208B2 (en) | 2015-04-01 | 2018-12-11 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10303354B2 (en) | 2015-06-07 | 2019-05-28 | Apple Inc. | Devices and methods for navigating between user interfaces |
US11240424B2 (en) | 2015-06-07 | 2022-02-01 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10200598B2 (en) | 2015-06-07 | 2019-02-05 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
WO2016200586A1 (en) * | 2015-06-07 | 2016-12-15 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9602729B2 (en) | 2015-06-07 | 2017-03-21 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US9706127B2 (en) | 2015-06-07 | 2017-07-11 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US11835985B2 (en) | 2015-06-07 | 2023-12-05 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US9860451B2 (en) | 2015-06-07 | 2018-01-02 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10705718B2 (en) | 2015-06-07 | 2020-07-07 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9830048B2 (en) | 2015-06-07 | 2017-11-28 | Apple Inc. | Devices and methods for processing touch inputs with instructions in a web page |
US10455146B2 (en) | 2015-06-07 | 2019-10-22 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
EP3187993A1 (en) * | 2015-06-07 | 2017-07-05 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9891811B2 (en) | 2015-06-07 | 2018-02-13 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9916080B2 (en) | 2015-06-07 | 2018-03-13 | Apple Inc. | Devices and methods for navigating between user interfaces |
US9674426B2 (en) | 2015-06-07 | 2017-06-06 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10841484B2 (en) | 2015-06-07 | 2020-11-17 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US10346030B2 (en) | 2015-06-07 | 2019-07-09 | Apple Inc. | Devices and methods for navigating between user interfaces |
US11681429B2 (en) | 2015-06-07 | 2023-06-20 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
US11231831B2 (en) | 2015-06-07 | 2022-01-25 | Apple Inc. | Devices and methods for content preview based on touch input intensity |
US10416800B2 (en) | 2015-08-10 | 2019-09-17 | Apple Inc. | Devices, methods, and graphical user interfaces for adjusting user interface objects |
US11182017B2 (en) | 2015-08-10 | 2021-11-23 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10162452B2 (en) | 2015-08-10 | 2018-12-25 | Apple Inc. | Devices and methods for processing touch inputs based on their intensities |
US10698598B2 (en) | 2015-08-10 | 2020-06-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10884608B2 (en) | 2015-08-10 | 2021-01-05 | Apple Inc. | Devices, methods, and graphical user interfaces for content navigation and manipulation |
US10209884B2 (en) | 2015-08-10 | 2019-02-19 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10203868B2 (en) | 2015-08-10 | 2019-02-12 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10248308B2 (en) | 2015-08-10 | 2019-04-02 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interfaces with physical gestures |
US11740785B2 (en) | 2015-08-10 | 2023-08-29 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10963158B2 (en) | 2015-08-10 | 2021-03-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US9880735B2 (en) | 2015-08-10 | 2018-01-30 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US10235035B2 (en) | 2015-08-10 | 2019-03-19 | Apple Inc. | Devices, methods, and graphical user interfaces for content navigation and manipulation |
US10754542B2 (en) | 2015-08-10 | 2020-08-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US11327648B2 (en) | 2015-08-10 | 2022-05-10 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
US11908343B2 (en) | 2015-08-20 | 2024-02-20 | Apple Inc. | Exercised-based watch face and complications |
WO2017048187A1 (en) * | 2015-09-16 | 2017-03-23 | Adssets AB | Method for movement on the display of a device |
US10592070B2 (en) | 2015-10-12 | 2020-03-17 | Microsoft Technology Licensing, Llc | User interface directional navigation using focus maps |
US10163245B2 (en) | 2016-03-25 | 2018-12-25 | Microsoft Technology Licensing, Llc | Multi-mode animation system |
US11733656B2 (en) | 2016-06-11 | 2023-08-22 | Apple Inc. | Configuring context-specific user interfaces |
US11073799B2 (en) | 2016-06-11 | 2021-07-27 | Apple Inc. | Configuring context-specific user interfaces |
US10810241B2 (en) | 2016-06-12 | 2020-10-20 | Apple, Inc. | Arrangements of documents in a document feed |
US11899703B2 (en) | 2016-06-12 | 2024-02-13 | Apple Inc. | Arrangements of documents in a document feed |
US11775141B2 (en) | 2017-05-12 | 2023-10-03 | Apple Inc. | Context-specific user interfaces |
US11797968B2 (en) | 2017-05-16 | 2023-10-24 | Apple Inc. | User interfaces for peer-to-peer transfers |
CN107728914A (en) * | 2017-08-21 | 2018-02-23 | 莱诺斯科技(北京)股份有限公司 | Satellite power supply and distribution software touch-control human-machine interaction system |
US11977411B2 (en) | 2018-05-07 | 2024-05-07 | Apple Inc. | Methods and systems for adding respective complications on a user interface |
US11461984B2 (en) * | 2018-08-27 | 2022-10-04 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for multi-user collaborative creation, and storage medium |
US11435830B2 (en) | 2018-09-11 | 2022-09-06 | Apple Inc. | Content-based tactile outputs |
US10928907B2 (en) | 2018-09-11 | 2021-02-23 | Apple Inc. | Content-based tactile outputs |
US11921926B2 (en) | 2018-09-11 | 2024-03-05 | Apple Inc. | Content-based tactile outputs |
US20230012482A1 (en) * | 2019-03-24 | 2023-01-19 | Apple Inc. | Stacked media elements with selective parallax effects |
CN115767173A (en) * | 2019-03-24 | 2023-03-07 | Apple Inc. | Stacked media elements with selective parallax effects |
US11960701B2 (en) | 2019-05-06 | 2024-04-16 | Apple Inc. | Using an illustration to show the passing of time |
US11822778B2 (en) | 2020-05-11 | 2023-11-21 | Apple Inc. | User interfaces related to time |
US12099713B2 (en) | 2020-05-11 | 2024-09-24 | Apple Inc. | User interfaces related to time |
US11842032B2 (en) | 2020-05-11 | 2023-12-12 | Apple Inc. | User interfaces for managing user interface sharing |
US12008230B2 (en) | 2020-05-11 | 2024-06-11 | Apple Inc. | User interfaces related to time with an editable background |
US11694590B2 (en) | 2020-12-21 | 2023-07-04 | Apple Inc. | Dynamic user interface with time indicator |
US11720239B2 (en) | 2021-01-07 | 2023-08-08 | Apple Inc. | Techniques for user interfaces related to an event |
US11983702B2 (en) | 2021-02-01 | 2024-05-14 | Apple Inc. | Displaying a representation of a card with a layered structure |
US20230035532A1 (en) * | 2021-05-14 | 2023-02-02 | Apple Inc. | User interfaces related to time |
KR102685525B1 (en) | 2021-05-14 | 2024-07-18 | Apple Inc. | Time-related user interfaces |
KR20230147208A (en) * | 2021-05-14 | 2023-10-20 | Apple Inc. | Time-related user interfaces |
CN117421087A (en) * | 2021-05-14 | 2024-01-19 | Apple Inc. | Time-dependent user interface |
US11921992B2 (en) * | 2021-05-14 | 2024-03-05 | Apple Inc. | User interfaces related to time |
US11893212B2 (en) | 2021-06-06 | 2024-02-06 | Apple Inc. | User interfaces for managing application widgets |
US12045014B2 (en) | 2022-01-24 | 2024-07-23 | Apple Inc. | User interfaces for indicating time |
US12147964B2 (en) | 2023-06-13 | 2024-11-19 | Apple Inc. | User interfaces for peer-to-peer transfers |
Also Published As
Publication number | Publication date |
---|---|
US20110202859A1 (en) | 2011-08-18 |
US9417787B2 (en) | 2016-08-16 |
Similar Documents
Publication | Title |
---|---|
US20110202834A1 (en) | Visual motion feedback for user interface |
US8863039B2 (en) | Multi-dimensional boundary effects |
US8473860B2 (en) | Multi-layer user interface with flexible parallel and orthogonal movement |
US9898180B2 (en) | Flexible touch-based scrolling |
JP5726908B2 (en) | Multi-layer user interface with flexible translation |
JP5628300B2 (en) | Method, apparatus and computer program product for generating graphic objects with desirable physical features for use in animation |
US20120066638A1 (en) | Multi-dimensional auto-scrolling |
JP5751608B2 (en) | Zoom processing apparatus, zoom processing method, and computer program |
CN111135556B (en) | Virtual camera control method and device, electronic equipment and storage medium |
WO2018068364A1 (en) | Method and device for displaying page, graphical user interface, and mobile terminal |
US9836200B2 (en) | Interacting with electronic devices using a single-point gesture |
KR101848475B1 (en) | Method, system and non-transitory computer-readable recording medium for controlling scroll based on context information |
JP6388479B2 (en) | Information display device, information distribution device, information display method, information display program, and information distribution method |
US20140372916A1 (en) | Fixed header control for grouped grid panel |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: MICROSOFT CORPORATION, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MANDRYK, LUCIANO BARETTA;FONG, JEFFREY CHENG-YAO;REEL/FRAME:024450/0089. Effective date: 20100504 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0001. Effective date: 20141014 |