CN115602286B - AR (augmented reality) glasses-based binocular simultaneous perception detection and training method and system - Google Patents
- Publication number
- CN115602286B (grant) · CN202211274017.7A (application)
- Authority
- CN
- China
- Prior art keywords
- model
- inspection
- training
- preset
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention provides a binocular simultaneous perception detection and training method and system based on AR glasses, in the technical field of perception detection. A 3D model for inspection is built based on preset left-eye and right-eye detection patterns; the preset detection patterns in the 3D model for inspection are positioned in a selected inspection background through a SLAM positioning anchor to generate a display picture; the display picture in the 3D model for inspection is transmitted to the display end of the AR glasses; when the examinee wears the AR glasses and sends a starting signal, preset voice prompt information is played; the examinee's reply information is recognized, the examinee's inspection data are determined, and an inspection report is generated. This solves the technical problem that perception detection in the prior art tests the left eye and the right eye separately and lacks an effective mode of simultaneous binocular detection that matches daily use of the eyes. The effect of rapidly and effectively checking the binocular simultaneous perception function of the examinee is achieved.
Description
Technical Field
The invention relates to the technical field of perception detection, in particular to a binocular simultaneous perception detection and training method and system based on AR (augmented reality) glasses.
Background
Eye perception detection is a form of visual testing: an instrument is used to judge whether an examinee's visual perception is present. With heavier work and study loads and the spread of electronic products, more and more people suffer visual disturbances; not only students but also middle-aged and elderly people show visual abnormalities of varying degrees. At present, vision is tested mainly with eye charts and optometry instruments, which measure visual acuity but cannot test the simultaneous perception function of both eyes. Because daily use of the eyes relies on both eyes perceiving simultaneously, testing simultaneous binocular perception better matches everyday visual demands.
Disclosure of Invention
In order to solve these problems, the application provides a binocular simultaneous perception detection and training method and system based on AR glasses, which address the technical problem that perception detection in the prior art tests the left and right eyes separately, lacks an effective mode of simultaneous binocular detection, and does not match daily use of the eyes. By combining AR display technology with SLAM positioning technology, the binocular simultaneous perception function of the examinee can be checked rapidly and effectively; the operation method is simple and practicable, cooperation is easier to obtain, and the technical effect of meeting the requirements of binocular perception detection is achieved.
In view of the above problems, the present application provides a method and a system for simultaneous perception detection and training of eyes based on AR glasses.
In one aspect, the present application provides a method for simultaneous perception detection and training of two eyes based on AR glasses, comprising: establishing a 3D model for inspection based on a preset left eye detection graph and a preset right eye detection graph; establishing a background model for inspection, positioning preset left eye and right eye detection patterns in the 3D model for inspection in the background for inspection selected from the background model for inspection through a slam positioning anchor, and generating a display picture; transmitting the display picture in the 3D model for inspection to the display end of the AR glasses; when the checked person wears the AR glasses and sends a starting signal, playing preset voice prompt information; collecting reply information of a checked person based on the preset voice prompt information, identifying the reply information of the checked person, determining check data of the checked person, and storing the check data of the checked person; and performing perception inspection analysis and training requirement matching according to the inspected person inspection data to generate an inspected person inspection report.
Preferably, the method further comprises: obtaining a background database of the background model for inspection; acquiring information of a checked person to obtain checked person information; and carrying out preference characteristic analysis according to the checked person information, determining preference characteristics, carrying out relevance analysis on the preference characteristics and the background database to obtain matching background information, and taking the matching background information as a selected background for checking.
Preferably, the method comprises: building a model through a preset modeling tool; importing the preset left eye detection graph and the preset right eye detection graph into Unity, and adjusting the built model to the corresponding position in the scene; and converting the 3D model into a display picture.
Preferably, the converting the 3D model into the display picture includes: calling the underlying graphics interface by Unity to obtain the vertices of the 3D model; processing the vertices through the vertex shader - tessellation shader - geometry shader - clipping - screen mapping stages; and performing triangle processing, fragment shading, per-fragment operations, and display picture output processing on the processed vertex information to generate the display picture.
Preferably, the method further comprises: manufacturing textures of a preset background picture through a preset drawing tool; importing the textures into Unity; and adding a background model, attaching the textures of the preset background picture to the background model through the material tool in Unity, and processing and rendering through the CPU and GPU.
Preferably, the positioning the preset left-eye and right-eye detection patterns in the 3D model for examination in the background for examination selected from the background model for examination by means of a slam localization anchor includes: and calculating the coordinates of the current environment according to slam, calculating the position coordinates of the object in the environment by combining the Unity coordinates, and setting the preset left eye detection graph and the preset right eye detection graph at the same position coordinates for anchoring.
Preferably, the transmitting the display picture in the 3D model for inspection to the display end of the AR glasses is followed by: calculating the actual coordinates of the current environment according to the slam algorithm; constructing a conversion relation between slam coordinates and Unity coordinates; and obtaining the head coordinates of the AR glasses, calculating the position of the initial distance according to the head coordinates of the AR glasses, the actual coordinates, and the conversion relation between the slam coordinates and the Unity coordinates, and setting the initial distance of the AR display end.
Preferably, the method further comprises: collecting training information of the checked person at regular intervals; comparing the training requirement in the detection report of the checked person with the training information to determine timing training evaluation information; and judging whether the timing training evaluation information meets the training requirement, and if not, performing training content matching in a training plan database according to the training information and the training requirement, and adjusting the training requirement with the matched training content.
In another aspect, the present application provides an AR glasses-based binocular simultaneous perception detection and training system, the system comprising: the model building unit is used for building a 3D model for inspection based on a preset left eye detection graph and a preset right eye detection graph; a positioning unit, configured to establish a background model for inspection, and fix, by using a slam positioning anchor, preset left-eye and right-eye detection patterns in the 3D model for inspection in an inspection background selected from the background model for inspection, thereby generating a display picture; the AR display unit is used for transmitting the display picture in the 3D model for inspection to the display end of the AR glasses; the detection execution unit is used for playing preset voice prompt information when the checked person wears the AR glasses and sends a starting signal; the detection information identification recording unit is used for collecting reply information of the checked person based on the preset voice prompt information, identifying the reply information of the checked person, determining check data of the checked person and storing the check data of the checked person; and the detection analysis unit is used for performing perception inspection analysis and training requirement matching according to the check data of the checked person to generate a detection report of the checked person.
The technical scheme provided by the application has at least the following technical effects:
The application provides a binocular simultaneous perception detection and training method and system based on AR glasses. A 3D model for inspection is built based on preset left-eye and right-eye detection patterns; a background model for inspection is established, and the preset detection patterns in the 3D model for inspection are positioned, through a SLAM positioning anchor, in the inspection background selected from the background model, generating a display picture; the display picture in the 3D model for inspection is transmitted to the display end of the AR glasses; when the examinee wears the AR glasses and sends a starting signal, preset voice prompt information is played; the examinee's reply information is collected on the basis of the voice prompts, recognized, and used to determine and store the examinee's inspection data; and perception inspection analysis and training requirement matching are performed on the inspection data to generate the examinee's inspection report. By combining AR display technology with SLAM positioning technology, the binocular simultaneous perception function of the examinee is checked rapidly and effectively; the operation method is simple and practicable, cooperation is easy to obtain, and the requirements of binocular perception detection are met, thereby solving the technical problem that perception detection in the prior art tests the left and right eyes separately, lacks an effective mode of simultaneous binocular detection, and does not match daily use of the eyes.
Drawings
Fig. 1 is a flow chart of a method for simultaneous perception detection and training of eyes based on AR glasses according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of determining a selected background for examination in a binocular simultaneous perception detection and training method based on AR glasses according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an AR glasses-based binocular simultaneous perception detection and training system according to an embodiment of the present application.
Detailed Description
The application provides a binocular simultaneous perception detection and training method and system based on AR glasses, which are used to solve the technical problem that, in the prior art, perception detection tests the left and right eyes separately, lacks an effective mode of simultaneous binocular detection, and does not match daily use of the eyes.
The following detailed description of the present invention is provided in connection with specific embodiments.
Example 1
As shown in fig. 1, an embodiment of the present application provides a binocular simultaneous perception detection and training method based on AR glasses, the method comprising:
S10: establishing a 3D model for inspection based on a preset left eye detection graph and a preset right eye detection graph;
Specifically, models of the left-eye and right-eye detection patterns are built with modeling software such as Maya or 3ds Max, i.e., the preset modeling tool. The left-eye detection pattern may be a "+" and the right-eye detection pattern a "○"; the preset patterns may be built at 1:1 scale with a size of, for example, 1 cm. The 3D model for inspection is not limited to the "+" and "○" shapes.
S20: establishing a background model for inspection, and fixing preset left eye and right eye detection patterns in the 3D model for inspection in the background for inspection selected from the background model for inspection through a slam positioning anchor to generate a display picture;
Further, the positioning, by the slam localization anchor, the preset left eye and right eye detection patterns in the 3D model for examination in the background for examination selected from the background model for examination includes: and calculating the coordinates of the current environment according to slam, calculating the position coordinates of the object in the environment by combining the Unity coordinates, and setting the preset left eye detection graph and the preset right eye detection graph at the same position coordinates for anchoring.
Specifically, an inspection background model is designed and built. It contains a plurality of background patterns, such as starry sky, sea, cartoon, and grassland, so as to suit the needs of different examinees. The preset left-eye and right-eye detection patterns "+" and "○" of the 3D model for inspection are anchored in the inspection background through SLAM positioning, at the same position in the spatial coordinate system. It should be understood that SLAM is an abbreviation of Simultaneous Localization And Mapping: a robot starting from an unknown place in an unknown environment localizes its own position and pose by repeatedly observing map features (such as corners and columns) during motion, and incrementally builds a map from its own position, thereby achieving simultaneous localization and map construction. SLAM comprises perception, localization, and mapping. Perception: the robot obtains information about its surroundings through sensors. Localization: from current and historical sensor information, the robot's own position and pose are deduced. Mapping: the appearance of the environment is drawn from the sensor pose and the acquired information.
The coordinates of the current environment are calculated with SLAM, the position of the object in the environment is calculated in combination with the Unity coordinates, and the preset left-eye and right-eye detection patterns are set at the same position coordinates for anchoring.
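By way of non-limiting illustration, the anchoring step can be sketched in Unity C# as follows. The SlamToUnity matrix is assumed to come from the glasses' SDK, per-eye visibility is assumed to be handled by layer-based culling masks on the left-eye and right-eye cameras, and the class and field names are illustrative, not part of the claimed method:

```csharp
using UnityEngine;

// Minimal sketch: anchor the left-eye "+" and right-eye "○" patterns at the
// same position coordinates. "slamToUnity" is an assumed matrix supplied by
// the AR SDK; each pattern sits on a layer culled to one eye's camera.
public class PatternAnchor : MonoBehaviour
{
    public GameObject leftEyePattern;   // the "+" model, on a "LeftEyeOnly" layer
    public GameObject rightEyePattern;  // the "○" model, on a "RightEyeOnly" layer

    public void Anchor(Matrix4x4 slamToUnity, Vector3 slamAnchorPosition)
    {
        // Convert the SLAM environment coordinate into Unity world space.
        Vector3 worldPos = slamToUnity.MultiplyPoint3x4(slamAnchorPosition);

        // Both patterns share one world position, so an examinee with normal
        // simultaneous binocular perception fuses them into "+ inside ○".
        leftEyePattern.transform.position = worldPos;
        rightEyePattern.transform.position = worldPos;
    }
}
```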
Further, the method further comprises: s1101: setting up a model through a preset modeling tool; s1102: importing the preset left eye detection graph and the preset right eye detection graph into Unity, and adjusting the built model to the corresponding position of the scene; s1103: and converting the 3D model into a display picture.
Further, the converting the 3D model into a display picture, S1103, includes: S11031: Unity calls the underlying graphics interface to obtain the vertices of the 3D model; S11032: processing the vertices through the vertex shader - tessellation shader - geometry shader - clipping - screen mapping stages; S11033: performing triangle processing, fragment shading, per-fragment operations, and display picture output stage processing on the processed vertex information to generate the display picture.
Specifically, the "+" and "○" 3D models are added into Unity. Unity is a game engine and a real-time 3D interactive content creation and operation platform; the built "+" and "○" 3D models are imported into Unity, placed at suitable positions in the scene, and used to generate the interactive picture. From model to screen image, the process divides into the following stages: 1) Application stage: Unity calls the underlying graphics interface to transfer the model's vertex, mesh, texture, and other data to the GPU (graphics processing unit, the heart of the graphics card). 2) Geometry stage: the vertices are processed through the vertex shader -> tessellation shader -> geometry shader -> clipping -> screen mapping stages. 3) Rasterization stage: the vertex information prepared in the geometry stage is processed, and the display picture is output through triangle processing -> fragment shader -> per-fragment operations and other stages.
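By way of non-limiting illustration, the application stage described above can be sketched in Unity C#; the mesh and material fields are illustrative assumptions, while Graphics.DrawMesh is a standard Unity call that submits a mesh for the engine's geometry and rasterization stages:

```csharp
using UnityEngine;

// Minimal sketch of the application stage: each frame, hand the mesh, pose,
// and material to the GPU; Unity then runs the geometry stage (vertex,
// tessellation, and geometry shaders, clipping, screen mapping) and the
// rasterization stage to produce the display picture.
public class PatternRenderer : MonoBehaviour
{
    public Mesh patternMesh;        // e.g. the imported "+" mesh (assumption)
    public Material patternMaterial;

    void Update()
    {
        Graphics.DrawMesh(patternMesh, transform.position,
                          transform.rotation, patternMaterial, 0); // layer 0
    }
}
```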
S30: and transmitting the display picture in the 3D model for inspection to the display end of the AR glasses.
Further, after transmitting the display picture in the 3D model for inspection to the display end of the AR glasses, the method includes: S901: calculating the actual coordinates of the current environment according to the slam algorithm; S902: constructing a conversion relation between slam coordinates and Unity coordinates; S903: obtaining the head coordinates of the AR glasses, calculating the position of the initial distance according to the head coordinates of the AR glasses, the actual coordinates, and the conversion relation between the slam coordinates and the Unity coordinates, and setting the initial distance of the AR display end.
Specifically, the 3D model for inspection is displayed on the display end of the AR glasses: the left side displays the "+" and the right side displays the "○". A software application is generated, and the effect of simultaneous binocular detection is achieved through the AR glasses.
Before detection with the AR glasses, an initial display distance is set at the display end according to the relation between the display coordinates and the actual coordinates of the displayed image and the conversion between SLAM coordinates and Unity coordinates, so as to meet the required viewing distance of the test picture. The initial distance is the conventional display distance of the display picture at the AR display end; during detection, the distance can be adjusted according to the examinee's replies to complete all-round perception detection. The initial distance of the AR display end is set as follows: the world coordinates of the current environment are calculated by the SLAM algorithm; the conversion relation between SLAM coordinates and Unity coordinates is constructed; and the position at the initial distance is calculated from the AR head coordinates.
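By way of non-limiting illustration, the initial-distance setting can be sketched as follows; the SLAM-to-Unity matrix is assumed to come from the SDK, and the 0.5 m default distance is an illustrative value not specified in the embodiment:

```csharp
using UnityEngine;

// Minimal sketch: place the test picture at an initial viewing distance in
// front of the head. The slamToUnity matrix and the default distance are
// assumptions for illustration.
public static class InitialDistance
{
    public static Vector3 ComputeAnchor(Matrix4x4 slamToUnity,
                                        Vector3 slamHeadPosition,
                                        Vector3 headForward,
                                        float initialDistanceMeters = 0.5f)
    {
        // Convert the SLAM head coordinate into Unity world space, then
        // offset along the gaze direction by the initial display distance,
        // which can later be adjusted from the examinee's replies.
        Vector3 headWorld = slamToUnity.MultiplyPoint3x4(slamHeadPosition);
        return headWorld + headForward.normalized * initialDistanceMeters;
    }
}
```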
S40: when the checked person wears the AR glasses and sends a starting signal, playing preset voice prompt information;
S50: Collecting reply information of the checked person based on the preset voice prompt information, identifying the reply information of the checked person, determining check data of the checked person, and storing the check data of the checked person;
Specifically, when the examinee wears the AR glasses and opens the application, the starting signal is triggered and the preset voice prompt is played to carry out the corresponding detection, for example asking whether the examinee sees the "+" and the "○", whether the "+" is inside the "○", or whether the "+" is outside the "○". If the "+" and the "○" are seen at the same time, the simultaneous perception function of both eyes exists; if either side is not seen, the binocular perception function is weak. If the "+" is seen inside the "○", simultaneous binocular perception is normal; if the "+" is outside the "○", the simultaneous perception function of both eyes is impaired and spatial localization ability is unstable. By combining AR display technology and SLAM positioning technology, the embodiment of the invention can quickly and effectively check the binocular simultaneous perception function of the examinee; the operation method is simple and practicable and cooperation is easy to obtain.
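By way of non-limiting illustration, the reply-to-result mapping described above can be sketched as follows; the enum and parameter names paraphrase the example questions and are illustrative assumptions, not a normative scoring scheme:

```csharp
// Minimal sketch of the decision rules in the example above; the names and
// boolean reply fields are illustrative assumptions.
public enum PerceptionResult
{
    Normal,            // "+" seen inside "○": simultaneous perception normal
    WeakPerception,    // one pattern not seen: binocular perception weak
    ImpairedUnstable   // both seen but "+" outside "○": impaired, unstable
}

public static class ReplyClassifier
{
    public static PerceptionResult Classify(bool seesPlus, bool seesCircle,
                                            bool plusInsideCircle)
    {
        if (!seesPlus || !seesCircle)
            return PerceptionResult.WeakPerception;
        return plusInsideCircle
            ? PerceptionResult.Normal
            : PerceptionResult.ImpairedUnstable;
    }
}
```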
S60: and performing perception inspection analysis and training requirement matching according to the inspected person inspection data to generate an inspected person inspection report.
Specifically, the examinee's answers are stored together with the corresponding voice questions and answer contents to generate the examinee's inspection data. The perception function of the examinee is determined from the comparison results in the inspection data; the data may serve as a basis for manual judgment, or be evaluated against a preset evaluation database so that the perception detection result is determined from the answers. A corresponding training scheme is then customized according to the examinee's personal data, such as age, height, weight, and physical state. The training requirement is the training scheme customized from the detection result and the personal data; it may be customized manually, or matched intelligently by machine, for example by traversing a preset training scheme database against the detection result and age and taking the matching scheme as the training requirement.
Therefore, by combining AR display technology and SLAM positioning technology, the embodiment of the application achieves the technical effects of quickly and effectively checking the binocular simultaneous perception function of the examinee, with a simple and practicable operation method, good cooperation, and satisfaction of the requirements of binocular perception detection. This solves the technical problem that, in the prior art, perception detection tests the left and right eyes separately, lacks an effective mode of simultaneous binocular detection, and does not match daily use of the eyes.
Simultaneous binocular perception: the foveae of the two maculae and the corresponding retinal elements outside them share a common visual direction, giving both eyes the ability to fixate and perceive at the same time. Without simultaneous vision, fusion and stereoscopic vision are impossible. When simultaneous perception is normal, both eyes can fixate at the same time and the object image falls simultaneously on the foveae and on corresponding macular points sharing a common visual direction. Consequently, completely normal simultaneous binocular perception cannot be maintained in the presence of any ocular deviation and/or visual suppression of the macula and its corresponding elements. Detecting simultaneous binocular perception therefore makes eye perception testing more reliable and better matched to everyday visual demands.
Further, as shown in fig. 2, the method further includes: s701: obtaining a background database of the background model for inspection; s702: acquiring information of a checked person to obtain checked person information; s703: and carrying out preference characteristic analysis according to the checked person information, determining preference characteristics, carrying out relevance analysis on the preference characteristics and the background database to obtain matching background information, and taking the matching background information as a selected background for checking.
Specifically, the embodiment of the application can also select the background according to the user's preferences, matching a suitable background to different age groups and tastes so as to improve the examinee's cooperation. Especially for children, in order to prevent non-cooperation during the examination from affecting the detection result, a preference feature analysis is performed on the examinee. The examinee information may be entered from survey data, acquired through image capture and recognition, analyzed from web browsing content through big data, or entered as a voice survey of the examinee or a guardian; preference analysis may also be performed on captured images of the examinee. The preference features obtained from the analysis, such as colors or cartoon characters, are matched against the backgrounds in the background database, and a background suited to the examinee is found and added. Matching the spatial background to the user's personal characteristics improves cooperation and thus the detection effect.
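By way of non-limiting illustration, the relevance analysis between preference features and the background database can be sketched as a tag-overlap score; the tag scheme, scoring rule, and names below are illustrative assumptions:

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal sketch: score each background by how many of its descriptive tags
// overlap the examinee's preference features, and select the best match.
// Assumes the background database is non-empty and tagged in advance.
public static class BackgroundMatcher
{
    public static string SelectBackground(
        IReadOnlyCollection<string> preferenceFeatures,
        IReadOnlyDictionary<string, string[]> backgroundTags)
    {
        return backgroundTags
            .OrderByDescending(kv => kv.Value.Count(t => preferenceFeatures.Contains(t)))
            .First().Key; // background with the greatest tag overlap
    }
}
```

For example, preference features { "cartoon", "blue" } would select a background tagged { "cartoon", "bright" } over one tagged only { "dark" }.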
In addition, image and voice analysis can be performed on the examination process. If the voice is noisy and the answers are not firm, the user's state is poor; if the images show restlessness, crying, or frequent limb movement, the user's concentration is insufficient. According to the detected features, the detection result is marked as reliable or not, and the result analysis checks whether the mark meets the requirement, realizing reliability marking of the detection result and improving the auxiliary detection effect.
Further, the method further comprises: s801: manufacturing textures of a preset background picture through a preset drawing tool; s802: importing the texture into Unity; s803: and adding a background model, attaching textures of a preset background picture to the background model through a Unity middle material tool, and processing and rendering through a CPU and a GPU.
Specifically, when a background is added, the corresponding background image, such as a starry-sky texture, is first produced with image software. The texture is imported into Unity. A starry-sky model is added, the starry-sky texture is attached to it through the material tool (Material) in Unity, and processing and rendering are done by the CPU and GPU so that the background image is added into the display picture. Setting a number of different background pictures can meet the needs of different examinees and increase cooperation during the examination.
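By way of non-limiting illustration, the texture-attachment step can be sketched in Unity C#; Resources.Load, Shader.Find, and Material are standard Unity APIs, while the asset path and shader choice are illustrative assumptions:

```csharp
using UnityEngine;

// Minimal sketch: attach an imported background texture to a background model
// through a Material; the CPU prepares the draw data and the GPU renders it.
public class BackgroundSetup : MonoBehaviour
{
    public Renderer backgroundRenderer; // renderer of the background model

    void Start()
    {
        // Load the texture imported into Unity (assumed path under Resources).
        Texture2D starrySky = Resources.Load<Texture2D>("Backgrounds/starry_sky");

        Material mat = new Material(Shader.Find("Unlit/Texture"));
        mat.mainTexture = starrySky;
        backgroundRenderer.material = mat;
    }
}
```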
Further, the method further comprises: s1001: training information of a checked person is collected at fixed time; s1002: comparing the training requirement in the detected report of the checked person with training information to determine timing training evaluation information; s1003: judging whether the timing training evaluation information meets the training requirements or not, if not, carrying out training content matching in a training plan database according to the training information and the training requirements, and adjusting the training requirements by utilizing the matched training content.
Specifically, the examinee's training can be monitored at regular intervals: the training results are collected, the post-training perception state is determined, and whether the training requirement is met, i.e., whether perception has improved, is judged from the training time, the training information, and the training requirement. If not, the training content is adjusted. The adjustment may be done manually, or by matching in the training plan database against the targeted perception state, age information, training effect, and so on: a training plan matching the examinee's age information and current training effect is found, and the training requirement in the examinee's detection report is adjusted with the corresponding training content.
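By way of non-limiting illustration, the timed evaluation and plan adjustment can be sketched as follows; the record fields, the score threshold, and the age-band matching rule are illustrative assumptions rather than the embodiment's normative logic:

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of timed training evaluation: if the latest evaluation does
// not meet the required score, match a plan from the training plan database
// by age band and adopt its content as the adjusted training requirement.
public record TrainingRecord(int AgeBand, double PerceptionScore);
public record TrainingPlan(int AgeBand, double MinScore, string Content);

public static class TrainingAdjuster
{
    public static string Evaluate(TrainingRecord latest, double requiredScore,
                                  IEnumerable<TrainingPlan> planDatabase,
                                  string currentRequirement)
    {
        if (latest.PerceptionScore >= requiredScore)
            return currentRequirement; // requirement met: keep the plan

        // Assumes at least one plan exists for the examinee's age band.
        TrainingPlan match = planDatabase
            .Where(p => p.AgeBand == latest.AgeBand)
            .OrderBy(p => System.Math.Abs(p.MinScore - latest.PerceptionScore))
            .First();
        return match.Content; // adjusted training requirement
    }
}
```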
In summary, the embodiment of the application has the following beneficial effects:
1. Through the combination of AR display technology and SLAM positioning technology, the binocular simultaneous perception function of the examinee can be checked rapidly and effectively, quickly and clearly showing whether the patient's simultaneous binocular vision is normal, so that the appropriate training means for improvement can be judged.
2. Based on virtual reality, mobile health management, and neuroscience, stimulation and detection with different models are used to locate visual function, and neural plasticity and visual perceptual learning are exploited to improve detection accuracy.
3. By constructing virtual-real superimposed scenes in the real environment, building different scenes and objects around the examinee, and forming visual interaction with the scenes through depth-perception interaction technology, the examinee (especially a child) does not feel bored; enthusiasm and active cooperation are easier to arouse, so the examination is easier to complete.
4. The novel mode of combining AR display technology and SLAM positioning technology can integrate the patient's clinical data, the results of different examinations, data from different scenes, data on different visual functions, and the like, which benefits analysis and research on perception detection and the corresponding treatment training.
5. Through the AR glasses and an intelligent platform, the detection and training results of the examinee at each stage can be monitored anytime and anywhere without space-time limitation, with follow-up guidance and control, ensuring the treatment and training effect. The examination results can be integrated and analyzed to provide a more accurate and effective intervention scheme for the next stage of training, finally realizing personalized digital therapeutic intervention for the patient, i.e., shortening the training period and improving the long-term training effect.
Example 2
Based on the same inventive concept as the binocular simultaneous perception detection and training method based on AR glasses in the foregoing embodiment, an embodiment of the present application provides a binocular simultaneous perception detection and training system based on AR glasses, as shown in fig. 3, the system includes:
the model building unit is used for building a 3D model for inspection based on a preset left eye detection graph and a preset right eye detection graph;
A positioning unit, configured to establish a background model for inspection, and fix preset left-eye and right-eye detection patterns in the 3D model for inspection in an inspection background selected from the background model for inspection by using a slam positioning anchor, so as to generate a display screen;
The AR display unit is used for transmitting the display picture in the 3D model for inspection to the display end of the AR glasses;
the detection execution unit is used for playing preset voice prompt information when the checked person wears the AR glasses and sends a starting signal;
The detection information identification recording unit is used for acquiring reply information of the checked person based on the preset voice prompt information, identifying the reply information of the checked person, determining check data of the checked person and storing the check data of the checked person;
And the detection analysis unit is used for carrying out perception examination analysis and training requirement matching according to the checked data of the checked person and generating a checked person detection report.
Further, the system further comprises:
A background database obtaining unit configured to obtain a background database of the background model for inspection;
The inspected person information acquisition unit is used for acquiring information of the inspected person and obtaining inspected person information;
And the checked person characteristic analysis unit is used for carrying out preference characteristic analysis according to the checked person information, determining preference characteristics, carrying out relevance analysis on the preference characteristics and the background database to obtain matched background information, and taking the matched background information as a selected background for checking.
Further, the system further comprises:
The modeling unit is used for building a model through a preset modeling tool;
the importing unit is used for importing the preset left eye detection graph and the preset right eye detection graph into Unity, and adjusting the built model to a position corresponding to the scene;
and the image conversion unit is used for converting the 3D model into a display picture.
Further, the image conversion unit is further configured to:
Calling a bottom graphic interface by Unity to obtain a vertex of the 3D model;
Processing the vertices through stages of vertex shader-tessellation shader-geometry shader-clipping-screen mapping;
and performing triangle processing, fragment shading, per-fragment operations, and display picture output stage processing on the processed vertex information to generate the display picture.
Further, the system further comprises:
The background texture manufacturing unit is used for manufacturing textures of a preset background picture through a preset drawing tool;
A texture importing unit for importing textures into Unity;
The background model processing unit is used for adding a background model, attaching the textures of a preset background picture to the background model through the material tool in Unity, and processing and rendering through the CPU and GPU.
Further, the positioning unit is further configured to:
and calculating the coordinates of the current environment according to slam, calculating the position coordinates of the object in the environment by combining the Unity coordinates, and setting the preset left eye detection graph and the preset right eye detection graph at the same position coordinates for anchoring.
Further, the system further comprises:
The actual coordinate calculation unit is used for calculating an actual coordinate of the current environment according to the slam algorithm;
the conversion relation construction unit is used for constructing a conversion relation between slam coordinates and Unity coordinates;
The initial distance calculating unit is used for obtaining the head coordinates of the AR glasses, calculating the position of the initial distance according to the head coordinates of the AR glasses, the actual coordinates, and the conversion relation between the slam coordinates and the Unity coordinates, and setting the initial distance of the AR display end.
Further, the system further comprises:
the timing acquisition unit is used for collecting training information of the checked person at regular intervals;
The training evaluation unit is used for comparing the training requirement in the detection report of the checked person with the training information to determine timing training evaluation information;
And the training adjustment unit is used for judging whether the timing training evaluation information meets the training requirement or not, and if not, carrying out training content matching in a training plan database according to the training information and the training requirement, and adjusting the training requirement by utilizing the matched training content.
The binocular simultaneous perception detection and training system based on AR glasses provided in the embodiment of the present application can implement any process of the binocular simultaneous perception detection and training method based on AR glasses in the first embodiment, please refer to the detailed description of the first embodiment, and will not be repeated here.
The specification and drawings are merely exemplary of the present application, which may be variously modified and combined without departing from the spirit and scope of the application. Such modifications and variations of the present application are intended to be included herein within the scope of the following claims and the equivalents thereof.
Claims (6)
1. A method for simultaneous perception detection and training of eyes based on AR glasses, comprising:
establishing a 3D model for inspection based on a preset left eye detection graph and a preset right eye detection graph;
Establishing a background model for inspection, positioning preset left eye and right eye detection patterns in the 3D model for inspection in the background for inspection selected from the background model for inspection through a slam positioning anchor, and generating a display picture;
Transmitting the display picture in the 3D model for inspection to the display end of the AR glasses;
when the checked person wears the AR glasses and sends a starting signal, playing preset voice prompt information;
Collecting reply information of a checked person based on the preset voice prompt information, identifying the reply information of the checked person, determining check data of the checked person, and storing the check data of the checked person;
performing perception inspection analysis and training requirement matching according to the inspected person inspection data to generate an inspected person detection report;
The method further comprises the steps of:
manufacturing textures of a preset background picture through a preset drawing tool;
Importing the texture into Unity;
adding a background model, attaching textures of a preset background picture to the background model through a material tool in Unity, and processing and rendering through a CPU and a GPU;
The positioning, by the slam localization anchor, the preset left-eye and right-eye detection patterns in the 3D model for examination in the background for examination selected from the background model for examination, including:
Calculating the coordinates of the current environment according to slam, calculating the position coordinates of the object in the environment by combining with the Unity coordinates, and setting the preset left eye detection graph and the preset right eye detection graph at the same position coordinates for anchoring;
wherein after transmitting the display picture in the 3D model for inspection to the display end of the AR glasses, the method comprises the following steps:
calculating the actual coordinates of the current environment according to the slam algorithm;
Constructing a conversion relation between slam coordinates and Unity coordinates;
and obtaining the head coordinates of the AR glasses, calculating the position of the initial distance according to the head coordinates of the AR glasses, the actual coordinates, and the conversion relation between the slam coordinates and the Unity coordinates, and setting the initial distance of the AR display end.
2. The method of claim 1, wherein the method further comprises:
Obtaining a background database of the background model for inspection;
Acquiring information of a checked person to obtain checked person information;
And carrying out preference characteristic analysis according to the checked person information, determining preference characteristics, carrying out relevance analysis on the preference characteristics and the background database to obtain matching background information, and taking the matching background information as a selected background for checking.
3. The method of claim 1, wherein the method comprises:
Building a model through a preset modeling tool;
Importing the preset left eye detection graph and the preset right eye detection graph into Unity, and adjusting the built model to the corresponding position of the scene;
and converting the 3D model into a display picture.
4. The method of claim 3, wherein converting the 3D model into a display picture comprises:
Calling a bottom graphic interface by Unity to obtain a vertex of the 3D model;
Processing the vertices through stages of vertex shader-tessellation shader-geometry shader-clipping-screen mapping;
and performing triangle processing, fragment shading, per-fragment operations, and display picture output stage processing on the processed vertex information to generate the display picture.
5. The method of claim 1, wherein the method further comprises:
collecting training information of the checked person at regular intervals;
comparing the training requirement in the detection report of the checked person with the training information to determine timing training evaluation information;
judging whether the timing training evaluation information meets the training requirement, and if not, performing training content matching in a training plan database according to the training information and the training requirement, and adjusting the training requirement with the matched training content.
6. An AR glasses-based binocular simultaneous perception detection and training system, characterized in that it performs the method of any one of claims 1-5, comprising:
the model building unit is used for building a 3D model for inspection based on a preset left eye detection graph and a preset right eye detection graph;
A positioning unit, configured to establish a background model for inspection, and fix preset left-eye and right-eye detection patterns in the 3D model for inspection in an inspection background selected from the background model for inspection by using a slam positioning anchor, so as to generate a display screen;
the AR display unit is used for transmitting the display picture in the 3D model for inspection to the display end of the AR glasses;
The detection execution unit is used for playing preset voice prompt information when the checked person wears the AR glasses and sends a starting signal;
The detection information identification recording unit is used for acquiring reply information of the checked person based on the preset voice prompt information, identifying the reply information of the checked person, determining check data of the checked person and storing the check data of the checked person;
and the detection analysis unit is used for performing perception examination analysis and training requirement matching according to the examination data of the examined person and generating an examination report of the examined person.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211274017.7A CN115602286B (en) | 2022-10-18 | 2022-10-18 | AR (augmented reality) glasses-based binocular simultaneous perception detection and training method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211274017.7A CN115602286B (en) | 2022-10-18 | 2022-10-18 | AR (augmented reality) glasses-based binocular simultaneous perception detection and training method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115602286A CN115602286A (en) | 2023-01-13 |
CN115602286B true CN115602286B (en) | 2024-06-04 |
Family
ID=84846847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211274017.7A Active CN115602286B (en) | 2022-10-18 | 2022-10-18 | AR (augmented reality) glasses-based binocular simultaneous perception detection and training method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115602286B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012095693A (en) * | 2010-10-29 | 2012-05-24 | Hoya Corp | Binocular function measuring method, binocular function measuring program, eyeglass lens design method and eyeglass lens manufacturing method |
CN202776259U (en) * | 2012-08-07 | 2013-03-13 | 北京嘉铖视欣数字医疗技术有限公司 | Simultaneous visual perception correction and training system based on both eyes |
CN107564107A (en) * | 2017-07-19 | 2018-01-09 | 中国农业大学 | A kind of design method and equipment of augmented reality implementation tool |
CN109445112A (en) * | 2019-01-05 | 2019-03-08 | 西安维度视界科技有限公司 | A kind of AR glasses and the augmented reality method based on AR glasses |
CN109887003A (en) * | 2019-01-23 | 2019-06-14 | 亮风台(上海)信息科技有限公司 | A kind of method and apparatus initialized for carrying out three-dimensional tracking |
TWI740561B (en) * | 2020-07-01 | 2021-09-21 | 廖日以 | Visual inspection and its vision correction method |
CN114241168A (en) * | 2021-12-01 | 2022-03-25 | 歌尔光学科技有限公司 | Display method, display device, and computer-readable storage medium |
CN114327043A (en) * | 2016-02-02 | 2022-04-12 | 索尼公司 | Information processing apparatus, information processing method, and recording medium |
CN114613485A (en) * | 2022-02-06 | 2022-06-10 | 上海诠视传感技术有限公司 | Patient information viewing method and system based on slam positioning technology and AR technology |
CN114613459A (en) * | 2022-02-06 | 2022-06-10 | 上海诠视传感技术有限公司 | Method and system for learning clinical operation at first visual angle in clinical teaching |
CN114637395A (en) * | 2022-02-14 | 2022-06-17 | 上海诠视传感技术有限公司 | Method for training hand-eye coordination through AR glasses |
CN114821753A (en) * | 2022-04-23 | 2022-07-29 | 中国人民解放军军事科学院国防科技创新研究院 | Eye movement interaction system based on visual image information |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101947158B (en) * | 2009-12-18 | 2012-07-04 | 中国科学院光电技术研究所 | Binocular self-adaptive optical visual perception learning training instrument |
US11789259B2 (en) * | 2020-12-28 | 2023-10-17 | Passion Light Inc. | Vision inspection and correction method, together with the system apparatus thereof |
- 2022-10-18: CN application CN202211274017.7A filed; patent CN115602286B, status Active
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012095693A (en) * | 2010-10-29 | 2012-05-24 | Hoya Corp | Binocular function measuring method, binocular function measuring program, eyeglass lens design method and eyeglass lens manufacturing method |
CN202776259U (en) * | 2012-08-07 | 2013-03-13 | 北京嘉铖视欣数字医疗技术有限公司 | Simultaneous visual perception correction and training system based on both eyes |
CN114327043A (en) * | 2016-02-02 | 2022-04-12 | 索尼公司 | Information processing apparatus, information processing method, and recording medium |
CN107564107A (en) * | 2017-07-19 | 2018-01-09 | 中国农业大学 | A kind of design method and equipment of augmented reality implementation tool |
CN109445112A (en) * | 2019-01-05 | 2019-03-08 | 西安维度视界科技有限公司 | A kind of AR glasses and the augmented reality method based on AR glasses |
CN109887003A (en) * | 2019-01-23 | 2019-06-14 | 亮风台(上海)信息科技有限公司 | A kind of method and apparatus initialized for carrying out three-dimensional tracking |
TWI740561B (en) * | 2020-07-01 | 2021-09-21 | 廖日以 | Visual inspection and its vision correction method |
CN114241168A (en) * | 2021-12-01 | 2022-03-25 | 歌尔光学科技有限公司 | Display method, display device, and computer-readable storage medium |
CN114613485A (en) * | 2022-02-06 | 2022-06-10 | 上海诠视传感技术有限公司 | Patient information viewing method and system based on slam positioning technology and AR technology |
CN114613459A (en) * | 2022-02-06 | 2022-06-10 | 上海诠视传感技术有限公司 | Method and system for learning clinical operation at first visual angle in clinical teaching |
CN114637395A (en) * | 2022-02-14 | 2022-06-17 | 上海诠视传感技术有限公司 | Method for training hand-eye coordination through AR glasses |
CN114821753A (en) * | 2022-04-23 | 2022-07-29 | 中国人民解放军军事科学院国防科技创新研究院 | Eye movement interaction system based on visual image information |
Non-Patent Citations (3)
Title |
---|
"双眼视知觉网络训练对弱视治疗短期视力提升效果的临床研究";朱敏娟 等;《中华眼科医学杂志( 电子版》;第10卷(第4期);第226-233页 * |
"双眼分别同时视训练治疗弱视进展";李少敏;《中国斜视与小儿眼科杂志》;第28卷(第2期);第33-37页 * |
Budianto Tandianus,et al.."Integrated and scalable augmented reality multiplayer robotic platform".《International Workshop on Advanced Imaging Technology (IWAIT) 2020》.2020,第11515卷第1151516-(1-4)页. * |
Also Published As
Publication number | Publication date |
---|---|
CN115602286A (en) | 2023-01-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102422469B1 (en) | Method and system for reconstructing obstructed face portions for virtual reality environment | |
CN111460873A (en) | Image processing method and apparatus, image device, and storage medium | |
US20100312143A1 (en) | Human body measurement system and information provision method using the same | |
Bourbakis | Sensing surrounding 3-D space for navigation of the blind | |
US20070016425A1 (en) | Device for providing perception of the physical environment | |
Thaler et al. | Visual perception and evaluation of photo-realistic self-avatars from 3D body scans in males and females | |
Meers et al. | A vision system for providing 3D perception of the environment via transcutaneous electro-neural stimulation | |
KR101522690B1 (en) | 3d visuo-haptic display system and method based on perception for skin diagnosis | |
CN112308932A (en) | Gaze detection method, device, equipment and storage medium | |
JP6656382B2 (en) | Method and apparatus for processing multimedia information | |
Hu et al. | Stereopilot: A wearable target location system for blind and visually impaired using spatial audio rendering | |
KR20170143164A (en) | A skin analysis and diagnosis system for 3D face modeling | |
KR20240051903A (en) | Method for analyzing element of motion sickness for virtual reality content and apparatus using the same | |
CN113158879B (en) | Three-dimensional fixation point estimation and three-dimensional eye movement model establishment method based on matching characteristics | |
CN113903424A (en) | Virtual reality function rehabilitation training system | |
CN115602286B (en) | AR (augmented reality) glasses-based binocular simultaneous perception detection and training method and system | |
JP6770208B2 (en) | Information processing device | |
Kanehira et al. | Development of an acupuncture training system using virtual reality technology | |
CN105765398A (en) | System for measuring cortical thickness from MR scan information | |
KR101897512B1 (en) | Face Fit Eyebrow tattoo system using 3D Face Recognition Scanner | |
CN112773357A (en) | Image processing method for measuring virtual reality dizziness degree | |
CN113035000A (en) | Virtual reality training system for central integrated rehabilitation therapy technology | |
CN109426336A (en) | A kind of virtual reality auxiliary type selecting equipment | |
KR101657285B1 (en) | Ultrasonography simulation system | |
CN115714000B (en) | Rehabilitation training evaluation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |